survey_title | section_num | references | section_outline
---|---|---|---
A survey of fault localization techniques in computer networks | 11 | ---
paper_title: Alarm correlation
paper_content:
The authors discuss the development of an alarm correlation model and a corresponding software support system that allow efficient specification of alarm correlation by the domain experts themselves. Emphasis is placed on the end-user orientation of IMPACT, the intelligent management platform for alarm correlation tasks which implements the proposed model. The desire was to lower the barrier between the network management application development process and the end user of the application, the network management personnel. IMPACT is a step towards this goal. The proposed alarm correlation model was used for three purposes: intelligent alarm filtering, alarm generalization and fault diagnosis.
---
paper_title: High speed and robust event correlation
paper_content:
The authors describe a network management system and illustrate its application to managing a distributed database application on a complex enterprise network.
---
paper_title: A coding approach to event correlation
paper_content:
This paper describes a novel approach to event correlation in networks based on coding techniques. Observable symptom events are viewed as a code that identifies the problems that caused them; correlation is performed by decoding the set of observed symptoms. The coding approach has been implemented in the SMARTS Event Management System (SEMS), a server running under Sun Solaris 2.3. Preliminary benchmarks of the SEMS demonstrate that the coding approach provides a speedup of at least two orders of magnitude over other published correlation systems. In addition, it is resilient to high rates of symptom loss and false alarms. Finally, the coding approach scales well to very large domains involving thousands of problems.
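As a rough illustration of the codebook idea summarized above (a sketch, not the SEMS implementation), each problem can be given a binary signature over the observable symptoms, and a noisy set of observed symptoms is decoded to the problem whose signature is closest in Hamming distance. The problem and symptom names below are invented for the example.

```python
# Codebook-style correlation sketch: decode the observed symptom vector to
# the nearest problem signature by Hamming distance (illustrative data only).

codebook = {
    # problem -> signature over symptoms (s1, s2, s3, s4)
    "link_down":    (1, 1, 0, 0),
    "router_crash": (1, 1, 1, 0),
    "db_overload":  (0, 0, 1, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(observed):
    """Rank problems by distance between their signature and the observation."""
    return sorted(codebook, key=lambda p: hamming(codebook[p], observed))

# A spurious s4 appears on top of link_down's signature; the decoding still
# ranks link_down first, illustrating the claimed tolerance to noisy symptoms.
print(decode((1, 1, 0, 1)))
```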
---
paper_title: Composite events for network event correlation
paper_content:
With the increasing complexity of enterprise networks and the Internet, event correlation is playing an increasingly important role in network as well as integrated system management systems. Even though the timing of events often reveals important diagnostic information about event relationships and should therefore be represented in event correlation rules or models, most extant approaches lack a formal mechanism to define complex temporal relationships among correlated events. In this paper, we discuss the formal use of composite events for event correlation and present a composite event specification approach that can precisely express complex timing constraints among correlated event instances, for which efficient compilation and detection algorithms have been developed in Mok et al., (1997). A Java implementation of this approach, called Java Event Correlator (JECTOR), is described, and some preliminary experimental results of using JECTOR in an experimental network management environment are also discussed in the paper.
---
paper_title: The alarm information base: a repository for enterprise management
paper_content:
In a typical enterprise operations center, the effective processing of alarms is crucial to the successful management of an information technology environment. Service-affecting alarms can originate from many sources, including customer-premise and carrier networking elements, computing systems (hardware and software) and other managed equipment. Although alarms are often viewed simply as text strings and colored icons on an operations console, the definition and maintenance of alarm-related information within an enterprise has a distinct lifecycle which needs to be managed. One way of managing this lifecycle is through a repository called an alarm information base, which centrally captures information about the alarms raised and processed in an enterprise. In this paper, the motivation for such a repository and the benefits of its use are discussed. An alarm information base can be maintained in an open and extensible format, and should be viewed as a valuable corporate asset for enterprise management.
---
paper_title: Distributed fault identification in telecommunication networks
paper_content:
Telecommunications networks are often managed by a large number of management centers, each responsible for a logically autonomous part of the network. This could be a small subnetwork such as an Ethernet, a Token Ring or an FDDI ring, or a large subnetwork comprising many smaller networks. In response to a single fault in a telecommunications network, many network elements may raise alarms, which are typically reported only to the subarea management center that contains the network element raising the alarm. As a result, a particular management center has a partial view of the status of the network. Management Centers must therefore cooperate in order to correctly infer the real cause of the failure. The algorithms proposed in this paper outline the way these management centers could collaborate in correlating alarms and identifying faults.
---
paper_title: Schemes for fault identification in communication networks
paper_content:
A single fault in a large communication network may result in a large number of fault indications (alarms) making the isolation of the primary source of failure a difficult task. The problem becomes worse in cases of multiple faults. In this paper we present an approach for modelling the problem of fault diagnosis. We propose a graph based network model that takes into account the dependencies among the different objects in the telecommunication environment and a novel approach to estimate the domain of an alarm. Based on that model, we design an algorithm for fault diagnosis and analyze its performance with respect to the accuracy of the fault hypotheses it provides. We also propose and analyze a fault diagnosis algorithm suitable for systems for which an independent failure assumption is valid. Finally, we examine the importance of the information of dependency between objects for the fault diagnosis process.
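The dependency-graph model above lends itself to a simple sketch (not the authors' algorithm): each alarm is associated with a domain of objects whose failure could have produced it, and a fault hypothesis is a small set of objects that covers every observed alarm. A greedy set-cover pass, shown below over an invented topology, is one way to construct such a hypothesis.

```python
# Greedy fault-hypothesis construction over alarm domains (invented topology).

alarm_domains = {
    # alarm -> objects whose failure could explain it (its "domain")
    "a1": {"link_AB", "node_A"},
    "a2": {"link_AB", "node_B"},
    "a3": {"node_C"},
}

def greedy_hypothesis(domains):
    """Pick a small set of objects that together explain every alarm."""
    unexplained = set(domains)
    hypothesis = []
    while unexplained:
        candidates = {obj for a in unexplained for obj in domains[a]}
        # choose the object explaining the most still-unexplained alarms
        best = max(candidates,
                   key=lambda obj: sum(obj in domains[a] for a in unexplained))
        hypothesis.append(best)
        unexplained = {a for a in unexplained if best not in domains[a]}
    return hypothesis

print(greedy_hypothesis(alarm_domains))  # e.g. ['link_AB', 'node_C']
```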
---
paper_title: A Probabilistic Approach to Fault Diagnosis in Linear Lightwave Networks
paper_content:
The application of probabilistic reasoning to fault diagnosis in linear lightwave networks (LLNs) is investigated. The LLN inference model is represented by a Bayesian network (or causal network). An inference algorithm is proposed that is capable of conducting fault diagnosis (inference) with incomplete evidence and on an interactive basis. Two belief updating algorithms are presented which are used by the inference algorithm for performing fault diagnosis. The first belief updating algorithm is a simplified version of the one proposed by Pearl (1988) for singly connected inference models. The second belief updating algorithm applies to multiply connected inference models and is more general than the first. The authors also introduce a t-fault diagnosis system and an adaptive diagnosis system to further reduce the computational complexity of the fault diagnosis process.
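To make the flavour of belief-network diagnosis concrete, here is a much smaller enumeration-based posterior computation; it is not the paper's inference algorithm, and the priors and conditional probabilities are invented.

```python
# Tiny Bayesian diagnosis by enumeration: P(cause | symptoms), assuming the
# symptoms are conditionally independent given the cause (invented numbers).

priors = {"laser_fail": 0.02, "splitter_fail": 0.01, "ok": 0.97}
likelihood = {  # P(symptom present | cause)
    "laser_fail":    {"loss_of_signal": 0.95, "high_ber": 0.70},
    "splitter_fail": {"loss_of_signal": 0.60, "high_ber": 0.90},
    "ok":            {"loss_of_signal": 0.01, "high_ber": 0.05},
}

def posterior(observed):
    """observed maps symptom -> bool; returns the normalized posterior."""
    joint = {}
    for cause, prior in priors.items():
        p = prior
        for symptom, present in observed.items():
            ps = likelihood[cause][symptom]
            p *= ps if present else 1.0 - ps
        joint[cause] = p
    total = sum(joint.values())
    return {cause: p / total for cause, p in joint.items()}

print(posterior({"loss_of_signal": True, "high_ber": False}))
```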
---
paper_title: Towards a practical alarm correlation system
paper_content:
A single fault in a telecommunication network frequently results in a number of alarms being reported to the network operator. This multitude of alarms can easily obscure the real cause of the fault. In addition, when multiple faults occur at approximately the same time, it can be difficult to determine how many faults have occurred, thus creating the possibility that some may be missed. A variety of solution approaches have been proposed in the literature, however, practically deployable, commercial solutions remain elusive. The experiences of the Network Fault and Alarm Correlator and Tester (NetFACT) project, carried out at IBM Research and described in this paper, provide some insight as to why this is the case, and what must be done to overcome the barriers encountered. Our observations are based on experimental use of the NetFACT system to process a live, continuous alarm stream from a portion of the Advantis physical backbone network, one of the largest private telecommunications networks in the world.
---
paper_title: Centralized vs Distributed Fault Localization
paper_content:
In this paper we compare the performance of fault localization schemes for communication networks. Our model assumes a number of management centers, each responsible for a logically autonomous part of the whole telecommunication network. We briefly present three different fault localization schemes: namely, “Centralized”, “Decentralized” and “Distributed” fault localization, and, we compare their performance with respect to the computational effort each requires and the accuracy of the solution that each provides.
---
paper_title: A scheduling-based event correlation scheme for fault identification in communications network
paper_content:
Communications networks have increased dramatically in size and complexity in the past few years. A typical network may consist of hundreds of nodes from various manufacturers with different traffic and bandwidth requirements. The increasing complexity poses serious problems to network management and control. As faults are inevitable, quick detection, identification and recovery are crucial to make the systems more robust and their operation more reliable. This article proposes a novel event correlation scheme for fault identification in communications networks. The scheme is based on a bi-level feedback queue scheduling policy. The causality graph model is used to describe the cause-and-effect relationships between network events. The use of the scheduling policy makes the correlation process simple and fast. A simulation model is developed to verify the effectiveness and efficiency of the proposed scheme. From simulation results, we notice that this scheme not only identifies multiple problems at one time but is also insensitive to noise. Since the time complexity of the correlation procedure is close to O(n), where n is the number of observed symptoms, on-line fault identification is easy to achieve.
---
paper_title: Alarm correlation and fault identification in communication networks
paper_content:
Presents an approach for modeling and solving the problem of fault identification and alarm correlation in large communication networks. A single fault in a large network may result in a large number of alarms, and it is often very difficult to isolate the true cause of the fault. This appears to be one of the most important difficulties in managing faults in today's networks. The problem may become worse in the case of multiple faults. The authors present a general methodology for solving the alarm correlation and fault identification problem. They propose a new alarm structure, propose a general model for representing the network, and give two algorithms which can solve the alarm correlation and fault identification problem in the presence of multiple faults. These algorithms differ in the degree of accuracy achieved in identifying the fault, and in the degree of complexity required for implementation.
---
paper_title: Real-time telecommunication network management: extending event correlation with temporal constraints
paper_content:
Event correlation is becoming one of the most central techniques in managing the high volume of event messages. Practically, no network management system can ignore network surveillance and control procedures which are based on event correlation. The majority of existing network management systems use relatively simple ad hoc additions to their software to perform alarm correlation. In these systems, alarm correlation is handled as an aggregation procedure over sets of alarms exhibiting similar attributes. In recent years, several more sophisticated alarm correlation models have been proposed. In this paper, we will expand our knowledge-based event correlation model to capture temporal constraints.
---
paper_title: Dependency Analysis in Distributed Systems using Fault Injection: Application to Problem Determination in an e-commerce Environment
paper_content:
Distributed networked applications that are being deployed in enterprise settings increasingly rely on a large number of heterogeneous hardware and software components for providing end-to-end services. In such settings, the issue of problem diagnosis becomes vitally important, in order to minimize system outages and improve system availability. This motivates interest in dependency characterization among the different components in distributed application environments. A promising approach for obtaining dynamic dependency information is the Active Dependency Discovery technique in which a dependency graph of e-commerce transactions on hardware and software components in the system is built by individually “perturbing” the system components during a testing phase and collecting measurements corresponding to the external behavior of the system. In this paper, we propose using fault injection as the perturbation tool for dynamic dependency discovery and problem determination. We describe a method for characterizing dependencies of transactions on the system resources in a typical e-commerce environment, and show how it can aid in problem diagnosis. The method is applied to an application server middleware platform, running end-user activity composed of TPC-W transactions. Representative fault models for such an environment, that can be used to construct the fault injection campaign, are also presented.
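The Active Dependency Discovery loop described above can be sketched as follows; the components, transactions, and the fault-injection and measurement hooks are all placeholders standing in for real instrumentation.

```python
# Active dependency discovery sketch: perturb one component at a time, replay
# the end-to-end transactions, and record which transactions break.
from contextlib import contextmanager

COMPONENTS = ["web_server", "app_server", "database"]
TRANSACTIONS = ["browse", "checkout"]

# Ground truth used only to simulate the system under test (placeholder).
_TRUE_DEPS = {"browse": {"web_server", "app_server"},
              "checkout": {"web_server", "app_server", "database"}}
_broken = set()

@contextmanager
def inject_fault(component):
    """Stand-in for a real fault injector (e.g. failing or delaying calls)."""
    _broken.add(component)
    try:
        yield
    finally:
        _broken.discard(component)

def run_transaction(name):
    """Stand-in for driving a real end-user transaction; True means success."""
    return not (_TRUE_DEPS[name] & _broken)

dependencies = {t: set() for t in TRANSACTIONS}
for component in COMPONENTS:
    with inject_fault(component):
        for t in TRANSACTIONS:
            if not run_transaction(t):
                dependencies[t].add(component)

print(dependencies)  # transaction -> components it was observed to depend on
```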
---
paper_title: Artificial Life and the Animat Approach to Artificial Intelligence
paper_content:
Publisher Summary This chapter discusses the topic of artificial life (AL), with emphasis on animats. AL is a novel scientific pursuit that aims at studying manmade systems exhibiting behaviors that are characteristic of natural living systems. AL complements the traditional biological sciences concerned with the analysis of living organisms by attempting to synthesize life-like behaviors within computers or other artificial media. One particularly active area of artificial life is concerned with the conception and construction of artificial animals simulated by computers or by actual robots whose rules of behavior are inspired by those of animals. These simulations are known as Animats. Research in the field of standard artificial intelligence (AI) aims at simulating the most elaborate faculties of the human brain such as problem solving, natural language understanding, and logical reasoning. With the aim of explaining how peculiar human faculties might be inherited from the simplest adaptive abilities of animals, the animat approach is based on the conception or construction of simulated animals or robots capable of surviving in more or less unpredictable and threatening environments. The animat approach places emphasis on the characteristics neglected by standard AI. This approach is interested explicitly in the interactions between an animat and its environment and particularly stresses the aptitude of the animat to survive in unexpected environmental circumstances. Centered on the study of grounded and robust behaviors, research on the adaptive behavior of animats avoids the pitfalls of standard AI, improves our knowledge in those domains where standard AI has failed notoriously, and notably addresses the problems of perception, categorization, and sensorimotor control.
---
paper_title: Automated end-to-end system diagnosis of networked printing services using model based reasoning
paper_content:
Keywords: distributed service management, client-server systems, automated end-to-end diagnostics, model-based reasoning, MBR, network printing, artificial intelligence.
Modern applications increasingly depend on services that are distributed across the infrastructure, for example, printing, mail and database services. This is especially so in mobile computing, where network access is used to compensate for the paucity of onboard resources. However, the increased dependence on distributed services poses a challenge to IT departments whose personnel and tools typically focus on individual components of the distributed service. This calls for IT management tools that have an end-to-end perspective on entire distributed services.
---
paper_title: Rule Discovery in Telecommunication Alarm Data
paper_content:
Fault management is an important but difficult area of telecommunication network management: networks produce large amounts of alarm information which must be analyzed and interpreted before faults can be located. So called alarm correlation is a central technique in fault identification. While the use of alarm correlation systems is quite popular and methods for expressing the correlations are maturing, acquiring all the knowledge necessary for constructing an alarm correlation system for a network and its elements is difficult. We describe a novel partial solution to the task of knowledge acquisition for correlation systems. We present a method and a tool for the discovery of recurrent patterns of alarms in databases; these patterns, episode rules, can be used in the construction of real-time alarm correlation systems. We also present tools with which network management experts can browse the large amounts of rules produced. The construction of correlation systems becomes easier with these tools, as the episode rules provide a wealth of statistical information about recurrent phenomena in the alarm stream. This methodology has been implemented in a research system called TASA, which is used by several telecommunication operators. We briefly discuss experiences in the use of TASA.
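A toy version of the episode-rule discovery described above (far simpler than TASA) slides a time window over the alarm log, counts how often one alarm type is followed by another inside the window, and keeps the pairs that meet support and confidence thresholds. The log and thresholds below are illustrative.

```python
# Toy episode-rule mining: count "A followed by B within `window` seconds"
# over an alarm log and report rules with enough support and confidence.
from collections import Counter

log = [  # (timestamp in seconds, alarm type) - illustrative data
    (0, "link_degraded"), (5, "high_error_rate"), (60, "link_degraded"),
    (63, "high_error_rate"), (200, "fan_failure"), (260, "link_degraded"),
    (262, "high_error_rate"),
]

def episode_rules(log, window=30, min_support=2, min_conf=0.6):
    antecedent_counts = Counter(alarm for _, alarm in log)
    pair_counts = Counter()
    for i, (t1, a1) in enumerate(log):
        seen = set()
        for t2, a2 in log[i + 1:]:
            if t2 - t1 > window:
                break
            if a2 != a1 and a2 not in seen:
                pair_counts[(a1, a2)] += 1
                seen.add(a2)
    rules = []
    for (a, b), n in pair_counts.items():
        conf = n / antecedent_counts[a]
        if n >= min_support and conf >= min_conf:
            rules.append((a, b, n, round(conf, 2)))
    return rules

print(episode_rules(log))  # e.g. [('link_degraded', 'high_error_rate', 3, 1.0)]
```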
---
paper_title: Alarm correlation engine (ACE)
paper_content:
Networks are growing in size and complexity, resulting in increased alarm volume and number of unfamiliar alarms. Often, there is no proportional increase in monitoring personnel and response time to faults suffers. GTE deployed Telephone Operations Network Integrated Control System (TONICS) in 1993 to support its network management operations. To stay competitive in the face of continued staff reductions, increase in network size, and monitoring complications related to deregulation of the telephone industry, GTE is introducing artificial intelligence techniques into TONICS. Alarm Correlation Engine (ACE), the system described in this paper, is part of the effort. ACE aids network management by correlating alarms on the basis of common cause to provide alarm compression, filtering, and suppression. In conjunction with its ability to carry out prescribed responses, it improves response times and increases productivity. ACE was developed with the following requirements: reliability, speed, versatility (handle alarms from different switches and networks), ease of knowledge engineering (field technicians must be able to construct, test, and modify correlation patterns), handle in real time multiple network problems, and finally, interface smoothly with GTE's TONICS system. ACE's strength lies in its domain specific correlation language which facilitates knowledge engineering and in its asynchronous processing core that enables integration into a real-time monitoring system.
---
paper_title: Investigation and practical assessment of alarm correlation methods for the use in GSM access networks
paper_content:
This paper compares and assesses several alarm correlation methods for their suitability and performance in global systems for mobile communications (GSM). The assessment criteria used reflect the special circumstances found in these networks. A high importance is given to the aspects related to a practical and feasible network management. Of the neural networks investigated, the cascade correlation learning algorithm performs best. This approach is compared with correlation techniques proposed in the literature: rule-based diagnosis, model based diagnosis and alarm correlation using codebooks. It is shown that for alarm correlation in a GSM access network the proposed cascade correlation approach is superior to the other correlation techniques.
---
paper_title: Pattern discovery and specification techniques for alarm correlation
paper_content:
Ever increasing amounts of alarm data threaten the stability of management systems in high speed telecommunications networks. As networks continue to develop, becoming larger, using substantially higher bandwidth links and using more complex equipment, so the danger of alarm inundation is increased. Alarm correlation systems have been seen to play a vital role in dealing with the problem. However, the question of what to correlate and how to recognise and specify related alarms has either been left largely unanswered or distinct from the physical correlation process. In this presentation, we describe a unifying framework which uses a purpose-designed language to specify alarm patterns and then use the results in a real-time correlation engine. In order to test the effectiveness of the solution, the language was translated onto an existing proprietary correlation system and fed with alarm data from an SDH network test-bed. Preliminary evaluation has indicated the system to be extremely fast, potentially robust to network dynamics and, importantly, resilient to a degree of input space error. Furthermore, it is easy to use, intuitive and may be extended through the incorporation of artificial intelligence modules.
---
paper_title: Designing expert systems for real-time diagnosis of self-correcting networks
paper_content:
The authors discuss design issues that they have encountered during an investigation of expert systems for network management that they believe to be generic to the real-time diagnosis of self-correcting networks. By real-time they mean that the diagnostic system must keep pace with a dynamic process, that is, the flow of alarms from intelligent network elements. The objective is to present the operator with a set of recommended actions rather than large volumes of raw alarm data. They outline the general requirements of such a system and then suggest how each can be addressed using an expert-system approach.
---
paper_title: Event correlation using rule and object based techniques
paper_content:
Today’s competitive market place has forced the telecommunications industry to improve their service and reliability. One step that telecommunications companies have taken to reduce network failures is the installation of operations centers to collect data from network elements. These centers are staffed by network managers who monitor network activity by correlating alarms across various operational disciplines (switch, facility, traffic) and relating them to a common cause. Accurate analysis is often difficult due to the volume of data and complexity of problems.
---
paper_title: Fault isolation and event correlation for integrated fault management
paper_content:
An algorithm for fault isolation and event correlation in integrated networks is presented. It reconstructs fault propagation during run-time by exploring relationships between managed objects and provides improved focus and efficiency compared to similar algorithms. The functionality of the algorithm is generic, straightforward, efficient, and applicable for different management architectures such as SNMP and TMN. Clearly defined interfaces allow parallel execution of problem investigation, integration of different management architectures and systems, and incorporation of other correlation techniques.
---
paper_title: Integrated Event Management: Event Correlation Using Dependency Graphs
paper_content:
Today’s fault management requires a sophisticated event management to condense events to meaningful fault reports. This severe practical need is addressed by event correlation which is an area of intense research in the scientific community and the industry. This paper introduces an approach for event correlation that uses a dependency graph to represent correlation knowledge. The benefit over existing approaches that are briefly classified here is that this approach is specifically well suited to instrument an existing management system for event correlation. It thereby deals with the complexity, dynamics and distribution of real–life managed systems. That is why it is considered to provide integrated event management. The basic idea is to connect the event correlator to a given management system and gain a dependency graph from it to model the functional dependencies within the managed system. The event correlator searches through the dependency graph to localize managed objects whose failure would explain a large number of management events received. The paper gives a short overview of existing approaches, introduces our approach and its application. It shows why dependency graphs are suitable, how they can be derived and finally presents the prototype developed so far.
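The core search described above — rank managed objects by how many of the received events their failure would explain, given a dependency graph — can be sketched as follows with an invented graph; reachability is computed by a plain graph walk rather than the prototype's machinery.

```python
# Dependency-graph correlation sketch: an event on object X can be explained by
# a failure of any object that X (transitively) depends on; rank candidates by
# how many observed events they explain.

depends_on = {  # object -> objects it directly depends on (invented)
    "mail_service": ["server1", "dns"],
    "web_service":  ["server1", "dns"],
    "server1":      ["switch3"],
    "dns":          ["switch3"],
    "switch3":      [],
}

def causes(obj, graph):
    """All objects whose failure could explain a problem observed at `obj`."""
    seen, stack = set(), [obj]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def rank_root_causes(events, graph):
    explained = {obj: sum(obj in causes(e, graph) for e in events)
                 for obj in graph}
    return sorted(explained.items(), key=lambda kv: -kv[1])

events = ["mail_service", "web_service", "dns"]
print(rank_root_causes(events, depends_on))  # dns and switch3 explain all three
```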
---
paper_title: A Modeling Framework for Integrated Distributed Systems Fault Management
paper_content:
This paper describes a modeling framework for integrated fault management of distributed systems. The model integrates all different layers of distributed systems such as application, system, and network layer in a single, consistent view. This enables generic management applications to perform their tasks across layer boundaries of the distributed system without knowledge about the specific details. The focus is on fault management issues. Dependencies between resources critical for the availability of the distributed system are modeled using relationships. Generic fault management applications are hereby enabled to determine the root cause of a distributed system failure automatically. The SAP R/3 application serves as an example to demonstrate the capabilities of the modeling framework.
---
paper_title: A Generic Model for Fault Isolation in Integrated Management Systems
paper_content:
Distributed systems in enterprises as well as telecommunication environments strongly demand more automated fault management. A single fault in these complex systems might cause a huge number of symptomatic error messages and side effects to occur. The common root faults for these symptoms have to be identified to start fault removal procedures as soon as possible and to decrease system down-time. This paper presents a methodology for fault isolation in integrated management systems. A generic model is described that unifies the view of the management system on the managed environment. It integrates the relevant aspects of network, system, and service management layers in order to perform integrated fault isolation. Our approach is based on a general dependency graph model. It captures the information that is required to determine the root cause of a fault on the one hand, and the set of fault affected services and customers on the other hand. The layered TMN architecture serves as an example for an integrated management environment throughout this paper.
---
paper_title: End-to-end service failure diagnosis using belief networks
paper_content:
We present fault localization techniques suitable for diagnosing end-to-end service problems in communication systems with complex topologies. We refine a layered system model that represents relationships between services and functions offered between neighboring protocol layers. In a given layer, an end-to-end service between two hosts may be provided using multiple host-to-host services offered in this layer between two hosts on the end-to-end path. Relationships among end-to-end and host-to-host services form a bipartite probabilistic dependency graph whose structure depends on the network topology in the corresponding protocol layer. When an end-to-end service fails or experiences performance problems it is important to efficiently find the responsible host-to-host services. Finding the most probable explanation (MPE) of the observed symptoms is NP-hard. We propose two fault localization techniques based on Pearl's (1988) iterative algorithms for singly connected belief networks. The probabilistic dependency graph is transformed into a belief network, and then the approximations based on Pearl's algorithms and exact bucket tree elimination algorithm are designed and evaluated through extensive simulation study.
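For intuition, the bipartite symptom/cause structure described above can be scored exhaustively on a tiny example: enumerate candidate sets of failed host-to-host services and pick the set that best explains the observed end-to-end symptoms under a noisy-OR model. This brute-force search is precisely what becomes intractable at scale (MPE is NP-hard), which is why the paper resorts to Pearl-style approximations; all numbers below are invented.

```python
# Brute-force MPE over a small bipartite noisy-OR model: which set of
# host-to-host failures best explains the observed end-to-end symptoms?
from itertools import combinations

causes = {"hopAB": 0.05, "hopBC": 0.05, "hopCD": 0.05}   # prior failure probs
links = {  # noisy-OR strength of each failed hop on each end-to-end symptom
    "eeAC": {"hopAB": 0.9, "hopBC": 0.9},
    "eeBD": {"hopBC": 0.9, "hopCD": 0.9},
}
observed = {"eeAC": True, "eeBD": False}

def score(failed):
    """Joint probability of the failure set and the observed symptoms."""
    p = 1.0
    for cause, prior in causes.items():
        p *= prior if cause in failed else 1 - prior
    for symptom, present in observed.items():
        p_absent = 1.0
        for cause, strength in links[symptom].items():
            if cause in failed:
                p_absent *= 1 - strength
        p_present = 1 - p_absent
        p *= p_present if present else 1 - p_present
    return p

hypotheses = [frozenset(c) for r in range(len(causes) + 1)
              for c in combinations(causes, r)]
best = max(hypotheses, key=score)
print(set(best))   # {'hopAB'} explains eeAC without also predicting eeBD
```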
---
paper_title: A conceptual framework for network management event correlation and filtering systems
paper_content:
Event correlation is a key functionality of a network management system that is used to determine the root cause of faults in a network, and to filter out redundant and spurious events. A number of event correlation systems have been proposed. The event correlation systems generally combine causal and temporal correlation models with the topology of a network. The power and robustness of the models used and the algorithms developed vary from system to system. However, in the absence of a simple, uniform, and precise presentation of the event-correlation problem, it is impossible to compare their relative power or even analyze them for their properties. In general, causal and temporal-based correlation models have not been rigorously presented or thoroughly investigated. In this paper we formalize the concepts of causal and temporal correlation using a single conceptual framework. We characterize various properties of the framework. We can characterize existing systems based on the formal properties of our framework, and we consider one system as an illustrative example.
---
paper_title: Probabilistic fault diagnosis in communication systems through incremental hypothesis updating
paper_content:
This paper presents a probabilistic event-driven fault localization technique, which uses a probabilistic symptom-fault map as a fault propagation model. The technique isolates the most probable set of faults through incremental updating of a symptom-explanation hypothesis. At any time, it provides a set of alternative hypotheses, each of which is a complete explanation of the set of symptoms observed thus far. The hypotheses are ranked according to a measure of their goodness. The technique allows multiple simultaneous independent faults to be identified and incorporates both negative and positive symptoms in the analysis. As shown in a simulation study, the technique offers close-to-optimal accuracy and is resilient both to noise in the symptom data and to inaccuracies of the probabilistic fault propagation model.
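A simplified sketch of event-driven, incremental hypothesis updating in the spirit of the technique above (not the published algorithm): maintain a set of fault hypotheses and, when a new symptom arrives, extend any hypothesis that does not yet explain it by one candidate fault, then re-rank the hypotheses by prior probability. The fault priors and symptom-fault map are invented.

```python
# Incremental hypothesis updating sketch: each hypothesis is a set of faults
# that explains all symptoms seen so far; hypotheses are ranked by prior
# probability assuming independent faults (invented numbers).

fault_prior = {"f1": 0.05, "f2": 0.02, "f3": 0.01}
may_cause = {                     # symptom -> faults that can explain it
    "s1": {"f1", "f2"},
    "s2": {"f2", "f3"},
    "s3": {"f1"},
}

def hypothesis_score(h):
    p = 1.0
    for f, prior in fault_prior.items():
        p *= prior if f in h else 1 - prior
    return p

def update(hypotheses, symptom):
    """Extend every hypothesis so that it also explains `symptom`."""
    new = set()
    for h in hypotheses:
        if h & may_cause[symptom]:
            new.add(h)                         # already explains it
        else:
            for f in may_cause[symptom]:
                new.add(h | {f})               # extend by one candidate fault
    return new

hypotheses = {frozenset()}
for s in ["s1", "s2", "s3"]:                   # symptoms arrive one by one
    hypotheses = update(hypotheses, s)
    best = max(hypotheses, key=hypothesis_score)
    print(s, "-> best hypothesis so far:", set(best))
```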
---
paper_title: Coding and information theory
paper_content:
1: Entropy. 2: Noiseless Coding. 3: Noisy Coding. 4: General Remarks on Codes. 5: Linear Codes. 6: Some Linear Codes. 7: Finite Fields and Cyclic Codes. 8: Some Cyclic Codes.
---
paper_title: Identification of Faulty Links in Dynamic-Routed Networks
paper_content:
The authors present a maximum a posteriori method to identify faulty links in a communication network. A designated network node with management responsibilities determines a fault has occurred due to its inability to communicate with certain other nodes. Given this information as well as the information that it can communicate with another specified set of nodes, one would like to identify as quickly as possible a ranked list of the most probable failed network links. The authors also indicate how the method might be extended to the identification of most probable faulty network resources in a more abstract (higher level) model of a network, including, for example, an object-oriented model.
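A brute-force sketch of the maximum a posteriori idea above: given which nodes the manager can and cannot reach, enumerate sets of failed links, keep only the sets consistent with those observations, and rank them by prior probability. The topology and failure probabilities are invented, and a real network would need a smarter search than full enumeration.

```python
# MAP ranking of failed-link sets that are consistent with what the manager
# can and cannot reach (brute force over an invented four-link topology).
from itertools import combinations

links = {("m", "a"): 0.01, ("a", "b"): 0.02, ("a", "c"): 0.02, ("c", "d"): 0.01}
reachable, unreachable = {"a", "b"}, {"c", "d"}      # observed from manager "m"

def reach(failed):
    """Nodes reachable from the manager when the links in `failed` are down."""
    nodes, frontier = {"m"}, ["m"]
    while frontier:
        n = frontier.pop()
        for (u, v) in links:
            if (u, v) in failed:
                continue
            if u == n and v not in nodes:
                nodes.add(v); frontier.append(v)
            elif v == n and u not in nodes:
                nodes.add(u); frontier.append(u)
    return nodes

def prior(failed):
    p = 1.0
    for link, pf in links.items():
        p *= pf if link in failed else 1 - pf
    return p

consistent = []
for r in range(len(links) + 1):
    for failed in combinations(links, r):
        got = reach(set(failed))
        if reachable <= got and not (unreachable & got):
            consistent.append((prior(failed), set(failed)))

for p, failed in sorted(consistent, key=lambda t: -t[0])[:3]:
    print(round(p, 6), failed)        # most probable explanations first
```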
---
paper_title: Markov Monitoring with Unknown States
paper_content:
Pattern recognition methods and hidden Markov models can be effective tools for online health monitoring of communications systems. Previous work has assumed that the states in the system model are exhaustive. This can be a significant drawback in real-world fault monitoring applications where it is difficult if not impossible to model all the possible fault states of the system in advance. In this paper a method is described for extending the Markov monitoring approach to allow for unknown or novel states which cannot be accounted for when the model is being designed. The method is described and evaluated on data from one of the Jet Propulsion Laboratory's Deep Space Network antennas. The experimental results indicate that the method is both practical and effective, allowing both discrimination between known states and detection of previously unknown fault conditions.
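One way to picture the idea (a sketch, not the JPL system): score each observation window with the forward algorithm under an HMM of the known states, and flag windows whose per-symbol likelihood falls below a threshold as possible unknown states. All model parameters and the threshold below are invented.

```python
# HMM-based monitoring sketch: forward-algorithm likelihood of an observation
# window under the known states; a very low likelihood suggests a novel state.
import math

states = ["normal", "known_fault"]
start = [0.9, 0.1]
trans = [[0.95, 0.05],           # P(next state | current state)
         [0.20, 0.80]]
emit = [[0.89, 0.10, 0.01],      # P(symbol | state); symbols:
        [0.28, 0.70, 0.02]]      # 0 = nominal, 1 = degraded, 2 = erratic

def window_loglik(obs):
    """Average per-symbol log-likelihood of obs under the known-state HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(states))]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(len(states)))
                 * emit[s][o] for s in range(len(states))]
    return math.log(sum(alpha)) / len(obs)

THRESHOLD = -1.5   # would be tuned on data from the known states (invented)
for window in [[0, 0, 1, 0], [1, 1, 1, 1], [2, 1, 2, 2]]:
    ll = window_loglik(window)
    verdict = "possible unknown state" if ll < THRESHOLD else "known behaviour"
    print(window, round(ll, 2), verdict)
```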
---
paper_title: Combinatorial designs in multiple faults localization for battlefield networks
paper_content:
We present an application of combinatorial designs and variance analysis to correlating events in the midst of multiple network faults. The network fault model is based on the probabilistic dependency graph that accounts for the uncertainty about the state of network elements. Orthogonal arrays help reduce the exponential number of failure configurations to a small subset on which further analysis is performed. The preliminary results show that statistical analysis can pinpoint the probable causes of the observed symptoms with high accuracy and a significant level of confidence. An example demonstrates how multiple soft link failures are localized in MIL-STD 188-220's datalink layer to explain the end-to-end connectivity problems in the network layer. This technique can be utilized for networks operating in an unreliable environment such as wireless and/or military networks.
---
paper_title: An automated fault diagnosis system using hierarchical reasoning and alarm correlation
paper_content:
A slight fault may cause critical disruptions or irreparable damage to the network while a network manager is lost in a large number of alarms. The development of a practical and effective system for network fault diagnosis therefore becomes an urgent task. We develop a hierarchical domain-oriented reasoning mechanism suitable for the delegated management architecture. It is based on the causality graph of the sensibly-reduced network fault propagation model obtained from our empirical study. An automated fault diagnosis system called ACView (Alarm Correlation View) for isolating network faults in a multidomain environment is proposed according to the hierarchical reasoning mechanism. This diagnosis system provides not only automated alarm collection and correlation, but also efficient fault localization and identification.
---
paper_title: The new software paladins
paper_content:
Many people have yet to hear of application service providers, but this new breed of company is already seeking to transform the world of corporate software. Industry watchers, in fact, are saying that the providers are the next step in the evolution of the information technology business and that their success will in the future alter the way all companies obtain and use software, whether enterprise resource management packages, spreadsheets, or even word-processing applications.
---
paper_title: Virtual Private Networks
paper_content:
A system and method of automatically configuring virtual private networks is provided. The virtual private networks disclosed, include multiple routers selectively connectable to the shared network, such that each of the routers is assigned at least one: shared network address, private network address and virtual private network identifier. Each router includes a controller configured to communicate a router configuration message over the shared network to other members of the same virtual private network. The router configuration message informs the other members of the virtual private network the address of the router and what devices are connected to the router.
---
paper_title: Event modeling with the MODEL language
paper_content:
Event modeling is an essential component of event correlation systems; this paper introduces the MODEL language, which comprises the event modeling component of SMARTS’ InCharge™ event correlation system. We demonstrate the features of the MODEL language through examples from the multimedia Quality of Service (QoS) domain. In addition, we provide a comparison of MODEL with the event modeling capabilities of other event correlation systems; we demonstrate that MODEL generalizes the capabilities of other systems and is more flexible.
---
paper_title: Layered model for supporting fault isolation and recovery
paper_content:
The primary objective of our research is to efficiently support the manual and automated steps needed to perform network fault management (detection, isolation, and recovery). This paper introduces a layered model and an implementation scheme for enhancing the level of automation in fault isolation and recovery. This complements earlier fault management work that has mostly focussed on automating the fault detection and isolation aspects. Our model allows the use of network information, which is typically available to the network designers, in a form more suitable to the network operators. The model supports the use of fault analysis rules that are defined for the network functions, protocols and services at the various network layers. These rules can capture dependency, availability, redundancy, switchover and hierarchical status information with respect to the network services provided by a network.
---
paper_title: The GRID: Blueprint for a New Computing Infrastructure
paper_content:
Edited by Tianruo Yang. Kluwer Academic Publishers, Dordrecht, Netherlands, 1999, 248 pp. ISBN 0-7923-8588-8, $135.00. This book contains a selection of contributed and invited papers presented at the workshop Frontiers of Parallel Numerical Computations and Applications, held within the IEEE 7th Symposium on the Frontiers on Massively Parallel Computers (Frontiers '99) at Annapolis, Maryland, February 20-25, 1999. Its main purpose is to update the designers and users of parallel numerical algorithms with the latest research in the field. A broad spectrum of topics on parallel numerical computations, with applications to some of the more challenging engineering problems, is covered. Parallel algorithm designers and engineers who make extensive use of parallel numerical computations, as well as graduate students in Computer Science, Scientific Computing, various engineering fields and applied mathematics, should benefit from reading it. The first part is addressed to a larger audience and presents papers on parallel numerical algorithms. Two new libraries are presented: PSPASSES and PoLAPACK. PSPASSES is a collection of parallel direct solvers for sparse symmetric positive definite linear systems, which are characterized by high performance and good scalability. The PoLAPACK library contains LU and QR codes based on a new blocking strategy that guarantees good performance regardless of the physical block size. Next, an efficient approach to solving stiff ordinary differential equations by the diagonal implicitly iterated Runge-Kutta (DIIRK) method is described. DIIRK renders a fast parallel implementation due to a reduced number of function evaluations and an automatic stepsize control mechanism. Finally, minimization of sufficiently smooth non-linear functionals is sought via parallel space decomposition. Here, a theoretical background of the problem and two equivalent algorithms are presented. New research directions for classical solvers are treated in the next three papers: first, reduction of the global synchronization in the biconjugate gradient method; second, a new, more efficient Jacobi ordering for the multiple-port hypercubes; and finally, an analysis of the theoretical performance of an improved version of the quasi-minimal residual method. Parallel numerical applications constitute the second part of the book, with results from fluid mechanics, material sciences, applications to signal and image processing, dynamic systems, semiconductor technology and electronic circuits and systems design. With one exception, the authors expose in detail parallel implementations of the algorithms and numerical results. First, a 3D-elasticity problem is solved using an additive overlapping domain decomposition algorithm. Second, an overlapping mesh technique is used in a parallel solver for the compressible flow problem. Then, a parallel version of a complex numerical algorithm to solve a lubrication problem studied in tribology is introduced. Next, a timid approach to parallel computing of the cavity flow by the finite element method is presented. The problem solved is rather small for today's needs and only up to 6 processors are used. This is also the only paper that does not present results from numerical experiments.
The remaining applications discussed in the subsequent chapters are: a large-scale multidisciplinary design optimization problem with application to the design of a supersonic commercial aircraft, a report on progress in parallel solution of an electromagnetic scattering problem using boundary integral methods, and an optimal solution to the convection-diffusion equation modeling the concentration of a pollutant in the air. The book is of definite interest to readers who keep up to date with parallel numerical computation research. The main purpose, to present novel ideas, results, and work in progress and to advance state-of-the-art techniques in the area of parallel and distributed computing for numerical and computational optimization problems in scientific and engineering applications, is clearly achieved. However, due to its content it cannot serve as a textbook for a computer science or engineering class. Overall, it is a reference-type book to be kept by specialists and in a library rather than a book to be purchased for self-introduction to the field. Most of the papers presented are results of ongoing research and so they rely heavily on previous results. On the other hand, with only one exception, the results presented in the papers are a great source of information for the researchers currently involved in the field. Michelle Pal, Los Alamos National Laboratory
---
paper_title: Schemes for fault identification in communication networks
paper_content:
A single fault in a large communication network may result in a large number of fault indications (alarms) making the isolation of the primary source of failure a difficult task. The problem becomes worse in cases of multiple faults. In this paper we present an approach for modelling the problem of fault diagnosis. We propose a graph based network model that takes into account the dependencies among the different objects in the telecommunication environment and a novel approach to estimate the domain of an alarm. Based on that model, we design an algorithm for fault diagnosis and analyze its performance with respect to the accuracy of the fault hypotheses it provides. We also propose and analyze a fault diagnosis algorithm suitable for systems for which an independent failure assumption is valid. Finally, we examine the importance of the information of dependency between objects for the fault diagnosis process.
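As a rough illustration of the dependency-graph reasoning sketched in this abstract (not the paper's actual algorithm), the Python fragment below models managed objects as nodes of a directed dependency graph and approximates the domain of an alarm as the set of objects whose failure could explain it, i.e., everything reachable from the alarming object along dependency edges. The toy graph and object names are assumptions made purely for illustration.

    from collections import deque

    # depends_on[x] lists the objects that x directly depends on (toy topology).
    depends_on = {
        "web_service": ["app_server"],
        "app_server": ["db_server", "lan_switch"],
        "db_server": ["lan_switch"],
        "lan_switch": ["router"],
        "router": [],
    }

    def alarm_domain(alarming_object):
        """Objects whose failure could explain an alarm raised by alarming_object."""
        domain, queue = {alarming_object}, deque([alarming_object])
        while queue:
            for dep in depends_on.get(queue.popleft(), []):
                if dep not in domain:
                    domain.add(dep)
                    queue.append(dep)
        return domain

    # A single router fault falls inside the domains of many distinct alarms:
    print(alarm_domain("web_service"))
    print(alarm_domain("db_server"))

Overlapping domains of simultaneously observed alarms then become natural fault hypotheses, which is the intuition behind the diagnosis algorithm described above.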
---
paper_title: A conceptual framework for network management event correlation and filtering systems
paper_content:
Event correlation is a key functionality of a network management system that is used to determine the root cause of faults in a network, and to filter out redundant and spurious events. A number of event correlation systems have been proposed. The event correlation systems generally combine causal and temporal correlation models with the topology of a network. The power and robustness of the models used and the algorithms developed vary from system to system. However, in the absence of a simple, uniform, and precise presentation of the event-correlation problem, it is impossible to compare their relative power or even analyze them for their properties. In general, causal and temporal-based correlation models have not been rigorously presented or thoroughly investigated. In this paper we formalize the concepts of causal and temporal correlation using a single conceptual framework. We characterize various properties of the framework. We can characterize existing systems based on the formal properties of our framework, and we consider one system as an illustrative example.
---
paper_title: Real-time telecommunication network management: extending event correlation with temporal constraints
paper_content:
Event correlation is becoming one of the most central techniques in managing the high volume of event messages. Practically, no network management system can ignore network surveillance and control procedures which are based on event correlation. The majority of existing network management systems use relatively simple ad hoc additions to their software to perform alarm correlation. In these systems, alarm correlation is handled as an aggregation procedure over sets of alarms exhibiting similar attributes. In recent years, several more sophisticated alarm correlation models have been proposed. In this paper, we will expand our knowledge-based event correlation model to capture temporal constraints.
---
paper_title: Centralized vs Distributed Fault Localization
paper_content:
In this paper we compare the performance of fault localization schemes for communication networks. Our model assumes a number of management centers, each responsible for a logically autonomous part of the whole telecommunication network. We briefly present three different fault localization schemes, namely “Centralized”, “Decentralized” and “Distributed” fault localization, and we compare their performance with respect to the computational effort each requires and the accuracy of the solution that each provides.
---
paper_title: Multi-domain Diagnosis of End-to-End Service Failures in Hierarchically Routed Networks
paper_content:
This paper investigates an approach to improving the scalability and feasibility of probabilistic fault localization in communication systems by exploiting the domain semantics of computer networks. The proposed technique divides the computational effort and system knowledge among multiple, hierarchically organized managers. Each manager performs fault localization in the domain it manages and requires only the knowledge of its own domain. Since failures propagate among domains, domain managers cooperate with each other to find a consensus explanation of the observed disorder. We show through simulation that the proposed approach increases the effectiveness of probabilistic diagnosis and makes it feasible in networks of considerable size.
---
paper_title: Contract-driven creation and operation of virtual enterprises
paper_content:
This paper examines the support needed for dynamically creating and managing contract-driven virtual enterprises. Our approach to virtual enterprises views contracts as the central theme that runs throughout the enterprises' life cycle and touches upon all major aspects thereof. A Contract Framework integrates the concepts and entities necessary for the contract-centred support. A combination of Virtual Market technology and an advanced Matchmaking Engine (MME) facilitates the creation of service markets where matching business partners can find each other. The market facilitates the deferment of business partner selection and contract signing to the point when the need for the service arises. A set of pre-prepared Internal Enactment Specifications (IESs) provides the mapping of the contract to an organisational blueprint, specified in terms of the internal language, resources and infrastructure of each organisation. The blueprint provides a way of automating the configuration of the Contract Enactment Infrastructure (CEI) for the respective organisations. This complements the deferred selection of a business partner. Advanced CEI technology allows business processes to cross organisational boundaries while providing the consumer with a considerable degree of monitoring and control capability over the contracted service. The integration of our proposed framework and approach supports the creation and management of highly dynamic service markets with automated, fine-grained interaction between organisations, thereby fulfilling the flexibility and efficiency requirements of modern e-business. The approach was used to implement a number of example scenarios and the conclusions drawn from this experience are presented.
---
paper_title: Dynamic e-business: Trends in web services
paper_content:
In the last couple of years, the concept of a web service (WS) has emerged as an important paradigm for general application integration in the internet environment. More particularly, WS is viewed as an important vehicle for the creation of dynamic e-business applications and as a means for the J2EE and .NET worlds to come together. Several companies, including Microsoft, have been collaborating in proposing new WS standards. The World Wide Web Consortium has been the forum for many WS-related standardization activities. Many traditional concepts like business process management, security, directory services, routing and transactions are being extended for WS. This extended abstract traces some of the trends in the WS arena. After the TES2002 workshop is over, more information could be found in the presentation material at http://www.almaden.ibm.com/u/mohan/WebServices_TES2002_Slides.pdf
---
paper_title: Classification and Computation of Dependencies for Distributed Management
paper_content:
This paper addresses the role of dependency analysis in distributed management. The identification of dependencies becomes increasingly important in today's networked environments because applications and services rely on a variety of supporting services which might be outsourced to a service provider. However, service dependencies are not made explicit in today's systems, thus making the task of problem determination particularly difficult. Solving this problem requires the determination and computation of the dependencies between services and applications. A key contribution of the paper is a methodology for making IP-based services and applications manageable that have not been designed to include management instrumentation (which is the case today for almost every application and service). Unlike other approaches, it is not necessary to modify the application code. Instead our approach yields a technique that enumerates the characteristics and interdependencies of applications and services, thus permitting the derivation of appropriate management information.
---
paper_title: Large-scale fault isolation
paper_content:
Of the many distributed applications designed for the Internet, the successful ones are those that have paid careful attention to scale and robustness. These applications share several design principles. In this paper, we illustrate the application of these principles to common network monitoring tasks. Specifically, we describe and evaluate 1) a robust distributed topology discovery mechanism and 2) a mechanism for scalable fault isolation in multicast distribution trees. Our mechanisms reveal a different design methodology for network monitoring, one that carefully trades off monitoring fidelity (where necessary) for more graceful degradation in the presence of different kinds of network dynamics.
---
paper_title: Auto-Discovery Capabilities for Service Management: An ISP Case Study
paper_content:
Auto-discovery is one of the key technologies that enables management systems to be quickly customized to the environments that they are intended to manage. As Internet services have grown in complexity in recent years, it is no longer sufficient to monitor and manage these services in isolation. Instead, it is critical that management systems discover dependencies that exist among Internet services, and use this knowledge for correlation of measurement results, so as to determine the root-causes of problems. While most existing management systems have focused on discovery of hosts, servers, and network elements in isolation, in this paper we describe auto-discovery techniques that discover relationships among services. Since new Internet services and service elements are being deployed at a rapid pace, it is essential that the discovery methodologies be implemented in an extensible manner, so that new discovery capabilities can be incrementally added to the management system. In this paper, we present an extensible architecture for auto-discovery and describe a prototype implementation of this architecture and associated auto-discovery techniques. We also highlight experiences from applying these techniques to discover real-world ISP systems. Although described in the context of ISP systems, the concepts described in this paper are applicable for the discovery of services and inter-service relationships in enterprise systems as well.
---
paper_title: Gulfstream - a system for dynamic topology management in multi-domain server farms
paper_content:
This paper describes GulfStream, a scalable distributed software system designed to address the problem of managing the network topology in a multi-domain server farm. In particular, it addresses the following core problems: topology discovery and verification, and failure detection. Unlike most topology discovery and failure detection systems, which focus on the nodes in a cluster, GulfStream logically organizes the network adapters of the server farm into groups. Each group contains those adapters that can directly exchange messages. GulfStream dynamically establishes a hierarchy for reporting network topology and availability of network adapters. We describe a prototype implementation of GulfStream on a 55-node heterogeneous server farm interconnected using switched fast Ethernet.
---
paper_title: Topology discovery for large ethernet networks
paper_content:
Accurate network topology information is important for both network management and application performance prediction. Most topology discovery research has focused on wide-area networks and examined topology only at the IP router level, ignoring the need for LAN topology information. Recent work has demonstrated that bridged Ethernet topology can be determined using standard SNMP MIBs; however, these algorithms require each bridge to learn about all other bridges in the network. Our approach to Ethernet topology discovery can determine the connection between a pair of bridges that share forwarding entries for only three hosts. This minimal knowledge requirement significantly expands the size of the network that can be discovered. We have implemented the new algorithm, and it has accurately determined the topology of several different networks using a variety of hardware and network configurations. Our implementation requires access to only one endpoint to perform the queries needed for topology discovery.
---
paper_title: Topology discovery in heterogeneous IP networks
paper_content:
Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either: (a) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored; or (b) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of a topology discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology in time that is roughly quadratic in the number of network elements.
---
paper_title: Heuristics for Internet map discovery
paper_content:
Mercator is a program that uses hop-limited probes (the same primitive used in traceroute) to infer an Internet map. It uses informed random address probing to carefully explore the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhance the fidelity of the resulting map, and employs novel mechanisms for resolving aliases (interfaces belonging to the same router). This paper describes the design of these heuristics and our experiences with Mercator, and presents some preliminary analysis of the resulting Internet map.
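To make the probing primitive concrete, here is a minimal, hedged sketch of a hop-limited probe in the spirit of traceroute, the building block Mercator relies on. It uses the third-party Scapy library, a conventional traceroute UDP destination port, and a documentation-range placeholder target; it illustrates only the primitive, not Mercator's informed random address probing, source routing, or alias resolution.

    # Illustrative only: requires scapy and, typically, root privileges to send raw packets.
    from scapy.all import IP, UDP, ICMP, sr1

    def hop_limited_probe(destination, max_ttl=30):
        """Send probes with increasing TTL and record which router answers at each hop."""
        path = []
        for ttl in range(1, max_ttl + 1):
            reply = sr1(IP(dst=destination, ttl=ttl) / UDP(dport=33434),
                        timeout=2, verbose=0)
            if reply is None:
                path.append((ttl, None))             # no answer at this hop
                continue
            path.append((ttl, reply.src))            # address of the answering interface
            if reply.haslayer(ICMP) and reply[ICMP].type == 3:
                break                                 # port unreachable: destination reached
        return path

    if __name__ == "__main__":
        for hop, address in hop_limited_probe("192.0.2.1"):   # placeholder target address
            print(hop, address)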
---
paper_title: Managing Dynamic Service Dependencies
paper_content:
We anticipate that software development will be service-centric in the near future. Applications will be created from existing services that are distributed throughout a network. Sure enough, management of those components will be mandatory. While service management is usually service-specific, a few areas can be identified that can be addressed in a generic way. One of these areas is the management of dependencies between services. Despite its genericity, this problem is currently mostly addressed in a service-specific way. This paper details the need for a generic dependency management approach, identifies the key properties of dependencies as well as the requirements for dependency management schemes, describes the model derived from the requirements, and presents an implementation using the Jini connection technology.
---
paper_title: An active approach to characterizing dynamic dependencies for problem determination in a distributed environment
paper_content:
We describe a methodology for identifying and characterizing dynamic dependencies between system components in distributed application environments such as e-commerce systems. The methodology relies on active perturbation of the system to identify dependencies and the use of statistical modeling to compute dependency strengths. Unlike more traditional passive techniques, our active approach requires little initial knowledge of the implementation details of the system and has the potential to provide greater coverage and more direct evidence of causality for the dependencies it identifies. We experimentally demonstrate the efficacy of our approach by applying it to a prototypical e-commerce system based on the TPC-W Web commerce benchmark, for which the active approach correctly identifies and characterizes 41 of 42 true dependencies out of a potential space of 140 dependencies. Finally, we consider how the dependencies computed by our approach can be used to simplify and guide the task of root-cause analysis, an important part of problem determination.
---
paper_title: Dependency Analysis in Distributed Systems using Fault Injection: Application to Problem Determination in an e-commerce Environment
paper_content:
Distributed networked applications that are being deployed in enterprise settings increasingly rely on a large number of heterogeneous hardware and software components for providing end-to-end services. In such settings, the issue of problem diagnosis becomes vitally important, in order to minimize system outages and improve system availability. This motivates interest in dependency characterization among the different components in distributed application environments. A promising approach for obtaining dynamic dependency information is the Active Dependency Discovery technique in which a dependency graph of e-commerce transactions on hardware and software components in the system is built by individually “perturbing” the system components during a testing phase and collecting measurements corresponding to the external behavior of the system. In this paper, we propose using fault injection as the perturbation tool for dynamic dependency discovery and problem determination. We describe a method for characterizing dependencies of transactions on the system resources in a typical e-commerce environment, and show how it can aid in problem diagnosis. The method is applied to an application server middleware platform, running end-user activity composed of TPC-W transactions. Representative fault models for such an environment, which can be used to construct the fault injection campaign, are also presented.
---
| Title: A Survey of Fault Localization Techniques in Computer Networks
Section 1: Introduction
Description 1: Introduce the basic concepts of fault management in network systems, emphasizing the importance of fault diagnosis.
Section 2: Expert-system techniques for fault localization
Description 2: Discuss expert systems in fault localization, including rule-based reasoning, model-based systems, and their advantages and limitations.
Section 3: Model traversing techniques
Description 3: Explain model traversing techniques and how they utilize formal system representations for fault localization.
Section 4: Graph-theoretic techniques
Description 4: Describe graph-theoretic techniques, including dependency and causality graphs, and their application in fault localization.
Section 5: Divide and conquer algorithm
Description 5: Detail the divide and conquer algorithm, its methodology, and examples of its application in fault localization.
Section 6: Context-free grammar
Description 6: Present how context-free grammar can be used to model fault dependencies in network systems, along with relevant algorithms.
Section 7: Codebook technique
Description 7: Introduce the codebook technique and describe how fault propagation patterns are represented and utilized for fault localization.
Section 8: Belief-network approach
Description 8: Explain the belief-network approach for fault localization and discuss its algorithms and application scenarios.
Section 9: Bipartite causality graphs
Description 9: Discuss bipartite causality graphs and their usage in fault localization, including relevant techniques and examples.
Section 10: Open research problems
Description 10: Outline the current open research questions and challenges in the field of fault localization, highlighting areas needing further development.
Section 11: Conclusions
Description 11: Summarize the survey findings, highlighting the complications in fault localization and the need for continuous research in this domain. |
A survey of offline algorithms for energy minimization under deadline constraints | 7 | ---
paper_title: Energy-efficient algorithms
paper_content:
Algorithmic solutions can help reduce energy consumption in computing environs.
---
paper_title: Reducing Peak Power Consumption in Multi-Core Systems without Violating Real-Time Constraints
paper_content:
The potential of multi-core chips for high performance and reliability at low cost has made them ideal computing platforms for embedded real-time systems. As a result, power management of a multi-core chip has become an important issue in the design of embedded real-time systems. Most existing approaches have been designed to regulate the behavior of average power consumption, such as minimizing the total energy consumption or the chip temperature. However, little attention has been paid to the worst-case behavior of instantaneous power consumption on a chip, called chip-level peak power consumption, an important design parameter that determines the cost and/or size of chip design/packaging and the underlying power supply. We address this problem by reducing the chip-level peak power consumption at design time without violating any real-time constraints. We achieve this by carefully scheduling real-time tasks, without relying on any additional hardware implementation for power management, such as dynamic voltage and frequency scaling. Specifically, we propose a new scheduling algorithm FPΘ that restricts the concurrent execution of tasks assigned on different cores, and perform its schedulability analysis. Using this analysis, we develop a method that finds a set of concurrent executable tasks, such that the design-time chip-level peak power consumption is minimized and all timing requirements are met. We demonstrate via simulation that the proposed method not only keeps the design-time chip-level peak power consumption as low as the theoretical lower bound for trivial cases, but also reduces the peak power consumption for non-trivial cases by up to 12.9 percent compared to the case of no restriction on concurrent task execution.
---
paper_title: Survey of Energy-Cognizant Scheduling Techniques
paper_content:
Execution time is no longer the only metric by which computational systems are judged. In fact, explicitly sacrificing raw performance in exchange for energy savings is becoming a common trend in environments ranging from large server farms attempting to minimize cooling costs to mobile devices trying to prolong battery life. Hardware designers, well aware of these trends, include capabilities like DVFS (to throttle core frequency) into almost all modern systems. However, hardware capabilities on their own are insufficient and must be paired with other logic to decide if, when, and by how much to apply energy-minimizing techniques while still meeting performance goals. One obvious choice is to place this logic into the OS scheduler. This choice is particularly attractive due to the relative simplicity, low cost, and low risk associated with modifying only the scheduler part of the OS. Herein we survey the vast field of research on energy-cognizant schedulers. We discuss scheduling techniques to perform energy-efficient computation. We further explore how the energy-cognizant scheduler's role has been extended beyond simple energy minimization to also include related issues like the avoidance of negative thermal effects as well as addressing asymmetric multicore architectures.
---
paper_title: Peak power reduction and workload balancing by space-time multiplexing based demand-supply matching for 3D thousand-core microprocessor
paper_content:
Space-time multiplexing is utilized for demand-supply matching between many-core microprocessors and power converters. Adaptive clustering is developed to classify cores by similar power level in space and similar power behavior in time. In each power management cycle, the minimum number of power converters is allocated for space-time multiplexed matching, which is physically enabled by 3D through-silicon-vias. Moreover, demand-response based task adjustment is applied to reduce peak power and to balance workload. The proposed power management system is verified by system models with physical design parameters and benchmarked power traces, which show a 38.10% peak power reduction and a 2.60x improvement in workload balance.
---
paper_title: Power-aware scheduling for makespan and flow
paper_content:
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs et al. to give an arbitrarily good approximation for scheduling equal-work jobs on a multiprocessor.
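In the simplest case, where all jobs are available at time zero, the uniprocessor makespan result rests on a convexity argument worth writing out explicitly (our notation, under the standard model in which energy is the time integral of the power P(s(t)) drawn at speed s(t)): for any schedule that completes total work W by time M, Jensen's inequality gives

    E = \int_0^{M} P\big(s(t)\big)\,dt
      = M \cdot \frac{1}{M}\int_0^{M} P\big(s(t)\big)\,dt
      \;\ge\; M \cdot P\!\left(\frac{1}{M}\int_0^{M} s(t)\,dt\right)
      = M \cdot P\!\left(\frac{W}{M}\right),

so running at the constant speed W/M is energy-optimal for that makespan, and the energy-makespan trade-off curve is traced out by varying a single constant speed.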
---
paper_title: Accurate Modeling of the Delay and Energy Overhead of Dynamic Voltage and Frequency Scaling in Modern Microprocessors
paper_content:
Dynamic voltage and frequency scaling (DVFS) has been studied for well over a decade. Nevertheless, existing DVFS transition overhead models suffer from significant inaccuracies; for example, by incorrectly accounting for the effect of DC–DC converters, frequency synthesizers, voltage, and frequency change policies on energy losses incurred during mode transitions. Incorrect and/or inaccurate DVFS transition overhead models prevent one from determining the precise break-even time and thus forfeit some of the energy saving that is ideally achievable. This paper introduces accurate DVFS transition overhead models for both energy consumption and delay. In particular, we redefine the DVFS transition overhead including the underclocking-related losses in a DVFS-enabled microprocessor, additional inductor IR losses, and power losses due to discontinuous-mode DC–DC conversion. We report the transition overheads for a desktop, a mobile and a low-power representative processor. We also present DVFS transition overhead macromodel for use by high-level DVFS schedulers.
---
paper_title: A time series-based approach for power management in mobile processors and disks
paper_content:
In this paper, we present a time series-based approach for managing power in mobile processors and disks that see multimedia workloads. Since multimedia applications impose soft real-time constraints, a key goal of our approach is to reduce energy consumption of multimedia applications without degrading performance. We present simple statistical techniques based on time series to dynamically compute the processor and I/O demands of multimedia applications and present techniques to dynamically vary the voltage settings and rotational speeds of mobile processors and disks, respectively. We implement our approaches in the Linux kernel running on a Sony Transmeta laptop and in a trace-driven simulator. Our experiments show that, compared to the traditional system-wide CPU voltage scaling approaches, our technique can achieve up to a 38.6% energy saving while delivering good performance to applications. Simulation results for our disk power management technique show a 20.3% reduction in energy consumption without any significant performance loss when compared to a traditional disk power management scheme.
---
paper_title: Using dynamic voltage scaling for energy-efficient flash-based storage devices
paper_content:
NAND flash memory is commonly known as a power-efficient storage medium. Because of the increasing complexity of flash-based storage devices, however, it is more difficult to achieve good power-efficiency without considering an energy-efficient storage device design. In this paper, we investigate the potential benefit of dynamic voltage/frequency scaling (DVFS) on the energy-efficiency of flash-based storage devices. We first develop a performance/power model for a flash device by using an FPGA-based flash device platform. We then propose a simple DVFS heuristic algorithm that exploits workload fluctuations of a flash device to achieve a significant reduction in energy consumption without performance degradation. Experimental results show that a flash device with DVFS can reduce energy consumption by up to 20%-30%.
---
paper_title: Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey
paper_content:
The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography.
---
paper_title: Energy-Efficient Scheduling for Real-Time Systems on Dynamic Voltage Scaling (DVS) Platforms
paper_content:
Energy-efficient designs have played important roles in hardware and software implementations for a decade. With the advanced technology of VLSI circuit designs, energy-efficiency can be achieved by adopting the dynamic voltage scaling (DVS) technique. In this paper, we survey studies of energy-efficient scheduling in real-time systems on DVS platforms, covering both theoretical and practical issues.
---
paper_title: Algorithmic problems in power management
paper_content:
We survey recent research that has appeared in the theoretical computer science literature on algorithmic problems related to power management. We will try to highlight some open problems that we feel are interesting. This survey concentrates on the authors' own lines of research: managing power using the techniques of speed scaling and power-down, which are also currently the dominant techniques in practice.
---
paper_title: Understanding the Thermal Implications of Multi-Core Architectures
paper_content:
Multicore architectures are becoming the main design paradigm for current and future processors. The main reason is that multicore designs provide an effective way of overcoming instruction-level parallelism (ILP) limitations by exploiting thread-level parallelism (TLP). In addition, it is a power and complexity-effective way of taking advantage of the huge number of transistors that can be integrated on a chip. On the other hand, today's higher than ever power densities have made temperature one of the main limitations of microprocessor evolution. Thermal management in multicore architectures is a fairly new area. Some works have addressed dynamic thermal management in bi/quad-core architectures. This work provides insight and explores different alternatives for thermal management in multicore architectures with 16 cores. Schemes employing both energy reduction and activity migration are explored and improvements for thread migration schemes are proposed.
---
paper_title: An optimal speed control scheme supported by media servers for low-power multimedia applications
paper_content:
In this paper, we present a new concept of dynamic voltage scaling (DVS) for low-power multimedia decoding in battery-powered mobile devices. Most existing DVS techniques are suboptimal in achieving energy efficiency while providing the guaranteed playback quality of service, which is mainly due to the inherent limitations of client-only approaches. To address this problem, in this paper, we investigate the possibility of media server supported DVS techniques with smoothing mechanisms. Towards this new direction, we propose a generic offline bitstream analysis framework and an optimal speed control algorithm which achieves the maximal energy savings among all feasible speed profiles for the given buffers. The proposed scheme enables us to compute the buffer sizes of feasibility condition, which are the theoretical lower bound of buffer size requirement for a given media clip. More importantly, our scheme facilitates practical applications from four aspects. First, it does not require feedback information on clients’ configuration. This renders our scheme particularly suitable for broadcast or multicast applications. Second, the speed profile based on buffer sizes of feasibility condition can provide satisfactory energy efficiency. Third, the required buffer sizes are so small that they can be met by most mobile devices. Fourth, additional side information (i.e., speed profile) of the proposed scheme is negligible compared to the size of media content. These properties solve the diversity issue and feasibility issue of media server supported DVS schemes. Experimental results show that, in comparison with the representative existing techniques, our scheme improves the performance of DVS significantly.
---
paper_title: Scheduling for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming more important, especially for battery operated systems. Displays, disks, and cpus, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the cpu. We introduce a new metric for cpu energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994]. What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance.
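As a concrete (hedged) rendering of the interval-based idea studied here, the sketch below implements a PAST-style heuristic: the speed chosen for the next interval depends on how busy the previous interval was. The available speed levels, thresholds, and utilization trace are illustrative assumptions rather than the settings evaluated in the paper.

    def past_heuristic(utilizations, speeds=(0.25, 0.5, 0.75, 1.0),
                       raise_at=0.7, lower_at=0.5):
        """Choose a speed for each interval from the utilization seen in the previous one."""
        level, chosen = len(speeds) - 1, []          # start conservatively at full speed
        for busy_fraction in utilizations:
            chosen.append(speeds[level])
            if busy_fraction > raise_at and level < len(speeds) - 1:
                level += 1                           # fell behind: speed up
            elif busy_fraction < lower_at and level > 0:
                level -= 1                           # mostly idle: slow down
        return chosen

    # Per-interval utilization observed in the past (illustrative trace):
    print(past_heuristic([0.9, 0.8, 0.3, 0.2, 0.6, 0.95]))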
---
paper_title: Optimal voltage allocation techniques for dynamically variable voltage processors
paper_content:
This paper presents important new results of a study on the problem of task scheduling and voltage allocation in dynamically variable voltage processors, the purpose of which was minimization of processor energy consumption. The contributions are twofold: (1) For given multiple discrete supply voltages and tasks with arbitrary arrival-time/deadline constraints, we propose a voltage allocation technique that produces a feasible task schedule with optimal processor energy consumption. (2) We then extend the problem to include the case in which tasks have nonuniform load (i.e., switched) capacitances and solve it optimally. The proposed technique, called Alloc-vt, in (1) is based on the prior results in [Yao, Demers and Shenker. 1995. In Proceedings of IEEE Symposium on Foundations of Computer Science. 374--382] (which is optimal for dynamically continuously variable voltages, but not for discrete ones) and [Ishihara and Yasuura. 1998. In Proceedings of International Symposium on Low Power Electronics and Design. 197--202] (which is optimal for a single task, but not for multiple tasks), whereas the proposed technique, called Alloc-vtcap, in (2) is based on an efficient linear programming (LP) formulation. Both techniques solve the allocation problems optimally in polynomial time.
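The single-task building block cited above (the Ishihara and Yasuura result) has a simple closed form: when the ideal continuous speed, work divided by available time, falls between two of the available discrete speeds, an energy-optimal schedule uses only the two neighbouring speeds, with the time split so the work finishes exactly at the deadline. The sketch below computes that split; the speed set and task parameters are illustrative numbers, not values from the paper.

    def two_speed_split(work, deadline, speeds):
        """Time to spend at the discrete speeds adjacent to the ideal speed work/deadline."""
        speeds = sorted(speeds)
        ideal = work / deadline
        if ideal <= speeds[0]:
            return {speeds[0]: work / speeds[0]}      # slowest available speed already suffices
        if ideal > speeds[-1]:
            raise ValueError("task infeasible even at the highest available speed")
        low = max(s for s in speeds if s <= ideal)
        high = min(s for s in speeds if s >= ideal)
        if low == high:
            return {low: deadline}
        # Solve t_low + t_high = deadline and low*t_low + high*t_high = work.
        t_high = (work - low * deadline) / (high - low)
        return {low: deadline - t_high, high: t_high}

    # 1e9 cycles due in 1.25 s with 0.4, 0.6 and 1.0 GHz available (illustrative numbers):
    print(two_speed_split(1.0e9, 1.25, [0.4e9, 0.6e9, 1.0e9]))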
---
paper_title: A scheduling model for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p, where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.
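The optimal offline algorithm of this paper, commonly referred to as YDS after its authors, admits a compact description: repeatedly find the critical interval, the interval [a, b] maximizing the total work of jobs whose arrival times and deadlines both lie inside it divided by its length; run exactly those jobs at that density (in EDF order) during the interval; then remove them, collapse the interval, and recurse. The sketch below computes the resulting speed per job for a toy instance; the job data are illustrative and the implementation favours clarity over efficiency.

    def yds_speeds(jobs):
        """jobs: list of (release, deadline, work). Returns {job index: speed} assigned by
        the YDS construction, which is minimum-energy for any convex power function."""
        jobs = {i: list(j) for i, j in enumerate(jobs)}
        speeds = {}
        while jobs:
            points = sorted({t for r, d, _ in jobs.values() for t in (r, d)})
            best = None                              # (density, a, b, jobs fully inside [a, b])
            for a in points:
                for b in points:
                    if b <= a:
                        continue
                    inside = [i for i, (r, d, _) in jobs.items() if r >= a and d <= b]
                    if not inside:
                        continue
                    density = sum(jobs[i][2] for i in inside) / (b - a)
                    if best is None or density > best[0]:
                        best = (density, a, b, inside)
            density, a, b, critical = best
            for i in critical:                       # jobs of the critical interval run at its density
                speeds[i] = density
                del jobs[i]
            for i, (r, d, _) in jobs.items():        # collapse [a, b] for the remaining jobs
                jobs[i][0] = a if a <= r <= b else (r - (b - a) if r > b else r)
                jobs[i][1] = a if a <= d <= b else (d - (b - a) if d > b else d)
        return speeds

    # Three jobs given as (release, deadline, work); illustrative numbers.
    print(yds_speeds([(0, 4, 2), (1, 3, 4), (2, 8, 2)]))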
---
paper_title: Voltage scheduling problem for dynamically variable voltage processors
paper_content:
This paper presents a model of dynamically variable voltage processors and basic theorems for power-delay optimization. A static voltage scheduling problem is also proposed and formulated as an integer linear programming (ILP) problem. In the problem, we assume that a core processor can vary its supply voltage dynamically, but can use only a single voltage level at a time. For a given application program and a dynamically variable voltage processor, a voltage scheduling which minimizes energy consumption under an execution time constraint can be found.
---
paper_title: TL-plane-based multi-core energy-efficient real-time scheduling algorithm for sporadic tasks
paper_content:
As the energy consumption of multi-core systems becomes increasingly prominent, it's a challenge to design an energy-efficient real-time scheduling algorithm in multi-core systems for reducing the system energy consumption while guaranteeing the feasibility of real-time tasks. In this paper, we focus on multi-core processors, with the global Dynamic Voltage Frequency Scaling (DVFS) and Dynamic Power Management (DPM) technologies. In this setting, we propose an energy-efficient real-time scheduling algorithm, the Time Local remaining execution plane based Dynamic Voltage Frequency Scaling (TL-DVFS). TL-DVFS utilizes the concept of Time Local remaining execution (TL) plane to dynamically scale the voltage and frequency of a processor at the initial time of each TL plane as well as at the release time of a sporadic task in each TL plane. Consequently, TL-DVFS can obtain a reasonable tradeoff between the real-time constraint and the energy-saving while realizing the optimal feasibility of sporadic tasks. Mathematical analysis and extensive simulations demonstrate that TL-DVFS always saves more energy than existing algorithms, especially in the case of high workloads, and guarantees the optimal feasibility of sporadic tasks at the same time.
---
paper_title: Energy-Aware Partitioned Fixed-Priority Scheduling for Chip Multi-processors
paper_content:
Energy management is becoming an increasingly important problem in application domains ranging from embedded devices to data centers. In many such systems, multi-core processors are projected as a promising technology to achieve improved performance with a lower power envelope. Managing the application power consumption under timing constraints poses significant challenges in these emerging platforms. In this paper, we study the energy-efficient scheduling of periodic real time tasks with implicit deadlines on chip multi-core processors (CMPs). We specifically consider processors with a single voltage and clock frequency domain, such as the state-of-the-art embedded multi-core NVIDIA Tegra 2 processor and enterprise-class processors such as Intel's Itanium 2, i5, i7 and IBM's Power 6 and Power 7 series. The major contributions of this work are (i) we prove that Worst-Fit-Decreasing (WFD) task partitioning when Rate-Monotonic Scheduling (RMS) is used has an approximation ratio of 1.71 for the problem of minimizing the schedulable operating frequency with partitioned fixed-priority scheduling, (ii) we illustrate the major shortcoming of WFD with RMS resulting from not considering task periods during allocation, and (iii) we propose a Single-clock domain multi-processor Frequency Assignment Algorithm (SFAA) that determines a globally energy-efficient frequency while including task period relationships. Our evaluation results show that SFAA provides significant energy gains when compared to WFD. In fact SFAA is shown to save up to 55% more power compared to WFD for an octa-core processor.
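For reference, the Worst-Fit-Decreasing (WFD) partitioning analysed above is only a few lines: sort the tasks by utilization in decreasing order and place each one on the core whose current total utilization is smallest, which tends to even out the per-core load (and hence the frequency each core needs). The sketch below is a generic WFD bin-balancing routine with made-up utilizations; the paper's SFAA frequency assignment is not reproduced here.

    def worst_fit_decreasing(utilizations, num_cores):
        """Assign task utilizations to cores, always picking the least-loaded core."""
        assignment = [[] for _ in range(num_cores)]
        load = [0.0] * num_cores
        for u in sorted(utilizations, reverse=True):
            target = min(range(num_cores), key=lambda core: load[core])
            assignment[target].append(u)
            load[target] += u
        return assignment, load

    tasks = [0.45, 0.30, 0.30, 0.25, 0.20, 0.10]      # illustrative task utilizations
    per_core_tasks, per_core_load = worst_fit_decreasing(tasks, 2)
    print(per_core_tasks, per_core_load)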
---
paper_title: On the Interplay of Voltage/Frequency Scaling and Device Power Management for Frame-Based Real-Time Embedded Applications
paper_content:
Voltage/Frequency Scaling (VFS) and Device Power Management (DPM) are two popular techniques commonly employed to save energy in real-time embedded systems. VFS policies aim at reducing the CPU energy, while DPM-based solutions involve putting the system components (e.g., memory or I/O devices) to low-power/sleep states at runtime, when sufficiently long idle intervals can be predicted. Despite numerous research papers that tackled the energy minimization problem using VFS or DPM separately, the interactions of these two popular techniques are not yet well understood. In this paper, we undertake an exact analysis of the problem for a real-time embedded application running on a VFS-enabled CPU and using multiple devices. Specifically, by adopting a generalized system-level energy model, we characterize the variations in different components of the system energy as a function of the CPU processing frequency. Then, we propose a provably optimal and efficient algorithm to determine the optimal CPU frequency as well as device state transition decisions to minimize the system-level energy. We also extend our solution to deal with workload variability. The experimental evaluations confirm that substantial energy savings can be obtained through our solution that combines VFS and DPM optimally under the given task and energy models.
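A toy version of the system-level trade-off analysed above helps explain why the energy-optimal frequency generally lies strictly above the lowest one: slowing the CPU stretches the execution time, so frequency-independent device and static power is drawn for longer. The power model and all numbers below are illustrative assumptions, not the paper's generalized energy model.

    def system_energy(freq, cycles=2.0e9, k_dyn=2.0e-27, p_static=0.3, p_device=0.8):
        """Energy (J) to run `cycles` at `freq` (Hz): cubic dynamic CPU power plus
        constant static and device power drawn for the whole execution."""
        exec_time = cycles / freq
        p_cpu = k_dyn * freq ** 3 + p_static
        return (p_cpu + p_device) * exec_time

    frequencies = [0.4e9, 0.6e9, 0.8e9, 1.0e9, 1.2e9]
    for f in frequencies:
        print(f"{f / 1e9:.1f} GHz -> {system_energy(f):.2f} J")
    print("energy-minimal frequency:", min(frequencies, key=system_energy) / 1e9, "GHz")

With these particular constants the minimum falls at an intermediate frequency, which mirrors the observation that pure CPU-centric slowdown can increase system-level energy.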
---
paper_title: A New Energy-Aware Dynamic Task Set Partitioning Algorithm for Soft and Hard Embedded Real-Time Systems
paper_content:
Power consumption is a major design concern in current embedded systems. To deal with consumption, many systems apply dynamic voltage scaling (DVS) techniques which dynamically change the system speed depending on the workload characteristics. DVS costs in a multicore system can be reduced by sharing the same DVS regulator among the cores. In this context, to handle energy efficiently, the workload must be properly balanced among the cores. This paper proposes a new heuristic algorithm to balance the workload in an embedded system with a coarse-grain multithreaded multicore processor. This heuristic is aimed at improving the overlapping time between the memory and the processor while keeping balanced core utilizations. To this end, the heuristic dynamically drives the frequency/voltage level to guarantee deadline fulfillment of the hard real-time tasks as well as to achieve a good trade-off between deadline losses and energy savings of the soft real-time tasks. The proposed technique has been evaluated on a model of a contemporary high-end ARM embedded microprocessor executing a set of standard embedded benchmarks. Energy savings depend on the range of frequency/voltage levels that the DVS regulator implements. Experimental results show that with the proposed heuristic, when working with hard real-time tasks, the energy consumption is about 33% the energy dissipated by a system without DVS regulator and balancing heuristic. Moreover, when soft real-time tasks are also considered, the normalized consumption presents values ranging in between 8 and 70% depending on the scheduler aggressiveness.
---
paper_title: Optimal DPM and DVFS for frame-based real-time systems
paper_content:
Dynamic Power Management (DPM) and Dynamic Voltage and Frequency Scaling (DVFS) are popular techniques for reducing energy consumption. Algorithms for optimal DVFS exist, but optimal DPM and the optimal combination of DVFS and DPM are not yet solved. In this article we use well-established models of DPM and DVFS for frame-based systems. We show that it is not sufficient—as some authors argue—to consider only individual invocations of a task. We define a schedule that also takes interactions between invocations into account and prove—in a theoretical fashion—that this schedule is optimal.
---
paper_title: Optimal Power-Down Strategies
paper_content:
We consider the problem of selecting threshold times to transition a device to low-power sleep states during an idle period. The two-state case, in which there is a single active and a single sleep state, is a continuous version of the ski-rental problem. We consider a generalized version in which there is more than one sleep state, each with its own power-consumption rate and transition costs. We give an algorithm that, given a system, produces a deterministic strategy whose competitive ratio is arbitrarily close to optimal. We also give an algorithm to produce the optimal online strategy given a system and a probability distribution that generates the length of the idle period. We also give a simple algorithm that achieves a competitive ratio of $3 + 2\sqrt{2} \approx 5.828$ for any system.
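For the two-state case, the classical threshold (ski-rental) strategy is easy to state: compute the break-even time at which the extra energy of staying awake equals the cost of one sleep/wake transition, stay awake for that long, then power down; against an adversary that knows the idle-period length this is at most 2-competitive. The sketch below compares that online strategy with the offline optimum for a single idle period; the power and transition numbers are illustrative.

    def two_state_costs(idle_time, p_active=2.0, p_sleep=0.1, e_transition=5.0):
        """Energy of the break-even online strategy vs. the offline optimum (one idle period)."""
        beta = e_transition / (p_active - p_sleep)            # break-even threshold
        if idle_time <= beta:
            online = p_active * idle_time                      # never went to sleep
        else:
            online = p_active * beta + p_sleep * (idle_time - beta) + e_transition
        offline = min(p_active * idle_time,                    # stay awake the whole time
                      p_sleep * idle_time + e_transition)      # or sleep immediately
        return beta, online, offline

    for idle in (1.0, 3.0, 10.0):
        beta, online, offline = two_state_costs(idle)
        print(f"idle={idle:>4}s  threshold={beta:.2f}s  online={online:.2f}J  offline={offline:.2f}J")

The multi-state strategies of the paper generalize this idea by choosing a whole sequence of transition thresholds, one per sleep state.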
---
paper_title: Algorithms for power savings
paper_content:
This article examines two different mechanisms for saving power in battery-operated embedded systems. The first strategy is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing, and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of 2 of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.
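One detail of this combined setting is worth making explicit: once a sleep state is available, it never pays to run below the critical speed that minimizes energy per unit of work, P(s)/s, because the work can instead be done faster and the saved time spent asleep. For the commonly used model P(s) = s^α + β (dynamic plus static power) the critical speed has a closed form, computed below with illustrative parameter values.

    def critical_speed(alpha, beta):
        """Speed minimizing P(s)/s for P(s) = s**alpha + beta (energy per unit of work)."""
        return (beta / (alpha - 1)) ** (1.0 / alpha)

    def energy_per_work(s, alpha, beta):
        return (s ** alpha + beta) / s

    alpha, beta = 3.0, 0.5
    s_crit = critical_speed(alpha, beta)
    print(f"critical speed: {s_crit:.3f}")
    for s in (0.5 * s_crit, s_crit, 2.0 * s_crit):
        print(f"P(s)/s at speed {s:.3f}: {energy_per_work(s, alpha, beta):.3f}")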
---
paper_title: From Preemptive to Non-preemptive Speed-Scaling Scheduling
paper_content:
We are given a set of jobs, each one specified by its release date, its deadline and its processing volume (work), and a single (or a set of) speed-scalable processor(s). We adopt the standard model in speed-scaling in which if a processor runs at speed s then the energy consumption is s^α units of energy per time unit, where α > 1. Our goal is to find a schedule respecting the release dates and the deadlines of the jobs so that the total energy consumption is minimized. While most previous works have studied the preemptive case of the problem, where a job may be interrupted and resumed later, we focus on the non-preemptive case where once a job starts its execution, it has to continue until its completion without any interruption. As the preemptive case is known to be polynomially solvable for both the single-processor and the multiprocessor case, we explore the idea of transforming an optimal preemptive schedule to a non-preemptive one. We prove that the preemptive optimal solution does not preserve enough of the structure of the non-preemptive optimal solution, and more precisely that the ratio between the energy consumption of an optimal non-preemptive schedule and the energy consumption of an optimal preemptive schedule can be very large even for the single-processor case. Then, we focus on some interesting families of instances: (i) equal-work jobs on a single-processor, and (ii) agreeable instances in the multiprocessor case. In both cases, we propose constant factor approximation algorithms. In the latter case, our algorithm improves the best known algorithm of the literature. Finally, we propose a (non-constant factor) approximation algorithm for general instances in the multiprocessor case.
---
paper_title: Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey
paper_content:
The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography.
---
paper_title: Algorithms for power savings
paper_content:
This article examines two different mechanisms for saving power in battery-operated embedded systems. The first strategy is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing, and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of 2 of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.
---
paper_title: An optimal speed control scheme supported by media servers for low-power multimedia applications
paper_content:
In this paper, we present a new concept of dynamic voltage scaling (DVS) for low-power multimedia decoding in battery-powered mobile devices. Most existing DVS techniques are suboptimal in achieving energy efficiency while providing the guaranteed playback quality of service, which is mainly due to the inherent limitations of client-only approaches. To address this problem, in this paper, we investigate the possibility of media server supported DVS techniques with smoothing mechanisms. Towards this new direction, we propose a generic offline bitstream analysis framework and an optimal speed control algorithm which achieves the maximal energy savings among all feasible speed profiles for the given buffers. The proposed scheme enables us to compute the buffer sizes of feasibility condition, which are the theoretical lower bound of buffer size requirement for a given media clip. More importantly, our scheme facilitates practical applications from four aspects. First, it does not require feedback information on clients’ configuration. This renders our scheme particularly suitable for broadcast or multicast applications. Second, the speed profile based on buffer sizes of feasibility condition can provide satisfactory energy efficiency. Third, the required buffer sizes are so small that they can be met by most mobile devices. Fourth, additional side information (i.e., speed profile) of the proposed scheme is negligible compared to the size of media content. These properties solve the diversity issue and feasibility issue of media server supported DVS schemes. Experimental results show that, in comparison with the representative existing techniques, our scheme improves the performance of DVS significantly.
---
paper_title: A scheduling model for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.
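A compact sketch of the Average Rate (AVR) heuristic mentioned above: each job contributes its density (work divided by the length of its release-to-deadline window) to the processor speed at every instant inside that window. This is a simplified illustration of the speed rule only (pending work would additionally be ordered, e.g., by earliest deadline); the instance is invented.

    # Simplified sketch of the Average Rate (AVR) speed policy; jobs are (release, deadline, work).
    def avr_speed(jobs, t):
        """Speed at time t: sum of densities of all jobs whose window contains t."""
        return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

    jobs = [(0.0, 4.0, 2.0), (1.0, 3.0, 3.0)]   # hypothetical instance
    for t in (0.5, 2.0, 3.5):
        print(t, avr_speed(jobs, t))             # densities 0.5 and 1.5 overlap on [1, 3)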
---
paper_title: Speed Scaling with an Arbitrary Power Function
paper_content:
This article initiates a theoretical investigation into online scheduling problems with speed scaling where the allowable speeds may be discrete, and the power function may be arbitrary, and develops algorithmic analysis techniques for this setting. We show that a natural algorithm, which uses Shortest Remaining Processing Time for scheduling and sets the power to be one more than the number of unfinished jobs, is 3-competitive for the objective of total flow time plus energy. We also show that another natural algorithm, which uses Highest Density First for scheduling and sets the power to be the fractional weight of the unfinished jobs, is a 2-competitive algorithm for the objective of fractional weighted flow time plus energy.
---
paper_title: When Discreteness Meets Continuity: Energy-Optimal DVS Scheduling Revisited ∗
paper_content:
The energy-optimal DVS scheduling problem seeks to create a frequency-voltage schedule for the CPU that can achieve energy minimization with tolerable performance loss. Prior solutions to the problem assume that the CPU can run across a continuous range of frequencies and voltages, but today’s DVS-enabled microprocessors can only support a discrete set. As a result, the energy-optimal results in the continuous case may no longer be valid in the discrete case. This paper bridges the two cases by showing that the optimality can be retained through emulation. The result is also applicable to systems that consider leakage and/or system energy usage.
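The emulation idea referenced in this abstract can be illustrated as follows: an ideal continuous speed is simulated by time-sharing the two neighbouring available discrete speeds so that the same work is done in the same time. A hedged sketch of that standard two-speed interpolation, with a hypothetical frequency set:

    # Sketch: emulate an ideal continuous speed with the two neighbouring discrete levels.
    # Assumes target_speed <= max(levels); levels and the instance are illustrative.
    import bisect

    def emulate(target_speed, interval_length, levels):
        """Return [(duration, level), ...] doing target_speed*interval_length work in interval_length time."""
        levels = sorted(levels)
        if target_speed <= levels[0]:
            return [(interval_length, levels[0])]        # round up to the slowest feasible level
        i = bisect.bisect_left(levels, target_speed)
        lo, hi = levels[i - 1], levels[i]
        t_hi = interval_length * (target_speed - lo) / (hi - lo)  # time share at the higher level
        return [(t_hi, hi), (interval_length - t_hi, lo)]

    print(emulate(1.3, 2.0, levels=[0.5, 1.0, 1.5, 2.0]))  # 1.2 time units at 1.5 plus 0.8 at 1.0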
---
paper_title: Leakage aware dynamic voltage scaling for real-time embedded systems
paper_content:
A five-fold increase in leakage current is predicted with each technology generation. While Dynamic Voltage Scaling (DVS) is known to reduce dynamic power consumption, it also causes increased leakage energy drain by lengthening the interval over which a computation is carried out. Therefore, for minimization of the total energy, one needs to determine an operating point, called the critical speed. We compute processor slowdown factors based on the critical speed for energy minimization. Procrastination scheduling attempts to maximize the duration of idle intervals by keeping the processor in a sleep/shutdown state even if there are pending tasks, within the constraints imposed by performance requirements. Our simulation experiments show that the critical speed slowdown results in up to 5% energy gains over leakage-oblivious dynamic voltage scaling. The procrastination scheduling scheme extends sleep intervals by up to a factor of 5, resulting in up to an additional 18% energy gain, while meeting all timing requirements.
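The critical speed mentioned here is the speed that minimizes energy per unit of work once leakage is included. For a power model of the form P(s) = s^α + γ (a common assumption in this literature, not necessarily the exact model of the paper), energy per unit work is P(s)/s = s^(α-1) + γ/s, and the minimizer has the closed form s_crit = (γ/(α-1))^(1/α). A small sketch with invented parameters:

    # Sketch: critical speed for P(s) = s**alpha + gamma (illustrative leakage-aware power model).
    # Minimizing P(s)/s = s**(alpha-1) + gamma/s gives s_crit = (gamma/(alpha-1))**(1/alpha).
    def critical_speed(alpha, gamma):
        return (gamma / (alpha - 1.0)) ** (1.0 / alpha)

    alpha, gamma = 3.0, 2.0              # hypothetical parameters
    s = critical_speed(alpha, gamma)
    print(s, (s ** alpha + gamma) / s)   # running slower or faster than s only increases energy per work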
---
paper_title: Bounding energy consumption in large-scale MPI programs
paper_content:
Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.
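A hedged sketch of the kind of linear program such a bounding system solves: each program phase may be (fractionally) run at one of a few frequencies with known time and energy, and the LP minimizes total energy subject to a deadline. The phase data, frequencies and deadline below are invented for illustration; the actual system derives its inputs from communication traces and measured cluster power characteristics.

    # Hedged LP sketch (scipy): lower-bound energy for phases that can be split across frequencies.
    import numpy as np
    from scipy.optimize import linprog

    # time[i][k], energy[i][k]: phase i run entirely at frequency k (made-up numbers).
    time   = np.array([[4.0, 5.0, 7.0], [6.0, 7.5, 10.0]])
    energy = np.array([[9.0, 7.0, 6.0], [14.0, 11.0, 9.0]])
    deadline = 14.0
    n_phase, n_freq = time.shape

    c = energy.ravel()                                  # minimize total energy
    A_ub = [time.ravel()]                               # total time must fit within the deadline
    b_ub = [deadline]
    A_eq = np.zeros((n_phase, n_phase * n_freq))        # each phase is fully executed
    for i in range(n_phase):
        A_eq[i, i * n_freq:(i + 1) * n_freq] = 1.0
    b_eq = np.ones(n_phase)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    print(res.fun)   # energy lower bound for this toy instance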
---
paper_title: Optimal voltage allocation techniques for dynamically variable voltage processors
paper_content:
This paper presents important, new results of a study on the problem of task scheduling and voltage allocation in dynamically variable voltage processors, the purpose of which was minimization of processor energy consumption. The contributions are twofold: (1) For given multiple discrete supply voltages and tasks with arbitrary arrival-time/deadline constraints, we propose a voltage allocation technique that produces a feasible task schedule with optimal processor energy consumption. (2) We then extend the problem to include the case in which tasks have nonuniform load (i.e., switched) capacitances and solve it optimally. The proposed technique, called Alloc-vt, in (1) is based on the prior results in [Yao, Demers and Shenker. 1995. In Proceedings of IEEE Symposium on Foundations of Computer Science. 374--382] (which is optimal for dynamically continuously variable voltages, but not for discrete ones) and [Ishihara and Yasuura. 1998. In Proceedings of International Symposium on Low Power Electronics and Design. 197--202] (which is optimal for a single task, but not for multiple tasks), whereas the proposed technique, called Alloc-vtcap, in (2) is based on an efficient linear programming (LP) formulation. Both techniques solve the allocation problems optimally in polynomial time.
---
paper_title: On the Interplay between Global DVFS and Scheduling Tasks with Precedence Constraints
paper_content:
Many multicore processors are capable of decreasing the voltage and clock frequency to save energy at the cost of an increased delay. While a large part of the theory oriented literature focuses on local dynamic voltage and frequency scaling (local DVFS), where every core’s voltage and clock frequency can be set separately, this article presents an in-depth theoretical study of the more commonly available global DVFS that makes such changes for the entire chip. This article shows how to choose the optimal clock frequencies that minimize the energy for global DVFS, and it discusses the relationship between scheduling and optimal global DVFS. Formulas are given to find this optimum under time constraints, including proofs thereof. The problem of simultaneously choosing clock frequencies and a schedule that together minimize the energy consumption is discussed, and based on this a scheduling criterion is derived that implicitly assigns frequencies and minimizes energy consumption. Furthermore, this article studies the effectivity of a large class of scheduling algorithms with regard to the derived criterion, and a bound on the maximal relative deviation is given. Simulations show that with our techniques an energy reduction of 30% can be achieved with respect to state-of-the-art research.
---
paper_title: New Results for Non-Preemptive Speed Scaling
paper_content:
We consider the speed scaling problem introduced in the seminal paper of Yao et al. [23]. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is P(s) = s^α, where s is the processing speed, and α > 1 is a constant. The total energy consumption is power integrated over time, and the objective is to process all jobs while minimizing the energy consumption.
---
paper_title: Green Scheduling, Flows and Matchings
paper_content:
Recently, optimal combinatorial algorithms have been presented for the energy minimization multi-processor speed scaling problem with migration [Albers et al., SPAA 2011], [Angel et al., Euro-Par 2012]. These algorithms are based on repeated maximum-flow computations allowing the partition of the set of jobs into subsets in which all the jobs are executed at the same speed. The optimality of these algorithms is based on a series of technical lemmas showing that this partition and the corresponding speeds lead to the minimization of the energy consumption. In this paper, we show that both the algorithms and their analysis can be greatly simplified. In order to do this, we formulate the problem as a convex cost flow problem in an appropriate flow network. Furthermore, we show that our approach is useful to solve other problems in the dynamic speed scaling setting. As an example, we consider the preemptive open-shop speed scaling problem and we propose a polynomial-time algorithm for finding an optimal solution based on the computation of convex cost flows. We also propose a polynomial-time algorithm for minimizing a linear combination of the sum of the completion times of the jobs and the total energy consumption, for the multi-processor speed scaling problem without preemptions. Instead of using convex cost flows, our algorithm is based on the computation of a minimum weighted maximum matching in an appropriate bipartite graph.
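For the matching-based result mentioned at the end of this abstract, the idea can be sketched as follows under simplifying assumptions (all jobs available at time 0, unit weight on both objective terms, each job run non-preemptively at one constant speed, which is not necessarily the paper's exact setting): a job placed so that k jobs on its machine finish at or after it contributes k·w/s to the sum of completion times and w·s^(α-1) to the energy, and optimizing over s gives a per-(job, position) cost; a min-weight assignment of jobs to (machine, position) slots then yields a schedule. A hedged illustration with scipy:

    # Hedged sketch of the matching idea for (sum of completion times) + energy on m machines.
    # Assumes all jobs available at time 0 and unit weight on both objective terms.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def schedule_cost_matrix(works, m, alpha):
        n = len(works)
        # Slot (machine, k): the job placed there has k jobs (itself included) finishing at or after it.
        # Running it at its best speed s = (k/(alpha-1))**(1/alpha) contributes
        # alpha * w * (k/(alpha-1))**((alpha-1)/alpha) to the combined objective.
        slot_factor = [alpha * (k / (alpha - 1.0)) ** ((alpha - 1.0) / alpha)
                       for _ in range(m) for k in range(1, n + 1)]
        return np.outer(works, slot_factor)              # cost[j, slot]

    works, m, alpha = [3.0, 1.0, 2.0, 5.0], 2, 3.0       # hypothetical instance
    cost = schedule_cost_matrix(works, m, alpha)
    rows, cols = linear_sum_assignment(cost)             # min-weight matching of jobs to slots
    print(cost[rows, cols].sum())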
---
paper_title: Speed scaling on parallel processors with migration
paper_content:
We study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works) on parallel speed scalable processors so as to minimize the total energy consumption. We consider that both preemptions and migrations of jobs are allowed. For this problem, there exists an optimal polynomial-time algorithm which uses as a black box an algorithm for linear programming. Here, we formulate the problem as a convex program and we propose a combinatorial polynomial-time algorithm which is based on finding maximum flows. Our algorithm runs in \(O(n f(n)\log U)\) time, where n is the number of jobs, U is the range of all possible values of processors’ speeds divided by the desired accuracy and f(n) is the time needed for computing a maximum flow in a layered graph with O(n) vertices.
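A hedged sketch of the maximum-flow feasibility test that flow-based algorithms in this line of work build on: once each job has a tentative processing time (work divided by its assigned speed), a preemptive migratory schedule on m machines exists iff the network source -> jobs -> elementary intervals -> sink admits a flow saturating all job arcs. The instance is invented and networkx is used for the flow computation; this is only the feasibility subroutine, not the full speed-computation algorithm.

    # Hedged sketch: max-flow feasibility test for preemptive scheduling with migration on m machines.
    # jobs: (release, deadline, processing_time); the instance is illustrative.
    import networkx as nx

    def feasible(jobs, m):
        points = sorted({t for r, d, _ in jobs for t in (r, d)})
        intervals = list(zip(points, points[1:]))               # elementary intervals
        G = nx.DiGraph()
        for j, (r, d, p) in enumerate(jobs):
            G.add_edge("s", ("job", j), capacity=p)
            for i, (a, b) in enumerate(intervals):
                if r <= a and b <= d:
                    G.add_edge(("job", j), ("int", i), capacity=b - a)
        for i, (a, b) in enumerate(intervals):
            G.add_edge(("int", i), "t", capacity=m * (b - a))
        flow = nx.maximum_flow_value(G, "s", "t")
        return flow >= sum(p for _, _, p in jobs) - 1e-9

    print(feasible([(0, 4, 3.0), (0, 2, 2.0), (1, 3, 1.5)], m=2))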
---
paper_title: On multi-processor speed scaling with migration: extended abstract
paper_content:
We investigate a very basic problem in dynamic speed scaling where a sequence of jobs, each specified by an arrival time, a deadline and a processing volume, has to be processed so as to minimize energy consumption. Previous work has focused mostly on the setting where a single variable-speed processor is available. In this paper we study multi-processor environments with m parallel variable-speed processors assuming that job migration is allowed, i.e. whenever a job is preempted it may be moved to a different processor. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. In contrast to a previously known strategy, our algorithm does not resort to linear programming. We develop a fully combinatorial algorithm that relies on repeated maximum flow computations. The approach might be useful to solve other problems in dynamic speed scaling. For the online problem, we extend two algorithms Optimal Available and Average Rate proposed by Yao et al. [16] for the single processor setting. We prove that Optimal Available is α^α-competitive, as in the single processor case. Here α > 1 is the exponent of the power consumption function. While it is straightforward to extend Optimal Available to parallel processing environments, the competitive analysis becomes considerably more involved. For Average Rate we show a competitiveness of (3α)^α/2 + 2^α.
---
paper_title: Approximation schemes for scheduling
paper_content:
We consider the classic scheduling/load balancing problems where there are m identical machines and n jobs, and each job should be assigned to some machine. Traditionally, the assignment of jobs to machines is measured by the makespan (maximum load), i.e., the L_∞ norm of the assignment. An ε-approximation scheme was given by Hochbaum and Shmoys for minimizing the L_∞ norm. In several applications, such as storage allocation, a more appropriate measure is the sum of the squares of the loads (which is equivalent to the L_2 norm). This problem was considered in earlier work, which showed how to approximate the optimum value by a factor of about 1.04. In fact, a more general measure, the L_p norm (for any p ≥ 1), can also be approximated to some constant which may be as large as 3/2. We improve these results by providing an ε-approximation scheme for the general L_p norm (and in particular for the L_2 norm). We also consider the case of restricted assignment of unit jobs, where we show how to find, in polynomial time, a solution which is optimal for all norms.
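For intuition on the norms involved, here is a small sketch comparing two quality measures of one assignment of jobs to machines: the makespan (L_∞ norm of the load vector) and the L_2 norm, using a simple longest-first greedy assignment. This is only an illustrative example on invented data, not the approximation scheme of the paper.

    # Illustrative only: machine loads and their L_2 / L_infinity norms for a greedy assignment.
    import heapq

    def greedy_loads(jobs, m):
        heap = [(0.0, i) for i in range(m)]            # (load, machine)
        heapq.heapify(heap)
        for p in sorted(jobs, reverse=True):           # longest job first, onto the least-loaded machine
            load, i = heapq.heappop(heap)
            heapq.heappush(heap, (load + p, i))
        return sorted(load for load, _ in heap)

    loads = greedy_loads([7, 5, 4, 3, 3, 2], m=3)      # hypothetical jobs
    l2 = sum(x * x for x in loads) ** 0.5
    linf = max(loads)
    print(loads, l2, linf)                              # two different measures of the same assignment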
---
paper_title: Multiprocessor energy-efficient scheduling with task migration considerations
paper_content:
This paper targets energy-efficient scheduling of tasks over multiple processors, where tasks share a common deadline. Distinct from many research results on heuristics-based energy-efficient scheduling, we propose approximation algorithms with different approximation bounds for processors with/without constraints on the maximum processor speed, where no task migration is allowed. When there is no constraint on processor speeds, we propose an approximation algorithm for two-processor scheduling to provide trade-offs among the specified error, the running time, the approximation ratio, and the memory space complexity. An approximation algorithm with a 1.13-approximation ratio for M-processor systems is also derived (M > 2). When there is an upper bound on processor speeds, an artificial-bound approach is taken to minimize the energy consumption with a 1.13-approximation ratio. An optimal scheduling algorithm is then proposed in the minimization of the energy consumption when task migration is allowed.
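In the common-deadline setting discussed here, once jobs are partitioned among processors without migration, each processor can simply run at the constant speed load/deadline (constant speed is best by convexity), giving energy D·(load/D)^α per processor. Below is a hedged sketch combining a simple greedy partition with that speed assignment; it illustrates the structure of the problem, not the paper's approximation algorithm, and the instance is invented.

    # Hedged sketch: greedy partition, then each processor runs at load/deadline until the common deadline.
    import heapq

    def partition_energy(works, m, deadline, alpha):
        heap = [(0.0, i) for i in range(m)]
        heapq.heapify(heap)
        for w in sorted(works, reverse=True):
            load, i = heapq.heappop(heap)
            heapq.heappush(heap, (load + w, i))
        loads = [load for load, _ in heap]
        # Constant speed load/deadline finishes exactly at the deadline.
        return sum(deadline * (load / deadline) ** alpha for load in loads)

    print(partition_energy([3.0, 2.0, 2.0, 1.0], m=2, deadline=4.0, alpha=3.0))  # toy instance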
---
paper_title: Energy Efficient Scheduling and Routing via Randomized Rounding
paper_content:
We propose a unifying framework based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
---
paper_title: Energy-efficient algorithms for non-preemptive speed-scaling
paper_content:
We improve complexity bounds for energy-efficient non-preemptive scheduling problems for both the single processor and multi-processor cases. As energy conservation has become a major concern, traditional scheduling problems have been revisited in the past few years to take into account the energy consumption [1]. We consider the speed scaling setting introduced by Yao et al. [20] where a set of jobs, each with a release date, deadline and work volume, are to be scheduled on a set of identical processors. The processors may change speed as a function of time and the energy they consume is the \(\alpha \)th power of their speed integrated over time. The objective is then to find a feasible non-preemptive schedule which minimizes the total energy used.
---
paper_title: Energy Optimal Scheduling on Multiprocessors with Migration
paper_content:
We show that the problem of finding an energy minimal schedule for execution of a collection of jobs on a multiprocessor with job migration allowed has polynomial complexity. Each job is specified by a release time, a deadline, and an amount of work to be performed. All of the processors have the same convex power-speed trade-off of the form P = φ(s), where P is power, s is speed, and φ is convex. Unlike previous work on multiprocessor scheduling, we place no restriction on the release times, deadlines, or amount of work to be done. We show that the scheduling problem is convex, and give an algorithm based on linear programming. We show that the optimal schedule is the same for any convex power-speed trade-off function.
---
paper_title: A Fully Polynomial-Time Approximation Scheme for Speed Scaling with Sleep State
paper_content:
We study classical deadline-based preemptive scheduling of jobs in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each job is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state makes it sometimes beneficial to accelerate the processing of jobs in order to transition the processor to the sleep state for longer amounts of time and incur further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [17], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,9,16]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2, 18], respectively. ::: ::: We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules.
---
paper_title: Speed-scaling with no Preemptions
paper_content:
We revisit the non-preemptive speed-scaling problem, in which a set of jobs have to be executed on a single or a set of parallel speed-scalable processor(s) between their release dates and deadlines so that the energy consumption to be minimized. We adopt the speed-scaling mechanism first introduced in [Yao et al., FOCS 1995] according to which the power dissipated is a convex function of the processor's speed. Intuitively, the higher is the speed of a processor, the higher is the energy consumption. For the single-processor case, we improve the best known approximation algorithm by providing a $(1+\epsilon)^{\alpha}\tilde{B}_{\alpha}$-approximation algorithm, where $\tilde{B}_{\alpha}$ is a generalization of the Bell number. For the multiprocessor case, we present an approximation algorithm of ratio $\tilde{B}_{\alpha}((1+\epsilon)(1+\frac{w_{\max}}{w_{\min}}))^{\alpha}$ improving the best known result by a factor of $(\frac{5}{2})^{\alpha-1}(\frac{w_{\max}}{w_{\min}})^{\alpha}$. Notice that our result holds for the fully heterogeneous environment while the previous known result holds only in the more restricted case of parallel processors with identical power functions.
---
paper_title: Race to idle: New algorithms for speed scaling with a sleep state
paper_content:
We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, Irani et al. [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, Han et al. [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds, for general convex power functions: No algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of s_crit-schedules. For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βs^α + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly eW_{-1}(−e^{−1−1/e}) / (eW_{-1}(−e^{−1−1/e}) + 1) < 1.211, where W_{-1} is the lower branch of the Lambert W function.
---
paper_title: An $O(n^2)$ Algorithm for Computing Optimal Continuous Voltage Schedules
paper_content:
Dynamic Voltage Scaling techniques allow the processor to set its speed dynamically in order to reduce energy consumption. In the continuous model, the processor can run at any speed, while in the discrete model, the processor can only run at a finite number of speeds given as input. The current best algorithm for computing the optimal schedules for the continuous model runs in \(O(n^2\log n)\) time for scheduling n jobs. In this paper, we improve the running time to \(O(n^2)\) by speeding up the calculation of s-schedules using a more refined data structure. For the discrete model, we improve the computation of the optimal schedule from the current best \(O(dn\log n)\) to \(O(n\log \max \{d,n\})\) where d is the number of allowed speeds.
---
paper_title: Convex Optimization
paper_content:
Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.
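Convex optimization is the natural lens for these scheduling problems: for instance, the single-processor preemptive energy-minimization problem can be written as a convex program over the amount of work each job does in each elementary interval. A hedged sketch using cvxpy follows; the instance is invented, and this is a straightforward textbook-style formulation rather than a specific algorithm from the cited works.

    # Hedged sketch: preemptive single-processor energy minimization as a convex program (cvxpy).
    # jobs are (release, deadline, work); alpha = 2 keeps this a simple QP, but the same
    # formulation is convex for any alpha > 1 (a power-cone-capable solver may then be needed).
    import cvxpy as cp

    jobs = [(0.0, 4.0, 2.0), (1.0, 3.0, 3.0)]
    alpha = 2.0
    points = sorted({t for r, d, _ in jobs for t in (r, d)})
    ivals = list(zip(points, points[1:]))                    # elementary intervals
    lens = [b - a for a, b in ivals]

    x = cp.Variable((len(jobs), len(ivals)), nonneg=True)    # work of job j done inside interval i
    cons = [cp.sum(x[j, :]) == w for j, (_, _, w) in enumerate(jobs)]
    for j, (r, d, _) in enumerate(jobs):
        for i, (a, b) in enumerate(ivals):
            if not (r <= a and b <= d):
                cons.append(x[j, i] == 0)                    # a job cannot run outside its window

    # Interval i runs at constant speed (total work)/len_i, so its energy is
    # (total work)^alpha / len_i^(alpha - 1), a convex function of x.
    energy = sum(cp.power(cp.sum(x[:, i]), alpha) / (lens[i] ** (alpha - 1)) for i in range(len(ivals)))
    prob = cp.Problem(cp.Minimize(energy), cons)
    prob.solve()
    print(prob.value)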
---
paper_title: On energy-optimal voltage scheduling for fixed-priority hard real-time systems
paper_content:
We address the problem of energy-optimal voltage scheduling for fixed-priority hard real-time systems, on which we present a complete treatment both theoretically and practically. Although most practical real-time systems are based on fixed-priority scheduling, there have been few research results known on the energy-optimal fixed-priority scheduling problem. First, we prove that the problem is NP-hard. Then, we present a fully polynomial time approximation scheme (FPTAS) for the problem. For any ε > 0, the proposed approximation scheme computes a voltage schedule whose energy consumption is at most (1 + ε) times that of the optimal voltage schedule. Furthermore, the running time of the proposed approximation scheme is bounded by a polynomial function of the number of input jobs and 1/ε. Given the NP-hardness of the problem, the proposed approximation scheme is practically the best solution because it can compute a near-optimal voltage schedule (i.e., provably arbitrarily close to the optimal schedule) in polynomial time. Experimental results show that the approximation scheme finds more efficient (almost optimal) voltage schedules faster than the best existing heuristic.
---
paper_title: Speed scaling to manage energy and temperature
paper_content:
Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by Yao et al. [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. Yao et al. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1)α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP. We show that BKP is cooling-oblivious. We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial-time using the Ellipsoid algorithm.
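The Optimal Available (OA) policy referenced above recomputes, at every moment, the optimal speed for the currently unfinished work; at the current instant this equals the maximum density of remaining work over any future deadline horizon. A hedged sketch of that instantaneous speed rule on an invented instance (not a full scheduler):

    # Hedged sketch of the Optimal Available (OA) instantaneous speed:
    # speed(now) = max over deadlines d of (remaining work with deadline <= d) / (d - now).
    def oa_speed(now, pending):
        """pending: list of (deadline, remaining_work) for released, unfinished jobs."""
        best = 0.0
        for d in sorted({d for d, _ in pending}):
            work = sum(w for dd, w in pending if dd <= d)
            best = max(best, work / (d - now))
        return best

    print(oa_speed(0.0, [(2.0, 1.0), (4.0, 4.0)]))   # toy instance: max(1/2, 5/4) = 1.25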
---
paper_title: Polynomial-time algorithms for minimum energy scheduling
paper_content:
The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is to simply suspend the system during the idle times. No energy is consumed in the suspend mode. However, the process of waking up the system itself requires a certain fixed amount of energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this additional energy expenditure. In the specific problem studied in the paper, we have a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses one unit of energy per one unit of time. It has been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time. We settle this problem in the positive by providing an O(n^5)-time algorithm. In addition, we provide an O(n^4)-time algorithm for computing the minimum energy schedule when all jobs have unit length.
---
paper_title: Low complexity scheduling algorithm minimizing the energy for tasks with agreeable deadlines
paper_content:
Power management aims at reducing the energy consumed by computer systems while maintaining a good level of performance. One of the mechanisms used to save energy is the shut-down mechanism which puts the system into a sleep state when it is idle. No energy is consumed in this state, but a fixed amount of energy is required for a transition from the sleep state to the active state which is equal to L times the energy required for the execution of a unit-time task. In this paper, we focus on the off-line version of this problem where a set of unit-time tasks with release dates and deadlines have to be scheduled in order to minimize the overall consumed energy during the idle periods of the schedule. Here we focus on the case where the tasks have agreeable deadlines. For the single processor case, an O(n^3) algorithm has been proposed in [7] for unit-time tasks and arbitrary L. We improve this result by introducing a new O(n^2) polynomial-time algorithm for tasks with arbitrary processing times and arbitrary L. For the multiprocessor case we also improve the complexity from O(n^3m^2) [7] to O(n^2m) in the case of unit-time tasks and unit L.
---
paper_title: Min-Energy Scheduling for Aligned Jobs in Accelerate Model
paper_content:
The dynamic voltage scaling technique provides the capability for processors to adjust the speed and control the energy consumption. We study the pessimistic accelerate model where the acceleration rate of the processor speed is at most K and jobs cannot be executed during the speed transition period. The objective is to find a min-energy (optimal) schedule that finishes every job within its deadline. The job set we study in this paper is aligned jobs, where earlier released jobs have earlier deadlines. We start by investigating a special case where all jobs have a common arrival time and design an O(n^2) algorithm to compute the optimal schedule based on some nice properties of the optimal schedule. Then, we study the general aligned jobs and obtain an O(n^2) algorithm to compute the optimal schedule by using the algorithm for the common arrival time case as a building block. Because our algorithm relies on the computation of the optimal schedule in the ideal model (K = ∞), in order to achieve O(n^2) complexity, we improve the complexity of computing the optimal schedule in the ideal model for aligned jobs from the currently best known O(n^2 log n) to O(n^2).
---
paper_title: Low complexity scheduling algorithms minimizing the energy for tasks with agreeable deadlines
paper_content:
Power management aims at reducing the energy consumed by computer systems while maintaining a good level of performance. One of the mechanisms used to save energy is the shut-down mechanism which puts the system into a sleep state when it is idle. No energy is consumed in this state, but a fixed amount of energy is required for a transition from the sleep state to the active state which is equal to L times the energy required for the execution of a unit-time task. In this paper, we focus on the off-line version of this problem where a set of unit-time tasks with release dates and deadlines have to be scheduled in order to minimize the overall consumed energy during the idle periods of the schedule. Here we focus on the case where the tasks have agreeable deadlines. For the single processor case, an O(n^3) algorithm has been proposed in Gururaj et al. (2010) for unit-time tasks and arbitrary L. We improve this result by introducing a new O(n^2) polynomial-time algorithm for tasks with arbitrary processing times and arbitrary L. For the multiprocessor case we also improve the complexity from O(n^3m^2) Gururaj et al. (2010) to O(n^2m) in the case of unit-time tasks and unit L.
---
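A concrete reading of the shut-down model used in the entry above (a sketch under that model; the function name and the interval representation are illustrative): during an idle gap of length g the machine either stays active and pays g, or goes to sleep and pays the fixed wake-up cost L, so each gap contributes min(g, L) to the energy charged to idle periods.

def idle_energy(busy_intervals, L):
    # busy_intervals: sorted, disjoint (start, end) pairs during which the machine works.
    # Each idle gap of length g between consecutive busy intervals costs min(g, L).
    cost = 0.0
    for (_, end_prev), (start_next, _) in zip(busy_intervals, busy_intervals[1:]):
        gap = start_next - end_prev
        if gap > 0:
            cost += min(gap, L)
    return cost

# gaps of length 3 and 0.5 with L = 2: sleep through the first (cost 2), stay awake in the second (cost 0.5)
print(idle_energy([(0, 2), (5, 6), (6.5, 8)], L=2))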
paper_title: New Results for Non-Preemptive Speed Scaling
paper_content:
We consider the speed scaling problem introduced in the seminal paper of Yao et al. [23]. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is P(s) = s^α, where s is the processing speed, and α > 1 is a constant. The total energy consumption is power integrated over time, and the objective is to process all jobs while minimizing the energy consumption.
---
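The optimal offline schedule in this model is classically computed by the Yao-Demers-Shenker critical-interval procedure. The following deliberately unoptimized Python sketch (written for illustration here, not taken from the paper) repeatedly finds the interval of maximum density, fixes the speed of the jobs contained in it, and collapses that interval out of the timeline.

def yds(jobs, alpha=2.0):
    # jobs: list of (release, deadline, work); single processor, preemption allowed.
    jobs = [list(j) for j in jobs]            # local copy; windows get shrunk below
    remaining = set(range(len(jobs)))
    speed = {}
    while remaining:
        times = sorted({jobs[i][0] for i in remaining} | {jobs[i][1] for i in remaining})
        best_d, best_iv, best_set = -1.0, None, []
        for a in times:                       # brute-force search for the critical interval
            for b in times:
                if b <= a:
                    continue
                inside = [i for i in remaining if jobs[i][0] >= a and jobs[i][1] <= b]
                if not inside:
                    continue
                d = sum(jobs[i][2] for i in inside) / (b - a)
                if d > best_d:
                    best_d, best_iv, best_set = d, (a, b), inside
        t1, t2 = best_iv
        for i in best_set:                    # critical jobs all run at the critical density
            speed[i] = best_d
            remaining.discard(i)
        for i in remaining:                   # collapse [t1, t2] out of the timeline
            for k in (0, 1):
                if jobs[i][k] >= t2:
                    jobs[i][k] -= t2 - t1
                elif jobs[i][k] > t1:
                    jobs[i][k] = t1
    energy = sum(jobs[i][2] * speed[i] ** (alpha - 1) for i in speed)
    return speed, energy

# two jobs as (release, deadline, work); expected speeds 1.0 and 0.5, energy 2.5 for alpha = 2
print(yds([(0, 2, 2), (0, 4, 1)]))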
paper_title: Green Scheduling, Flows and Matchings
paper_content:
Recently, optimal combinatorial algorithms have been presented for the energy minimization multi-processor speed scaling problem with migration [Albers et al., SPAA 2011], [Angel et al., Euro-Par 2012]. These algorithms are based on repeated maximum-flow computations allowing the partition of the set of jobs into subsets in which all the jobs are executed at the same speed. The optimality of these algorithms is based on a series of technical lemmas showing that this partition and the corresponding speeds lead to the minimization of the energy consumption. In this paper, we show that both the algorithms and their analysis can be greatly simplified. In order to do this, we formulate the problem as a convex cost flow problem in an appropriate flow network. Furthermore, we show that our approach is useful to solve other problems in the dynamic speed scaling setting. As an example, we consider the preemptive open-shop speed scaling problem and we propose a polynomial-time algorithm for finding an optimal solution based on the computation of convex cost flows. We also propose a polynomial-time algorithm for minimizing a linear combination of the sum of the completion times of the jobs and the total energy consumption, for the multi-processor speed scaling problem without preemptions. Instead of using convex cost flows, our algorithm is based on the computation of a minimum weighted maximum matching in an appropriate bipartite graph.
---
paper_title: Multiprocessor energy-efficient scheduling with task migration considerations
paper_content:
This paper targets energy-efficient scheduling of tasks over multiple processors, where tasks share a common deadline. Distinct from many research results on heuristics-based energy-efficient scheduling, we propose approximation algorithms with different approximation bounds for processors with/without constraints on the maximum processor speed, where no task migration is allowed. When there is no constraint on processor speeds, we propose an approximation algorithm for two-processor scheduling to provide trade-offs among the specified error, the running time, the approximation ratio, and the memory space complexity. An approximation algorithm with a 1.13-approximation ratio for M-processor systems is also derived (M > 2). When there is an upper bound on processor speeds, an artificial-bound approach is taken to minimize the energy consumption with a 1.13-approximation ratio. An optimal scheduling algorithm is then proposed in the minimization of the energy consumption when task migration is allowed.
---
paper_title: Energy Efficient Scheduling and Routing via Randomized Rounding
paper_content:
We propose a unifying framework based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
---
paper_title: Energy Optimal Scheduling on Multiprocessors with Migration
paper_content:
We show that the problem of finding an energy minimal schedule for execution of a collection of jobs on a multiprocessor with job migration allowed has polynomial complexity. Each job is specified by a release time, a deadline, and an amount of work to be performed. All of the processors have the same, convex power-speed trade-off of the form P = phi(s), where P is power, s is speed, and phi is convex. Unlike previous work on multiprocessor scheduling, we place no restriction on the release times, deadlines, or amount of work to be done. We show that the scheduling problem is convex, and give an algorithm based on linear programming. We show that the optimal schedule is the same for any convex power-speed trade-off function.
---
paper_title: Analytic Clock Frequency Selection for Global DVFS
paper_content:
Computers can reduce their power consumption by decreasing their speed using Dynamic Voltage and Frequency Scaling (DVFS). A form of DVFS for multicore processors is global DVFS, where the voltage and clock frequency is shared among all processor cores. Because global DVFS is efficient and cheap to implement, it is used in modern multicore processors like the IBM Power 7, ARM Cortex A9 and NVIDIA Tegra 2. This theory oriented paper discusses energy optimal DVFS algorithms for such processors. There are no known provably optimal algorithms that minimize the energy consumption of nontrivial real-time applications on a global DVFS system. Such algorithms only exist for single core systems, or for simpler application models. While many DVFS algorithms focus on tasks, this theoretical study is conceptually different and focuses on the amount of parallelism. We provide a transformation from a multicore problem to a single core problem, by using the amount of parallelism of an application. Then existing single core algorithms can be used to find the optimal solution. Furthermore, we extend an existing single core algorithm such that it takes static power into account.
---
paper_title: From Preemptive to Non-preemptive Speed-Scaling Scheduling
paper_content:
We are given a set of jobs, each one specified by its release date, its deadline and its processing volume (work), and a single (or a set of) speed-scalable processor(s). We adopt the standard model in speed-scaling in which if a processor runs at speed s then the energy consumption is s^α units of energy per time unit, where α > 1. Our goal is to find a schedule respecting the release dates and the deadlines of the jobs so that the total energy consumption is minimized. While most previous works have studied the preemptive case of the problem, where a job may be interrupted and resumed later, we focus on the non-preemptive case where once a job starts its execution, it has to continue until its completion without any interruption. As the preemptive case is known to be polynomially solvable for both the single-processor and the multiprocessor case, we explore the idea of transforming an optimal preemptive schedule to a non-preemptive one. We prove that the preemptive optimal solution does not preserve enough of the structure of the non-preemptive optimal solution, and more precisely that the ratio between the energy consumption of an optimal non-preemptive schedule and the energy consumption of an optimal preemptive schedule can be very large even for the single-processor case. Then, we focus on some interesting families of instances: (i) equal-work jobs on a single-processor, and (ii) agreeable instances in the multiprocessor case. In both cases, we propose constant factor approximation algorithms. In the latter case, our algorithm improves the best known algorithm of the literature. Finally, we propose a (non-constant factor) approximation algorithm for general instances in the multiprocessor case.
---
paper_title: Speed scaling on parallel processors with migration
paper_content:
We study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works) on parallel speed scalable processors so as to minimize the total energy consumption. We consider that both preemptions and migrations of jobs are allowed. For this problem, there exists an optimal polynomial-time algorithm which uses as a black box an algorithm for linear programming. Here, we formulate the problem as a convex program and we propose a combinatorial polynomial-time algorithm which is based on finding maximum flows. Our algorithm runs in O(n f(n) log U) time, where n is the number of jobs, U is the range of all possible values of processors’ speeds divided by the desired accuracy and f(n) is the time needed for computing a maximum flow in a layered graph with O(n) vertices.
---
paper_title: On multi-processor speed scaling with migration: extended abstract
paper_content:
We investigate a very basic problem in dynamic speed scaling where a sequence of jobs, each specified by an arrival time, a deadline and a processing volume, has to be processed so as to minimize energy consumption. Previous work has focused mostly on the setting where a single variable-speed processor is available. In this paper we study multi-processor environments with m parallel variable-speed processors assuming that job migration is allowed, i.e. whenever a job is preempted it may be moved to a different processor. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. In contrast to a previously known strategy, our algorithm does not resort to linear programming. We develop a fully combinatorial algorithm that relies on repeated maximum flow computations. The approach might be useful to solve other problems in dynamic speed scaling. For the online problem, we extend two algorithms Optimal Available and Average Rate proposed by Yao et al. [16] for the single processor setting. We prove that Optimal Available is α^α-competitive, as in the single processor case. Here α>1 is the exponent of the power consumption function. While it is straightforward to extend Optimal Available to parallel processing environments, the competitive analysis becomes considerably more involved. For Average Rate we show a competitiveness of (3α)^α/2 + 2^α.
---
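For reference, the single-processor form of the Average Rate heuristic mentioned above sets the speed at time t to the sum of the densities of the jobs alive at t; the sketch below shows only that speed function (the multiprocessor variant analyzed in the paper additionally dispatches jobs to machines, which is omitted here).

def avr_speed(jobs, t):
    # jobs: list of (release, deadline, work); the density of a job is work / (deadline - release)
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

jobs = [(0, 4, 2), (1, 3, 1)]
print([avr_speed(jobs, t) for t in (0, 1.5, 3.5)])   # 0.5, then 0.5 + 0.5 = 1.0, then 0.5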
paper_title: On the Interplay between Global DVFS and Scheduling Tasks with Precedence Constraints
paper_content:
Many multicore processors are capable of decreasing the voltage and clock frequency to save energy at the cost of an increased delay. While a large part of the theory oriented literature focuses on local dynamic voltage and frequency scaling (local DVFS), where every core’s voltage and clock frequency can be set separately, this article presents an in-depth theoretical study of the more commonly available global DVFS that makes such changes for the entire chip. This article shows how to choose the optimal clock frequencies that minimize the energy for global DVFS, and it discusses the relationship between scheduling and optimal global DVFS. Formulas are given to find this optimum under time constraints, including proofs thereof. The problem of simultaneously choosing clock frequencies and a schedule that together minimize the energy consumption is discussed, and based on this a scheduling criterion is derived that implicitly assigns frequencies and minimizes energy consumption. Furthermore, this article studies the effectivity of a large class of scheduling algorithms with regard to the derived criterion, and a bound on the maximal relative deviation is given. Simulations show that with our techniques an energy reduction of 30% can be achieved with respect to state-of-the-art research.
---
paper_title: Scheduling Precedence Constrained Tasks with Reduced Processor Energy on Multiprocessor Computers
paper_content:
Energy-efficient scheduling of sequential tasks with precedence constraints on multiprocessor computers with dynamically variable voltage and speed is investigated as combinatorial optimization problems. In particular, the problem of minimizing schedule length with energy consumption constraint and the problem of minimizing energy consumption with schedule length constraint are considered. Our scheduling problems contain three nontrivial subproblems, namely, precedence constraining, task scheduling, and power supplying. Each subproblem should be solved efficiently so that heuristic algorithms with overall good performance can be developed. Such decomposition of our optimization problems into three subproblems makes design and analysis of heuristic algorithms tractable. Three types of heuristic power allocation and scheduling algorithms are proposed for precedence constrained sequential tasks with energy and time constraints, namely, prepower-determination algorithms, postpower-determination algorithms, and hybrid algorithms. The performance of our algorithms are analyzed and compared with optimal schedules analytically. Such analysis has not been conducted in the literature for any algorithm. Therefore, our investigation in this paper makes initial contribution to analytical performance study of heuristic power allocation and scheduling algorithms for precedence constrained sequential tasks. Our extensive simulation data demonstrate that for wide task graphs, the performance ratios of all our heuristic algorithms approach one as the number of tasks increases.
---
paper_title: Energy-Efficient Scheduling for Real-Time Systems on Dynamic Voltage Scaling (DVS) Platforms
paper_content:
Energy-efficient designs have played import roles for hardware and software implementations for a decade. With the advanced technology of VLSI circuit designs, energy-efficiency can be achieved by adopting the dynamic voltage scaling (DVS) technique. In this paper, we survey the studies for energy-efficient scheduling in real-time systems on DVS platforms to cover both theoretical and practical issues.
---
paper_title: Analytic Clock Frequency Selection for Global DVFS
paper_content:
Computers can reduce their power consumption by decreasing their speed using Dynamic Voltage and Frequency Scaling (DVFS). A form of DVFS for multicore processors is global DVFS, where the voltage and clock frequency is shared among all processor cores. Because global DVFS is efficient and cheap to implement, it is used in modern multicore processors like the IBM Power 7, ARM Cortex A9 and NVIDIA Tegra 2. This theory oriented paper discusses energy optimal DVFS algorithms for such processors. There are no known provably optimal algorithms that minimize the energy consumption of nontrivial real-time applications on a global DVFS system. Such algorithms only exist for single core systems, or for simpler application models. While many DVFS algorithms focus on tasks, this theoretical study is conceptually different and focuses on the amount of parallelism. We provide a transformation from a multicore problem to a single core problem, by using the amount of parallelism of an application. Then existing single core algorithms can be used to find the optimal solution. Furthermore, we extend an existing single core algorithm such that it takes static power into account.
---
| Title: A Survey of Offline Algorithms for Energy Minimization under Deadline Constraints
Section 1: Introduction
Description 1: Introduce the importance of energy consumption in computing systems, provide context on power management techniques, and give an overview of the paper’s contributions and structure.
Section 2: Related surveys
Description 2: Discuss existing surveys on energy-aware scheduling and highlight how this survey differs by focusing on offline algorithms for real-time systems.
Section 3: Modeling and notation
Description 3: Provide a unified notation and modeling assumptions for speed scaling and sleep mode problems to be used throughout the survey.
Section 4: Fundamental results
Description 4: Introduce key theoretical concepts and results that are foundational to various power management algorithms.
Section 5: Uniprocessor problems
Description 5: Survey algorithms for minimizing energy consumption in single-processor systems with deadline constraints.
Section 6: Multiprocessor problems
Description 6: Discuss algorithms focused on multiprocessor systems, covering general tasks, tasks with agreeable deadlines, and tasks with precedence constraints.
Section 7: Open problems
Description 7: Highlight unresolved issues and challenges in the field of algorithmic power management, particularly for multiprocessor systems and tasks with precedence constraints.
Section 8: Discussion
Description 8: Summarize the main contributions of the survey and discuss the gap between theory and practice in algorithmic power management. |
A survey of motion-parallax-based 3-D reconstruction algorithms | 6 | ---
paper_title: When is the Shape of a Scene Unique Given its Light-Field: A Fundamental Theorem of 3D Vision?
paper_content:
The complete set of measurements that could ever be used by a passive 3D vision algorithm is the plenoptic function or light-field. We give a concise characterization of when the light-field of a Lambertian scene uniquely determines its shape and, conversely, when the shape is inherently ambiguous. In particular, we show that stereo computed from the light-field is ambiguous if and only if the scene is radiating light of a constant intensity (and color, etc.) over an extended region.
---
paper_title: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses
paper_content:
A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art. A critical review of the state of the art is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.
---
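To make the quantities calibrated by such two-stage methods concrete, the sketch below projects a 3D point through a pinhole camera with intrinsics K, pose (R, t) and a single radial-distortion coefficient; the numeric values and the one-coefficient distortion model are illustrative simplifications, not the exact model of the paper.

import numpy as np

def project(X, K, R, t, k1=0.0):
    Xc = R @ X + t                                 # world -> camera coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]            # normalized image coordinates
    r2 = x * x + y * y
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)    # simple radial distortion
    u = K[0, 0] * x + K[0, 2]                      # apply focal lengths and principal point
    v = K[1, 1] * y + K[1, 2]
    return np.array([u, v])

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])   # made-up intrinsics
print(project(np.array([0.1, -0.05, 2.0]), K, np.eye(3), np.zeros(3), k1=-0.1))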
paper_title: The Geometry of Multiple Images
paper_content:
A book-length treatment of the geometry of multiple views of a scene: it develops the projective, affine and Euclidean descriptions of one, two and three cameras, the associated matching constraints such as the fundamental matrix, and their use for 3D reconstruction from images and for camera self-calibration.
---
paper_title: Multiple view geometry in computer vision
paper_content:
From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
---
paper_title: The Geometry of Multiple Images
paper_content:
A book-length treatment of the geometry of multiple views of a scene: it develops the projective, affine and Euclidean descriptions of one, two and three cameras, the associated matching constraints such as the fundamental matrix, and their use for 3D reconstruction from images and for camera self-calibration.
---
paper_title: A flexible new technique for camera calibration
paper_content:
We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
---
paper_title: Multiple view geometry in computer vision
paper_content:
From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
---
paper_title: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses
paper_content:
A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art. A critical review of the state of the art is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.
---
paper_title: Gauge fixing for accurate 3D estimation
paper_content:
Computer vision techniques can estimate 3D shape from images, but usually only up to a scale factor. The scale factor must be obtained by a physical measurement of the scene or the camera motion. Using gauge theory, we show that how this scale factor is determined can significantly affect the accuracy of the estimated shape. And yet these considerations have been ignored in previous works where 3D shape accuracy is optimized. We investigate how scale fixing influences the accuracy of 3D reconstruction and determine what measurement should be made to maximize the shape accuracy.
---
paper_title: A flexible new technique for camera calibration
paper_content:
We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
---
paper_title: Cooperative computation of stereo disparity
paper_content:
Perhaps one of the most striking differences between a brain and today’s computers is the amount of “wiring.” In a digital computer the ratio of connections to components is about 3, whereas for the mammalian cortex it lies between 10 and 10,000 (1).
---
paper_title: A combined corner and edge detector
paper_content:
The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.
---
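A compact version of the corner measure introduced in this entry (the structure-tensor response R = det(M) - k·trace(M)^2) can be written as below; the smoothing scale, the constant k and the toy image are arbitrary choices for illustration, and SciPy is used only for Gaussian smoothing.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.5, k=0.04):
    Iy, Ix = np.gradient(img.astype(float))        # image derivatives
    Sxx = gaussian_filter(Ix * Ix, sigma)          # smoothed second-moment (structure tensor) entries
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2                    # large positive values indicate corners

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0    # bright square: response peaks near its corners
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))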
paper_title: Performance of optical flow techniques
paper_content:
While different optical flow techniques continue to appear, there has been a lack of quantitative evaluation of existing methods. For a common set of real and synthetic image sequences, we report the results of a number of regularly cited optical flow techniques, including instances of differential, matching, energy-based, and phase-based methods. Our comparisons are primarily empirical, and concentrate on the accuracy, reliability, and density of the velocity measurements; they show that performance can differ significantly among the techniques we implemented.
---
paper_title: Three-dimensional computer vision: a geometric viewpoint
paper_content:
Contents: projective geometry; modelling and calibrating cameras; edge detection; representing geometric primitives and their uncertainty; stereo vision; determining discrete motion from points and lines; tracking tokens over time; motion fields of curves; interpolating and approximating three-dimensional data; recognizing and locating objects and places; answers to problems. Appendices: constrained optimization; some results from algebraic geometry; differential geometry.
---
paper_title: SUSAN—A New Approach to Low Level Image Processing
paper_content:
This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.
---
paper_title: Subspace methods for recovering rigid motion I: Algorithm and implementation
paper_content:
As an observer moves and explores the environment, the visual stimulation in his/her eye is constantly changing. Somehow he/she is able to perceive the spatial layout of the scene, and to discern his/her movement through space. Computational vision researchers have been trying to solve this problem for a number of years with only limited success. It is a difficult problem to solve because the optical flow field is nonlinearly related to the 3D motion and depth parameters.
---
paper_title: Computing visual correspondence with occlusions using graph cuts
paper_content:
Several new algorithms for visual correspondence based on graph cuts have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present a new method which properly addresses occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our method performs well both at detecting occlusions and computing disparities.
---
paper_title: Optical Flow Based Structure from Motion
paper_content:
Reconstructing the 3D shape of a scene from its 2D images is a problem that has attracted a great deal of research. 3D models are nowadays widely used for scientific visualization, entertainment and engineering tasks. Most of the approaches developed by the computer vision community can be roughly classified as feature based or flow based, according to whether the data they use is a set of feature matches or an optical flow field. While a dense optical flow field, due to its noisy nature, is not extremely suitable for tracking, finding corresponding features between different views with a large baseline is still an open problem. The system we develop in this thesis is of a hybrid type. We track sparse features over sequences acquired at 25 Hz from a hand-held camera. During the tracking, good features can be selected as those lying in highly textured areas: this guarantees higher precision in the estimation of feature displacements. Such displacements are used to approximate optical flow. We demonstrate that this approximation is a good one for our working conditions. Using this approach we bypass the matching problem of stereo and the complexity and time integration problems of the optical flow based reconstruction. Time integration is obtained by an optimal predict-update procedure that merges measurements by re-weighting them by the respective covariances. Most of the research effort of this thesis is focused on the robust estimation of structure and motion from a pair of images and the related optical flow field. We first test a linear solution that has the appealing property of being of closed form but the problem of returning biased estimates. We propose a non-linear refinement to the linear estimator, showing convergence properties and improvements in bias and variance. We further extend the non-linear estimator to incorporate the optical flow covariance matrix (maximum likelihood) and, moreover, we show that, in the case of dense sequences, it is possible to locally time-integrate the reconstruction process for increased robustness. We experimentally investigate the possibility of introducing geometrical constraints in the structure and motion estimation. Such constraints are of bilinear type, i.e. planes, lines and incidence of these primitives are used. For this purpose we present a new motion-based segmentation algorithm able to automatically detect and reconstruct planar regions. To assess the efficacy of our solution the algorithms were tested on a variety of real and simulated sequences. ISBN 91-7283-308-4 • TRITA-02-11 • ISSN 0348-2952 • ISRN KTH/NA/R 02-11
---
paper_title: Gauge fixing for accurate 3D estimation
paper_content:
Computer vision techniques can estimate 3D shape from images, but usually only up to a scale factor. The scale factor must be obtained by a physical measurement of the scene or the camera motion. Using gauge theory, we show that how this scale factor is determined can significantly affect the accuracy of the estimated shape. And yet these considerations have been ignored in previous works where 3D shape accuracy is optimized. We investigate how scale fixing influences the accuracy of 3D reconstruction and determine what measurement should be made to maximize the shape accuracy.
---
paper_title: Depth from Edge and Intensity Based Stereo
paper_content:
The past few years have seen a growing interest in the application of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation [7,12], automatic surveillance [9], aerial cartography [10,13], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]). This paper describes an algorithm for such stereo sensing. It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removing those edge correspondences determined to be in error - those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation. The result of the processing is a full image array disparity map of the scene viewed.
---
paper_title: A direct method for stereo correspondence based on singular value decomposition
paper_content:
This paper proposes a new algorithm for matching point features across pairs of images. Despite the well-known combinatorial complexity of the problem, this work shows that an acceptably good solution can be obtained directly by singular value decomposition of an appropriate correspondence strength matrix. The approach draws from the method proposed previously but, besides suggesting its usefulness for stereo matching, in this work a correlation-weighted proximity function is used as correspondence strength to specifically cater for real images.
---
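The pairing principle behind this entry (and the earlier singular-value method it draws on) can be sketched with a proximity-only correspondence-strength matrix, as below; the correlation weighting described in the abstract is omitted here, and the Gaussian width sigma and the toy points are arbitrary.

import numpy as np

def svd_match(p1, p2, sigma=5.0):
    d2 = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))             # correspondence-strength matrix (proximity only)
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                     # singular values replaced by ones
    return [(i, j) for i in range(P.shape[0]) for j in range(P.shape[1])
            if P[i, j] == P[i].max() and P[i, j] == P[:, j].max()]

p1 = np.array([[0, 0], [10, 0], [0, 10]], float)
p2 = p1 + np.array([1.0, -0.5])                    # slightly shifted copy of the same points
print(svd_match(p1, p2))                           # expect the identity pairing [(0, 0), (1, 1), (2, 2)]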
paper_title: Robust image corner detection through curvature scale space
paper_content:
This paper describes a novel method for image corner detection based on the curvature scale-space (CSS) representation. The first step is to extract edges from the original image using a Canny detector (1986). The corner points of an image are defined as points where image edges have their maxima of absolute curvature. The corner points are detected at a high scale of the CSS and tracked through multiple lower scales to improve localization. This method is very robust to noise, and we believe that it performs better than the existing corner detectors. An improvement to Canny edge detector's response to 45° and 135° edges is also proposed. Furthermore, the CSS detector can provide additional point features (curvature zero-crossings of image edge contours) in addition to the traditional corners.
---
paper_title: A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms
paper_content:
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
---
paper_title: What energy functions can be minimized via graph cuts
paper_content:
In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
---
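Restated for quick reference (this is the pairwise, binary-variable case of the characterization summarized above, in slightly simplified notation): an energy

\[ E(x_1,\dots,x_n) = \sum_i E^i(x_i) + \sum_{i<j} E^{i,j}(x_i,x_j), \qquad x_i \in \{0,1\}, \]

can be minimized exactly by a single s-t minimum cut if and only if every pairwise term is regular (submodular), i.e.

\[ E^{i,j}(0,0) + E^{i,j}(1,1) \le E^{i,j}(0,1) + E^{i,j}(1,0). \]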
paper_title: Stereo Correspondence Based on Line Matching in Hough Space Using Dynamic Programming
paper_content:
This paper presents a method of using Hough space for solving the correspondence problem in stereo vision. It is shown that the line-matching problem in image space can readily be converted into a point-matching problem in Hough (ρ-θ) space. Dynamic programming can be used for searching the optimal matching, now in Hough space. The combination of multiple constraints, especially the natural embedding of the constraint of figural continuity, ensures the accuracy of the matching. The time complexity for searching in dynamic programming is O(pmn), where m and n are the numbers of the lines for each θ in the pair of stereo images, respectively, and p is the number of all possible line orientations. Since m and n are usually fairly small, the matching process is very efficient. Experimental results from both binocular and trinocular matchings are presented and analyzed.
---
paper_title: Gray-level corner detection
paper_content:
The usual approach to detecting corners in shapes involves first segmenting the shape, then locating the corners in its boundary. We present several techniques for measuring 'cornerity' values in gray-level images, without prior segmentation, so that corners can be detected by thresholding these values.
---
paper_title: Generalized Voxel Coloring
paper_content:
Image-based reconstruction from randomly scattered views is a challenging problem. We present a new algorithm that extends Seitz and Dyer's Voxel Coloring algorithm. Unlike their algorithm, ours can use images from arbitrary camera locations. The key problem in this class of algorithms is that of identifying the images from which a voxel is visible. Unlike Kutulakos and Seitz's Space Carving technique, our algorithm solves this problem exactly and the resulting reconstructions yield better results in our application, which is synthesizing new views. One variation of our algorithm minimizes color consistency comparisons; another uses less memory and can be accelerated with graphics hardware. We present efficiency measurements and, for comparison, we present images synthesized using our algorithm and Space Carving.
---
paper_title: Rapid octree representation from image sequences
paper_content:
Describes the construction of an octree model of an object from a sequence of calibrated images: the silhouette of the object in each image defines a cone of space that must contain the object, and intersecting these cones in a coarse-to-fine manner yields a progressively refined volumetric (octree) representation suited to rapid, incremental processing of image sequences.
---
paper_title: Photorealistic scene reconstruction by voxel coloring
paper_content:
A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images. The approach is evaluated with images from both inward- and outward-facing cameras.
---
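The core test used by voxel-coloring style methods can be sketched as follows; here `project` is a hypothetical caller-supplied helper mapping a camera description and a voxel centre to a pixel, the visibility reasoning that the paper obtains from its fixed voxel ordering is omitted, and the threshold is an arbitrary illustrative value.

import numpy as np

def photo_consistent(voxel, images, cameras, project, threshold=0.05):
    samples = []
    for img, cam in zip(images, cameras):
        r, c = project(cam, voxel)                     # hypothetical projection helper
        if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
            samples.append(img[r, c])
    if len(samples) < 2:
        return True, None                              # unconstrained voxel: keep it for now
    samples = np.asarray(samples, float)
    consistent = samples.std(axis=0).max() <= threshold
    return consistent, samples.mean(axis=0)            # mean colour, used if the voxel is kept

imgs = [np.full((4, 4, 3), 0.50), np.full((4, 4, 3), 0.52)]   # two nearly identical toy views
print(photo_consistent(np.zeros(3), imgs, [None, None], lambda cam, v: (1, 1)))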
paper_title: Variational multiframe stereo in the presence of specular reflections
paper_content:
We consider the problem of estimating the surface of an object from a calibrated set of views under the assumption that the reflectance of the object is non-Lambertian. In particular, we consider the case when the object presents sharp specular reflections. We pose the problem within a variational framework and use fast numerical techniques to approach the local minimum of a regularized cost functional.
---
paper_title: Variational principles, surface evolution, PDE's, level set methods and the stereo problem
paper_content:
We present a novel geometric approach for solving the stereo problem for an arbitrary number of images (greater than or equal to 2). It is based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images. The Euler-Lagrange equations which are deduced from the variational principle provide a set of PDE's which are used to deform an initial set of surfaces which then move towards the objects to be detected. The level set implementation of these PDE's potentially provides an efficient and robust way of achieving the surface evolution and to deal automatically with changes in the surface topology during the deformation, i.e. to deal with multiple objects. Results of a two dimensional implementation of our theory are presented on synthetic and real images.
---
paper_title: Canonic representations for the geometries of multiple projective views
paper_content:
We show how a special decomposition of general projection matrices, called canonic, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration, and in the context of structure from motion, which we distinguish clearly), affine, and projective, that we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established, as well as the link between local (i.e. for pairs of views) representations and global (i.e. for a sequence of images) representations.
---
paper_title: A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
paper_content:
We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.
---
paper_title: SUSAN—A New Approach to Low Level Image Processing
paper_content:
This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.
---
paper_title: Shape and motion from image streams under orthography: A factorization approach
paper_content:
Presents the factorization method for recovering scene shape and camera motion from an image stream under orthographic projection: the tracked feature coordinates are collected into a measurement matrix which, after the per-frame centroids are subtracted, has rank at most three, so a singular value decomposition factors it into a motion component and a shape component without requiring a prior model of the camera motion.
---
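A minimal sketch of the rank-3 factorization described above (only the affine factorization step; the metric upgrade that enforces orthonormality of the camera axes is omitted, and the toy check merely verifies that a rank-3 matrix is reproduced):

import numpy as np

def factorize(W):
    # W: registered 2F x P measurement matrix (per-frame centroids already subtracted).
    # Returns motion M (2F x 3) and shape S (3 x P), defined up to a 3x3 linear ambiguity.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return M, S

M0, S0 = np.random.randn(8, 3), np.random.randn(3, 10)   # synthetic rank-3 data
M, S = factorize(M0 @ S0)
print(np.allclose(M @ S, M0 @ S0))                        # True: the rank-3 matrix is recovered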
paper_title: Using vanishing points for camera calibration
paper_content:
In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.
---
paper_title: Multiple view geometry in computer vision
paper_content:
From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
---
paper_title: Determining vanishing points from perspective images
paper_content:
This paper describes a computationally inexpensive algorithm for the determination of vanishing points once line segments in an image have been determined. The approach is particularly attractive since it has no computationally degenerate cases and the only operations necessary are vector cross products and arc tangents. The need to know the distance to the focal plane is also eliminated, thus avoiding tedious calibration procedures.
---
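The cross-product machinery referred to in this entry can be illustrated as below; the segment endpoints are made up, and averaging pairwise intersections is a simplification of the accumulation scheme such detectors actually use.

import numpy as np

def vanishing_point(segments):
    # segments: pairs of homogeneous image points lying on lines that are parallel in the scene.
    lines = [np.cross(p, q) for p, q in segments]          # homogeneous line through two points
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            v = np.cross(lines[i], lines[j])               # intersection of two lines
            if abs(v[2]) > 1e-9:
                pts.append(v / v[2])
    return np.mean(pts, axis=0)

# two converging images of "parallel" scene lines, meeting near (100, 0)
segs = [(np.array([0, 10, 1.0]), np.array([50, 5, 1.0])),
        (np.array([0, -10, 1.0]), np.array([50, -5, 1.0]))]
print(vanishing_point(segs))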
paper_title: Contribution to the determination of vanishing points using Hough transform
paper_content:
We propose a method to locate three vanishing points on an image, corresponding to three orthogonal directions of the scene. This method is based on two cascaded Hough transforms. We show that, even in the case of synthetic images of high quality, a naive approach may fail, essentially because of the errors due to the limitation of the image size. We take into account these errors as well as errors due to detection inaccuracy of the image segments, and provide an efficient method, even in the case of real complex scenes.
---
paper_title: Determining perspective structures using hierarchical Hough transform
paper_content:
In this paper, we present an efficient algorithm determining perspective structures such as vanishing points and horizon lines for indoor scenes. The algorithm is implemented as a hierarchical Hough transform on the pyramidal sphere.
---
paper_title: Using vanishing points for camera calibration
paper_content:
In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.
---
paper_title: Interpreting perspective images
paper_content:
A fundamental problem in computer vision is how to determine the 3-D spatial orientation of curves and surfaces appearing in an image. The problem is generally underconstrained, and is complicated by the fact that metric properties, such as orientation and length, are not invariant under projection. Under perspective projection (the correct model for most real images) the transform is nonlinear, and therefore hard to invert. Two constructive methods are presented. The first finds the orientation of parallel lines and planes by locating vanishing points and vanishing lines. The second determines the orientation of planes by 'backprojection' of two intrinsic properties of contours: angle magnitude and curvature.
---
paper_title: Multiple view geometry in computer vision
paper_content:
From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
---
paper_title: Cooperative computation of stereo disparity
paper_content:
Perhaps one of the most striking differences between a brain and today’s computers is the amount of “wiring.” In a digital computer the ratio of connections to components is about 3, whereas for the mammalian cortex it lies between 10 and 10,000 (1).
---
paper_title: Kruppa's equations derived from the fundamental matrix
paper_content:
The purpose of this paper is to give a specific form for Kruppa's equations in terms of the fundamental matrix. Kruppa's equations can be written explicitly in terms of the singular value decomposition (SVD) of the fundamental matrix.
---
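Since Kruppa-style self-calibration starts from an estimated fundamental matrix, a standard way to obtain one is the normalized eight-point algorithm; the sketch below is that generic linear method (not the derivation in the paper), followed by an ad-hoc synthetic check, and the returned F is defined only up to scale.

import numpy as np

def normalize(pts):
    c = pts.mean(axis=0)                                              # Hartley normalization: zero mean,
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()     # average distance sqrt(2)
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return np.c_[pts, np.ones(len(pts))] @ T.T, T

def fundamental_8pt(x1, x2):
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.array([np.kron(q2, q1) for q1, q2 in zip(p1, p2)])   # rows encode x2^T F x1 = 0
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0]) @ Vt                       # enforce rank 2
    return T2.T @ F @ T1                                        # undo the normalization

# synthetic check: two views of random 3D points, second camera translated along x
X = np.random.randn(12, 3) + [0, 0, 5]
P1 = np.c_[np.eye(3), np.zeros(3)]
P2 = np.c_[np.eye(3), np.array([1.0, 0, 0])]
proj = lambda P, Y: (np.c_[Y, np.ones(len(Y))] @ P.T)[:, :2] / (np.c_[Y, np.ones(len(Y))] @ P.T)[:, 2:]
x1, x2 = proj(P1, X), proj(P2, X)
F = fundamental_8pt(x1, x2)
h = lambda p: np.r_[p, 1.0]
print(max(abs(h(b) @ F @ h(a)) for a, b in zip(x1, x2)))        # close to zero (up to rounding)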
paper_title: Three-dimensional computer vision: a geometric viewpoint
paper_content:
Contents: projective geometry; modelling and calibrating cameras; edge detection; representing geometric primitives and their uncertainty; stereo vision; determining discrete motion from points and lines; tracking tokens over time; motion fields of curves; interpolating and approximating three-dimensional data; recognizing and locating objects and places; answers to problems. Appendices: constrained optimization; some results from algebraic geometry; differential geometry.
---
paper_title: Camera Self-Calibration: Theory and Experiments
paper_content:
The problem of finding the internal orientation of a camera (camera calibration) is extremely important for practical applications. In this paper a complete method for calibrating a camera is presented. In contrast with existing methods it does not require a calibration object with a known 3D shape. The new method requires only point matches from image sequences. It is shown, using experiments with noisy data, that it is possible to calibrate a camera just by pointing it at the environment, selecting points of interest and then tracking them in the image as the camera moves. It is not necessary to know the camera motion.
---
paper_title: Subspace methods for recovering rigid motion I: Algorithm and implementation
paper_content:
As an observer moves and explores the environment, the visual stimulation in his/her eye is constantly changing. Somehow he/she is able to perceive the spatial layout of the scene, and to discern his/her movement through space. Computational vision researchers have been trying to solve this problem for a number of years with only limited success. It is a difficult problem to solve because the optical flow field is nonlinearly related to the 3D motion and depth parameters.
---
paper_title: Computing visual correspondence with occlusions using graph cuts
paper_content:
Several new algorithms for visual correspondence based on graph cuts have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present a new method which properly addresses occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our method performs well both at detecting occlusions and computing disparities.
---
paper_title: Optical Flow Based Structure from Motion
paper_content:
Reconstructing the 3D shape of a scene from its 2D images is a problem that has attracted a great deal of research. 3D models are nowadays widely used for scientific visualization, entertainment and engineering tasks. Most of the approaches developed by the computer vision community can be roughly classified as feature based or flow based, according to whether the data they use is a set of feature matches or an optical flow field. While a dense optical flow field, due to its noisy nature, is not extremely suitable for tracking, finding corresponding features between different views with a large baseline is still an open problem. The system we develop in this thesis is of a hybrid type. We track sparse features over sequences acquired at 25 Hz from a hand-held camera. During the tracking, good features can be selected as those lying in highly textured areas: this guarantees higher precision in the estimation of feature displacements. Such displacements are used to approximate optical flow. We demonstrate that this approximation is a good one for our working conditions. Using this approach we bypass the matching problem of stereo and the complexity and time integration problems of the optical flow based reconstruction. Time integration is obtained by an optimal predict-update procedure that merges measurements by re-weighting them by the respective covariances. Most of the research effort of this thesis is focused on the robust estimation of structure and motion from a pair of images and the related optical flow field. We first test a linear solution that has the appealing property of being of closed form but the problem of returning biased estimates. We propose a non-linear refinement to the linear estimator, showing convergence properties and improvements in bias and variance. We further extend the non-linear estimator to incorporate the optical flow covariance matrix (maximum likelihood) and, moreover, we show that, in the case of dense sequences, it is possible to locally time-integrate the reconstruction process for increased robustness. We experimentally investigate the possibility of introducing geometrical constraints in the structure and motion estimation. Such constraints are of bilinear type, i.e. planes, lines and incidence of these primitives are used. For this purpose we present a new motion-based segmentation algorithm able to automatically detect and reconstruct planar regions. To assess the efficacy of our solution the algorithms were tested on a variety of real and simulated sequences. ISBN 91-7283-308-4 • TRITA-02-11 • ISSN 0348-2952 • ISRN KTH/NA/R 02-11
---
paper_title: Depth from Edge and Intensity Based Stereo
paper_content:
The past few years have seen a growing interest in the application of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation [7,12], automatic surveillance [9], aerial cartography [10,13], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]). This paper describes an algorithm for such stereo sensing. It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and amenable to parallel implementation. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removing those edge correspondences determined to be in error - those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation. The result of the processing is a full image array disparity map of the scene viewed.
---
paper_title: A direct method for stereo correspondence based on singular value decomposition
paper_content:
This paper proposes a new algorithm for matching point features across pairs of images. Despite the well-known combinatorial complexity of the problem, this work shows that an acceptably good solution can be obtained directly by singular value decomposition of an appropriate correspondence strength matrix. The approach draws from the method proposed previously but, besides suggesting its usefulness for stereo matching, in this work a correlation-weighted proximity function is used as correspondence strength to specifically cater for real images.
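To make the construction above concrete, a minimal sketch of the SVD-based pairing idea (the Scott/Longuet-Higgins construction this paper builds on) is given below. It uses only a Gaussian proximity term, whereas the paper weights proximity by correlation; the function name, sigma value and mutual-maximum criterion are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def svd_correspondence(feat1, feat2, sigma=10.0):
        # feat1: (N, 2) and feat2: (M, 2) arrays of feature coordinates
        d2 = ((feat1[:, None, :] - feat2[None, :, :]) ** 2).sum(-1)
        G = np.exp(-d2 / (2 * sigma ** 2))          # correspondence-strength matrix
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        P = U @ Vt                                  # "orthogonalized" strength matrix
        # keep entries that are maxima of both their row and their column
        matches = [(i, j) for i in range(P.shape[0])
                   for j in range(P.shape[1])
                   if P[i, j] == P[i].max() and P[i, j] == P[:, j].max()]
        return matches

Matches are taken where the orthogonalized matrix P is maximal in both its row and its column, the usual mutual-best-match criterion.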
---
paper_title: Volume/surface octrees for the representation of three-dimensional objects
paper_content:
The octree structure for the representation of 3D objects is an extension of the quadtree representation of 2D (binary) images. It is generated from the 3D binary array of the object it represents. However, the acquisition of a 3D array is not a trivial problem. In this study, we propose a scheme to generate an octree of an object from its three orthogonal views (silhouettes) exploiting a volume intersection technique. A multi-level boundary search algorithm is developed to incorporate surface information into the octree representation. This makes the octree representation compact, informative, and especially useful for graphic displays and object recognition tasks. An algorithm is also designed for computing the moment of inertia matrix, which is useful for object recognition. All the algorithms developed in this study are essentially tree traversal procedures and therefore are suitable for implementation on parallel processors.
---
paper_title: Stereo Correspondence Based on Line Matching in Hough Space Using Dynamic Programming
paper_content:
This paper presents a method of using Hough space for solving the correspondence problem in stereo vision. It is shown that the line-matching problem in image space can readily be converted into a point-matching problem in Hough (ρ-θ) space. Dynamic programming can be used for searching the optimal matching, now in Hough space. The combination of multiple constraints, especially the natural embedding of the constraint of figural continuity, ensures the accuracy of the matching. The time complexity for searching in dynamic programming is O(pmn), where m and n are the numbers of the lines for each θ in the pair of stereo images, respectively, and p is the number of all possible line orientations. Since m and n are usually fairly small, the matching process is very efficient. Experimental results from both binocular and trinocular matchings are presented and analyzed.
---
paper_title: A theory of self-calibration of a moving camera
paper_content:
There is a close connection between the calibration of a single camera and the epipolar transformation obtained when the camera undergoes a displacement. The epipolar transformation imposes two algebraic constraints on the camera calibration. If two epipolar transformations, arising from different camera displacements, are available then the compatible camera calibrations are parameterized by an algebraic curve of genus four. The curve can be represented either by a space curve of degree seven contained in the intersection of two cubic surfaces, or by a curve of degree six in the dual of the image plane. The curve in the dual plane has one singular point of order three and three singular points of order two.
---
paper_title: Stratified self-calibration with the modulus constraint
paper_content:
In computer vision and especially for 3D reconstruction, one of the key issues is the retrieval of the calibration parameters of the camera. These are needed to obtain metric information about the scene from the camera. Often these parameters are obtained through cumbersome calibration procedures. There is a way to avoid explicit calibration of the camera. Self-calibration is based on finding the set of calibration parameters which satisfy some constraints (e.g., constant calibration parameters). Several techniques have been proposed but it often proved difficult to reach a metric calibration at once. Therefore, in the paper, a stratified approach is proposed, which goes from projective through affine to metric. The key concept to achieve this is the modulus constraint. It allows retrieval of the affine calibration for constant intrinsic parameters. It is also suited for use in conjunction with scene knowledge. In addition, if the affine calibration is known, it can also be used to cope with a changing focal length.
---
paper_title: Self-Calibration from Image Triplets
paper_content:
We describe a method for determining affine and metric calibration of a camera with unchanging internal parameters undergoing planar motion. It is shown that affine calibration is recovered uniquely, and metric calibration up to a two fold ambiguity.
---
paper_title: Critical motion sequences for monocular self-calibration and uncalibrated Euclidean reconstruction
paper_content:
In this paper sequences of camera motions that lead to inherent ambiguities in uncalibrated Euclidean reconstruction or self-calibration are studied. Our main contribution is a complete, detailed classification of these critical motion sequences (CMS). The practically important classes are identified and their degrees of ambiguity are derived. We also discuss some practical issues, especially concerning the reduction of the ambiguity of a reconstruction.
---
paper_title: Ground Plane Motion Camera Models
paper_content:
We show that it is possible to build an application-specific motion constraint into a general hierarchy of camera models so that the resulting algorithm is efficient and well conditioned; and the system degrades gracefully if the constraint is violated. The current work arose in the context of a specific application, namely smart convoying of vehicles on highways, and so ground plane motion (GPM) arises naturally. The algorithm for estimating motion from monocular, uncalibrated image sequences under ground plane motion involves the computation of three kinds of structure α, β and γ for which algorithms are presented. Typical results on real data are shown.
---
paper_title: Multiple view geometry in computer vision
paper_content:
A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.
---
paper_title: Rapid octree representation from image sequences
paper_content:
A mercury trap apparatus used in connection with a differential pressure meter comprising a plurality of mercury trap receptacles, each of the mercury trap receptacles having a pipe with a T-shaped nozzle inserted therein, and extending a predetermined distance into the enclosed area of the mercury trap receptacle; each of the mercury trap receptacles are connected to a gas pipeline by a high-pressure conduit and with the trap receptacle pipe being connected to a mercury differential pressure meter by a connector conduit.
---
| Title: A Survey of Motion-Parallax-Based 3-D Reconstruction Algorithms
Section 1: INTRODUCTION
Description 1: Provide an overview of computer vision's focus on analyzing visual information, the importance of depth cues like motion parallax, and the paper's goals.
Section 2: CLASSIFICATION OF RECONSTRUCTION ALGORITHMS
Description 2: Discuss the classification of different 3-D reconstruction algorithms and their dependence on camera calibration.
Section 3: REVIEW OF PROJECTIVE GEOMETRY
Description 3: Review the fundamental concepts of projective geometry relevant to 3-D reconstruction, including the pinhole camera model, epipolar geometry, the fundamental matrix, and the infinite homography.
Section 4: PRE-CALIBRATED RECONSTRUCTION
Description 4: Examine 3-D reconstruction algorithms that require prior camera calibration, detailing image-based, voxel-based, and object-based approaches.
Section 5: ONLINE CALIBRATED RECONSTRUCTION
Description 5: Explore reconstruction algorithms that do not require prior camera calibration, focusing on projective reconstruction, calibration using scene constraints, and calibration using geometric constraints.
Section 6: CONCLUSION
Description 6: Summarize the surveyed algorithms and discuss open issues and future research directions in motion-parallax-based 3-D reconstruction. |
A Survey on Hough Transform, Theory, Techniques and Applications | 6 | ---
paper_title: Use of the Hough transformation to detect lines and curves in pictures
paper_content:
Hough has proposed an interesting and computationally efficient procedure for detecting lines in pictures. This paper points out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further. It also shows how the method can be used for more general curve fitting, and gives alternative interpretations that explain the source of its efficiency.
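A minimal sketch of the angle-radius voting scheme described above, in which every edge pixel votes along the sinusoid rho = x cos(theta) + y sin(theta); the grid sizes and the function name are illustrative assumptions, not taken from the paper.

    import numpy as np

    def hough_lines(edge_points, img_diag, n_theta=180, n_rho=400):
        # Accumulate votes in the (rho, theta) parameter space.
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rhos = np.linspace(-img_diag, img_diag, n_rho)
        acc = np.zeros((n_rho, n_theta), dtype=np.int32)
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        for x, y in edge_points:
            r = x * cos_t + y * sin_t            # rho = x cos(theta) + y sin(theta)
            idx = np.round((r + img_diag) / (2 * img_diag) * (n_rho - 1)).astype(int)
            acc[idx, np.arange(n_theta)] += 1    # one vote per sampled theta
        return acc, thetas, rhos

Peaks of the accumulator correspond to lines; the bounded (rho, theta) space is exactly what makes this parameterization preferable to slope-intercept.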
---
paper_title: A survey of the Hough transform
paper_content:
We present a comprehensive review of the Hough transform, HT, in image processing and computer vision. It has long been recognized as a technique of almost unique promise for shape and motion analysis in images containing noisy, missing, and extraneous data but its adoption has been slow due to its computational and storage complexity and the lack of a detailed understanding of its properties. However, in recent years much progress has been made in these areas. In this review we discuss ideas for the efficient implementation of the HT and present results on the analytic and empirical performance of various methods. We also report the relationship of Hough methods and other transforms and consider applications in which the HT has been used. It seems likely that the HT will be an increasingly used technique and we hope that this survey will provide a useful guide to quickly acquaint researchers with the main literature in this research area.
---
paper_title: A survey of efficient Hough transform methods
paper_content:
The Hough Transform, HT, has long been recognised as a technique of almost unique promise for shape and motion analysis in images containing noisy, missing and extraneous data. However its widespread adoption in practical systems has been slow due to its computational complexity, the need for large storage arrays and the lack of a detailed understanding of its properties. The aims of Alvey project MMI/IP078 are • the investigation of the properties of HTs • the study of efficient implementations of HTs • the production of a HT hardware device for use in real time industrial inspection systems. • the development of general high level image interpretation strategies which use information derived from the HT. The first three items listed above are being studied at Surrey University in collaboration with Computer Recognition Systems while the final item is under investigation at Heriot Watt University. There has recently been much activity concerned with space and time efficient implementations of HT ideas and this paper presents a survey of this area. Topics covered include the development of new algorithms, the use of optical techniques and the implementation of HTs on novel hardware architectures.
---
paper_title: Vanishing point detection in corridors: using hough transform and K-means clustering
paper_content:
One of the main challenges in steering a vehicle or a robot is the detection of appropriate heading. Many solutions have been proposed during the past few decades to overcome the difficulties of intelligent navigation platforms. In this study, the authors try to introduce a new procedure for finding the vanishing point based on the visual information and K-Means clustering. Unlike other solutions the authors do not need to find the intersection of lines to extract the vanishing point. This has reduced the complexity and the processing time of our algorithm to a large extent. The authors have imported the minimum possible information to the Hough space by using only two pixels (the points) of each line (start point and end point) instead of hundreds of pixels that form a line. This has reduced the mathematical complexity of our algorithm while maintaining very efficient functioning. The most important and unique characteristic of our algorithm is the usage of processed data for other important tasks in navigation such as mapping and localisation.
---
paper_title: A New Vanishing Point Detection Algorithm Based on Hough Transform
paper_content:
Detecting vanishing points in digital images is important for 3D reconstruction of a scene using uncalibrated cameras. A new vanishing point detection algorithm based on applying the Hough Transform twice is presented in this paper. The Hough Transform has been recognized as one of the most popular methods for the detection of straight lines. Firstly, the straight lines in the image are detected by a first Hough Transform. Secondly, using a coordinate transform, points on a circle are mapped to points on a line. Finally, to detect that line, the Hough Transform is applied again. Thus, the positions of vanishing points can be calculated from the corresponding line parameters in polar coordinates. Experimental results using real images of a scene demonstrate that the proposed algorithm is effective and can recover the vanishing point.
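For contrast with the two-pass scheme above, the sketch below shows the generic baseline these methods refine: intersecting concurrent lines directly. Given lines detected in (rho, theta) form, the vanishing point is the least-squares solution of x cos(theta_i) + y sin(theta_i) = rho_i. The function name and the use of a plain least-squares solver are assumptions for illustration.

    import numpy as np

    def vanishing_point(lines):
        # lines: iterable of (rho, theta) pairs for concurrent image lines
        A = np.array([[np.cos(t), np.sin(t)] for _, t in lines])
        b = np.array([r for r, _ in lines])
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares intersection
        return sol                                    # (x, y) of the vanishing point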
---
paper_title: Probabilistic and non-probabilistic Hough transforms: overview and comparisons
paper_content:
A new and efficient version of the Hough transform for curve detection, the Randomized Hough Transform (RHT), has been recently suggested. The RHT selects n pixels from an edge image by random sampling to solve n parameters of a curve and then accumulates only one cell in a parameter space. In this paper, the RHT is related to other recent developments of the Hough transform. Hough transform methods are divided into two categories: probabilistic and non-probabilistic methods. An overview of these variants is given. Some novel extensions of the RHT are proposed to improve the RHT for complex and noisy images. These new versions of the RHT, called the Dynamic RHT, and the Window RHT with its variants, use local information of the edge image. They apply the RHT process to a limited neighbourhood of edge pixels. Tests in line detection with synthetic and real-world images demonstrate the high speed and low memory usage of the new extensions, as compared both to the basic RHT and other versions of the Hough transform.
---
paper_title: Vanishing points in point-to-line mappings and other line parameterizations
paper_content:
Some variants of the Hough transform can be used for detecting vanishing points and groups of concurrent lines. This article addresses a common misconception that in the polar line parameterization the vanishing point is represented by a line. The numerical error caused by this inaccuracy is then estimated. The article studies in detail point-to-line-mappings (PTLMs) - a class of line parameterizations which have the property that the vanishing point is represented by a line (and thus can be easily searched for). When a PTLM parameterization is used for the straight line detection by the Hough transform, a pair or a triplet of complementary PTLMs has to be used in order to obtain a limited Hough space. The complementary pairs and triplets of PTLMs are formalized and discussed in this article.
---
paper_title: Detecting corners using the 'patchy' Hough transform
paper_content:
Relative vehicle motion estimation continues to be difficult. Current approaches are generally based on either optical flow or tracking feature points. In heavily carpentered environments such as institutional settings, the feature-point tracking approach requires significantly less computational effort. Algorithms which result in reduced computational effort are very attractive for the autonomous, vision-guided wheelchair project currently underway in our laboratory. We present a new method for detecting corners as feature points using a 'patchy' Hough transform. Corners are detected as intersections of straight lines extracted with the Hough transform. The image is initially segmented into patches, and only the intersections that occur within a patch due to edges also within the patch are deemed to be corners. This helps to avoid the localization problem inherent in the normal Hough transform. The use of patches also reduces the tendency of the Hough transform to ignore short edges because they are 'swamped' by longer ones. Edge pixels are initially enhanced by the Sobel operator, then detected using a histogram-derived threshold. The thresholded images are 'thinned' to produce edges of single-pixel width so as to avoid the line orientation ambiguity that arises with thick edges. The Hough transform is then computed for each patch. Peaks in the Hough accumulator array are generally poorly defined, with many adjacent accumulator 'bins' having similar values. To sharpen peaks, the Hough accumulator array is convolved with an 'annulus' kernel. The peaks are then detected as the local maxima of the convolution result. Finally, the intersections of all possible combinations of lines corresponding to the detected peaks are found. If the points of intersection lay inside the current patch, the lines are deemed to intersect. Algorithm performance is demonstrated with both synthetic and real image data.
---
paper_title: A gridding Hough transform for detecting the straight lines in sports video
paper_content:
A gridding Hough transform (GHT) is proposed to detect the straight lines in sports video, which is much faster and requires much less memory than the previous Hough transforms. The GHT uses the active gridding to replace the random point selection in the random Hough transforms because forming the linelets from the actively selected points is easier than from the randomly selected points. Existing straight-line Hough transforms require a lot of resources because they were designed for all kinds of straight lines. Considering the fact that the straight lines interested in sports video are long and sparse, this paper proposes two techniques: the active gridding and linelets process. On account of these two techniques, the proposed GHT is fast and uses little memory. The experimental results show that the proposed GHT is faster than the random Hough transform (RHT) and the standard Hough transform (SHT) by 30% and 700% respectively and achieves a 97.5% recall, higher than those achieved by either the SHT or the RHT.
---
paper_title: Robust Detection of Lines Using the Progressive Probabilistic Hough Transform
paper_content:
In the paper we present the progressive probabilistic Hough transform (PPHT). Unlike the probabilistic HT, where the standard HT is performed on a preselected fraction of input points, the PPHT minimizes the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the probabilistic HT; it is a function of the inherent complexity of data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection are interleaved. The most salient features are likely to be detected first. While retaining its robustness, experiments show that the PPHT has, in many circumstances, advantages over the standard HT.
---
paper_title: On improving the accuracy of the Hough transform: theory, simulations, and experiments
paper_content:
The authors present two methods for very-high-precision estimation of straight-line parameters from the Hough transform and compare them with the standard method of taking the absolute peak in the Hough array and with least-squares fitting using both extensive simulation and a number of tests with real target images. Both methods use preprocessing and interpolation in the Hough array, and are based on compensating for effects that cause a spreading of the peak in Hough space. By interpolation, the authors achieve accuracy better than the accumulator cell size. A complete set of simulations show that the two methods produce similar results, which are much better than taking the absolute peak in Hough space. They also compare well with least-square fitting, which was considered optimal in the case of zero mean noise. Results of experiments with real images are reported, confirming that the Hough transform can yield very accurate results, almost as good as least-squares fitting for zero mean noise.
---
paper_title: Line detection in images showing significant lens distortion and application to distortion correction
paper_content:
Lines are one of the basic primitives used by the perceptual system to analyze and interpret a scene. Therefore, line detection is a very important issue for the robustness and flexibility of Computer Vision systems. However, in the case of images showing a significant lens distortion, standard line detection methods fail because lines are not straight. In this paper we present a new technique to deal with this problem: we propose to extend the usual Hough representation by introducing a new parameter which corresponds to the lens distortion, in such a way that the search space is a three-dimensional space, which includes orientation, distance to the origin and also distortion. Using the collection of distorted lines which have been recovered, we are able to estimate the lens distortion, remove it and create a new distortion-free image by using a two-parameter lens distortion model. We present some experiments in a variety of images which show the ability of the proposed approach to extract lines in images showing a significant lens distortion.
---
paper_title: The Cascaded Hough Transform as Support for Grouping and Finding Vanishing Points and Lines
paper_content:
In the companion paper [7] a grouping strategy with a firm geometrical underpinning and without the problem of combinatorics is proposed. It is based on the exploitation of structures that remain fixed under the transformations that relate corresponding contour segments in regular patterns. In this paper we present a solution for the complementary task of extracting these fixed structures in an efficient and non-combinatorial way, based on the iterated application of the Hough transform. Apart from grouping, this 'Cascaded Hough Transform' or CHT for short can also be used for the detection of straight lines, vanishing points and vanishing lines.
---
paper_title: Progressive probabilistic Hough transform for line detection
paper_content:
We present a novel Hough Transform algorithm referred to as the Progressive Probabilistic Hough Transform (PPHT). Unlike the Probabilistic HT, where the Standard HT is performed on a pre-selected fraction of input points, the PPHT minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the probabilistic HT; it is a function of the inherent complexity of the input data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection are interleaved. The most salient features are likely to be detected first. Experiments show that in many circumstances the PPHT has advantages over the Standard HT.
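A progressive probabilistic line detector of this kind is exposed in OpenCV as HoughLinesP; the snippet below is an illustrative invocation only, and the file name, Canny thresholds and all Hough parameter values are assumptions rather than values from the paper.

    import cv2
    import numpy as np

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)

The minLineLength and maxLineGap parameters reflect the interleaving of voting and segment extraction described above: segments are reported as soon as enough support accumulates.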
---
paper_title: Hough-transform detection of lines in 3-D space
paper_content:
Detecting straight lines in 3-D space using a Hough transform approach involves a 4-D parameter space, which is cumbersome. In this paper, we show how to detect families of parallel lines in 3-D space at a moderate computational cost by using a (2+2)-D Hough space. We first find peaks in the 2-D slope parameter space; for each of these peaks, we then find peaks in the intercept parameter space. Our experimental results on range images of boxes and blocks indicate that the method works quite well.
---
paper_title: A windowing approach to detecting line segments using Hough transform
paper_content:
A new approach for using the Hough transform to detect line segments is presented. This approach is efficient in both space and time. Strategies combining the features of the intersection point [Ben-Tzvi, Leavers and Sandler, Proc. 5th Intl. Conf. Image Anal., 152–159 (1990); Xu, Oja and Kultanen, Pattern Recognition Lett. 11, 331–338 (1990)] and dual plane [Conker, Comput. Vis. Graphics Image Process. 43, 115–132 (1988)] methods are used to calculate the Hough transform. A dense set of small overlapping windows is used to restrict the pairs of image pixels that are evaluated. Experimental results indicate that this method reduces the time and space requirements significantly.
---
paper_title: Hough transform modified by line connectivity and line thickness
paper_content:
A modified Hough transform based on a likelihood principle of connectivity and thickness is proposed for line detection. It makes short as well as thick line segments easier to detect in a noisy image. Certain desirable properties of the new method are justified by theory and simulations.
---
paper_title: Statistical Hough Transform
paper_content:
The standard Hough transform is a popular method in image processing and is traditionally estimated using histograms. Densities modeled with histograms in high dimensional space and/or with few observations, can be very sparse and highly demanding in memory. In this paper, we propose first to extend the formulation to continuous kernel estimates. Second, when dependencies in between variables are well taken into account, the estimated density is also robust to noise and insensitive to the choice of the origin of the spatial coordinates. Finally, our new statistical framework is unsupervised (all needed parameters are automatically estimated) and flexible (priors can easily be attached to the observations). We show experimentally that our new modeling encodes better the alignment content of images.
---
paper_title: A comparative study of Hough transform methods for circle finding
paper_content:
A variety of circle detection methods which are based on variations of the Hough Transform are investigated. The five methods considered are the standard Hough Transform, the Fast Hough Transform of Li et al. [1], a two-stage Hough method, and two space-saving approaches based on the method devised by Gerig and Klein [2]. The performance of each of the methods has been compared on synthetic imagery and real images from a metallurgical application. Figures and comments are presented concerning the accuracy, reliability, computational efficiency and storage requirements of each of the methods.
---
paper_title: A new concentric circle detection method based on Hough transform
paper_content:
A new approach to concentric circle detection is proposed in this paper. First, the image is preprocessed by denoising and edge detection; then the circle centers are located by a gradient Hough transform, and finally the radii are detected by an improved one-dimensional Hough transform. The detection efficiency is enhanced by image discretization and reduced resolution in the circle-center detection step, exploiting the fact that the circle center lies on the gradient line of the circle edge points; meanwhile, the radius detection accuracy is improved by merging similar radii in the radius detection step. Experimental results show that the method combining the gradient Hough transform and the one-dimensional Hough transform is reliable and highly robust to noise, distortion, incomplete regions and discontinuous edges. Analysis shows that this new concentric circle detection algorithm reduces time complexity and improves interference resistance compared with the traditional concentric circle detection algorithm based on the chord-midpoint Hough transform.
---
paper_title: Randomized Hough transform (RHT): basic mechanisms, algorithms, and computational complexities
paper_content:
Recently, a new curve detection approach called the randomized Hough transform (RHT) was heuristically proposed by the authors, inspired by the efforts of using neural computation learning techniques for curve detection. The preliminary experimental results and some qualitative analysis showed that in comparison with the Hough transform (HT) and its variants, the RHT has advantages of fast speed, small storage, infinite range of the parameter space, and high parameter resolution, and it can overcome several difficulties encountered with the HT methods. In this paper, the basic ideas of RHT are further developed into a more systematic and theoretically supported new method for curve detection. The fundamental framework and the main components of this method are elaborated. The advantages of RHT are further confirmed. The basic mechanisms behind these advantages are exposed by both theoretical analysis and detailed experimental demonstrations. The main differences between RHT and some related techniques are elucidated. This paper also proposes several improved algorithms for implementing RHT for curve detection problems in noisy images. They are tested by experiments on images with various kinds of strong noise. The results show that the advantages of RHT are quite robust. Moreover, the implementations of these algorithms are modeled by a generalized Bernoulli process, allowing probability analysis on these algorithms to estimate their computational complexities and to decide some important parameters for their implementations. It is shown quantitatively that the complexities are considerably smaller than those of the HT.
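A minimal sketch of the many-to-one mapping described above, for the line case: a random pair of edge points is converted to a single (theta, rho) cell and one accumulator entry is incremented, instead of tracing a whole sinusoid per pixel. The quantization step, trial count and vote threshold are illustrative assumptions.

    import math
    import random
    from collections import defaultdict

    def rht_lines(edge_pts, n_trials=5000, q=0.01, min_votes=15):
        # edge_pts: list of (x, y) edge-pixel coordinates
        acc = defaultdict(int)
        found = []
        for _ in range(n_trials):
            (x1, y1), (x2, y2) = random.sample(edge_pts, 2)
            theta = math.atan2(x1 - x2, y2 - y1) % math.pi   # direction of the line normal
            rho = x1 * math.cos(theta) + y1 * math.sin(theta)
            cell = (round(theta / q) * q, round(rho))        # coarse quantization of one cell
            acc[cell] += 1
            if acc[cell] == min_votes:                       # enough converging samples -> report line
                found.append(cell)
        return found

Because the accumulator is a sparse dictionary rather than a dense array, the storage advantage over the standard transform follows directly.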
---
paper_title: Efficient randomized Hough transform for circle detection using novel probability sampling and feature points
paper_content:
This paper presents an efficient randomized Hough transform algorithm for circle detection. It optimizes the methods for determining sample points and finding candidate circles. Due to these two optimizations, sampling validity is improved and many false circles are prevented from being regarded as candidate circles. Experimental results demonstrate that the proposed algorithm, which features strong robustness and high resolution, can dramatically speed up circle detection as compared to other algorithms. It can also be applied to detect ellipses.
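The core geometric step in RHT-style circle detection is solving for the circle through a sampled point triple; a sketch of that computation is given below, using the linear system obtained by subtracting the circle equations pairwise. The function name and collinearity tolerance are assumptions, and the paper's specific sampling and candidate-checking rules are not reproduced here.

    import numpy as np

    def circle_from_3_points(p1, p2, p3):
        # Solve for the circle through three non-collinear points.
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        A = np.array([[x2 - x1, y2 - y1],
                      [x3 - x1, y3 - y1]], dtype=float)
        b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                            x3**2 - x1**2 + y3**2 - y1**2])
        if abs(np.linalg.det(A)) < 1e-9:       # collinear sample, reject and resample
            return None
        cx, cy = np.linalg.solve(A, b)         # circle centre
        r = np.hypot(x1 - cx, y1 - cy)         # radius
        return cx, cy, r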
---
paper_title: Detecting square-shaped objects using the Hough transform
paper_content:
Finding squares or circles in an image from a camera is difficult because the projections from a conventional camera, or even a simple webcam, are degraded by environmental noise and shadows. This makes the detected square appear slightly larger and leaves fractures in its edges, so when a robot tries to reach the center coordinates of the square it fails to pick the object up, because the coordinates computed for the detected shape are wrong. Therefore, this paper tries to find the square object in a camera image using the Hough transform and to shrink it to a smaller polygon; the coordinates of the center of mass of this smaller polygon, which lies inside the main square, are then taken as the coordinates of the square.
---
paper_title: Finding circles by an array of accumulators
paper_content:
We describe an efficient procedure for detecting approximate circles and approximately circular arcs of varying gray levels in an edge-enhanced digitized picture. This procedure is an extension and improvement of the circle-finding concept sketched by Duda and Hart [2] as an extension of the Hough straight-line finder [6].
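A sketch of the accumulator-array idea with the gradient-direction restriction: each edge pixel votes only for candidate centres lying along its gradient direction, one accumulator plane per candidate radius. Array layout, names and the two-sided vote are illustrative assumptions rather than the paper's exact procedure.

    import numpy as np

    def vote_circle_centres(edge_pts, grad_dirs, radii, shape):
        # edge_pts: list of (x, y); grad_dirs: gradient angles in radians; shape: (H, W)
        H, W = shape
        acc = np.zeros((len(radii), H, W), dtype=np.int32)
        for (x, y), phi in zip(edge_pts, grad_dirs):
            for k, r in enumerate(radii):
                for s in (+1, -1):                         # centre may lie on either side of the edge
                    cx = int(round(x + s * r * np.cos(phi)))
                    cy = int(round(y + s * r * np.sin(phi)))
                    if 0 <= cx < W and 0 <= cy < H:
                        acc[k, cy, cx] += 1
        return acc

Restricting votes to the gradient direction is what cuts each pixel's contribution from a full circle of candidate centres down to two points per radius.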
---
paper_title: The randomized-Hough-transform-based method for great-circle detection on sphere
paper_content:
We propose a randomized-Hough-transform-based method for the detection of great circles on a sphere. We first define transformations from images acquired by central cameras to images on the unit sphere, that is, spherical images. Using the transformations, it is possible to normalize all central-camera images to the spherical image. Therefore, spherical image analysis is a fundamental study for image analysis of central cameras. For geometrical analysis and reconstruction of a three-dimensional space from spherical images, great circles on a sphere are an essential feature since a great circle on a sphere corresponds to a line on a plane in a space. For great-circle detection, we formulate the randomized Hough transform on the basis of the geometric duality of a point and a great circle on a sphere. Finally, as an extension of the randomized Hough transform on a sphere, we propose a method for great-circle detection using a continuous spherical Hough space.
---
paper_title: Circle recognition through a 2D Hough Transform and radius histogramming
paper_content:
We present a two-step algorithm for the recognition of circles. The first step uses a 2D Hough Transform for the detection of the centres of the circles and the second step validates their existence by radius histogramming. The 2D Hough Transform technique makes use of the property that every chord of a circle passes through its centre. We present results of experiments with synthetic data demonstrating that our method is more robust to noise than standard gradient based methods. The promise of the method is demonstrated with its application on a natural image and on a digitized mammogram.
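The second (validation) step can be sketched as a simple distance histogram around a candidate centre: a genuine circle produces a sharp peak at its radius. The bin width, maximum radius and return convention below are assumptions for illustration.

    import numpy as np

    def radius_histogram(edge_pts, centre, r_max, bin_width=1.0):
        # edge_pts: (N, 2) array of edge-pixel coordinates; centre: candidate (x, y)
        d = np.hypot(edge_pts[:, 0] - centre[0], edge_pts[:, 1] - centre[1])
        hist, edges = np.histogram(d, bins=int(r_max / bin_width), range=(0, r_max))
        k = hist.argmax()
        return 0.5 * (edges[k] + edges[k + 1]), hist[k]   # radius estimate and its support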
---
paper_title: Size Invariant Circle Detection
paper_content:
The Circle Hough Transform (CHT) has become a common method for circle detection in numerous image processing applications. Various modifications to the basic CHT operation have been suggested which include: the inclusion of edge orientation, simultaneous consideration of a range of circle radii, use of a complex accumulator array with the phase proportional to the log of radius, and the implementation of the CHT as filter operations. However, there has also been much work recently on the definition and use of invariance filters for object detection including circles. The contribution of the work presented here is to show that a specific combination of modifications to the CHT is formally equivalent to applying a scale invariant kernel operator. This work brings together these two themes in image processing which have herewith been quite separate. Performance results for applying various forms of CHT filters incorporating some or all of the available modifications, along with results from the invariance kernel, are included. These are in terms of an analysis of the peak width in the output detection array (with and without the presence of noise), and also an analysis of the peak position in terms of increasing noise levels. The results support the equivalence between the specific form of the CHT developed in this work and the invariance kernel.
---
paper_title: Spherical parameter detection based on hierarchical Hough transform
paper_content:
This paper addresses the detection of the parameters of a sphere (center coordinates and radius) based on a stack of CT slices. It proposes a new hierarchical Hough transform approach. In the first step, all slices are considered sequentially and a 2D accumulator array is used to obtain the coordinates (x_0, y_0), the projection of the sphere center onto each X-Y plane. In this step, a new type of 2D Hough transform for circle or circular-arc detection, based on an effective point filtering, is also proposed. In the second step, the radii of the circles in the different slices are obtained using 1D accumulator arrays. In the last step, the coordinate z_0 and the radius R of the sphere are acquired using a 2D planar Hough transform based on the correlation between the circle radii, the z coordinates of the slices and the sphere radius. The hierarchical Hough transform is applied to analyze the structure of the femoral head of human hip joints. Compared to established Hough transform techniques for 3D object detection, the hierarchical Hough transform significantly reduces storage space and calculation time, and it is robust to noise in the images.
---
paper_title: A convolution approach to the circle Hough transform for arbitrary radius
paper_content:
The Hough transform is a well-established family of algorithms for locating and describing geometric figures in an image. However, the computational complexity of the algorithm used to calculate the transform is high when used to target complex objects. As a result, the use of the Hough transform to find objects more complex than lines is uncommon in real-time applications. We describe a convolution method for calculating the Hough transform for finding circles of arbitrary radius. The algorithm operates by performing a three-dimensional convolution of the input image with an appropriate Hough kernel. The use of the fast Fourier transform to calculate the convolution results in a Hough transform algorithm with reduced computational complexity and thus increased speed. Edge detection and other convolution-based image processing operations can be incorporated as part of the transform, which removes the need to perform them with a separate pre-processing or post-processing step. As the Discrete Fourier Transform implements circular convolution rather than linear convolution, consideration must be given to padding the input image before forming the Hough transform.
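A sketch of the convolution formulation for a single radius: the binary edge map is convolved with an annulus kernel in the Fourier domain, which yields the same accumulator plane as direct voting. It relies on scipy.signal.fftconvolve; the kernel thickness and the use of 'same' mode (rather than the explicit padding discussed above) are illustrative choices, and the paper's fused edge-detection filtering is omitted.

    import numpy as np
    from scipy.signal import fftconvolve

    def cht_via_convolution(edge_map, radius, thickness=1.0):
        # Accumulator for one radius as edge_map convolved with an annulus kernel.
        size = int(2 * (radius + thickness)) + 1
        yy, xx = np.mgrid[:size, :size] - size // 2
        dist = np.hypot(xx, yy)
        kernel = (np.abs(dist - radius) <= thickness).astype(float)
        return fftconvolve(edge_map.astype(float), kernel, mode="same")

Running this once per candidate radius stacks into the three-dimensional Hough kernel convolution described in the abstract.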
---
paper_title: Object Detection using Circular Hough Transform
paper_content:
In this study we propose a new system to detect the object from an input image. The proposed system first uses the separability filter proposed by Fukui and Yamaguchi (Trans. IEICE Japan J80-D-II. 8, 2170-2177, 1997) to obtain the best object candidates and next, the system uses the Circular Hough Transform (CHT) to detect the presence of circular shape. The main contribution of this work consists of using together two different techniques in order to take advantages from the peculiarity of each of them. As the results of the experiments, the object detection rate of the proposed system was 96% for 25 images by moving the circle template every 20 pixels to right and down.
---
paper_title: A FAST RANDOMIZED HOUGH TRANSFORM FOR CIRCLE/CIRCULAR ARC RECOGNITION
paper_content:
The main drawbacks of the Hough transform (HT) are its heavy computation and storage requirements. To alleviate these drawbacks, the randomized Hough transform (RHT) was proposed. However, the RHT is not well suited to complex images because the probability of sampling points that all lie on the target curve is too low. In this paper, we propose a fast randomized Hough transform for circle/circular-arc detection. We pick one point at random to be the seed point and then apply a checking rule to confirm whether the seed point lies on a true circle. Compared with previous techniques, the proposed method requires less computational time and is more suitable for complex images. In the experiments, synthetic and real images are used to show the effectiveness of the proposed method.
---
paper_title: An Algebraic Approach to Hough Transforms
paper_content:
The main purpose of this paper is to lay the foundations of a general theory which encompasses the features of the classical Hough transform and extends them to general algebraic objects such as affine schemes. The main motivation comes from problems of detection of special shapes in medical and astronomical images. The classical Hough transform has been used mainly to detect simple curves such as lines and circles. We generalize this notion using reduced Gröbner bases of flat families of affine schemes. To this end we introduce and develop the theory of Hough regularity. The theory is highly effective and we give some examples computed with CoCoA (see [1]).
---
paper_title: Detection of ellipses by finding lines of symmetry in the images via a Hough transform applied to straight lines
paper_content:
Through the use of a global geometric symmetry, a method for detecting ellipses is proposed in this paper. Based on the geometric symmetry, the proposed method first locates candidate ellipse centers. Meanwhile, according to these candidate centers, all feature points in an input image are grouped into several subimages. Then, for each subimage, by using geometric properties again, all ellipses are extracted. The method significantly reduces the time required to evaluate all possible parameters without using edge direction information. Experimental results are given to show the correctness and effectiveness of the proposed method.
---
paper_title: Fast Ellipse Detection Algorithm Using Hough Transform on the GPU
paper_content:
GPUs (Graphics Processing Units) are specialized microprocessors that accelerate 3D or 2D graphics operations. Recent GPUs, which have many processing units connected with a global memory, can be used for general purpose parallel computation. To utilize this powerful computing ability, GPUs are widely used for general purpose computing. The main purpose of this paper is to present an ellipse detection algorithm based on the Hough transform. The key feature of our algorithm is that, to reduce computational time and space, the parameter space of the Hough transform is decomposed so that each parameter is computed in turn. We also implemented our algorithm on a modern GPU system. The experimental results show that, for an input image of size 2040×2040, our GPU implementation can achieve a speedup factor of approximately 64 over the sequential implementation without GPU support.
---
paper_title: Detection of incomplete ellipse in images with strong noise by iterative randomized Hough transform (IRHT)
paper_content:
An iterative randomized Hough transform (IRHT) is developed for detection of incomplete ellipses in images with strong noise. The IRHT iteratively applies the randomized Hough transform (RHT) to a region of interest in the image space. The region of interest is determined from the latest estimate of the ellipse parameters. The IRHT "zooms in" on the target curve by iterative parameter adjustments and reciprocating use of the image and parameter spaces. During the iteration process, noise pixels are gradually excluded from the region of interest, and the estimate becomes progressively closer to the target. The IRHT retains the advantages of the RHT of high parameter resolution, computational simplicity and small storage, while overcoming the noise susceptibility of the RHT. Multiple instances of ellipses can be sequentially detected. The IRHT was first tested for ellipse detection with synthesized images. It was then applied to fetal head detection in medical ultrasound images. The results demonstrate that the IRHT is a robust and efficient ellipse detection method for real-world applications.
---
paper_title: Randomized Hough Transform for Ellipse Detection with Result Clustering
paper_content:
Our research is focused on the development of robust machine vision algorithms for pattern recognition. We want to provide robotic systems with the ability to understand more about the external real world. In this paper, we describe a method for detecting ellipses in real world images using the randomized Hough transform with result clustering. A preprocessing phase is used in which real world images are transformed - noise reduction, greyscale transform, edge detection and final binarization - in order to be processed by the actual ellipse detector. The ellipse detector filters out false ellipses that may interfere with the final results. Because several "virtual" ellipses are usually detected for one "real" ellipse, a data clustering scheme is used; the clustering method classifies all detected "virtual" ellipses into their corresponding "real" ellipses. The post-processing phase is similar to vector quantization (VQ) and also finds the actual number of classes, which is unknown a priori.
---
paper_title: A new curve detection method: Randomized Hough transform (RHT)
paper_content:
A new method is proposed for curve detection. For a curve with n parameters, instead of transforming one pixel into a hypersurface of the n-D parameter space as the HT and its variants do, we randomly pick n pixels and map them into one point in the parameter space. In comparison with the HT and its variants, our new method has the advantages of small storage, high speed, infinite parameter space and arbitrarily high resolution. The preliminary experiments have shown that the new method is quite effective.
---
paper_title: Genetically fine-tuning the Hough transform feature space, for the detection of circular objects
paper_content:
Despite certain inherent advantages of the Hough transform (HT), it may result in inaccurate estimates of the detected parameters, in the case of excessively noisy images. In this work, we present an original method for fine-tuning the feature space for the HT using genetic algorithms (GAs). The aim is to find a subset of features that best describe the instances of the sought shape, so that the HT accumulator is contaminated the least by noisy information. A hybrid GA/HT system is configured, by embedding the HT module into the GA, which simultaneously performs feature space fine-tuning and shape detection. Illustrative examples show that the system is capable of recovering instances with high accuracy from very noisy images where standard HT variations falter.
---
paper_title: Generalized Hough Transform Using Regions with Homogeneous Color
paper_content:
A novel generalized Hough transform algorithm which makes use of the color similarity between homogeneous segments as the voting criterion is proposed in this paper. The input of the algorithm is some regions with homogeneous color. These regions are obtained by first pre-segmenting the image using the morphological watershed algorithm and then refining the resultant outputs by a region merging algorithm. Region pairs belonging to the object are selected to generate entries of the reference table for the Hough transform. Every R-table entry stores a relative color between the selected region pairs. This is done in order to compute the color similarity and in turn generate votes during the voting process and some relevant information to recover the transformation parameters of the object. Based on the experimental results, our algorithm is robust to change of illumination, occlusion and distortion of the segmentation output. It recognizes objects which were translated, rotated, scaled and even located in a complex environment.
---
paper_title: A New Generalized Hough Transform for the Detection of Irregular Objects
paper_content:
In this paper, we introduce a new generalized Hough transform for the recognition of nonanalytic objects in a 2-D image. The main idea of our approach is to use pairs of boundary points with the same gradient angle to derive some rotation-invariant parameters to effect the fast Hough transform. Each vote in the Hough domain is contributed by a pair of edge pixels with the same gradient angle. The primary obstacle to using Hough techniques, a large memory requirement, is overcome by our new voting approach. The conventional 4-D Hough domain is significantly reduced to a 2-D domain. This approach provides an easy method for determining the parameters of the object in question and an extremely effective solution for eliminating false votes in the transform.
---
paper_title: ESPRIT-estimation of signal parameters via rotational invariance techniques
paper_content:
An approach to the general problem of signal parameter estimation is described. The algorithm differs from its predecessor in that a total least-squares rather than a standard least-squares criterion is used. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC.
---
paper_title: Gray-scale hough transform for thick line detection in gray-scale images
paper_content:
A new extension of the Hough transform, called gray-scale Hough transform (GSHT), is proposed for detecting thick lines (called bands) in a gray-scale image. The use of the conventional Hough transform (CHT) usually requires the preprocessing steps of thresholding and edge detection (or thinning) before the transform can be performed on a gray-scale image containing bands. This causes loss of useful gray and position relationship existing among the pixels of a band, and requires certain postprocessing steps to recover the band in the original image. The proposed GSHT with a gray-scale image as the direct input removes this shortcoming, requiring neither preprocessing nor postprocessing step in detecting bands in a gray-scale image. Experimental results, showing the feasibility of the proposed approach, are also included.
---
paper_title: Spatial color histogram based center voting method for subsequent object tracking and segmentation
paper_content:
In this paper, we introduce an algorithm for object tracking in video sequences. In order to represent the object to be tracked, we propose a new spatial color histogram model which encodes both the color distribution and spatial information. Using this spatial color histogram model, a voting method based on the generalized Hough transform is employed to estimate the object location from frame to frame. The proposed voting based method, called the center voting method, requests every pixel near the previous object center to cast a vote for locating the new object center in the new frame. Once the location of the object is obtained, the back projection method is used to segment the object from the background. Experiment results show successful tracking of the object even when the object being tracked changes in size and shares similar color with the background.
---
paper_title: Generalized Hough Transform for Shape Matching
paper_content:
In this paper we propose a novel approach towards shape matching for image retrieval. The system takes advantage of the generalized Hough transform, which works well in detecting arbitrary shapes even in the presence of gaps and handles rotation, scaling and shift variations, and addresses its heavy computational cost by introducing a preliminary automatic selection of the appropriate contour points to consider in the matching phase. The numerical simulations and comparisons have confirmed the effectiveness and the efficiency of the proposed method.
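For reference, a minimal sketch of the classical R-table construction and voting that the generalized Hough transform relies on, restricted to the translation-only case; the bin count, names and rounding of vote positions are illustrative assumptions, and the paper's contour-point selection step is not shown.

    import numpy as np
    from collections import defaultdict

    def build_r_table(pts, grads, ref, n_bins=64):
        # R-table: quantized gradient angle -> displacements to the reference point
        table = defaultdict(list)
        for (x, y), phi in zip(pts, grads):
            k = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
            table[k].append((ref[0] - x, ref[1] - y))
        return table

    def ght_vote(pts, grads, table, shape, n_bins=64):
        # Each edge point votes for the reference positions its gradient angle allows
        acc = np.zeros(shape, dtype=np.int32)
        H, W = shape
        for (x, y), phi in zip(pts, grads):
            k = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
            for dx, dy in table.get(k, []):
                cx, cy = int(round(x + dx)), int(round(y + dy))
                if 0 <= cx < W and 0 <= cy < H:
                    acc[cy, cx] += 1
        return acc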
---
paper_title: Hough Transform for Region Extraction in Color Images
paper_content:
This article proposes a method that extends the idea of the Hough Transform (HT), as used on grey-scale images, to color images for region extraction. A region in an image is seen as a union of pixels on several line segments having the homogeneity property. A line segment in an image is seen as a collection of pixels that lie on a straight line in the Euclidean plane and possess the same property. The property 'homogeneity' in a color image is based on the trace of the variance-covariance matrix of the colors of the pixels on the straight line. As a possible application, the method is used to extract homogeneous regions in images taken from the Indian Remote Sensing Satellites.
---
paper_title: A comparative study of efficient generalised Hough transform techniques
paper_content:
The generalised Hough transform (GHT) is useful for detecting or locating translated two-dimensional objects. However, a weakness of the GHT is its storage requirements and hence the increased computational complexity resulting from the four-dimensional parameter space. In this paper, we present the results of our work which involves investigation of the performance of several efficient GHT techniques including an extension of Thomas's rotation-invariant algorithm. It is shown that our extension of Thomas's algorithm has very low memory requirements and computational complexity, and produces the best results in various tests.
---
paper_title: On Detection of Multiple Object Instances Using Hough Transforms
paper_content:
Hough transform-based methods for detecting multiple objects use nonmaxima suppression or mode seeking to locate and distinguish peaks in Hough images. Such postprocessing requires the tuning of many parameters and is often fragile, especially when objects are located spatially close to each other. In this paper, we develop a new probabilistic framework for object detection which is related to the Hough transform. It shares the simplicity and wide applicability of the Hough transform but, at the same time, bypasses the problem of multiple peak identification in Hough images and permits detection of multiple objects without invoking nonmaximum suppression heuristics. Our experiments demonstrate that this method results in a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem.
---
paper_title: A new gray level based Hough transform for region extraction: An application to IRS images
paper_content:
A technique using the Hough transform is described for detection of homogeneous line segments directly from (i.e., without binarization of) gray level images. A definition of "region" in terms of these line segments, with constraints on its length and variance, is provided. The algorithm is able to extract gray level regions irrespective of their shape and size. The effectiveness of the method is demonstrated on Indian Remote-sensing Satellite (IRS) images.
---
paper_title: On the gray-scale inverse Hough transform
paper_content:
This paper proposes a gray-scale inverse Hough transform (GIHT) algorithm which is combined with a modified gray-scale Hough transform (GHT). Given only the data of the Hough transform (HT) space and the dimensions of the image, the GIHT algorithm reconstructs correctly the original gray-scale image. As a first application, the GIHT is used for line detection and filtering according to conditions associated with the polar parameters, the size and the gray-scale values of the lines. The main advantage of the GIHT is the determination of the image lines exactly as they appear, i.e. pixel by pixel and with the correct gray-scale values. To avoid the quantization effects in the accumulator array of the GHT space, inversion conditions are defined which are associated only with the image size. The GIHT algorithm consists of two phases, which are the collection of gray-scale information stored in the accumulator array and the extraction of the final image according to the filtering conditions. Experimental results confirm the efficiency of the proposed method.
---
paper_title: Resource-Efficient FPGA Architecture and Implementation of Hough Transform
paper_content:
Hough transform is widely used for detecting straight lines in an image, but it involves huge computations. For embedded applications, field-programmable gate arrays are one of the most used hardware accelerators to achieve real-time implementation of the Hough transform. In this paper, we present a resource-efficient architecture and implementation of the Hough transform on an FPGA. The incrementing property of the Hough transform is described and used to reduce the resource requirement. In order to facilitate parallelism, we divide the image into blocks and apply the incrementing property to pixels within a block and between blocks. Moreover, the locality of the Hough transform is analyzed to reduce the memory access. The proposed architecture is implemented on an Altera EP2S180F1508C3 device and can operate at a maximum frequency of 200 MHz. It can compute the Hough transform of 512 × 512 test images with 180 orientations in 2.07-3.16 ms without using many FPGA resources (i.e., the same performance could be achieved with a low-cost, low-end FPGA).
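For reference, the per-pixel voting that such hardware designs accelerate can be written in a few lines of NumPy; this software baseline is an editorial sketch, not the paper's architecture, and the function name is assumed.

import numpy as np

def hough_lines_accumulator(edge_img, n_theta=180):
    # edge_img: binary edge map (nonzero = edge pixel)
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    # rho ranges over [-diag, diag]; offset so indices are non-negative
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1   # one vote per orientation
    return acc  # peaks correspond to (rho, theta) of dominant lines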
---
paper_title: A Circular Hough Transform Hardware for Industrial Circle Detection Applications
paper_content:
The Hough transform (Hough, 1962) has been used to characterize analytic features. It was first applied to the recognition of straight lines, and later extended to circles, ellipses and arbitrarily shaped objects. Its main disadvantage is that the computational and storage requirements increase as a power of the dimensionality of the curve. It is not difficult to implement the circular Hough transform (CHT) algorithm on a modern personal computer; in this article, however, we use an FPGA or ASIC to perform the CHT. Modern FPGAs are capable of high-speed operation and have large amounts of embedded memory. The whole CHT circuit, with the accumulator array excluded, can be fitted into an Altera Stratix 1S25 chip, which has more than 1 Mb of embedded RAM.
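In software, the same gradient-based circle detection is available through OpenCV's cv2.HoughCircles; the sketch below, with an assumed file name and parameter values, shows a typical call that hardware designs like the one above would replace.

import cv2
import numpy as np

img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
img = cv2.medianBlur(img, 5)                           # suppress noise before the CHT

# Gradient-based circular Hough transform as exposed by OpenCV
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=120, param2=40, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print("circle at", (x, y), "radius", r)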
---
paper_title: Hierarchical Additive Hough Transform for Lane Detection
paper_content:
Detection of lanes is an important problem in the upcoming field of vehicle safety and navigation, for which linear Hough transform (HT) is widely used. In order to meet real-time requirements, various attempts to accelerate the HT have been proposed in the past, including hierarchical HT. Existing hierarchical approaches involve the overhead of recomputing HT values at every level in the hierarchy. In this letter, we propose a novel, computationally efficient hierarchical HT by extending and applying the additive property of HT to multiple levels of hierarchies. This proposed approach, called hierarchical additive Hough transform (HAHT) is shown to lead to significant computational savings of up to 98-99% in the Hough voting process. The HAHT has been validated on a wide range of straight lane images and it is shown to successfully detect lanes.
---
paper_title: Parallelization Research of Circle Detection Based on Hough Transform
paper_content:
Circle detection with the Hough transform suffers from excessively long computation times. In this paper, two parallelized methods are presented, based on Threading Building Blocks (TBB) and CUDA: by utilizing multi-core CPUs and the GPU, the most time-consuming part of circle detection is handled in parallel. Experimental results show that the circle detection algorithms proposed in this paper achieve very good acceleration.
---
paper_title: A novel Hough transform based on eliminating particle swarm optimization and its applications
paper_content:
Hough transform (HT) is a well established method for curve detection and recognition due to its robustness and parallel processing capability. However, HT is quite time-consuming. In this paper, an eliminating particle swarm optimization (EPSO) algorithm is employed to improve the speed of the HT. The parameters of the solution after Hough transformation are considered as the particle positions, and the EPSO algorithm searches for the optimum solution by eliminating the "weakest" particles to speed up the computation. An accumulation array in the Hough transformation is utilized as the fitness function of the EPSO algorithm. Experiments on numerous images show that the proposed approach can detect curves or contours in both noise-free and noisy images with much better performance. In particular, for noisy images, it achieves much better results than those obtained by the existing HT algorithms.
---
paper_title: Crop-row detection algorithm based on Random Hough Transformation
paper_content:
It is important to detect crop rows accurately for field navigation. In order to spray on-line, a variable-rate spray system must detect the crop center line accurately. Most existing algorithms are slow to detect crop rows because of their complicated calculations. The gradient-based Random Hough Transform algorithm can improve the calculation speed and reduce the computation effectively through its many-to-one merger mapping. In order to detect the center of the crop row rapidly and effectively, a detection algorithm using the gradient-based Random Hough Transform was proposed to detect the center line of crop rows. We tested crop-row center-line detection for three kinds of plant distribution: sparse, general and intensive. The experimental results showed that the algorithm adapted effectively to differences in plant density within the crop row. Compared with the detection algorithm based on the standard Hough transform, the algorithm based on the gradient-based Random Hough Transform was faster and had a high correct-detection rate.
---
paper_title: Parallel GPU Implementation of Hough Transform for Circles
paper_content:
The Hough transform is one of the most widely used algorithms in image processing. Its major problems are its long computation time and its heavy demand for computational resources. In this paper, we address this problem by parallelizing the algorithm and implementing it on GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture). We introduce two methods for parallelization, each of which has been implemented on four different graphics cards using CUDA. After executing the proposed methods on GPUs, we compared our results with sequential execution on a CPU and obtained a speedup of about 65 times over the sequential algorithm.
---
paper_title: Generalized Hough Transform for Shape Matching
paper_content:
In this paper we propose a novel approach towards shape matching for image retrieval. The system takes advantage of the generalized Hough transform, as it works well in detecting arbitrary shapes even in the presence of gaps and in handling rotation, scaling and shift variations, and it addresses the heavy computational cost by introducing a preliminary automatic selection of the appropriate contour points to consider in the matching phase. The numerical simulations and comparisons have confirmed the effectiveness and the efficiency of the proposed method.
---
paper_title: Variations of a Hough-Voting Action Recognition System
paper_content:
This paper presents two variations of a Hough-voting framework used for action recognition and shows classification results for low-resolution video and videos depicting human interactions. For low-resolution videos, where people performing actions are around 30 pixels, we adopt low-level features such as gradients and optical flow. For group actions with human-human interactions, we take the probabilistic action labels from the Hough-voting framework for single individuals and combine them into group actions using decision profiles and classifier combination.
---
paper_title: Combining Hough transform and contour algorithm for detecting vehicles' license-plates
paper_content:
Vehicle license plate (VLP) recognition is an interesting problem that has attracted many computer vision research groups. One of the most important and difficult tasks of this problem is VLP detecting. It is not only used in VLP recognition systems but also useful for many traffic management systems. Our method is used for a VLP recognition system that deals with Vietnamese VLPs and it can also be applied to other types of VLPs with minor changes. There are various approaches to this problem, such as texture-based, morphology-based and boundary line-based. In this paper, we present the boundary line-based method that optimizes speed and accuracy by combining the Hough transform and contour algorithm. The enhancement of applying the Hough transform to contour images is the much improved speed of the algorithm. In addition, the algorithm can be used on VLP images that have been taken from various distances and have inclination angles within ±30° from the camera. Especially, it can detect plates in images which have more than one VLP. The algorithm was evaluated in two image sets with an accuracy of about 99%.
---
paper_title: Class-specific Hough forests for object detection
paper_content:
We present a method for the detection of instances of an object class, such as cars or pedestrians, in natural images. Similarly to some previous works, this is accomplished via generalized Hough transform, where the detections of individual object parts cast probabilistic votes for possible locations of the centroid of the whole object; the detection hypotheses then correspond to the maxima of the Hough image that accumulates the votes from all parts. However, whereas the previous methods detect object parts using generative codebooks of part appearances, we take a more discriminative approach to object part detection. Towards this end, we train a class-specific Hough forest, which is a random forest that directly maps the image patch appearance to the probabilistic vote about the possible location of the object centroid. We demonstrate that Hough forests improve the results of the Hough-transform object detection significantly and achieve state-of-the-art performance for several classes and datasets.
---
paper_title: A comparative study of efficient generalised Hough transform techniques
paper_content:
The generalised Hough transform (GHT) is useful for detecting or locating translated two-dimensional objects. However, a weakness of the GHT is its storage requirements and hence the increased computational complexity resulting from the four-dimensional parameter space. In this paper, we present the results of our work which involves investigation of the performance of several efficient GHT techniques including an extension of Thomas's rotation-invariant algorithm. It is shown that our extension of Thomas's algorithm has very low memory requirements and computational complexity, and produces the best results in various tests.
---
paper_title: A Centroid based Hough Transformation for Indian license plate skew detection and correction of IR and color images
paper_content:
Skew correction is a processing stage between LP localization and character segmentation in a License Plate Recognition system, which is used to identify a vehicle by its number plate. The license plate is skewed in the captured image due to the positioning of the vehicle with respect to the camera while the LP image is captured. A skewed license plate badly affects accurate character segmentation and recognition. After localization, a skew correction technique is applied in order to obtain correct character segmentation, followed by character recognition. In this paper, a Centroid-based Hough Transform technique is presented for skew correction of the license plate, which performs better than other approaches to skew correction including thresholding, connected component analysis, the Hough Transform and the centroid method. The performance of the proposed algorithm has been tested on live captured LP images, yielding better performance in car license plate segmentation; hence the presented algorithm is suitable for all types of LP recognition applications due to its direct and simple approach with minimal computational time.
---
paper_title: A Hough transform-based voting framework for action recognition
paper_content:
We present a method to classify and localize human actions in video using a Hough transform voting framework. Random trees are trained to learn a mapping between densely-sampled feature patches and their corresponding votes in a spatio-temporal-action Hough space. The leaves of the trees form a discriminative multi-class codebook that share features between the action classes and vote for action centers in a probabilistic manner. Using low-level features such as gradients and optical flow, we demonstrate that Hough-voting can achieve state-of-the-art performance on several datasets covering a wide range of action-recognition scenarios.
---
paper_title: On Detection of Multiple Object Instances Using Hough Transforms
paper_content:
Hough transform-based methods for detecting multiple objects use nonmaxima suppression or mode seeking to locate and distinguish peaks in Hough images. Such postprocessing requires the tuning of many parameters and is often fragile, especially when objects are located spatially close to each other. In this paper, we develop a new probabilistic framework for object detection which is related to the Hough transform. It shares the simplicity and wide applicability of the Hough transform but, at the same time, bypasses the problem of multiple peak identification in Hough images and permits detection of multiple objects without invoking nonmaximum suppression heuristics. Our experiments demonstrate that this method results in a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem.
---
paper_title: A robust lane detection method for autonomous car-like robot
paper_content:
Due to illumination variation, view changes, and complex road conditions, automatic lane detection is crucial for path finding and planning for autonomous car-like robots. In this paper, a robust lane detection method is proposed. Firstly, in order to extract the edges of lanes in traffic scenarios, we adopt an adaptive thresholding strategy to binarize a gradient image and trace edges by using their local gradient information. Moreover, by integrating gradient constraints and introducing a reverse voting strategy to the standard Hough transform, we greatly improved speed and stability of line extraction. Further, through inverse perspective mapping the endpoints of extracted lines to the world coordination, we can combine the extracted lines from different cameras. Finally, the lane could be detected by matching two points instead of two parallel lines in parameter space. Extensive experiments and comparisons show the efficiency of the proposed method.
---
paper_title: Automatic lumbar vertebrae segmentation in fluoroscopic images via optimised concurrent Hough transform
paper_content:
We show how a new approach can automatically detect the positions and borders of vertebrae concurrently, relieving many of the problems experienced in other approaches. First, we use phase congruency to relieve the difficulty associated with threshold selection in edge detection of the illumination variant DVF images. Then, our new Hough transform approach is applied to determine the moving vertebrae, concurrently. We include optimisation via a genetic algorithm (as without it the extraction of moving multiple vertebrae is computationally daunting). Our results show that this new approach can indeed provide extractions of position and rotation which appear to be of sufficient quality to aid therapy and diagnosis of spinal disorders.
---
paper_title: Progressive probabilistic Hough transform for line detection
paper_content:
We present a novel Hough Transform algorithm referred to as the Progressive Probabilistic Hough Transform (PPHT). Unlike the Probabilistic HT, where the Standard HT is performed on a pre-selected fraction of input points, the PPHT minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the Probabilistic HT; it is a function of the inherent complexity of the input data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection are interleaved. The most salient features are likely to be detected first. Experiments show that in many circumstances the PPHT has advantages over the Standard HT.
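OpenCV's cv2.HoughLinesP function implements a progressive probabilistic Hough transform of this kind; a minimal usage sketch follows, with the image path and threshold values as assumed placeholders.

import cv2
import numpy as np

gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
edges = cv2.Canny(gray, 50, 150)

# Probabilistic voting returns finite line segments directly
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print("segment", (x1, y1), "->", (x2, y2))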
---
paper_title: Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision
paper_content:
Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
---
paper_title: A novel Hough transform method for line detection by enhancing accumulator array
paper_content:
In this paper, an improved Hough transform (HT) method is proposed to robustly detect line segments in images with complicated backgrounds. The work focuses on detecting line segments of distinct lengths, totally independent of prior knowledge of the original image. Based on the characteristics of accumulation distribution obtained by conventional HT, a local operator is implemented to enhance the difference between the accumulation peaks caused by line segments and noise. Through analysis of the effect of the operator, a global threshold is obtained in the histogram of the enhanced accumulator to detect peaks. Experimental results are provided to demonstrate the efficiency and robustness of the proposed method.
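A rough sketch of this idea is given below; the particular local operator (dividing each accumulator cell by its neighbourhood mean) and the window size are the editor's assumptions, not the paper's exact definition.

import numpy as np
from scipy.ndimage import uniform_filter

def enhance_accumulator(acc, window=5):
    # Emphasize cells that stand out from their neighbourhood by dividing
    # each accumulator value by the local mean (assumed operator).
    local_mean = uniform_filter(acc.astype(float), size=window) + 1e-6
    return acc / local_mean

def detect_peaks(enhanced, acc, global_thresh):
    # A single global threshold on the enhanced array selects candidate line peaks
    ys, xs = np.nonzero(enhanced > global_thresh)
    return list(zip(ys, xs, acc[ys, xs]))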
---
paper_title: Hierarchical Additive Hough Transform for Lane Detection
paper_content:
Detection of lanes is an important problem in the upcoming field of vehicle safety and navigation, for which linear Hough transform (HT) is widely used. In order to meet real-time requirements, various attempts to accelerate the HT have been proposed in the past, including hierarchical HT. Existing hierarchical approaches involve the overhead of recomputing HT values at every level in the hierarchy. In this letter, we propose a novel, computationally efficient hierarchical HT by extending and applying the additive property of HT to multiple levels of hierarchies. This proposed approach, called hierarchical additive Hough transform (HAHT) is shown to lead to significant computational savings of up to 98-99% in the Hough voting process. The HAHT has been validated on a wide range of straight lane images and it is shown to successfully detect lanes.
---
paper_title: Lanes Detection Based on Unsupervised and Adaptive Classifier
paper_content:
This paper describes an algorithm to detect road lanes based on an unsupervised and adaptive classifier. We selected this classifier because the parameters of the lanes on the road are not known in advance; we only know that lanes are present and that they need to be classified. First of all, we measured the brightness of the road lanes in many videos; generally, the lines on the road are white. We used the HSV image and restricted the region of study. Then a Hough transform is applied, which yields a set of possible lines that have to be classified. The classifier starts with initial parameters because we assume that the vehicle is on the road and in the center of the lane. There are two classes: the first is the left road line and the second is the right road line. Each line has two parameters: the middle point of the line and the line slope. These parameters change in order to adjust to the real lanes. A tensor holds the two lines, so these lines cannot separate more than the tensor allows. A Kalman filter estimates the new class parameters and improves the tracking of the lanes. Finally, we use a mask to highlight the lane and show the user a better image.
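A much simplified, hand-tuned version of the two-class idea (an editorial sketch, not the paper's unsupervised adaptive classifier or its Kalman and tensor machinery) can be written as follows; the slope thresholds and function name are assumptions.

import numpy as np

def classify_lane_lines(segments, img_width):
    # segments: iterable of (x1, y1, x2, y2) from a Hough transform.
    # Simplified rule: negative slopes on the left half vote for the left
    # lane line, positive slopes on the right half for the right lane line.
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)
        mid_x = (x1 + x2) / 2.0
        if slope < -0.3 and mid_x < img_width / 2:
            left.append((mid_x, slope))
        elif slope > 0.3 and mid_x > img_width / 2:
            right.append((mid_x, slope))
    # Class parameters (middle point, slope) as in the abstract, averaged
    left_param = np.mean(left, axis=0) if left else None
    right_param = np.mean(right, axis=0) if right else None
    return left_param, right_param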
---
paper_title: Circular road sign detection and recognition based on hough transform
paper_content:
In this paper, a new algorithm is proposed to detect and recognize common road signs. Firstly, color segmentation and connected domain analysis are used for region of interest (ROI) detection, and then the improved two-time Hough transform is used to separately determine the circle center and radius of the road sign. Then the Weighted Hausdorff Distance is applied in the recognition of the extracted ROI. Compared to conventional methods, the proposed algorithm has the advantages of small storage, lower time complexity and good robustness to inclination and occlusion.
---
paper_title: Lane Departure Detection and Transmisson using Hough Transform Method
paper_content:
Vehicles can benefit from video transmission among vehicles for safety and cooperative driving. The video images captured from a camera can help the driver monitor the surroundings, and the compressed images can be transmitted over a vehicular communication network. Video over wireless communication has many potential applications in intelligent transportation systems (ITS). Video streaming utilizes high-bandwidth data links to transmit information, and high-bandwidth systems require larger equipment, better line-of-sight, and more complex mechanisms for reliable transmission over the network, so the limitations of high-bandwidth equipment become more significant in a tactical scenario. The intended platform for the system described in this study is a software-defined algorithm for automatic video compression and transmission. The proposed algorithm is able to robustly find the left and right boundaries of the lane using the Hough Transform method and transmit them over the network. The results show that the proposed method works well on marked roads and for transmission under various lighting conditions.
---
paper_title: A gridding Hough transform for detecting the straight lines in sports video
paper_content:
A gridding Hough transform (GHT) is proposed to detect the straight lines in sports video, which is much faster and requires much less memory than the previous Hough transforms. The GHT uses the active gridding to replace the random point selection in the random Hough transforms because forming the linelets from the actively selected points is easier than from the randomly selected points. Existing straight-line Hough transforms require a lot of resources because they were designed for all kinds of straight lines. Considering the fact that the straight lines interested in sports video are long and sparse, this paper proposes two techniques: the active gridding and linelets process. On account of these two techniques, the proposed GHT is fast and uses little memory. The experimental results show that the proposed GHT is faster than the random Hough transform (RHT) and the standard Hough transform (SHT) by 30% and 700% respectively and achieves a 97.5% recall, higher than those achieved by either the SHT or the RHT.
---
paper_title: Driver assistance system for lane detection and vehicle recognition with night vision
paper_content:
The objective of this research is to develop a vision-based driver assistance system to enhance the driver's safety in the nighttime. The proposed system performs both lane detection and vehicle recognition. In lane detection, three features of lane markers, namely brightness, slenderness and proximity, are applied to detect the positions of lane markers in the image. Vehicle recognition, on the other hand, is achieved by using an evident feature which is extracted through four steps: a taillight standing-out process, adaptive thresholding, centroid detection, and a taillight pairing algorithm. In addition, an automatic method is provided to calculate the tilt and the pan of the camera by using the position of the vanishing point, which is detected in the image by applying Canny edge detection, the Hough transform, major straight-line extraction and vanishing point estimation. Experimental results for thousands of images are provided to demonstrate the effectiveness of the proposed approach in the nighttime. The lane detection rate is nearly 99%, and the vehicle recognition rate is about 91%. Furthermore, our system can process the image in almost real time.
---
paper_title: Combining Hough transform and contour algorithm for detecting vehicles' license-plates
paper_content:
Vehicle license plate (VLP) recognition is an interesting problem that has attracted many computer vision research groups. One of the most important and difficult tasks of this problem is VLP detecting. It is not only used in VLP recognition systems but also useful for many traffic management systems. Our method is used for a VLP recognition system that deals with Vietnamese VLPs and it can also be applied to other types of VLPs with minor changes. There are various approaches to this problem, such as texture-based, morphology-based and boundary line-based. In this paper, we present the boundary line-based method that optimizes speed and accuracy by combining the Hough transform and contour algorithm. The enhancement of applying the Hough transform to contour images is the much improved speed of the algorithm. In addition, the algorithm can be used on VLP images that have been taken from various distances and have inclination angles within ±30° from the camera. Especially, it can detect plates in images which have more than one VLP. The algorithm was evaluated in two image sets with an accuracy of about 99%.
---
paper_title: Implementation of lane detection system using optimized hough transform circuit
paper_content:
This paper describes a vision-based lane detection system with an optimized Hough Transform circuit. The Hough Transform is a popular method for finding line features in an image. It is very robust to noise and changes in illumination level, but it requires a long computation time and large data storage, and its implementation needs many logic gates, which makes it difficult to apply in products that require real-time performance. In this paper, we propose an optimized Hough Transform circuit architecture and a lane departure warning system using a vision device. The proposed Hough Transform architecture minimizes the logic size and the number of cycles. Our implemented Hough Transform circuit shows better performance than other circuit architectures. We tested the Hough Transform circuit and the lane departure warning system on a Xilinx FPGA board.
---
paper_title: Robust lane localization using multiple cues on the road
paper_content:
This paper presents a method for estimating the current lane number in which the vehicle is traveling. An important component of visual automobile safety systems relies on knowledge of the lane number to know how far the vehicle is from the exit lane. The method takes the Inverse Perspective Map of an image to obtain a top view of the road and then detects multiple lanes using Hough transform line detection. The algorithm then classifies the lines as solid or broken and also detects whether the lane is branching or split using connected component analysis. This knowledge of the line type and of whether the lane is splitting is the basis for identifying whether the vehicle is in the exit lane.
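The inverse perspective mapping step can be sketched with OpenCV as below; the four source and destination points are assumed placeholders that a real system would derive from camera calibration, and the file name is hypothetical.

import cv2
import numpy as np

frame = cv2.imread("highway.png")                      # hypothetical image

# Four assumed points on the road plane (source) mapped to a rectangle
# in the top-view image (destination)
src = np.float32([[420, 340], [560, 340], [900, 600], [100, 600]])
dst = np.float32([[200, 0], [440, 0], [440, 640], [200, 640]])

H = cv2.getPerspectiveTransform(src, dst)
top_view = cv2.warpPerspective(frame, H, (640, 640))

# Lane markings become near-vertical lines in the top view, which makes
# Hough-based multi-lane detection and lane counting simpler
edges = cv2.Canny(cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY), 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40, minLineLength=80, maxLineGap=20)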
---
paper_title: A robust lane detection method in the different scenarios
paper_content:
Road recognition is critical for safe driving and intelligent vehicles. In order to improve the real-time performance and stability of lane recognition, a robust road recognition algorithm based on an improved Canny detector and the Progressive Probabilistic Hough Transform (PPHT) is proposed in this paper. Experiments demonstrate the accuracy of the lane recognition and its high real-time capability in different scenarios.
---
paper_title: A Centroid based Hough Transformation for Indian license plate skew detection and correction of IR and color images
paper_content:
Skew correction is a processing stage between LP localization and character segmentation in a License Plate Recognition system, which is used to identify a vehicle by its number plate. The license plate is skewed in the captured image due to the positioning of the vehicle with respect to the camera while the LP image is captured. A skewed license plate badly affects accurate character segmentation and recognition. After localization, a skew correction technique is applied in order to obtain correct character segmentation, followed by character recognition. In this paper, a Centroid-based Hough Transform technique is presented for skew correction of the license plate, which performs better than other approaches to skew correction including thresholding, connected component analysis, the Hough Transform and the centroid method. The performance of the proposed algorithm has been tested on live captured LP images, yielding better performance in car license plate segmentation; hence the presented algorithm is suitable for all types of LP recognition applications due to its direct and simple approach with minimal computational time.
---
paper_title: A robust lane detection method for autonomous car-like robot
paper_content:
Due to illumination variation, view changes, and complex road conditions, automatic lane detection is crucial for path finding and planning for autonomous car-like robots. In this paper, a robust lane detection method is proposed. Firstly, in order to extract the edges of lanes in traffic scenarios, we adopt an adaptive thresholding strategy to binarize a gradient image and trace edges by using their local gradient information. Moreover, by integrating gradient constraints and introducing a reverse voting strategy to the standard Hough transform, we greatly improved speed and stability of line extraction. Further, through inverse perspective mapping the endpoints of extracted lines to the world coordination, we can combine the extracted lines from different cameras. Finally, the lane could be detected by matching two points instead of two parallel lines in parameter space. Extensive experiments and comparisons show the efficiency of the proposed method.
---
paper_title: Road extraction from high-resolution remote sensing images using wavelet transform and hough transform
paper_content:
Road extraction from high-resolution remote sensing images has been a challenging problem in the field of ground object detection. In this paper, an effective and noise-resistant road extraction method using the wavelet transform and the Hough transform is proposed. Firstly, the wavelet transform is used for image denoising and for detecting road edges. Then the image is segmented into road and non-road areas, and the Hough transform is used to extract roads after segmentation. Finally, the effectiveness of the method is verified by the experimental results.
---
paper_title: Real-Time Articulated Hand Detection and Pose Estimation
paper_content:
We propose a novel method for planar hand detection from a single uncalibrated image, with the purpose of estimating the articulated pose of a generic model, roughly adapted to the current hand shape. The proposed method combines line and point correspondences, associated to finger tips, lines and concavities, extracted from color and intensity edges. The method robustly solves for ambiguous association issues, and refines the pose estimation through nonlinear optimization. The result can be used in order to initialize a contour-based tracking algorithm, as well as a model adaptation procedure.
---
paper_title: Hough Transform and Active Contour for Enhanced Iris Segmentation
paper_content:
Iris segmentation is considered the most difficult and fundamental step in an iris recognition system. While iris boundaries are often approximated by two circles or ellipses, methods that define the iris more accurately yield better recognition results. In this paper we propose an iris segmentation method using the Hough transform and an active contour to detect a circular approximation of the outer iris boundary and to accurately segment the inner boundary in its real shape, motivated by the fact that richer iris textures are closer to the pupil than to the sclera. Normalization, encoding and matching are implemented according to Daugman's method. The method, tested on the CASIA-V3 iris image database, is compared to Daugman's iris recognition system. Recognition performance is measured in terms of decidability, accuracy at the equal error rate and ROC curves. Improved recognition performance is obtained using our segmentation model, supporting its use for better iris recognition.
---
paper_title: DETECTING PERSONS USING HOUGH CIRCLE TRANSFORM IN SURVEILLANCE VIDEO
paper_content:
Robust person detection in real-world images is interesting and important for a variety of applications, such as visual surveillance. We address the task of detecting persons in elevator surveillance scenes in this paper. To capture more passengers in the lift car, the camera is usually installed at a corner of the ceiling. However, the height and space of the lift car are limited, which causes persons to occlude each other or leaves some parts of the body invisible in the captured images. In this paper, we propose a novel approach to detect head contours, which includes three main steps: pre-processing, head contour detection and post-processing. The Hough circle transform is adopted in the second stage, as it is robust to discontinuous boundaries in circle detection. The proposed pre-processing and post-processing methods are efficient at removing false alarms on the background or body parts. Experimental results show that our proposed approach is time saving and has better person detection results than some other methods.
---
paper_title: Fast and Accurate Pupil Positioning Algorithm using Circular Hough Transform and Gray Projection
paper_content:
A fast pupil-positioning algorithm for real-time eye tracking is proposed in this paper. It is important to accurately locate the pupil position in an eye tracking system. A commonly used method combines edge detection algorithms with ellipse fitting: edge detection algorithms detect the edges of the pupil, while ellipse fitting finds the optimum ellipse that finely fits the pupil, and the centre of the ellipse is regarded as the location of the pupil. This approach is acceptable, except that the definition of the pupil edge has a great influence on its efficiency and ellipse fitting is time consuming. This paper improves on the accuracy of the primary algorithm and uses the circular Hough transform to detect the pupil area. Firstly, the pupil position is coarsely localized by gray projection. Secondly, a circle is fitted to the pupil using the circular Hough transform.
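A minimal sketch of this two-stage idea follows; the file name, window size and Hough parameters are assumed values chosen for illustration only.

import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)    # hypothetical eye image

# Gray projection: the pupil is the darkest blob, so the row/column sums
# of intensity reach their minimum near the pupil centre
row_proj = eye.sum(axis=1)
col_proj = eye.sum(axis=0)
cy, cx = int(np.argmin(row_proj)), int(np.argmin(col_proj))

# Search a window around the coarse centre with the circular Hough transform
r = 60                                               # assumed search radius
y0, y1 = max(cy - r, 0), min(cy + r, eye.shape[0])
x0, x1 = max(cx - r, 0), min(cx + r, eye.shape[1])
crop = cv2.medianBlur(eye[y0:y1, x0:x1].copy(), 5)
circles = cv2.HoughCircles(crop, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=8, maxRadius=50)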
---
paper_title: Eyes detection in facial images using Circular Hough Transform
paper_content:
This paper presents an eye detection approach using the Circular Hough Transform. Assuming the face region has already been detected by one of the accurate existing face detection methods, the search for the eye pair relies primarily on the circular shape of the eye in the two-dimensional image. The eye detection process includes preprocessing that filters and crops the face images, after which the Circular Hough Transform is used to detect the circular shape of the eye and to mark the eye pair on the image precisely. This eye detection method was tested on the Face DB database developed by Park Lab, University of Illinois at Urbana-Champaign, USA. Most of the faces are frontal with open eyes, and some are tilted upwards or downwards. The detection accuracy of the proposed method is about 86%.
---
paper_title: Reliable iris localization using Hough transform, histogram-bisection, and eccentricity
paper_content:
The iris technology recognizes individuals from their iris texture with great precision. However, it does not perform well for the non-ideal data, where the eye image may contain non-ideal issues such as the off-axis eye image, blurring, non-uniform illumination, hair, glasses, etc. It is because of their iris localization algorithms, which are developed for the ideal data. In this paper, we propose a reliable iris localization algorithm. It includes localizing a coarse iris location in the eye image using the Hough transform and image statistics; localizing the pupillary boundary using a bi-valued adaptive threshold and the two-dimensional (2D) shape properties; localizing the limbic boundary by reusing the Hough accumulator and image statistics; and finally, regularizing these boundaries using a technique based on the Fourier series and radial gradients. The proposed technique is tested on the public iris databases: CASIA V1, CASIA-IrisV3-Lamp, CASIA-IrisV4-Thousand, IITD V1.0, MMU V1.0, and MMU (new) V2.0. Experimental results obtained on these databases show superiority of the proposed technique over some state of the art iris localization techniques.
---
paper_title: Turkish fingerspelling recognition system using Generalized Hough Transform, interest regions, and local descriptors
paper_content:
This paper presents a computer vision system that can recognize Turkish fingerspelling sign hand postures by a method based on the Generalized Hough Transform, interest regions, and local descriptors. A novel method for calculating the reference point for the Generalized Hough Transform, and a simpler but more effective Hough voting strategy are proposed. The stages of implementing a Generalized Hough Transform are examined in detail, and the issues that affect the method success are discussed. The system is tested on a data set with 29 classes of non-rigid hand postures signed by three different signers on non-uniform backgrounds. It attains a 0.93 success rate.
---
paper_title: Latent Fingerprint Matching Using Descriptor-Based Hough Transform
paper_content:
Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, with small area and containing large distortion. Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints that are stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem. Further, they often rely on features that are not easy to extract from poor quality latents. In this paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information. To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications. Experimental results on two different latent databases (NIST SD27 and WVU latent databases) show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and commercial fingerprint matchers leads to improved matching accuracy.
---
paper_title: Vanishing point detection in corridors: using hough transform and K-means clustering
paper_content:
One of the main challenges in steering a vehicle or a robot is the detection of appropriate heading. Many solutions have been proposed during the past few decades to overcome the difficulties of intelligent navigation platforms. In this study, the authors try to introduce a new procedure for finding the vanishing point based on the visual information and K-Means clustering. Unlike other solutions the authors do not need to find the intersection of lines to extract the vanishing point. This has reduced the complexity and the processing time of our algorithm to a large extent. The authors have imported the minimum possible information to the Hough space by using only two pixels (the points) of each line (start point and end point) instead of hundreds of pixels that form a line. This has reduced the mathematical complexity of our algorithm while maintaining very efficient functioning. The most important and unique characteristic of our algorithm is the usage of processed data for other important tasks in navigation such as mapping and localisation.
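For contrast, the conventional intersection-based approach that this paper deliberately avoids can be sketched as follows: detected line segments are intersected pairwise and the intersections are clustered with K-means; the cluster count, helper names and thresholds are the editor's assumptions.

import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def line_intersection(l1, l2):
    # Lines given as endpoint pairs ((x1, y1), (x2, y2)); returns their
    # intersection point or None if nearly parallel.
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def vanishing_point(lines, k=3):
    # Collect all pairwise intersections of the detected segments
    pts = [p for a, b in combinations(lines, 2)
           if (p := line_intersection(a, b)) is not None]
    pts = np.array(pts)
    # The densest cluster of intersections approximates the dominant vanishing point
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts)
    best = np.bincount(labels).argmax()
    return pts[labels == best].mean(axis=0)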
---
paper_title: A New Vanishing Point Detection Algorithm Based on Hough Transform
paper_content:
Detecting vanishing points in digital images is important for 3D reconstruction of a scene using uncalibrated cameras. A new vanishing point detection algorithm based on applying the Hough Transform twice is presented in this paper. The Hough Transform is recognized as one of the more popular methods for the detection of straight lines. Firstly, the straight lines in the image are detected by the first Hough Transform. Secondly, using a coordinate transform, points on a circle are mapped to points on a line. Finally, the Hough Transform is used again to detect this line. Thus, the positions of vanishing points can be calculated from the corresponding line parameters in polar coordinates. Experimental results using real images of a scene demonstrate that the proposed algorithm is effective and can recover the vanishing point.
---
paper_title: Vanishing points in point-to-line mappings and other line parameterizations
paper_content:
Some variants of the Hough transform can be used for detecting vanishing points and groups of concurrent lines. This article addresses a common misconception that in the polar line parameterization the vanishing point is represented by a line. The numerical error caused by this inaccuracy is then estimated. The article studies in detail point-to-line-mappings (PTLMs) - a class of line parameterizations which have the property that the vanishing point is represented by a line (and thus can be easily searched for). When a PTLM parameterization is used for the straight line detection by the Hough transform, a pair or a triplet of complementary PTLMs has to be used in order to obtain a limited Hough space. The complementary pairs and triplets of PTLMs are formalized and discussed in this article.
---
paper_title: Hough-transform and extended RANSAC algorithms for automatic detection of 3D building roof planes from Lidar data
paper_content:
The airborne laser scanning technique is broadly the most appropriate way to rapidly acquire high-density 3D data over a city. Once the 3D Lidar data are available, the next task is automatic data processing, with the major aim of constructing 3D building models. Among the numerous automatic reconstruction methods, techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, the Hough transform and the Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogeneously applied, this paper focuses only on the Hough transform and the RANSAC algorithm. Their principles and their pseudocode (rarely detailed in the related literature), as well as complete analyses, are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitations encountered in both methods, the RANSAC algorithm is more efficient than the Hough transform. Among other advantages, its processing time is negligible even when the input data size is very large. On the other hand, the Hough transform is very sensitive to the values of the segmentation parameters. Therefore, the RANSAC algorithm has been chosen and extended to overcome its limitations. Its major limitation is that it searches for the best mathematical plane among the 3D building point cloud even if this plane does not always represent a roof plane. The proposed extension therefore harmonizes the mathematical aspect of the algorithm with the geometry of a roof. Finally, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. Once the roof planes are successfully detected, automatic building modelling can be carried out.
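The basic RANSAC plane detection that this work extends can be sketched in NumPy as follows; this is a minimal version without the proposed roof-geometry constraints, and the parameter values are assumptions.

import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.05, rng=None):
    # points: (N, 3) Lidar points; returns (normal, d, inlier_mask) for the
    # best-supported plane normal . x + d = 0.
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers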
---
paper_title: 3D Implicit Shape Models Using Ray Based Hough Voting for Furniture Recognition
paper_content:
The recognition of object categories in 3D scenes is still a challenging problem in computer vision. Many state-of-the-art approaches use Implicit Shape Models, as addressed in [8] and [14], to learn the shapes of object categories, together with probabilistic Hough space voting for the detection of instances of the learned category. In this paper we present a novel 3D Hough space voting approach for recognizing object categories, learned from artificial 3D models, in 3D scenes. The proposed method uses rays instead of points to vote for object reference points. The use of ray voting allows votes pointing in similar directions to be clustered into a single vote with an appropriate vote weight. The main advantage for the Implicit Shape Model is that it can be trained with an unlimited amount of training data while keeping the upper bound on computation effort constant. It is also able to abstract from the model sizes, which is very helpful when training with artificial models taken from different sources and modelled at different scales. We validate our approach on two tasks: object categorization is performed on a standard 3D dataset of artificial models, and recognition of furniture categories is evaluated on a dataset of captured indoor room scenes.
---
paper_title: The Cascaded Hough Transform as Support for Grouping and Finding Vanishing Points and Lines
paper_content:
In the companion paper [7] a grouping strategy with a firm geometrical underpinning and without the problem of combinatorics is proposed. It is based on the exploitation of structures that remain fixed under the transformations that relate corresponding contour segments in regular patterns. In this paper we present a solution for the complementary task of extracting these fixed structures in an efficient and non-combinatorial way, based on the iterated application of the Hough transform. Apart from grouping, this 'Cascaded Hough Transform', or CHT for short, can also be used for the detection of straight lines, vanishing points and vanishing lines.
---
paper_title: Hough-transform detection of lines in 3-D space
paper_content:
Detecting straight lines in 3-D space using a Hough transform approach involves a 4-D parameter space, which is cumbersome. In this paper, we show how to detect families of parallel lines in 3-D space at a moderate computational cost by using a (2+2)-D Hough space. We first find peaks in the 2-D slope parameter space; for each of these peaks, we then find peaks in the intercept parameter space. Our experimental results on range images of boxes and blocks indicate that the method works quite well.
---
paper_title: Generalized Hough Transform Using Regions with Homogeneous Color
paper_content:
A novel generalized Hough transform algorithm which uses the color similarity between homogeneous segments as the voting criterion is proposed in this paper. The input of the algorithm is a set of regions with homogeneous color. These regions are obtained by first pre-segmenting the image using the morphological watershed algorithm and then refining the resulting output with a region merging algorithm. Region pairs belonging to the object are selected to generate entries of the reference table for the Hough transform. Every R-table entry stores the relative color between the selected region pairs, which is used to compute color similarity and in turn generate votes during the voting process, together with relevant information to recover the transformation parameters of the object. Based on the experimental results, our algorithm is robust to changes in illumination, occlusion and distortion of the segmentation output. It recognizes objects which are translated, rotated, scaled and even located in a complex environment.
---
paper_title: Distinctive Image Features from Scale-Invariant Keypoints
paper_content:
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as the foundation for SIFT, which has played an important role in robotic and machine vision in the past decade.
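A typical SIFT extraction-and-matching pipeline, as exposed by OpenCV 4.4 and later, is sketched below; the image file names are assumed placeholders and the 0.75 ratio is Lowe's commonly used ratio-test value.

import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "reliable matches")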
---
paper_title: Generalized Hough Transform for Shape Matching
paper_content:
In this paper we propose a novel approach towards shape matching for image retrieval. The system takes advantage of the generalized Hough transform, as it works well in detecting arbitrary shapes even in the presence of gaps and in handling rotation, scaling and shift variations, and it addresses the heavy computational cost by introducing a preliminary automatic selection of the appropriate contour points to consider in the matching phase. The numerical simulations and comparisons have confirmed the effectiveness and the efficiency of the proposed method.
---
paper_title: Unsupervised clustering in Hough space for recognition of multiple instances of the same object in a cluttered scene
paper_content:
We describe an active binocular vision system that is capable of localising multiple instances of objects of the same class in different settings within a covert, pre-attentive, visual search strategy. By clustering SIFT-feature matches that have been projected into a non-quantised (i.e. continuous) Hough space we are able to detect up to 6 same-class object instances simultaneously while tolerating up to ~66% of each object's surface being occluded by another object instance of the same class. Our findings are based on a database of ~2300 synthetically composited and real-world images.
---
paper_title: Class-specific Hough forests for object detection
paper_content:
We present a method for the detection of instances of an object class, such as cars or pedestrians, in natural images. Similarly to some previous works, this is accomplished via generalized Hough transform, where the detections of individual object parts cast probabilistic votes for possible locations of the centroid of the whole object; the detection hypotheses then correspond to the maxima of the Hough image that accumulates the votes from all parts. However, whereas the previous methods detect object parts using generative codebooks of part appearances, we take a more discriminative approach to object part detection. Towards this end, we train a class-specific Hough forest, which is a random forest that directly maps the image patch appearance to the probabilistic vote about the possible location of the object centroid. We demonstrate that Hough forests improve the results of the Hough-transform object detection significantly and achieve state-of-the-art performance for several classes and datasets.
---
paper_title: Recognition of multibreak patterns by 8-neighborhood-based General Hough Transform
paper_content:
In this paper, an 8-neighborhood-based Generalized Hough Transform (ENGHT) is proposed for the recognition of multibreak patterns, which differs from the standard Generalized Hough Transform (GHT). The difference between the ENGHT and the GHT lies in the process of Reference-table (R-table) configuration and in the pattern recognition process. The theoretical and experimental results show that the ENGHT can be employed for the recognition of multibreak patterns with high precision and high recognition speed, and that it reduces the difficulty that broken patterns cause for the conventional GHT. The ENGHT can be employed in pattern recognition, especially in multibreak pattern recognition.
---
paper_title: On Detection of Multiple Object Instances Using Hough Transforms
paper_content:
Hough transform-based methods for detecting multiple objects use nonmaxima suppression or mode seeking to locate and distinguish peaks in Hough images. Such postprocessing requires the tuning of many parameters and is often fragile, especially when objects are located spatially close to each other. In this paper, we develop a new probabilistic framework for object detection which is related to the Hough transform. It shares the simplicity and wide applicability of the Hough transform but, at the same time, bypasses the problem of multiple peak identification in Hough images and permits detection of multiple objects without invoking nonmaximum suppression heuristics. Our experiments demonstrate that this method results in a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem.
---
paper_title: Unsupervised moving object detection with on-line generalized hough transform
paper_content:
Generalized Hough Transform-based methods have been successfully applied to object detection. Such methods have the following disadvantages: (i) the need for manual labeling of training data; (ii) the off-line construction of the codebook. To overcome these limitations, we propose an unsupervised moving object detection algorithm with an on-line Generalized Hough Transform. Our contributions are two-fold: (i) an unsupervised training-data selection algorithm based on Multiple Instance Learning (MIL); (ii) an on-line Extremely Randomized Trees construction algorithm for on-line codebook adaptation. We evaluate the proposed algorithm on three video datasets. The experimental results show that the proposed algorithm achieves performance comparable to a supervised detection method with manual labeling, and that it outperforms a previously proposed unsupervised learning algorithm.
---
paper_title: 2D into 3D Hough-space mapping for planar object pose estimation
paper_content:
A novel approach is proposed that relates the classical two-dimensional Hough space to a different Hough space embedding 3D information about the poses of planar objects in a single gray-level image. The Hough transform is used to detect rectilinear segments that, suitably grouped into a bounded figure, constitute a planar surface. Then, a pure geometrical mechanism is used to map the numerical Hough space representation of the image into a similar representation in a reference system that is fixed with respect to the surface. The object pose to be estimated is computed by comparing the numerical representations of the test and model images (usually in a fronto-parallel view) in the same space invariant to the object pose.
---
paper_title: Rectangular object tracking based on standard hough transform
paper_content:
Object tracking is an important problem in computer vision, and numerous research papers have been published on it, yet few address tracking shaped objects via the Hough Transform. This paper provides a method based on the Standard Hough Transform to track rectangular objects. Our method works on edge images obtained by applying the Canny edge detector to the source image. The rectangular object in the edge image is then extracted based on the Standard Hough Transform and the geometric constraints of a rectangle. Afterwards, we shift the region of interest to the new position of the tracked object and continue to detect the rectangle in successive frames. Our technique can run in real time due to its low computation time and simplicity.
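A minimal sketch of the Canny-plus-standard-Hough stage described above, using OpenCV; it only pairs up near-parallel lines and omits the paper's full rectangle conditions, and all thresholds are placeholder values:

    import cv2
    import numpy as np

    def rectangle_line_candidates(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                    # edge image
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)   # standard Hough: (rho, theta) per line
        if lines is None:
            return []
        lines = lines[:, 0, :]
        pairs = []
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                if abs(lines[i][1] - lines[j][1]) < np.deg2rad(5):   # near-parallel sides
                    pairs.append((tuple(lines[i]), tuple(lines[j])))
        return pairs   # candidate pairs of opposite rectangle sides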
---
paper_title: Efficient tracking with the Bounded Hough Transform
paper_content:
The Bounded Hough Transform is introduced to track objects in a sequence of sparse range images. The method is based upon a variation of the General Hough Transform that exploits the coherence across image frames that results from the relationship between known bounds on the object's velocity and the sensor frame rate. It is extremely efficient, running in O(N) for N range data points, and effectively trades off localization precision for runtime efficiency. The method has been implemented and tested on a variety of objects, including freeform surfaces, using both simulated and real data from Lidar and stereovision sensors. The motion bounds allow the inter-frame transformation space to be reduced to a reasonable, and indeed small size, containing only 729 possible states. In a variation, the rotational subspace is projected onto the translational subspace, which further reduces the transformation space to only 54 states. Experimental results confirm that the technique works well with very sparse data, possibly comprising only tens of points per frame, and that it is also robust to measurement error and outliers.
---
paper_title: Dam wall detection and tracking using a Mechanically Scanned Imaging Sonar
paper_content:
In dam inspection tasks, an underwater robot has to capture images while surveying the wall, while maintaining a certain distance and relative orientation. This paper proposes the use of an MSIS (Mechanically Scanned Imaging Sonar) for relative positioning of a robot with respect to the wall. The imaging sonar gathers polar image scans from which depth images (range and bearing) are generated. Depth scans are first processed to extract a line corresponding to the wall (with the Hough Transform), which is then tracked by means of an EKF (Extended Kalman Filter) using a static motion model and an implicit measurement equation associating the sensed points to the candidate line. The line estimate is referenced to the robot-fixed frame and represented in polar coordinates (ρ, θ), which directly correspond to the actual distance and relative orientation of the robot with respect to the wall. The proposed system has been tested in simulation as well as in water-tank conditions.
---
paper_title: Implementation of Hough Transform as track detector
paper_content:
The Hough Transform is a convenient tool for feature extraction from images. In this paper an implementation of the Hough Transform is considered for automatic track initiation in the surveillance radar space. The need for track initiation arises when there are many moving objects in the sensor surveillance volume and it is not clear which measurement belongs to which target. If the type of target trajectories is known, the trajectories can be easily detected by performing the corresponding Hough transform on the track image. Here, the effectiveness of the Hough transform track initiator is discussed. The influence of Hough parameter-space granularity on the probability of track detection is analyzed. Analytical expressions for the probability of track detection using the Hough Transform are derived in the presence of normally distributed additive system noise, measurement noise, and without any noise. A new parameter-space structure, matched to the measurement errors, is proposed. Monte Carlo simulation confirms the analytical results.
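As an illustration of the parameter-space granularity discussed above, the following stand-alone NumPy sketch (not the paper's radar implementation) builds a standard rho-theta accumulator in which the cell sizes d_rho and d_theta are the granularity parameters; the example data are synthetic:

    import numpy as np

    def hough_accumulator(points, d_theta_deg=1.0, d_rho=1.0, rho_max=100.0):
        thetas = np.deg2rad(np.arange(0.0, 180.0, d_theta_deg))
        rhos = np.arange(-rho_max, rho_max, d_rho)
        acc = np.zeros((len(rhos), len(thetas)), dtype=int)
        for x, y in points:
            rho_vals = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
            idx = np.round((rho_vals + rho_max) / d_rho).astype(int)
            ok = (idx >= 0) & (idx < len(rhos))
            acc[idx[ok], np.where(ok)[0]] += 1                   # cast one vote per theta cell
        return acc, rhos, thetas

    # noisy detections along the line y = 0.5 * x + 10 (illustrative data)
    xs = np.arange(0.0, 50.0, 2.0)
    pts = np.stack([xs, 0.5 * xs + 10 + np.random.normal(0, 0.5, xs.size)], axis=1)
    acc, rhos, thetas = hough_accumulator(pts)
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print("votes:", acc[i, j], "rho:", rhos[i], "theta [deg]:", np.rad2deg(thetas[j]))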
---
paper_title: Spatial color histogram based center voting method for subsequent object tracking and segmentation
paper_content:
In this paper, we introduce an algorithm for object tracking in video sequences. In order to represent the object to be tracked, we propose a new spatial color histogram model which encodes both the color distribution and spatial information. Using this spatial color histogram model, a voting method based on the generalized Hough transform is employed to estimate the object location from frame to frame. The proposed voting based method, called the center voting method, requests every pixel near the previous object center to cast a vote for locating the new object center in the new frame. Once the location of the object is obtained, the back projection method is used to segment the object from the background. Experiment results show successful tracking of the object even when the object being tracked changes in size and shares similar color with the background.
---
paper_title: Multi-Hough Transform Track Initiation for Detecting Target with Constant Acceleration
paper_content:
Track initiation is an important part of multi-target tracking, especially for low-observable targets in heavy clutter. A multi-Hough-transform track initiation method for detecting targets with constant acceleration in a heavy clutter environment is proposed. In this algorithm, the standard Hough transform is first used to filter clutter. The candidate tracks are then sifted using the randomized Hough transform and a minimum-distance criterion. Finally, the number of true tracks and their parameters are obtained by subtractive clustering. Simulation results show that this algorithm keeps the true-track initiation probability high while keeping the false-track initiation probability low. By means of multiple Hough transforms and subtractive clustering, the heavy clutter is filtered out, the probability of sifting true tracks from the candidate tracks is increased, and the time needed for track initiation is shortened. This method shows better performance than other methods.
---
paper_title: A shape-based voting algorithm for pedestrian detection and tracking
paper_content:
This paper presents the MOUGH (mixture of uniform and Gaussian Hough) Transform for shape-based object detection and tracking. We show that the edgels of a rigid object at a given orientation are approximately distributed according to a Gaussian mixture model (GMMs). A variant of the generalized Hough transform is proposed, voting using GMMs and optimized via Expectation-Maximization, that is capable of searching images for a mildly-deformable shape, based on a training dataset of (possibly noisy) images with only crude estimates of scale and centroid of the object in each image. Further modifications are proposed to optimize the algorithm for tracking. The method is able to locate and track objects reliably even against complex backgrounds such as dense moving foliage, and with a moving camera. Experimental results indicate that the algorithm is superior to previously published variants of the Hough transform and to active shape models in tracking pedestrians from a side view.
---
paper_title: Underwater Cable Tracking by Visual Feedback
paper_content:
Nowadays, the surveillance and inspection of underwater installations, such as power and telecommunication cables and pipelines, is carried out by trained operators who, from the surface, control a Remotely Operated Vehicle (ROV) with cameras mounted on it. This is a tedious, time-consuming and expensive task, prone to errors mainly because of loss of attention or fatigue of the operator and also due to the typically low quality of seabed images. In this study, the main concern has been the development of a vision system that guides an Autonomous Underwater Vehicle (AUV) able to automatically detect and track an underwater power cable laid on the seabed. The system has been tested using sequences from a video tape obtained by means of an ROV during several tracking sessions of various real cables. The average success rate that has been achieved is about 90% for a frame rate higher than 25 frames/second.
---
paper_title: Automatic Inspection of Cage Integrity with Underwater Vehicle
paper_content:
This thesis is a preliminary study for an initiative to develop an autonomous underwater vehicle (AUV) for use in aquaculture. A micro-ROV is used to test the concept of inspecting net integrity in fish cages by analyzing the video feed from an onboard camera. Software is developed, including an interactive GUI, thruster control, and the analysis algorithms. A laser module is also developed to measure the distance between the ROV and the fish cage using two laser line generators. The conclusion is that checking net integrity with a camera and computer vision software is a viable option for a future AUV. The laser module works well, providing reliable distance measurements for the ROV.
---
paper_title: Image shape extraction using interval methods
paper_content:
This paper proposes a new method for recognition of geometrical shapes (such as lines, circles or ellipsoids) in an image. The main idea is to transform the problem into a bounded error estimation problem and then to use an interval-based method which is robust with respect to outliers. The approach is illustrated on an image taken by an underwater robot where a spherical buoy has to be detected. The results are then compared to those obtained with the more classical generalized Hough transform.
---
paper_title: Computer vision techniques for underwater navigation
paper_content:
In the world of autonomous underwater vehicles (AUVs) the prominent form of sensing has been sonar, due to cloudy water conditions and dispersion of light. Although underwater conditions are highly suitable for sonar, this does not mean that vision techniques should be completely ignored. There are situations where visibility is high, such as in calm waters, and where light dispersion is not an issue, such as shallow water or near the surface. In addition, even when visibility is low, once a certain proximity to an object exists, visibility can increase. The focus of this project is this gap in capability for AUVs, with an emphasis on computer-aided detection through machine learning and computer vision techniques. All experimentation utilizes the Stingray AUV, a small and unique vehicle designed by San Diego iBotics. The first experiment is detection of an anchored buoy, which mimics the real-world application of mine detection for the Navy. The second experiment is detection of a pipe, which mimics pipes in bays and harbors. The current algorithm for this application uses machine-learning boosting on hue, saturation, and value (HSV) features to create a classifier, followed by post-processing techniques to clean the resulting binary image. There are many further applications for computer-aided detection and classification of objects underwater, from environmental to military.
---
paper_title: An Active Contour and Kalman Filter for Underwater Target Tracking and Navigation
paper_content:
Underwater survey and inspection are mandatory steps for the offshore and mining industries, from the installation of onshore-offshore structures to their operation (Whitcomb, 2000). There are two main areas where underwater target tracking is presently employed in the offshore and mining industries. The first is sea-floor survey and inspection; the second is subsea installation, inspection and maintenance. This paper addresses the second area: an AUV vision system is developed that can detect and track underwater installations, such as oil or gas pipelines and power or telecommunication cables, for inspection and maintenance applications. The use of underwater installations has increased manyfold. Routine inspection and maintenance are desirable to protect them from marine traffic, such as fishing and anchoring (Asakawa, et al., 2000). Detection and tracking of underwater pipelines in a complex marine environment is a fairly difficult task, due to the frequent presence of noise on the subsea surface. Noise is commonly introduced into underwater images by sporadic marine growth and dynamic lighting conditions. Traditionally, surveillance, inspection and maintenance of underwater man-made structures are carried out using a remotely operated vehicle (ROV) controlled from the mother ship by a trained operator (Whitcomb, 2000). The use of ROVs for underwater inspection is expensive and time consuming. Furthermore, controlling an ROV from the surface requires continuous attention and concentration from the operator to keep the vehicle in the desired position and orientation. During long missions, this becomes a tedious task, highly prone to errors due to lapses of attention and weariness. Moreover, tethering the vehicle limits both the operating range and the vehicle's movements (Ortiz, 2002). Autonomous underwater vehicles do not have such limitations and essentially offer better capabilities than ROVs. AUVs have a wider range of operation, as there is no physical link between the surface control station and the vehicle, and they carry their power supply onboard. The use of AUVs for underwater pipeline or cable inspection and maintenance has become a very popular area of research for the mining and offshore industries (Griffiths & Birch 2000). During the last decade, much effort has gone into the design and development of different AUV tracking systems for routine inspection and maintenance of underwater installations (Asif and Arshad 2006). Conventionally, the literature on underwater pipeline or cable tracking can be categorized according to the sensors used for detection and tracking. Three main types of sensors are used for that purpose. The first two are sonar and a pair of magnetometers (Petillot, et al.,
---
paper_title: Combining spectral signals and spatial patterns using multiple Hough transforms: An application for detection of natural gas seepages
paper_content:
Object detection in remote sensing studies can be improved by incorporating spatial knowledge of an object in an image processing algorithm. This paper presents an algorithm based on sequential Hough transforms, which aims to detect botanical and mineralogical alterations that result from natural seepage of carbon dioxide and light hydrocarbons. As the observed alterations are not unique for gas seepages, these halos can only be distinguished from the background by their specific spatial pattern: the alterations are present as halos that line up along geological lineaments in the shallow subsurface. The algorithm is deployed in three phases: a prior spectral classification followed by two serialized Hough transforms. The first Hough transform fits circles through spectrally optimal matching pixels. Next, the centers of the detected circles are piped into the second Hough transform that detects points that are located on a line. Results show that our algorithm is successful in detecting the alteration halos. The number of false anomalies is sufficiently reduced to allow an objective detection based on field observations and spectral measurements.
---
paper_title: Knowledge-based power line detection for UAV surveillance and inspection systems
paper_content:
Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in the automatic surveillance of electrical power infrastructure. For an automatic vision-based power line inspection system, detecting power lines against a cluttered background is an important and challenging task. In this paper, we propose a knowledge-based power line detection method for a vision-based UAV surveillance and inspection system. A PCNN filter is developed to remove background noise from the images before the Hough transform is employed to detect straight lines. Finally, knowledge-based line clustering is applied to refine the detection results. Experiments on real image data captured from a UAV platform demonstrate that the proposed approach is effective.
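The line-detection and clustering stages can be approximated with OpenCV as below; this is only a hedged sketch in which plain Canny replaces the paper's PCNN pre-filter and a crude angle-bucket grouping stands in for the knowledge-based clustering, with all thresholds assumed:

    import cv2
    import numpy as np

    def detect_line_groups(img_gray, angle_tol_deg=3.0):
        edges = cv2.Canny(img_gray, 60, 180)                 # stand-in for the PCNN noise filter
        segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=80, maxLineGap=10)
        groups = {}
        if segs is None:
            return groups
        for x1, y1, x2, y2 in segs[:, 0, :]:
            ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
            key = int(round(ang / angle_tol_deg))            # crude angle bucket
            groups.setdefault(key, []).append((x1, y1, x2, y2))
        # power lines tend to appear as several long, nearly parallel segments
        return {k: v for k, v in groups.items() if len(v) >= 2}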
---
paper_title: Detection and tracking of pipe joints in noisy images
paper_content:
The remote and automatic inspection of the inside of pipes and tunnels is an important industrial application area. The main characteristics of the environment found in commonly used pipes such as sewers are: limitations on the camera spatial position; a large variety of surface features; a wide range of surface reflectivity due to the orientation of parts of the pipe, e.g. the joints; and many disturbances to the environment due, for example, to mist, water spray, or hanging debris. The objective of this research is defect detection and classification; however, a first stage is the construction of a model of the pipe structure by pipe joint tracking. This paper describes work to exploit the knowledge of the environment to: build a model of defects, reflectivity characteristics and pipe characteristics; develop appropriate methods for grouping the pipe joint features within each image from edge information; fit a pipe joint model (a circle, or connected arcs) to the grouped features; and track these features in sequential images. Each stage in these processes has been analysed to optimise the performance in terms of reliability and speed of operation. The methods that have been developed are described and results of robust pipe joint tracking over a large sequence of images are presented. The paper also presents results of experiments applying several common edge detectors to images which have been corrupted by JPEG encoding and spatial sub-sampling. The subsequent robustness of a Hough-based method for the detection of circular image features is also reported. Keywords: pipe joints, tracking, Hough transform, edge detection, JPEG.
There has been considerable research into the inspection of industrial objects using computer vision techniques. An increasing area of interest occurs when either the camera or the objects are moving. In this case, two different procedures may be involved in the inspection process: (i) tracking to determine the position of the camera with respect to the subject of interest and (ii) feature detection. In this paper, the inspection of pipe joints is discussed. The current method of pipe inspection is by manually controlling a remote TV camera and classifying defects from images displayed on a monitor. This is both time consuming and costly. The inspection is also stored on video tape for subsequent archiving and analysis. These video tapes are used in this research to formulate strategies and develop software for analysis which may, in the future, be used in the field. The ultimate objective of the research is to provide an objective measurement of pipe defects which matches or exceeds the performance of a human operator. What makes this research particularly challenging is the high level of noise encountered. The origins of this noise are many, for example: camera instability; poor illumination; occlusion; gross distortions in the pipe; the build-up of extraneous matter on the walls and joints of the pipe; and the environment in which the images are acquired. As a first step in the inspection process the pipe joints, which are readily visible, are identified. The subsequent extraction of information about deformation, the build-up of extraneous matter, or a large number of other features will be overlaid on the basic pipe model constructed from the pipe joints. Alternative methods of inspection by more direct means have been suggested and may be used in conjunction with the proposed method in the future. One aspect of the overall project is to investigate efficient methods of minimising the amount of data typically stored during a sewer inspection. Many thousands of video tapes are used to store the images, and JPEG image compression has been employed to significantly reduce the storage requirement for digital images.
---
paper_title: Cell detection for bee comb images using Circular Hough Transformation
paper_content:
A bee colony has to be in optimum health in order to produce the optimum amount of honey or to act as a pollination agent for crop plantations. Current methods for evaluating bee colony health, namely visual judgement and estimation by trained personnel and manual counting using a wire grid as guidance, are time consuming and their accuracy depends on the individual. This study explored the use of image processing methods to improve on the existing methods. Although the cells in a bee comb are hexagonal, the openings of the cells are observed to be circular, which makes it possible to apply the Circular Hough Transform (cHT), a common image processing technique for detecting lines and circles, to counting the number of cells in the comb. In addition, the cells do not overlap each other, making them clearly visible. A prototype cell detection algorithm using cHT to count the number of cells in a bee comb from a digital image was developed. Compared with manual counting, the prototype achieved an average cell detection rate of more than 80%. The difference between the cHT-based count and the manual count was caused by detection errors of two types: false acceptance (counting a non-cell) and false rejection (failing to detect a cell).
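A minimal sketch of circular-Hough cell counting with OpenCV; the radius range and vote thresholds below are illustrative assumptions, not the values used in the study:

    import cv2

    def count_cells(image_path, r_min=8, r_max=20):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        gray = cv2.medianBlur(gray, 5)                       # suppress comb texture before voting
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=2 * r_min,        # cells do not overlap
                                   param1=100, param2=30,
                                   minRadius=r_min, maxRadius=r_max)
        return 0 if circles is None else circles.shape[1]    # number of detected cells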
---
paper_title: Label Inspection using the Hough Transform on Transputer Networks
paper_content:
A variation of the Hough transform, the special normal/angle Hough transform (RθsHT), is described, detailing its characteristics, its advantages, and its suitability for a parallel implementation. The transputer is introduced and the application of the RθsHT on two popular types of transputer networks, a control-driven array and a demand-driven farm of transputers, is described. The methods employed for communication and the breakdown of the algorithms are described for each type of network. Timing results for the implementation on both networks are provided. The use of the RθsHT is then further examined with application to the problem of label inspection, and two types of labels are used to demonstrate how to pass or fail labels under inspection.
---
paper_title: Automatic lumbar vertebrae segmentation in fluoroscopic images via optimised concurrent Hough transform
paper_content:
We show how a new approach can automatically detect the positions and borders of vertebrae concurrently, relieving many of the problems experienced in other approaches. First, we use phase congruency to relieve the difficulty associated with threshold selection in edge detection of the illumination variant DVF images. Then, our new Hough transform approach is applied to determine the moving vertebrae, concurrently. We include optimisation via a genetic algorithm (as without it the extraction of moving multiple vertebrae is computationally daunting). Our results show that this new approach can indeed provide extractions of position and rotation which appear to be of sufficient quality to aid therapy and diagnosis of spinal disorders.
---
paper_title: Warped Wigner-Hough Transform for Defect Reflection Enhancement in Ultrasonic Guided Wave Monitoring
paper_content:
To improve the defect detectability of Lamb wave inspection systems, the application of nonlinear signal processing was investigated. The approach is based on a Warped Frequency Transform (WFT) to compensate the dispersive behavior of ultrasonic guided waves, followed by a Wigner-Ville time-frequency analysis and the Hough Transform to further improve localization accuracy. As a result, an automatic detection procedure to locate defect-induced reflections was demonstrated and successfully tested by analyzing numerically simulated Lamb waves propagating in an aluminum plate. The proposed method is suitable for defect detection and can be easily implemented for real-world structural health monitoring applications.
---
paper_title: Robust Feature Matching with Alternate Hough and Inverted Hough Transforms
paper_content:
We present an algorithm that carries out alternate Hough transform and inverted Hough transform to establish feature correspondences, and enhances the quality of matching in both precision and recall. Inspired by the fact that nearby features on the same object share coherent homographies in matching, we cast the task of feature matching as a density estimation problem in the Hough space spanned by the hypotheses of homographies. Specifically, we project all the correspondences into the Hough space, and determine the correctness of the correspondences by their respective densities. In this way, mutual verification of relevant correspondences is activated, and the precision of matching is boosted. On the other hand, we infer the concerted homographies propagated from the locally grouped features, and enrich the correspondence candidates for each feature. The recall is hence increased. The two processes are tightly coupled. Through iterative optimization, plausible enrichments are gradually revealed while more correct correspondences are detected. Promising experimental results on three benchmark datasets manifest the effectiveness of the proposed approach.
---
paper_title: Distinctive Image Features from Scale-Invariant Keypoints
paper_content:
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as the foundation for SIFT, which has played an important role in robotic and machine vision over the past decade.
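A short sketch of SIFT extraction and ratio-test matching using OpenCV (assuming a version where cv2.SIFT_create is available, i.e. 4.4 or later); the 0.75 ratio is the commonly used default, not something prescribed by this paper:

    import cv2

    def sift_matches(img1_gray, img2_gray, ratio=0.75):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1_gray, None)
        kp2, des2 = sift.detectAndCompute(img2_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe's ratio test
        return kp1, kp2, good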
---
paper_title: Scene classification using color and structure-based features
paper_content:
Scene understanding is a significant challenge. Conventional methods have been applied in many fields, for instance scene recognition in digital cameras, similar-image retrieval on websites, and robot vision for autonomous or assistive robots. Scene understanding is therefore important, but it is as difficult as generic object recognition due to the diversity of categories. Many conventional methods focus on color or spatial-frequency features in images; scene classification using spatial-frequency features in particular has shown efficacy. The results of these studies suggest that there are common features within the same scene. In this paper we propose a scene classification method that focuses on the structure of the scene. We define the structure of a scene as a set of lines in the image and compute structure features from the Hough space obtained by applying the Hough transform to the image. In addition, we compute color features and combine the two. Using these two kinds of features we train two strong classifiers with a boosting algorithm and combine the results of each strong classifier. To test our approach, we performed two-class classification of scenes for each category using a scene classification dataset. The results show that our approach is effective for several scenes, especially scenes containing artifacts.
---
paper_title: Global Correlation Clustering Based on the Hough Transform
paper_content:
In this article, we propose an efficient and effective method for finding arbitrarily oriented subspace clusters by mapping the data space to a parameter space defining the set of possible arbitrarily oriented subspaces. The objective of a clustering algorithm based on this principle is to find those among all the possible subspaces that accommodate many database objects. In contrast to existing approaches, our method can find subspace clusters of different dimensionality even if they are sparse or are intersected by other clusters within a noisy environment. A broad experimental evaluation demonstrates the robustness and effectiveness of our method.
---
| Title: A Survey on Hough Transform, Theory, Techniques and Applications
Section 1: Introduction
Description 1: This section introduces the Hough Transform (HT), its development, and its fundamental principles and advantages. It also outlines the main contributions and the structure of the paper.
Section 2: Shape Detection
Description 2: This section discusses the use of the Hough Transform in detecting both analytical and irregular shapes, detailing the various techniques and modifications for line, circle, and ellipse detection, as well as other analytical and arbitrary shapes.
Section 3: Tone and Color
Description 3: This section explores the application of the Hough Transform to gray and color images, highlighting methods that work on images with higher information content without conversion to binary images.
Section 4: Speedup and Memory Saving
Description 4: This section covers various approaches for accelerating the Hough Transform and reducing its memory requirements, including hardware implementations, parallel computing, and memory optimization techniques.
Section 5: Hough Transform Applications
Description 5: This section categorizes and describes the wide range of applications of the Hough Transform, including traffic and transport, biometrics and man-machine interaction, 3D applications, object recognition and tracking, underwater applications, industrial and commercial applications, medical applications, and unconventional applications.
Section 6: Conclusions
Description 6: This section summarizes the survey, reflecting on the Hough Transform's advantages, its wide array of applications, and suggesting future research directions and potential advancements. |
ANALYTICAL OVERVIEW OF WORKS ON HIGH PRECISION ANGLE MEASUREMENT INSTRUMENTS | 7 | ---
paper_title: Single-isotope laser gyro.
paper_content:
A single-isotope He-Ne laser gyro has been constructed and operated, and the results have been compared with those obtained from a multi-isotope system. Strong mode competition in the single-isotope ring laser has been found in a region 10-30 MHz near line center. The sense of the mode competition has been found to depend on the direction of rotation of the laser cavity. Hence, by mechanical dithering of the laser, the time-averaged effect of the mode competition can be zero. However, with a current unbalance, a large dispersion-type null shift is found near line center. Unlike the linear laser and the multi-isotope ring laser, the single-isotope ring laser is characterized by strong mode competition such that only a single longitudinal mode oscillates over most of the free spectral range.
---
| Title: ANALYTICAL OVERVIEW OF WORKS ON HIGH PRECISION ANGLE MEASUREMENT INSTRUMENTS
Section 1: Introduction
Description 1: This section introduces the importance and application of angle transducers in navigation instruments used in control systems of various vehicles.
Section 2: Studies and Publications Overview
Description 2: This section highlights the lack of systemic analysis in the literature concerning high precision angle measurement instruments.
Section 3: Terms of Reference
Description 3: This section provides an overview of achievements and advancements in high precision angle measurement instruments, detailing various types of transducers and their specifications.
Section 4: Ukrainian goniometers with visual targeting and reading
Description 4: This section discusses precision goniometers manufactured by companies in Ukraine, Germany, and the UK, focusing on their specifications and functionalities.
Section 5: Automatic goniometers in the world market
Description 5: This section reviews automatic goniometers available in the global market, describing their advanced features, automation capabilities, and accuracy improvements.
Section 6: Dynamic laser goniometer IUP-1L
Description 6: This section details the next generation of speed angle measurement systems developed by St. Petersburg Electrotechnical University, including their design and applications.
Section 7: Discussion
Description 7: This section summarizes the achievements in the field, noting the variations in accuracy and price among the different instruments discussed. |
A pedagogical overview of quantum discord | 16 | ---
paper_title: Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model
paper_content:
A state of a composite quantum system is called classically correlated if it can be approximated by convex combinations of product states, and Einstein-Podolsky-Rosen correlated otherwise. Any classically correlated state can be modeled by a hidden-variable theory and hence satisfies all generalized Bell's inequalities. It is shown by an explicit example that the converse of this statement is false.
---
paper_title: The classical-quantum boundary for correlations: discord and related measures
paper_content:
One of the best signatures of nonclassicality in a quantum system is the existence of correlations that have no classical counterpart. Different methods for quantifying the quantum and classical parts of correlations are amongst the more actively-studied topics of quantum information theory over the past decade. Entanglement is the most prominent of these correlations, but in many cases unentangled states exhibit nonclassical behavior too. Thus distinguishing quantum correlations other than entanglement provides a better division between the quantum and classical worlds, especially when considering mixed states. Here we review different notions of classical and quantum correlations quantified by quantum discord and other related measures. In the first half, we review the mathematical properties of the measures of quantum correlations, relate them to each other, and discuss the classical-quantum division that is common among them. In the second half, we show that the measures identify and quantify the deviation from classicality in various quantum-information-processing tasks, quantum thermodynamics, open-system dynamics, and many-body physics. We show that in many cases quantum correlations indicate an advantage of quantum methods over classical ones.
---
paper_title: Local versus nonlocal information in quantum-information theory: Formalism and phenomena
paper_content:
In spite of many results in quantum information theory, the complex nature of compound systems is far from clear. In general the information is a mixture of local and nonlocal ('quantum') information. It is important from both pragmatic and theoretical points of view to know the relationships between the two components. To make this point more clear, we develop and investigate the quantum-information processing paradigm in which parties sharing a multipartite state distill local information. The amount of information which is lost because the parties must use a classical communication channel is the deficit. This scheme can be viewed as complementary to the notion of distilling entanglement. After reviewing the paradigm in detail, we show that the upper bound for the deficit is given by the relative entropy distance to so-called pseudoclassically correlated states; the lower bound is the relative entropy of entanglement. This implies, in particular, that any entangled state is informationally nonlocal - i.e., has nonzero deficit. We also apply the paradigm to defining the thermodynamical cost of erasing entanglement. We show the cost is bounded from below by relative entropy of entanglement. We demonstrate the existence of several other nonlocal phenomena which can be found using the paradigm of local information. For example, we prove the existence of a form of nonlocality without entanglement and with distinguishability. We analyze the deficit for several classes of multipartite pure states and obtain that in contrast to the GHZ state, the Aharonov state is extremely nonlocal. We also show that there do not exist states for which the deficit is strictly equal to the whole informational content (bound local information). We discuss the relation of the paradigm with measures of classical correlations introduced earlier. It is also proved that in the one-way scenario, the deficit is additive for Bell diagonal states. We then discuss complementary features of information in distributed quantum systems. Finally we discuss the physical and theoretical meaning of the results and pose many open questions.
---
paper_title: Quantum entanglement
paper_content:
All our former experience with application of quantum theory seems to say: what is predicted by quantum formalism must occur in the laboratory. But the essence of quantum formalism - entanglement, recognized by Einstein, Podolsky, Rosen and Schrödinger - waited over 70 years to enter laboratories as a new resource as real as energy. This holistic property of compound quantum systems, which involves nonclassical correlations between subsystems, is a potential for many quantum processes, including "canonical" ones: quantum cryptography, quantum teleportation and dense coding. However, it appeared that this new resource is very complex and difficult to detect. Being usually fragile to the environment, it is robust against conceptual and mathematical tools, the task of which is to decipher its rich structure. This article reviews basic aspects of entanglement including its characterization, detection, distillation and quantifying. In particular, the authors discuss various manifestations of entanglement via Bell inequalities, entropic inequalities, entanglement witnesses, quantum cryptography and point out some interrelations. They also discuss a basic role of entanglement in quantum communication within the distant-labs paradigm and stress some peculiarities such as irreversibility of entanglement manipulations including its extremal form, the bound entanglement phenomenon. A basic role of entanglement witnesses in detection of entanglement is emphasized.
---
paper_title: Negative Entropy and Information in Quantum Mechanics
paper_content:
A framework for a quantum mechanical information theory is introduced that is based entirely on density operators, and gives rise to a unified description of classical correlation and quantum entanglement. Unlike in classical (Shannon) information theory, quantum (von Neumann) conditional entropies can be negative when considering quantum entangled systems, a fact related to quantum nonseparability. The possibility that negative (virtual) information can be carried by entangled particles suggests a consistent interpretation of quantum informational processes.
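A small numerical illustration (not taken from the paper) of the negative conditional entropy mentioned above: for a two-qubit Bell state, S(A|B) = S(AB) - S(B) evaluates to -1 bit:

    import numpy as np

    def von_neumann_entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]                                 # drop numerically zero eigenvalues
        return float(-np.sum(w * np.log2(w)))

    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
    rho_ab = np.outer(psi, psi)                          # pure Bell state, so S(AB) = 0
    rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)   # trace out A, S(B) = 1
    print(von_neumann_entropy(rho_ab) - von_neumann_entropy(rho_b))   # -> -1.0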
---
paper_title: Discord and quantum computational resources
paper_content:
Discordant states appear in a large number of quantum phenomena and seem to be a good indicator of divergence from classicality. While there is evidence that they are essential for a quantum algorithm to have an advantage over a classical one, their precise role is unclear. We examine the role of discord in quantum algorithms using the paradigmatic framework of `restricted distributed quantum gates' and show that manipulating discordant states using local operations has an associated cost in terms of entanglement and communication resources. Changing discord reduces the total correlations and reversible operations on discordant states usually require non-local resources. Discord alone is, however, not enough to determine the need for entanglement. A more general type of similar quantities, which we call K-discord, is introduced as a further constraint on the kinds of operations that can be performed without entanglement resources.
---
paper_title: Partial quantum information
paper_content:
Information—be it classical [1] or quantum [2]—is measured by the amount of communication needed to convey it. In the classical case, if the receiver has some prior information about the messages being conveyed, less communication is needed [3]. Here we explore the concept of prior quantum information: given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the partial information one system needs, conditioned on its prior information. We find that it is given by the conditional entropy—a quantity that was known previously, but lacked an operational meaning. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, then sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a protocol that we term ‘quantum state merging’ which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, noiseless coding with side information, multiple access channels and assisted entanglement distillation.
---
paper_title: Quantum extension of conditional probability
paper_content:
We analyze properties of the quantum conditional amplitude operator [Phys. Rev. Lett. 79, 5194 (1997)], which plays a role similar to that of the conditional probability in classical information theory. The spectrum of the conditional operator that characterizes a quantum bipartite system is shown to be invariant under local unitary transformations and reflects its inseparability. More specifically, it is proven that the conditional amplitude operator of a separable state cannot have an eigenvalue exceeding 1, which results in a necessary condition for separability. A related separability criterion based on the non-negativity of the von Neumann conditional entropy is also exhibited.
---
paper_title: The thermodynamic meaning of negative entropy
paper_content:
This corrects the article DOI: 10.1038/nature10123
---
paper_title: Unification of quantum and classical correlations and quantumness measures
paper_content:
We give a pedagogical introduction to quantum discord and discuss the problem of separation of total correlations in a given quantum state into entanglement, dissonance, and classical correlations using the concept of relative entropy. This allows us to put all correlations on an equal footing. Entanglement and dissonance jointly belong to what is known as quantum discord. Our methods are completely applicable for multipartite systems of arbitrary dimensions. We finally show, using relative entropy, how different notions of quantum correlations are related to each other. This gives a single theory that incorporates all correlations, quantum and classical, and different methods of quantifying them.
---
paper_title: Criteria for measures of quantum correlations
paper_content:
Entanglement does not describe all quantum correlations and several authors have shown the need to go beyond entanglement when dealing with mixed states. Various different measures have sprung up in the literature, for a variety of reasons, to describe bipartite and multipartite quantum correlations; some are known under the collective name quantum discord. Yet, in the same spirit as the criteria for entanglement measures, there is no general mechanism that determines whether a measure of quantum and classical correlations is a proper measure of correlations. This is partially due to the fact that the answer is a bit muddy. In this article we attempt to tackle this muddy topic by writing down several criteria for a "good" measure of correlations. We break up our list into necessary, reasonable, and debatable conditions. We then proceed to prove several of these conditions for generalized measures of quantum correlations. However, not all conditions are met by all measures; we show this via several examples. The reasonable conditions are related to continuity of correlations, which has not been previously discussed. Continuity is an important quality if one wants to probe quantum correlations in the laboratory. We show that most types of quantum discord are continuous but none are continuous with respect to the measurement basis used for optimization.
---
paper_title: Quantum discord: a measure of the quantumness of correlations.
paper_content:
Two classically identical expressions for the mutual information generally differ when the systems involved are quantum. This difference defines the quantum discord. It can be used as a measure of the quantumness of correlations. Separability of the density matrix describing a pair of systems does not guarantee vanishing of the discord, thus showing that absence of entanglement does not imply classicality. We relate this to the quantum superposition principle, and consider the vanishing of discord as a criterion for the preferred effectively classical states of a system, i.e., the pointer states.
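For concreteness, a brute-force sketch of this discord for two qubits is given below (a grid search over projective measurements on subsystem B; not an optimised or reference implementation, and the Werner-state example and grid size are assumptions made for illustration):

    import numpy as np

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log2(w)))

    def ptrace(rho, keep):                         # partial trace of a two-qubit state
        r = rho.reshape(2, 2, 2, 2)
        return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

    def discord(rho, steps=60):
        mutual = entropy(ptrace(rho, 0)) + entropy(ptrace(rho, 1)) - entropy(rho)
        best_j = -np.inf
        for theta in np.linspace(0.0, np.pi, steps):
            for phi in np.linspace(0.0, 2 * np.pi, steps):
                v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
                p0 = np.outer(v, v.conj())
                j = entropy(ptrace(rho, 0))        # J = S(A) - sum_k p_k S(rho_A|k)
                for proj in (p0, np.eye(2) - p0):  # projective measurement on B
                    m = np.kron(np.eye(2), proj)
                    p = np.real(np.trace(m @ rho))
                    if p > 1e-12:
                        j -= p * entropy(ptrace(m @ rho @ m / p, 0))
                best_j = max(best_j, j)
        return mutual - best_j                     # discord = I(A:B) - max over measurements of J

    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho = 0.25 * np.outer(psi, psi) + 0.75 * np.eye(4) / 4    # separable Werner state (p = 0.25)
    print(round(discord(rho), 3))                  # > 0: nonzero discord without entanglement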
---
paper_title: Thermodynamical approach to quantifying quantum correlations.
paper_content:
We consider the amount of work which can be extracted from a heat bath using a bipartite state shared by two parties. In general it is less than the amount of work extractable when one party is in possession of the entire state. We derive bounds for this "work deficit" and calculate it explicitly for a number of different cases. For pure states the work deficit is exactly equal to the distillable entanglement of the state, and this is also achievable for maximally correlated states. In these cases a form of complementarity exists between physical work which can be extracted and distillable entanglement. The work deficit is a good measure of the quantum correlations in a state and provides a new paradigm for understanding quantum non-locality.
---
paper_title: Classical, quantum and total correlations
paper_content:
We discuss the problem of separating consistently the total correlations in a bipartite quantum state into a quantum and a purely classical part. A measure of classical correlations is proposed and its properties are explored.
---
paper_title: No-local-broadcasting theorem for quantum correlations
paper_content:
We prove that the correlations present in a multipartite quantum state have an operational quantum character as soon as the state does not simply encode a multipartite classical probability distribution, i.e. does not describe the joint state of many classical registers. Even unentangled states may exhibit such quantumness, which is pointed out by the new task of local broadcasting, i.e. of locally sharing pre-established correlations: this task is feasible if and only if the correlations are classical, and we thus derive a no-local-broadcasting theorem for quantum correlations. Thus, local broadcasting is able to point out the quantumness of correlations, as standard broadcasting points out the quantum character of single-system states. Further, we argue that our theorem implies the standard no-broadcasting theorem for single systems, and that our operative approach leads in a natural way to the definition of measures for quantumness of correlations.
---
paper_title: On Quantum No-Broadcasting
paper_content:
We address the issue of one-side local broadcasting for correlations in a quantum bipartite state, and conjecture that the correlations can be broadcast if and only if they are classical–quantum, or equivalently, the quantum discord, as defined by Ollivier and Zurek (Phys Rev Lett 88:017901, 2002), vanishes. We prove this conjecture when the reduced state is maximally mixed and further provide various plausible arguments supporting this conjecture. Moreover, we demonstrate that the conjecture implies the following two elegant and fundamental no-broadcasting theorems: (1) The original no-broadcasting theorem by Barnum et al. (Phys Rev Lett 76:2818, 1996), which states that a family of quantum states can be broadcast if and only if the quantum states commute. (2) The no-local-broadcasting theorem for quantum correlations by Piani et al. (Phys Rev Lett 100:090502, 2008), which states that the correlations in a single bipartite state can be locally broadcast if and only if they are classical. The results provide an informational interpretation for classical–quantum states from an operational perspective and shed new lights on the intrinsic relation between non-commutativity and quantumness.
---
paper_title: Quantum discord as a resource for quantum cryptography
paper_content:
Quantum discord is the minimal bipartite resource which is needed for secure quantum key distribution, being a cryptographic primitive equivalent to non-orthogonality. Its role becomes crucial in device-dependent quantum cryptography, where the presence of preparation and detection noise (inaccessible to all parties) may be so strong as to prevent the distribution and distillation of entanglement. The necessity of entanglement is re-affirmed in the stronger scenario of device-independent quantum cryptography, where all sources of noise are ascribed to the eavesdropper.
---
paper_title: Demonstration of Blind Quantum Computing
paper_content:
Quantum computers, besides offering substantial computational speedups, are also expected to preserve the privacy of a computation. We present an experimental demonstration of blind quantum computing in which the input, computation, and output all remain unknown to the computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server. Various blind delegated computations, including one- and two-qubit gates and the Deutsch and Grover quantum algorithms, are demonstrated. The client only needs to be able to prepare and transmit individual photonic qubits. Our demonstration is crucial for unconditionally secure quantum cloud computing and might become a key ingredient for real-life applications, especially when considering the challenges of making powerful quantum computers widely available.
---
paper_title: Quantum locking of classical correlations and quantum discord of classical-quantum states
paper_content:
A locking protocol between two parties is as follows: Alice gives an encrypted classical message to Bob which she does not want Bob to be able to read until she gives him the key. If Alice is using classical resources, and she wants to approach unconditional security, then the key and the message must have comparable sizes. But if Alice prepares a quantum state, the size of the key can be comparatively negligible. This effect is called quantum locking. Entanglement does not play a role in this quantum advantage. We show that, in this scenario, the quantum discord quantifies the advantage of the quantum protocol over the corresponding classical one for any classical-quantum state.
---
paper_title: Quantum cryptography: Public key distribution and coin tossing
paper_content:
When elementary quantum systems, such as polarized photons, are used to transmit digital information, the uncertainty principle gives rise to novel cryptographic phenomena unachievable with traditional transmission media, e.g. a communications channel on which it is impossible in principle to eavesdrop without a high probability of disturbing the transmission in such a way as to be detected. Such a quantum channel can be used in conjunction with ordinary insecure classical channels to distribute random key information between two users with the assurance that it remains unknown to anyone else, even when the users share no secret information initially. We also present a protocol for coin-tossing by exchange of quantum messages, which is secure against traditional kinds of cheating, even by an opponent with unlimited computing power, but ironically can be subverted by use of a still subtler quantum phenomenon, the Einstein-Podolsky-Rosen paradox.
---
paper_title: Simulating Concordant Computations
paper_content:
A quantum state is called concordant if it has zero quantum discord with respect to any part. By extension, a concordant computation is one such that the state of the computer, at each time step, is concordant. In this paper, I describe a classical algorithm that, given a product state as input, permits the efficient simulation of any concordant quantum computation having a conventional form and composed of gates acting on two or fewer qubits. This shows that such a quantum computation must generate quantum discord if it is to efficiently solve a problem that requires super-polynomial time classically. While I employ the restriction to two-qubit gates sparingly, a crucial component of the simulation algorithm appears not to be extensible to gates acting on higher-dimensional systems.
---
paper_title: Quantum Discord and Quantum Computing - An Appraisal
paper_content:
We discuss models of computing that are beyond classical. The primary motivation is to unearth the cause of nonclassical advantages in computation. Completeness results from computational complexity theory lead to the identification of very disparate problems, and offer a kaleidoscopic view into the realm of quantum enhancements in computation. Emphasis is placed on the `power of one qubit' model, and the boundary between quantum and classical correlations as delineated by quantum discord. A recent result by Eastin on the role of this boundary in the efficient classical simulation of quantum computation is discussed. Perceived drawbacks in the interpretation of quantum discord as a relevant certificate of quantum enhancements are addressed.
---
paper_title: Quantum metrology.
paper_content:
We point out a general framework that encompasses most cases in which quantum effects enable an increase in precision when estimating a parameter (quantum metrology). The typical quantum precision enhancement is of the order of the square root of the number of times the system is sampled. We prove that this is optimal, and we point out the different strategies (classical and quantum) that permit one to attain this bound.
---
| Title: A Pedagogical Overview of Quantum Discord
Section 1: Introduction
Description 1: Outline the main concepts of quantum correlations, introduce the idea of quantum discord, and provide an overview of the paper’s structure.
Section 2: Local Operations and Classical Communication (LOCC)
Description 2: Explain the concept of LOCC, using the example of Alice and Bob, and describe its significance in preparing quantum states.
Section 3: Separable
Description 3: Define separable states and discuss how they are prepared using LOCC.
Section 4: Classically Correlated States
Description 4: Discuss the concept of classically correlated states and how they differ from quantum correlations and entangled states.
Section 5: Quantum Discord
Description 5: Define quantum discord and explain its distinction from classically correlated and entangled states.
Section 6: Historical Origin of Quantum Discord
Description 6: Provide an overview of the historical context and development of the concept of quantum discord.
Section 7: Quantum Conditional Entropy
Description 7: Discuss the generalisation of classical conditional entropy to the quantum domain and its implications for quantum discord.
Section 8: The Original Discord
Description 8: Introduce the original definitions of quantum discord and mutual information, and explain their significance.
Section 9: Extremisation Conditions
Description 9: Describe the conditions and methods for maximising classical correlations and defining quantum discord through various operations.
Section 10: Discord Reformulated
Description 10: Present a reformulated understanding of quantum discord based on modern interpretations and theoretical frameworks.
Section 11: Why Discord is Worth Studying
Description 11: Highlight the importance and potential applications of studying quantum discord in various fields of quantum information theory.
Section 12: Copying Correlated Quantum States
Description 12: Discuss the no-cloning theorem and its implications for the local broadcasting of quantum states and quantum discord.
Section 13: Quantum-Classically Correlated States
Description 13: Elaborate on quantum-classically correlated states and their roles in cryptographic protocols and other quantum applications.
Section 14: Quantum Computing and Metrology
Description 14: Explore the implications of quantum discord in quantum computing and metrology, including its potential for quantum enhancement.
Section 15: Decoding Correlated States
Description 15: Discuss the methods and significance of decoding quantum states with discord, comparing them with the preparation of entangled states.
Section 16: Conclusions
Description 16: Summarize the key points discussed in the paper and provide concluding thoughts on the study of quantum discord and its relevance. |
An Overview of Techniques for Designing Parameterized Algorithms | 14 | ---
paper_title: Parameterized Complexity
paper_content:
An approach to complexity theory which offers a means of analysing algorithms in terms of their tractability. The authors consider the problem in terms of parameterized languages and taking "k-slices" of the language, thus introducing readers to new classes of algorithms which may be analysed more precisely than was the case until now. The book is as self-contained as possible and includes a great deal of background material. As a result, computer scientists, mathematicians, and graduate students interested in the design and analysis of algorithms will find much of interest.
---
paper_title: Invitation to fixed-parameter algorithms
paper_content:
PART I: FOUNDATIONS 1. Introduction to Fixed-Parameter Algorithms 2. Preliminaries and Agreements 3. Parameterized Complexity Theory - A Primer 4. Vertex Cover - An Illustrative Example 5. The Art of Problem Parameterization 6. Summary and Concluding Remarks PART II: ALGORITHMIC METHODS 7. Data Reduction and Problem Kernels 8. Depth-Bounded Search Trees 9. Dynamic Programming 10. Tree Decompositions of Graphs 11. Further Advanced Techniques 12. Summary and Concluding Remarks PART III: SOME THEORY, SOME CASE STUDIES 13. Parameterized Complexity Theory 14. Connections to Approximation Algorithms 15. Selected Case Studies 16. Zukunftsmusik References Index
---
paper_title: SYSTEMATIC KERNELIZATION IN FPT ALGORITHM DESIGN
paper_content:
Data reduction is a preprocessing technique which makes huge and seemingly intractable instances of a problem small and tractable. This technique is often acknowledged as one of the most powerful methods to cope with the intractability of certain NP-complete problems. Heuristics for reducing data can often be seen as reduction rules, if considered from a parameterized complexity viewpoint. Using reduction rules to transform the instances of a parameterized problem into equivalent instances, with size bounded by a function of the parameter, is known as kernelization. This thesis introduces and develops an approach to designing FPT algorithms based on effective kernelizations. This method is called the method of extremal structure. The method operates following a paradigm presented by two lemmas, the kernelization and boundary lemmas. The boundary lemma is used to find theoretical bounds on the maximum size of an instance reduced under an adequate set of reduction rules. The kernelization lemma is invoked to decide the instances which are larger than f(k) for some function f depending only on the parameter k. The first aim of the method of extremal structure is to provide a systematic way to discover reduction rules for fixed-parameter tractable problems. The second is to devise an analytical way to find theoretical bounds for the size of kernels for those problems. These two aims are achieved with the aid of combinatorial extremal arguments. Furthermore, this thesis shows how the method of extremal structure can be applied to effectively solve several NP-complete problems, namely MAX CUT, MAX LEAF SPANNING TREE, NONBLOCKER, K_{1,s}-STAR PACKING, EDGE-DISJOINT TRIANGLE PACKING, MAX INTERNAL SPANNING TREE and MINIMUM MAXIMAL MATCHING.
---
paper_title: Blow-Ups, Win/Win’s, and Crown Rules: Some New Directions in FPT
paper_content:
This survey reviews the basic notions of parameterized complexity, and describes some new approaches to designing FPT algorithms and problem reductions for graph problems.
---
paper_title: Coordinatized Kernels and Catalytic Reductions: An Improved FPT Algorithm for Max Leaf Spanning Tree and Other Problems
paper_content:
We describe some new, simple and apparently general methods for designing FPT algorithms, and illustrate how these can be used to obtain a significantly improved FPT algorithm for the MAXIMUM LEAF SPANNING TREE problem. Furthermore, we sketch how the methods can be applied to a number of other well-known problems, including the parametric dual of DOMINATING SET (also known as NONBLOCKER), MATRIX DOMINATION, EDGE DOMINATING SET, and FEEDBACK VERTEX SET FOR UNDIRECTED GRAPHS. The main payoffs of these new methods are in improved functions f(k) in the FPT running times, and in general systematic approaches that seem to apply to a wide variety of problems.
---
paper_title: On the Differences between "Practical" and "Applied"
paper_content:
The terms “practical” and “applied” are often used synonymously in our community. For the purpose of this talk I will assign more precise, distinct meanings to both terms (which are not intended to be ultimate definitions). More specifically, I will reserve the word “applied” for work whose crucial, central goal is finding a feasible, reasonable (e.g. economical) solution to a concrete real-world problem, which is requested by someone outside theoretical computer science for his or her own work.
---
paper_title: Faster Fixed-Parameter Tractable Algorithms for Matching and Packing Problems
paper_content:
We obtain faster algorithms for problems such as r-dimensional matching, r-set packing, graph packing, and graph edge packing when the size k of the solution is considered a parameter. We first establish a general framework for finding and exploiting small problem kernels (of size polynomial in k). Previously such a kernel was known only for triangle packing. This technique lets us combine, in a new and sophisticated way, Alon, Yuster and Zwick’s color-coding technique with dynamic programming on the structure of the kernel to obtain faster fixed-parameter algorithms for these problems. Our algorithms run in time O(n + 2^O(k)), an improvement over previous algorithms for some of these problems running in time O(n + k^O(k)). The flexibility of our approach allows tuning of algorithms to obtain smaller constants in the exponent.
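The color-coding idea mentioned in this abstract is easiest to see on the simpler problem of finding a simple path on k vertices (rather than the packing problems treated in the paper): color the vertices uniformly at random with k colors and run a dynamic program over subsets of colors; each trial succeeds with probability at least k!/k^k when a k-path exists. The sketch below illustrates the general technique only; it is not the algorithm of the paper.

```python
import random

def colorful_k_path(adj, k, trials=300):
    """Randomized color-coding check for a simple path on k vertices.

    adj: dict mapping each vertex to a set of neighbours (undirected graph).
    Illustrative sketch of the technique, not the paper's packing algorithm.
    """
    vertices = list(adj)
    full = (1 << k) - 1
    for _ in range(trials):
        color = {v: random.randrange(k) for v in vertices}     # random k-coloring
        # dp[v] = color subsets S such that some path using exactly the colors
        # in S (hence |S| vertices, all distinctly colored) ends at v.
        dp = {v: {1 << color[v]} for v in vertices}
        for _ in range(k - 1):                                 # grow paths by one vertex
            nxt = {v: set() for v in vertices}
            for v in vertices:
                for S in dp[v]:
                    for w in adj[v]:
                        if not S & (1 << color[w]):            # w's color unused so far
                            nxt[w].add(S | (1 << color[w]))
            dp = nxt
        if any(full in sets for sets in dp.values()):
            return True                                        # found a colorful k-path
    return False

# Usage: a path graph on 5 vertices certainly contains a path on 4 vertices.
path5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(colorful_k_path(path5, 4))
```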
---
paper_title: Vertex Cover: Further Observations and Further Improvements
paper_content:
Recently, there has been increasing interest and progress in lowering the worst-case time complexity for well-known NP-hard problems, in particular for the VERTEX COVER problem. In this paper, new properties for the VERTEX COVER problem are indicated and several new techniques are introduced, which lead to a simpler and improved algorithm of time complexity O(kn + 1.271^k k^2) for the problem. Our algorithm also induces improvement on previous algorithms for the INDEPENDENT SET problem on graphs of small degree.
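For context, the bounded-search-tree idea underlying algorithms of this kind can be sketched in a few lines: some endpoint of any edge must be in the cover, so branching on the two endpoints gives a search tree of depth k. The code below is the textbook O(2^k |E|) version, not the refined branching that yields the 1.271^k bound quoted above.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of size <= k.

    Textbook bounded search tree: pick any uncovered edge (u, v); one of u, v must
    be in the cover, so branch on both choices. This is the simple 2^k tree, not
    the refined branching of the paper.
    """
    if not edges:
        return True                    # nothing left to cover
    if k == 0:
        return False                   # edges remain but the budget is exhausted
    u, v = next(iter(edges))
    # Branch 1: put u in the cover and delete all edges incident to u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover and delete all edges incident to v.
    rest_v = [e for e in edges if v not in e]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# Usage: a triangle needs 2 vertices to cover all its edges.
triangle = [(0, 1), (1, 2), (0, 2)]
print(vertex_cover(triangle, 1), vertex_cover(triangle, 2))   # False True
```

Kernelizing first and then branching on the small kernel is what produces overall bounds of the shape O(kn + c^k k^2).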
---
paper_title: An FPT Algorithm for Set Splitting
paper_content:
An FPT algorithm with a running time of O(n^4 + 2^O(k) n^2.5) is described for the SET SPLITTING problem, parameterized by the number k of sets to be split. It is also shown that there can be no FPT algorithm for this problem with a running time of the form 2^o(k) n^c unless the satisfiability of n-variable 3SAT instances can be decided in time 2^o(n).
---
paper_title: Reducing to independent set structure: the case of k-internal spanning tree
paper_content:
The k-INTERNAL SPANNING TREE problem asks whether a certain graph G has a spanning tree with at least k internal vertices. Basing our work on the results presented in [Prieto and Sloper 2003], we show that there exists a set of reduction rules that modify an arbitrary spanning tree of a graph into a spanning tree with no induced edges between the leaves. Thus, the rules either produce a tree with many internal vertices, effectively deciding the problem, or they identify a large independent set, the leaves, in the graph. Having a large independent set is beneficial, because then the graph allows both 'crown decompositions' and path decompositions. We show how this crown decomposition can be used to obtain an O(k^2) kernel for the k-INTERNAL SPANNING TREE problem, improving on the O(k^3) kernel presented in [Prieto and Sloper 2003].
---
paper_title: SYSTEMATIC KERNELIZATION IN FPT ALGORITHM DESIGN
paper_content:
Data reduction is a preprocessing technique which makes huge and seemingly intractable instances of a problem small and tractable. This technique is often acknowledged as one of the most powerful methods to cope with the intractability of certain NP-complete problems. Heuristics for reducing data can often be seen as reduction rules, if considered from a parameterized complexity viewpoint. Using reduction rules to transform the instances of a parameterized problem into equivalent instances, with size bounded by a function of the parameter, is known as kernelization. This thesis introduces and develops an approach to designing FPT algorithms based on effective kernelizations. This method is called the method of extremal structure. The method operates following a paradigm presented by two lemmas, the kernelization and boundary lemmas. The boundary lemma is used to find theoretical bounds on the maximum size of an instance reduced under an adequate set of reduction rules. The kernelization lemma is invoked to decide the instances which are larger than f(k) for some function f depending only on the parameter k. The first aim of the method of extremal structure is to provide a systematic way to discover reduction rules for fixed-parameter tractable problems. The second is to devise an analytical way to find theoretical bounds for the size of kernels for those problems. These two aims are achieved with the aid of combinatorial extremal arguments. Furthermore, this thesis shows how the method of extremal structure can be applied to effectively solve several NP-complete problems, namely MAX CUT, MAX LEAF SPANNING TREE, NONBLOCKER, K_{1,s}-STAR PACKING, EDGE-DISJOINT TRIANGLE PACKING, MAX INTERNAL SPANNING TREE and MINIMUM MAXIMAL MATCHING.
---
paper_title: Packing Edge Disjoint Triangles: A Parameterized View
paper_content:
The problem of packing k edge-disjoint triangles in a graph has been thoroughly studied both in the classical complexity and the approximation fields and it has a wide range of applications in many areas, especially computational biology [BP96]. In this paper we present an analysis of the problem from a parameterized complexity viewpoint. We describe a fixed-parameter tractable algorithm for the problem by means of kernelization and crown rule reductions, two of the newest techniques for fixed-parameter algorithm design. We achieve a kernel size bounded by 4k, where k is the number of triangles in the packing.
---
paper_title: Looking at the Stars
paper_content:
The problem of packing k vertex-disjoint copies of a graph H into another graph G is NP-complete if H has more than two vertices in some connected component. In the framework of parameterized complexity, we analyze a particular family of instances of this problem, namely the packing of stars. We give a quadratic kernel for packing k copies of H = K_{1,s}. When we consider the special case of s = 2, i.e. H being a star with two leaves, we give a linear kernel and an algorithm running in time O(2^(5.301k) k^2.5 + n^3).
---
paper_title: The Method of Extremal Structure on the k-Maximum Cut Problem
paper_content:
Using the Method of Extremal Structure, which combines the use of reduction rules as a preprocessing technique and combinatorial extremal arguments, we will prove the fixed-parameter tractability and find a problem kernel for k-MAXIMUM CUT. This kernel has 2k edges, the same as that found by Mahajan and Raman in (Mahajan & Raman 1999), but using our methodology we also find a bound of k vertices leading to a running time of O(k 2^(k/2) + n^2).
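The 2k-edge bound can be appreciated through the folklore extremal argument it rests on: every graph has a cut containing at least half of its edges, so an instance with 2k or more edges is automatically a yes-instance, and anything smaller is already a kernel. Below is a minimal sketch of this observation together with the standard local-search proof of the half-the-edges bound; it illustrates the bound only, not the paper's actual reduction rules.

```python
def max_cut_kernel(n, edges, k):
    """Preprocessing for k-MAX CUT based on the folklore extremal bound.

    If |E| >= 2k, a cut of size >= |E|/2 >= k exists, so the answer is YES
    outright; otherwise the instance itself is a kernel with fewer than 2k edges.
    """
    if len(edges) >= 2 * k:
        return "YES"
    return ("KERNEL", n, edges, k)                # small instance, solve exactly by any method

def half_edges_cut(n, edges):
    """Constructive local search achieving a cut with >= |E|/2 edges."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for a, b in edges if v in (a, b) and side[a] == side[b])
            diff = sum(1 for a, b in edges if v in (a, b) and side[a] != side[b])
            if same > diff:                       # flipping v strictly increases the cut
                side[v] ^= 1
                improved = True
    return side                                   # at termination every vertex has same <= diff

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(max_cut_kernel(4, edges, 2))                # |E| = 5 >= 4, so YES
print(half_edges_cut(4, edges))
```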
---
paper_title: Bidimensionality: new connections between FPT algorithms and PTASs
paper_content:
We demonstrate a new connection between fixed-parameter tractability and approximation algorithms for combinatorial optimization problems on planar graphs and their generalizations. Specifically, we extend the theory of so-called "bidimensional" problems to show that essentially all such problems have both subexponential fixed-parameter algorithms and PTASs. Bidimensional problems include e.g. feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal problems, dominating set, edge dominating set, r-dominating set, diameter, connected dominating set, connected edge dominating set, and connected r-dominating set. We obtain PTASs for all of these problems in planar graphs and certain generalizations; of particular interest are our results for the two well-known problems of connected dominating set and general feedback vertex set for planar graphs and their generalizations, for which PTASs were not known to exist. Our techniques generalize and in some sense unify the two main previous approaches for designing PTASs in planar graphs, namely, the Lipton-Tarjan separator approach [FOCS'77] and the Baker layerwise decomposition approach [FOCS'83]. In particular, we replace the notion of separators with a more powerful tool from the bidimensionality theory, enabling the first approach to apply to a much broader class of minimization problems than previously possible; and through the use of a structural backbone and thickening of layers we demonstrate how the second approach can be applied to problems with a "nonlocal" structure.
---
paper_title: An efficient parameterized algorithm for m-set packing
paper_content:
We present an efficient parameterized algorithm solving the Set Packing problem, in which we assume that the size of the sets is bounded by m. In particular, if the size m of the sets is bounded by a constant, then our algorithm is fixed-parameter tractable. For example, if the size of the sets is bounded by 3, then our algorithm runs in time O((5.7k)^k n).
---
paper_title: Using Nondeterminism to Design Efficient Deterministic Algorithms
paper_content:
In this paper, we illustrate how nondeterminism can be used conveniently and effectively in designing efficient deterministic algorithms. In particular, our method gives an O((5.7k)^k n) parameterized algorithm for the 3-D matching problem, which significantly improves the previous algorithm by Downey, Fellows, and Koblitz. The algorithm can be generalized to yield an improved algorithm for the r-D matching problem for any positive integer r. The method can also be employed in designing deterministic algorithms for other optimization problems as well.
---
paper_title: The Spatial Complexity of Oblivious K-Probe Hash Functions
paper_content:
The problem of constructing a dense static hash-based lookup table T for a set of n elements belonging to a universe $U = \{ 0, 1, 2,\cdots , m -1 \}$ is considered. Nearly tight bounds on the spatial complexity of oblivious $O(1)$-probe hash functions, which are defined to depend solely on their search key argument, are provided. This establishes a significant gap between oblivious and nonoblivious search. In particular, the results include the following: • A lower bound showing that oblivious k-probe hash functions require a program size of $\Omega(({n / k}^{2})e^{-k}+\log \log m)$ bits, on average. • A probabilistic construction of a family of oblivious k-probe hash functions that can be specified in $O(n e^{-k} +\log \log m)$ bits, which nearly matches the above lower bound. • A variation of an explicit $O(1)$ time 1-probe (perfect) hash function family that can be specified in $O(n+\log \log m)$ bits, which is tight to within a constant factor of the lower bound.
---
paper_title: Parameterized Complexity
paper_content:
An approach to complexity theory which offers a means of analysing algorithms in terms of their tractability. The authors consider the problem in terms of parameterized languages and taking "k-slices" of the language, thus introducing readers to new classes of algorithms which may be analysed more precisely than was the case until now. The book is as self-contained as possible and includes a great deal of background material. As a result, computer scientists, mathematicians, and graduate students interested in the design and analysis of algorithms will find much of interest.
---
paper_title: Linear kernels in linear time, or how to save k colors in O(n^2) steps
paper_content:
This paper examines a parameterized problem that we refer to as n - k GRAPH COLORING, i.e., the problem of determining whether a graph G with n vertices can be colored using n - k colors. As the main result of this paper, we show that there exists an O(kn^2 + k^2 + 2^(3.8161k)) = O(n^2) algorithm for n - k GRAPH COLORING for each fixed k. The core technique behind this new parameterized algorithm is kernelization via maximum (and certain maximal) matchings. The core technical content of this paper is a near linear-time kernelization algorithm for n-k CLIQUE COVERING. The near linear-time kernelization algorithm that we present for n - k CLIQUE COVERING produces a linear size (3k - 3) kernel in O(k(n + m)) steps on graphs with n vertices and m edges. The algorithm takes an instance (G, k) of CLIQUE COVERING that asks whether a graph G can be covered using |V| - k cliques and reduces it to the problem of determining whether a graph G' = (V',E') of size < 3k - 3 can be covered using |V'|-k' cliques. We also present a similar near linear-time algorithm that produces a 3k kernel for VERTEX COVER. This second kernelization algorithm is the crown reduction rule.
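To make the notion of a kernel concrete, here is the classical (and weaker) Buss kernelization for VERTEX COVER, which leaves an instance with at most k^2 edges. It is deliberately simpler than the crown reduction rule used in the paper to obtain the 3k kernel, and is shown only as an illustration of what a kernelization algorithm looks like.

```python
def buss_kernel(edges, k):
    """Kernelization for VERTEX COVER via Buss's high-degree rule.

    Rule 1: a vertex of degree > k must belong to every cover of size <= k,
            so take it and decrease the budget.
    Rule 2: after Rule 1 every vertex has degree <= k, so a yes-instance has
            at most k*k edges; otherwise reject.
    (Classical k^2-edge kernel, not the crown rule of the paper above.)
    """
    edges = set(map(tuple, edges))
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                                 # Rule 1
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:                   # Rule 2
        return "NO"
    return ("KERNEL", edges, k)                       # at most k^2 edges remain

print(buss_kernel([(0, i) for i in range(1, 6)], 2))  # star K_{1,5}: the center is forced into the cover
```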
---
paper_title: Lean clause-sets: Generalizations of minimally unsatisfiable clause-sets
paper_content:
We study the problem of (efficiently) deleting such clauses from conjunctive normal forms (clause-sets) which cannot contribute to any proof of unsatisfiability. For that purpose we introduce the notion of an autarky system A, which detects deletion of superfluous clauses from a clause-set F and yields a canonical normal form NA(F) ⊆ F. Clause-sets where no clauses can be deleted are called A-lean, a natural weakening of minimally unsatisfiable clause-sets opening the possibility for combinatorial approaches and including also satisfiable instances. Three special examples for autarky systems are considered: general autarkies, linear autarkies (based on linear programming) and matching autarkies (based on matching theory). We give new characterizations of ("absolutely") lean clause-sets in terms of qualitative matrix analysis, while matching lean clause-sets are characterized in terms of deficiency (the difference between the number of clauses and the number of variables), by having a cyclic associated transversal matroid, and also in terms of fully indecomposable matrices. Finally we discuss how to obtain polynomial time satisfiability decision for clause-sets with bounded deficiency, and we make a few steps towards a general theory of autarky systems.
---
paper_title: Investigations on Autark Assignments
paper_content:
Abstract The structure of the monoid of autarkies and the monoid of autark subsets for clause-sets F is investigated, where autarkies are partial (truth) assignments satisfying some subset F′⊆F (called an autark subset), while not interacting with the clauses in F⧹F′. Generalising minimally unsatisfiable clause-sets, the notion of lean clause-sets is introduced, which do not have non-trivial autarkies, and it is shown that a clause-set is lean iff every clause can be used by some resolution refutation. The largest lean sub-clause-set and the largest autark subset yield a (2-)partition for every clause-set. As a special case of autarkies we introduce the notion of linear autarkies, which can be found in polynomial time by means of linear programming. Clause-sets without non-trivial linear autarkies we call linearly lean, and clause-sets satisfiable by a linear autarky we call linearly satisfiable. As before, the largest linearly lean sub-clause-set and the largest linearly autark subset yield a (2-)partition for every clause-set, but this time the decomposition is computable in polynomial time. The class of linearly satisfiable clause-sets generalises the notion of matched clause-sets introduced in a recent paper by J. Franco and A. Van Gelder, and, as shown by H. van Maaren, contains also (“modulo Unit-clause elimination”) all satisfiable q-Horn clause-sets, introduced by E. Boros, Y. Crama and P. Hammer. The class of linearly lean clause-sets is stable under “crossing out variables” and union, and has some interesting combinatorial properties with respect to the deficiency δ=c−n, the difference of the number of clauses and the number of variables: So for example (non-empty) linearly lean clause-sets fulfill δ⩾1, where this property has been known before only for minimally unsatisfiable clause-sets.
---
paper_title: Reducing to independent set structure: the case of k-internal spanning tree
paper_content:
The k-INTERNAL SPANNING TREE problem asks whether a certain graph G has a spanning tree with at least k internal vertices. Basing our work on the results presented in [Prieto and Sloper 2003], we show that there exists a set of reduction rules that modify an arbitrary spanning tree of a graph into a spanning tree with no induced edges between the leaves. Thus, the rules either produce a tree with many internal vertices, effectively deciding the problem, or they identify a large independent set, the leaves, in the graph. Having a large independent set is beneficial, because then the graph allows both 'crown decompositions' and path decompositions. We show how this crown decomposition can be used to obtain an O(k^2) kernel for the k-INTERNAL SPANNING TREE problem, improving on the O(k^3) kernel presented in [Prieto and Sloper 2003].
---
paper_title: Introduction to Algorithms: A Creative Approach
paper_content:
Introduction. Mathematical Induction. Analysis of Algorithms. Data Structures. Design of Algorithms by Induction. Algorithms Involving Sequences and Sets. Graph Algorithms. Geometric Algorithms. Algebraic and Numeric Algorithms. Reductions. NP-Completeness. Parallel Algorithms.
---
paper_title: Vertex packings: Structural properties and algorithms
paper_content:
We consider a binary integer programming formulation (VP) for the weighted vertex packing problem in a simple graph. A sufficient “local” optimality condition for (VP) is given and this result is used to derive relations between (VP) and the linear program (VLP) obtained by deleting the integrality restrictions in (VP). Our most striking result is that those variables which assume binary values in an optimum (VLP) solution retain the same values in an optimum (VP) solution. This result is of interest because variables are (0, 1/2, 1)-valued in basic feasible solutions to (VLP) and (VLP) can be solved by a “good” algorithm. This relationship and other optimality conditions are incorporated into an implicit enumeration algorithm for solving (VP). Some computational experience is reported.
---
paper_title: Finding odd cycle transversals
paper_content:
We present an O(mn) algorithm to determine whether a graph G with m edges and n vertices has an odd cycle transversal of order at most k, for any fixed k. We also obtain an algorithm that determines, in the same time, whether a graph has a half integral packing of odd cycles of weight k.
---
paper_title: Linear kernels in linear time, or how to save k colors in O(n^2) steps
paper_content:
This paper examines a parameterized problem that we refer to as n - k GRAPH COLORING, i.e., the problem of determining whether a graph G with n vertices can be colored using n - k colors. As the main result of this paper, we show that there exists an O(kn^2 + k^2 + 2^(3.8161k)) = O(n^2) algorithm for n - k GRAPH COLORING for each fixed k. The core technique behind this new parameterized algorithm is kernelization via maximum (and certain maximal) matchings. The core technical content of this paper is a near linear-time kernelization algorithm for n-k CLIQUE COVERING. The near linear-time kernelization algorithm that we present for n - k CLIQUE COVERING produces a linear size (3k - 3) kernel in O(k(n + m)) steps on graphs with n vertices and m edges. The algorithm takes an instance (G, k) of CLIQUE COVERING that asks whether a graph G can be covered using |V| - k cliques and reduces it to the problem of determining whether a graph G' = (V',E') of size < 3k - 3 can be covered using |V'|-k' cliques. We also present a similar near linear-time algorithm that produces a 3k kernel for VERTEX COVER. This second kernelization algorithm is the crown reduction rule.
---
paper_title: SYSTEMATIC KERNELIZATION IN FPT ALGORITHM DESIGN
paper_content:
Data reduction is a preprocessing technique which makes huge and seemingly intractable instances of a problem small and tractable. This technique is often acknowledged as one of the most powerful methods to cope with the intractability of certain NP-complete problems. Heuristics for reducing data can often be seen as reduction rules, if considered from a parameterized complexity viewpoint. Using reduction rules to transform the instances of a parameterized problem into equivalent instances, with size bounded by a function of the parameter, is known as kernelization. This thesis introduces and develops an approach to designing FPT algorithms based on effective kernelizations. This method is called the method of extremal structure. The method operates following a paradigm presented by two lemmas, the kernelization and boundary lemmas. The boundary lemma is used to find theoretical bounds on the maximum size of an instance reduced under an adequate set of reduction rules. The kernelization lemma is invoked to decide the instances which are larger than f(k) for some function f depending only on the parameter k. The first aim of the method of extremal structure is to provide a systematic way to discover reduction rules for fixed-parameter tractable problems. The second is to devise an analytical way to find theoretical bounds for the size of kernels for those problems. These two aims are achieved with the aid of combinatorial extremal arguments. Furthermore, this thesis shows how the method of extremal structure can be applied to effectively solve several NP-complete problems, namely MAX CUT, MAX LEAF SPANNING TREE, NONBLOCKER, K_{1,s}-STAR PACKING, EDGE-DISJOINT TRIANGLE PACKING, MAX INTERNAL SPANNING TREE and MINIMUM MAXIMAL MATCHING.
---
paper_title: Coordinatized Kernels and Catalytic Reductions: An Improved FPT Algorithm for Max Leaf Spanning Tree and Other Problems
paper_content:
We describe some new, simple and apparently general methods for designing FPT algorithms, and illustrate how these can be used to obtain a significantly improved FPT algorithm for the MAXIMUM LEAF SPANNING TREE problem. Furthermore, we sketch how the methods can be applied to a number of other well-known problems, including the parametric dual of DOMINATING SET (also known as NONBLOCKER), MATRIX DOMINATION, EDGE DOMINATING SET, and FEEDBACK VERTEX SET FOR UNDIRECTED GRAPHS. The main payoffs of these new methods are in improved functions f(k) in the FPT running times, and in general systematic approaches that seem to apply to a wide variety of problems.
---
paper_title: Call routing and the ratcatcher
paper_content:
Suppose we expect there to be p(ab) phone calls between locations a and b, all choices of a, b from some set L of locations. We wish to design a network to optimally handle these calls. More precisely, a “routing tree” is a tree T with set of leaves L, in which every other vertex has valency 3. It has “congestion” < k if, for every edge e of T, fewer than k calls are routed through e. We show that when the pairs ab with p(ab) > 0 form the edges of a planar graph G, there is an efficient, strongly polynomial algorithm.
---
paper_title: Fixed Parameter Algorithms for DOMINATING SET and Related Problems on Planar Graphs
paper_content:
We present an algorithm that constructively produces a solution to the k-DOMINATING SET problem for planar graphs in time O(c^√k n), where c = 4^(6√34). To obtain this result, we show that the treewidth of a planar graph with domination number γ(G) is O(√γ(G)), and that such a tree decomposition can be found in O(γ(G)n) time. The same technique can be used to show that the k-FACE COVER problem (find a size k set of faces that cover all vertices of a given plane graph) can be solved in O(c_1^√k n) time, where c_1 = 3^(36√34) and k is the size of the face cover set. Similar results can be obtained in the planar case for some variants of k-DOMINATING SET, e.g., k-INDEPENDENT DOMINATING SET and k-WEIGHTED DOMINATING SET.
---
paper_title: Bidimensionality: new connections between FPT algorithms and PTASs
paper_content:
We demonstrate a new connection between fixed-parameter tractability and approximation algorithms for combinatorial optimization problems on planar graphs and their generalizations. Specifically, we extend the theory of so-called "bidimensional" problems to show that essentially all such problems have both subexponential fixed-parameter algorithms and PTASs. Bidimensional problems include e.g. feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal problems, dominating set, edge dominating set, r-dominating set, diameter, connected dominating set, connected edge dominating set, and connected r-dominating set. We obtain PTASs for all of these problems in planar graphs and certain generalizations; of particular interest are our results for the two well-known problems of connected dominating set and general feedback vertex set for planar graphs and their generalizations, for which PTASs were not known to exist. Our techniques generalize and in some sense unify the two main previous approaches for designing PTASs in planar graphs, namely, the Lipton-Tarjan separator approach [FOCS'77] and the Baker layerwise decomposition approach [FOCS'83]. In particular, we replace the notion of separators with a more powerful tool from the bidimensionality theory, enabling the first approach to apply to a much broader class of minimization problems than previously possible; and through the use of a structural backbone and thickening of layers we demonstrate how the second approach can be applied to problems with a "nonlocal" structure.
---
| Title: An Overview of Techniques for Designing Parameterized Algorithms
Section 1: INTRODUCTION
Description 1: Provide a general introduction to the field of parameterized algorithms, mention the growth of the field, significant literature, and summarize the main techniques covered in the paper.
Section 2: BRANCHING ALGORITHMS
Description 2: Describe algorithms using a branching strategy, criteria for FPT running time, and the general concept of bounded search trees, greedy localization, and color coding.
Section 3: Bounded search trees
Description 3: Explain the method of bounded search trees as a branching algorithm technique, provide examples, and discuss its practical use.
Section 4: Greedy localization
Description 4: Detail the concept of greedy localization in parameterized algorithms, give an example, and discuss its theoretical underpinnings and practical applications.
Section 5: Color coding
Description 5: Introduce the color coding technique, explain its mechanisms, provide an example, and analyze its practical implications.
Section 6: KERNELIZATION
Description 6: Cover techniques that reduce an instance to a kernel. Differentiate between local reductions and global reductions and provide relevant examples.
Section 7: Local reductions
Description 7: Focus on reduction rules that identify and modify specific structures in an instance, proving the equivalence of the reduced instance.
Section 8: Global reduction rules-crown reduction
Description 8: Define crown decomposition and explain how it is used to reduce problem instances globally, providing examples and proofs.
Section 9: FPT BY INDUCTION
Description 9: Discuss techniques using mathematical induction for finding parameterized solutions, presenting iterative compression for minimization and the extremal method for maximization problems.
Section 10: For minimization-iterative compression
Description 10: Present the iterative compression technique suitable for parameterized minimization problems, including examples and proofs.
Section 11: For maximization-the extremal method
Description 11: Detail the extremal method for parameterized maximization problems, with examples and proofs.
Section 12: WIN/WIN
Description 12: Describe the win/win strategy, where solving an auxiliary problem helps in directly deciding the original problem, providing relevant examples.
Section 13: Well-quasi-ordering and graph minors
Description 13: Explain the concept of well-quasi-ordering and its application to parameterized problems, focusing on the role of graph minors. Include examples like the k-feedback vertex set.
Section 14: Imposing FPT structure and bounded treewidth
Description 14: Detail cases where a tree-like structure is imposed on graphs, showing that problems with bounded treewidth become solvable in FPT time. Provide an example with k-dominating set in planar graphs. |
A Survey of Fuzzy Clustering Algorithms for Pattern Recognition - Part 11 | 8 | ---
paper_title: Self-organizing neural network as a fuzzy classifier
paper_content:
This paper describes a self-organizing artificial neural network, based on Kohonen's model of self-organization, which is capable of handling fuzzy input and of providing fuzzy classification. Unlike conventional neural net models, this algorithm incorporates fuzzy set-theoretic concepts at various stages. The input vector consists of membership values for linguistic properties along with some contextual class membership information which is used during self-organization to permit efficient modeling of fuzzy (ambiguous) patterns. A new definition of gain factor for weight updating is proposed. An index of disorder involving mean square distance between the input and weight vectors is used to determine a measure of the ordering of the output space. This controls the number of sweeps required in the process. Incorporation of the concept of fuzzy partitioning allows natural self-organization of the input data, especially when they have ill-defined boundaries. The output of unknown test patterns is generated in terms of class membership values. Incorporation of fuzziness in input and output is seen to provide better performance as compared to the original Kohonen model and the hard version. The effectiveness of this algorithm is demonstrated on the speech recognition problem for various network array sizes, training sets and gain factors. >
---
paper_title: Fuzzy clustering: critical analysis of the contextual mechanisms employed by three neural network models
paper_content:
According to the following definition, taken from the literature, a fuzzy clustering mechanism allows the same input pattern to belong to multiple categories to different degrees. Many clustering neural network (NN) models claim to feature fuzzy properties, but several of them (like the Fuzzy ART model) do not satisfy this definition. Vice versa, we believe that Kohonen's Self-Organizing Map, SOM, satisfies the definition provided above, even though this NN model is well-known to (robustly) perform topologically ordered mapping rather than fuzzy clustering. This may sound like a paradox if we consider that several fuzzy NN models (such as the Fuzzy Learning Vector Quantization, FLVQ, which was first called Fuzzy Kohonen Clustering Network, FKCN) were originally developed to enhance Kohonen's models (such as SOM and the vector quantization model, VQ). The fuzziness of SOM indicates that a network of processing elements (PEs) can verify the fuzzy clustering definition when it exploits local rules which are biologically plausible (such as the Kohonen bubble strategy). This is equivalent to stating that the exploitation of the fuzzy set theory in the development of complex systems (e.g., clustering NNs) may provide new mathematical tools (e.g., the definition of membership function) to simulate the behavior of those cooperative/competitive mechanisms already identified by neurophysiological studies. When a biologically plausible cooperative/competitive strategy is pursued effectively, neighboring PEs become mutually coupled to gain sensitivity to contextual effects. PEs which are mutually coupled are affected by vertical (inter-layer) as well as horizontal (intra-layer) connections. To summarize, we suggest relating the study of fuzzy clustering mechanisms to the multi-disciplinary science of complex systems, with special regard to the investigation of the cooperative/competitive local rules employed by complex systems to gain sensitivity to contextual effects in cognitive tasks. In this paper, the FLVQ model is critically analyzed in order to stress the meaning of a fuzzy learning mechanism. This study leads to the development of a new NN model, termed the fuzzy competitive/cooperative Kohonen (FCCK) model, which replaces FLVQ. Then, the architectural differences amongst three NN algorithms and the relationships between their fuzzy clustering properties are discussed. These models, which all perform on-line learning, are: (1) SOM; (2) FCCK; and (3) improved neural-gas (INC).
---
paper_title: On the importance of sorting in "neural gas" training of vector quantizers
paper_content:
The paper considers the role of the sorting process in the well-known "neural gas" model for vector quantization. Theoretical derivations and experimental evidence show that complete sorting is not required for effective training, since limiting the sorted list to even a few top units performs effectively. This property has a significant impact on the implementation of the overall neural model at the local level.
---
paper_title: A possibilistic approach to clustering
paper_content:
The clustering problem is cast in the framework of possibility theory. The approach differs from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values can be interpreted as degrees of possibility of the points belonging to the classes, i.e., the compatibilities of the points with the class prototypes. An appropriate objective function whose minimum will characterize a good possibilistic partition of the data is constructed, and the membership and prototype update equations are derived from necessary conditions for minimization of the criterion function. The advantages of the resulting family of possibilistic algorithms are illustrated by several examples. >
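For readers who want the update equations in executable form, here is one iteration of the possibilistic c-means scheme that follows from the necessary optimality conditions sketched in the abstract. Variable names and the toy data are ours, and eta (the per-cluster reference distances) is simply hand-picked here rather than estimated as in the paper.

```python
import numpy as np

def pcm_step(X, centers, eta, m=2.0):
    """One update step of the possibilistic c-means (PCM) model.

    X: (n, d) data, centers: (c, d) prototypes, eta: (c,) reference distances.
    Unlike FCM, each membership depends only on the distance to its own prototype,
    so memberships need not sum to one across clusters (they act as typicalities).
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)      # (n, c) squared distances
    u = 1.0 / (1.0 + (d2 / eta[None, :]) ** (1.0 / (m - 1.0)))     # possibilistic memberships
    um = u ** m
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]             # membership-weighted prototypes
    return u, new_centers

# Tiny usage example with two obvious groups and a hand-picked eta.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
u, centers = pcm_step(X, centers, eta=np.array([1.0, 1.0]))
print(np.round(u, 2))
```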
---
paper_title: Comments on "A possibilistic approach to clustering"
paper_content:
In this comment, we report a difficulty with the application of the possibilistic approach to fuzzy clustering (PCM) proposed by Keller and Krishnapuram (1993). In applying this algorithm we found that it has the undesirable tendency to produce coincidental clusters. Results illustrating this tendency are reported and a possible explanation for the PCM behavior is suggested.
---
paper_title: Fuzzy Kohonen clustering networks
paper_content:
The authors propose a fuzzy Kohonen clustering network which integrates the fuzzy c-means (FCM) model into the learning rate and updating strategies of the Kohonen network. This yields an optimization problem related to FCM, and the numerical results show improved convergence as well as reduced labeling errors. It is proved that the proposed scheme is equivalent to the c-means algorithms. The new method can be viewed as a Kohonen type of FCM, but it is self-organizing, since the size of the update neighborhood and the learning rate in the competitive layer are automatically adjusted during learning. Anderson's IRIS data were used to illustrate this method. The results are compared with the standard Kohonen approach. >
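The way the scheme ties the Kohonen update to FCM can be sketched as follows: compute FCM memberships, use them (raised to a fuzzifier m_t) as per-pattern learning rates, and lower m_t towards 1 over the sweeps. This is a compact, batch-style reading of the abstract with our own symbols and schedule, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_kohonen_step(X, V, m_t):
    """One batch update in the spirit of the fuzzy Kohonen clustering network."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12   # (n, c) squared distances
    u = d2 ** (-1.0 / (m_t - 1.0))
    u = u / u.sum(axis=1, keepdims=True)                          # FCM memberships
    alpha = u ** m_t                                              # per-pattern learning rates
    return (alpha.T @ X) / alpha.sum(axis=0)[:, None]             # membership-weighted prototype move

# Usage: three prototypes, fuzzifier annealed from 2.0 towards 1 over the sweeps.
X = np.random.rand(100, 2)
V = X[np.random.choice(100, 3, replace=False)]                    # initial prototypes
for m_t in np.linspace(2.0, 1.1, 20):
    V = fuzzy_kohonen_step(X, V, m_t)
print(V)
```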
---
paper_title: A Constructive, Incremental-Learning Network for Mixture Modeling and Classification
paper_content:
Gaussian ARTMAP (GAM) is a supervised-learning adaptive resonance theory (ART) network that uses gaussian-defined receptive fields. Like other ART networks, GAM incrementally learns and constructs a representation of sufficient complexity to solve a problem it is trained on. GAM's representation is a gaussian mixture model of the input space, with learned mappings from the mixture components to output classes. We show a close relationship between GAM and the well-known expectation-maximization (EM) approach to mixture modeling. GAM outperforms an EM classification algorithm on three classification benchmarks, thereby demonstrating the advantage of the ART match criterion for regulating learning and the ARTMAP match tracking operation for incorporating environmental feedback in supervised learning situations.
---
paper_title: A possibilistic approach to clustering
paper_content:
The clustering problem is cast in the framework of possibility theory. The approach differs from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values can be interpreted as degrees of possibility of the points belonging to the classes, i.e., the compatibilities of the points with the class prototypes. An appropriate objective function whose minimum will characterize a good possibilistic partition of the data is constructed, and the membership and prototype update equations are derived from necessary conditions for minimization of the criterion function. The advantages of the resulting family of possibilistic algorithms are illustrated by several examples. >
---
paper_title: Generalized clustering networks and Kohonen's self-organizing scheme
paper_content:
The relationship between the sequential hard c-means (SHCM) and learning vector quantization (LVQ) clustering algorithms is discussed. The impact and interaction of these two families of methods with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, are considered. A generalization of LVQ that updates all nodes for a given input vector is proposed. The network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of distance match to the winner node; the lesser the degree of match with the winner, the greater the impact on nonwinner nodes. Numerical results indicate that the terminal prototypes generated by this modification of LVQ are generally insensitive to initialization and independent of any choice of learning coefficient. IRIS data obtained by E. Anderson (1939) is used to illustrate the proposed method. Results are compared with the standard LVQ approach.
---
paper_title: Gaussian ARTMAP: a neural network for fast incremental learning of noisy multidimensional maps
paper_content:
A new neural network architecture for incremental supervised learning of analog multidimensional maps is introduced. The architecture, called Gaussian ARTMAP, is a synthesis of a Gaussian classifier and an adaptive resonance theory (ART) neural network, achieved by defining the ART choice function as the discriminant function of a Gaussian classifier with separable distributions, and the ART match function as the same, but with the distributions normalized to a unit height. While Gaussian ARTMAP retains the attractive parallel computing and fast learning properties of fuzzy ARTMAP, it learns a more efficient internal representation of a mapping while being more resistant to noise than fuzzy ARTMAP on a number of benchmark databases. Several simulations are presented which demonstrate that Gaussian ARTMAP consistently obtains a better trade-off of classification rate to number of categories than fuzzy ARTMAP. Results on a vowel classification problem are also presented which demonstrate that Gaussian ARTMAP outperforms many other classifiers. Copyright © 1996 Elsevier Science Ltd
---
paper_title: 'Neural-gas' network for vector quantization and its application to time-series prediction.
paper_content:
A neural network algorithm based on a soft-max adaptation rule is presented. This algorithm exhibits good performance in reaching the optimum minimization of a cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by the data point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with the results achieved by backpropagation and radial basis function networks.
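The soft-max adaptation rule described here amounts to the following per-sample update, where every codebook vector is pulled towards the input with a strength that decays exponentially in its rank in the distance ordering. The annealing schedules in the usage example are typical choices, not the exact ones from the paper.

```python
import numpy as np

def neural_gas_step(x, W, eps, lam):
    """Single neural-gas update for one input vector x (W holds all reference vectors)."""
    d = np.linalg.norm(W - x, axis=1)            # distances to all reference vectors
    ranks = np.argsort(np.argsort(d))            # 0 for the closest unit, 1 for the next, ...
    h = np.exp(-ranks / lam)                     # neighbourhood-ranking factor
    return W + eps * h[:, None] * (x - W)        # rank-weighted pull towards the input

rng = np.random.default_rng(0)
data = rng.random((500, 2))
W = rng.random((10, 2))                          # 10 code vectors
for t, x in enumerate(data):
    frac = t / len(data)
    # Exponentially anneal the step size and the neighbourhood range over training.
    W = neural_gas_step(x, W, eps=0.5 * (0.01 / 0.5) ** frac, lam=5 * (0.1 / 5) ** frac)
print(W)
```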
---
paper_title: A Tutorial on Support Vector Machines for Pattern Recognition
paper_content:
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
---
paper_title: Competitive Learning Algorithms for Robust Vector Quantization
paper_content:
The efficient representation and encoding of signals with limited resources, e.g., finite storage capacity and restricted transmission bandwidth, is a fundamental problem in technical as well as biological information processing systems. Typically, under realistic circumstances, the encoding and communication of messages has to deal with different sources of noise and disturbances. We propose a unifying approach to data compression by robust vector quantization, which explicitly deals with channel noise, bandwidth limitations, and random elimination of prototypes. The resulting algorithm is able to limit the detrimental effect of noise in a very general communication scenario. In addition, the presented model allows us to derive a novel competitive neural networks algorithm, which covers topology preserving feature maps, the so-called neural-gas algorithm, and the maximum entropy soft-max rule as special cases. Furthermore, continuation methods based on these noise models improve the codebook design by reducing the sensitivity to local minima. We show an exemplary application of the novel robust vector quantization algorithm to image compression for a teleconferencing system.
---
paper_title: Optimal learning in artificial neural networks: A review of theoretical results
paper_content:
Abstract The effectiveness of connectionist models in emulating intelligent behaviour and solving significant practical problems is strictly related to the capability of the learning algorithms to find optimal or near-optimal solutions and to generalize to new examples. This paper reviews some theoretical contributions to optimal learning in the attempt to provide a unified view and give the state of the art in the field. The focus of the review is on the problem of local minima in the cost function that is likely to affect more or less any learning algorithm. Starting from this analysis, we briefly review proposals for discovering optimal solutions and suggest conditions for designing architectures tailored to a given task.
---
paper_title: Novel neural network model combining radial basis function, competitive Hebbian learning rule, and fuzzy simplified adaptive resonance theory
paper_content:
In the first part of this paper a new on-line fully self-organizing artificial neural network model (FSONN), pursuing dynamic generation and removal of neurons and synaptic links, is proposed. The model combines properties of the self-organizing map (SOM), fuzzy c-means (FCM), growing neural gas (GNG) and fuzzy simplified adaptive resonance theory (Fuzzy SART) algorithms. In the second part of the paper experimental results are provided and discussed. Our conclusion is that the proposed connectionist model features several interesting properties, such as the following: (1) the system requires no a priori knowledge of the dimension, size and/or adjacency structure of the network; (2) with respect to other connectionist models found in the literature, the system can be employed successfully in: (a) vector quantization; (b) density function estimation; and (c) structure detection in input data to be mapped topologically correctly onto an output lattice pursuing dimensionality reduction; and (3) the system is computationally efficient, its processing time increasing linearly with the number of neurons and synaptic links.
---
paper_title: Complex Systems and Cognitive Processes
paper_content:
This book shows that the science of complex systems, which stresses the importance of self-organizing processes, can make a decisive contribution to solving many problems in artificial intelligence. Artificial cognitive systems are important in view of their potential applications, and it can be expected that their study will shed light on biological cognitive systems. The new "neurally inspired" information science proposed in this book is fast becoming a promising workshop for the construction of models capable of emulating cognitive behaviour. After a general introduction to the theory of complex systems, the book gives a thorough treatment of neural networks, which are the most successful and the most thoroughly studied dynamical cognitive systems. Attention is also devoted to other classes of artificial cognitive systems, in particular to classifier systems, which provide an important link between the dynamical and the inferential approach to artificial intelligence. The book can be used as a textbook, since it does not require previous knowledge of the topic, and should also be interesting for researchers in this field, since it links formerly separate lines of research.
---
paper_title: Learning internal representations by error propagation
paper_content:
This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion
---
paper_title: The self-organizing map
paper_content:
The self-organized map, an architecture suggested for artificial neural networks, is explained by presenting simulation experiments and practical applications. The self-organizing map has the property of effectively creating spatially organized internal representations of various features of input signals and their abstractions. One result of this is that the self-organization process can discover semantic relationships in sentences. Brain maps, semantic maps, and early work on competitive learning are reviewed. The self-organizing map algorithm (an algorithm which order responses spatially) is reviewed, focusing on best matching cell selection and adaptation of the weight vectors. Suggestions for applying the self-organizing map algorithm, demonstrations of the ordering process, and an example of hierarchical clustering of data are presented. Fine tuning the map by learning vector quantization is addressed. The use of self-organized maps in practical speech recognition and a simulation experiment on semantic mapping are discussed. >
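The two ingredients named in the abstract, best-matching-cell selection and adaptation of the weight vectors within a shrinking neighbourhood, fit in a short routine. The following is a minimal, illustrative implementation with arbitrarily chosen hyper-parameters and decay schedules; it is not Kohonen's reference formulation.

```python
import numpy as np

def som_train(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: BMU search plus Gaussian neighbourhood adaptation."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows * cols, data.shape[1]))                   # codebook vectors
    # Fixed 2-D coordinates of each map unit, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    T = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * np.exp(-step / T)                           # decaying learning rate
            sigma = sigma0 * np.exp(-step / T)                     # shrinking neighbourhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))            # best matching unit
            g2 = ((coords - coords[bmu]) ** 2).sum(axis=1)         # squared grid distances
            h = np.exp(-g2 / (2 * sigma ** 2))                     # Gaussian neighbourhood
            W += lr * h[:, None] * (x - W)                         # move units towards x
            step += 1
    return W.reshape(rows, cols, -1)

som = som_train(np.random.rand(1000, 3))
print(som.shape)       # (10, 10, 3): an ordered 10x10 map of 3-D codebook vectors
```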
---
paper_title: 'Neural-gas' network for vector quantization and its application to time-series prediction.
paper_content:
A neural network algorithm based on a soft-max adaptation rule is presented. This algorithm exhibits good performance in reaching the optimum minimization of a cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by the data point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with the results achieved by backpropagation and radial basis function networks.
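A minimal numpy sketch of the rank-based (soft-max) adaptation rule summarized above; the annealing schedules and parameter values are illustrative assumptions, not the paper's settings.

import numpy as np

def train_neural_gas(data, n_units=20, epochs=30, eps_i=0.5, eps_f=0.01, lam_i=10.0, lam_f=0.1, seed=0):
    # Requires len(data) >= n_units; parameter values are illustrative only.
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / t_max
            eps = eps_i * (eps_f / eps_i) ** frac          # annealed step size
            lam = lam_i * (lam_f / lam_i) ** frac          # annealed neighborhood range
            rank = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))  # 0 = closest unit
            w += eps * np.exp(-rank / lam)[:, None] * (x - w)             # rank-weighted update
            t += 1
    return w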
---
paper_title: Optimal learning in artificial neural networks: A review of theoretical results
paper_content:
The effectiveness of connectionist models in emulating intelligent behaviour and solving significant practical problems is strictly related to the capability of the learning algorithms to find optimal or near-optimal solutions and to generalize to new examples. This paper reviews some theoretical contributions to optimal learning in the attempt to provide a unified view and give the state of the art in the field. The focus of the review is on the problem of local minima in the cost function that is likely to affect more or less any learning algorithm. Starting from this analysis, we briefly review proposals for discovering optimal solutions and suggest conditions for designing architectures tailored to a given task.
---
paper_title: A Tutorial on Support Vector Machines for Pattern Recognition
paper_content:
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
---
paper_title: The Nature of Statistical Learning Theory
paper_content:
Contents: setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
---
paper_title: A technique for the selection of kernel-function parameters in RBF neural networks for classification of remote-sensing images
paper_content:
A supervised technique for training radial basis function (RBF) neural network classifiers is proposed. Such a technique, unlike traditional ones, considers the class memberships of training samples to select the centers and widths of the kernel functions associated with the hidden neurons of an RBF network. The result is twofold: a significant reduction in the overall classification error made by the classifier and a more stable behavior of the classification error versus variations in both the number of hidden units and the initial parameters of the training process.
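The sketch below is a loose, hedged illustration of the general idea of class-aware hidden-unit selection in an RBF classifier; it is not the authors' exact procedure (here centers are simply sampled from each class and a single width is derived from center spacing, with the output layer fitted by least squares).

import numpy as np

def fit_rbf(X, y, centers_per_class=3, seed=0):
    # Illustrative sketch: center choice and width rule are simplifying assumptions.
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    centers = np.vstack([X[y == c][rng.choice(np.sum(y == c), centers_per_class, replace=False)]
                         for c in classes])                 # class-aware center selection (random picks here)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    width = np.mean(np.sort(d, axis=1)[:, 1]) + 1e-8        # width from nearest-center spacing
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2 / (2 * width ** 2))
    T = (y[:, None] == classes[None, :]).astype(float)      # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)             # output layer by least squares
    return centers, width, W, classes

def predict_rbf(X, centers, width, W, classes):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2 / (2 * width ** 2))
    return classes[np.argmax(Phi @ W, axis=1)]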
---
paper_title: Feature extraction and pattern classification of remote sensing data by a modular neural system
paper_content:
A modular neural network architecture has been used for the classification of remote sensed data in two experiments carried out to study two different but rather usual situations in real remote sensing applications. Such situations concern the availability of high-dimensional data in the first setting and an imperfect data set with a limited number of features in the second. The learning task of the supervised multilayer perceptron classifier has been made more efficient by preprocessing the input with unsupervised neural modules for feature discovery. The linear propagation network is introduced in the first experiment to evaluate the effectiveness of the neural data compression stage before classification, whereas in the second experiment data clustering before labeling is evaluated by the Kohonen self-organizing feature map network. The results of the two experiments confirm that modular learning performs better than nonmodular learning with respect to both learning quality and speed. © 1996 Society of Photo-Optical Instrumentation Engineers. Subject terms: neural network; remote sensing; classification; supervised and un- supervised learning.
---
paper_title: Learning without Local Minima in Radial Basis Function Networks
paper_content:
Learning from examples plays a central role in artificial neural networks. The success of many learning schemes is not guaranteed, however, since algorithms like backpropagation may get stuck in local minima, thus providing suboptimal solutions. For feedforward networks, optimal learning can be achieved provided that certain conditions on the network and the learning environment are met. This principle is investigated for the case of networks using radial basis functions (RBF). It is assumed that the patterns of the learning environment are separable by hyperspheres. In that case, we prove that the attached cost function is local minima free with respect to all the weights. This provides us with some theoretical foundations for a massive application of RBF in pattern recognition.
---
paper_title: The Design and Evolution of Modular Neural Network Architectures
paper_content:
To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules, standard subnetworks that serve as higher order units with a distinct structure and function. The simulations rely on a particular network module called the categorizing and learning module. This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is presented. A number of small-scale simulation studies shows how intermodule connectivity patterns implement ''neural assemblies'' that induce a particular category structure in the network. Learning and categorization improves because the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other principles of design are proposed that underlie information processing in interactive activation networks: replication and recurrence. Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks, by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best performing network architectures seemed to have reproduced some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture cannot only enhance learning and recognition performance, but it can also induce a system to better generalize its learned behaviour to instances never encountered before. This may explain why for many vital learning tasks in organisms only a minimal exposure to relevant stimuli is necessary.
---
paper_title: A Growing Neural Gas Network Learns Topologies
paper_content:
An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.
---
paper_title: Fuzzy Kohonen clustering networks
paper_content:
The authors propose a fuzzy Kohonen clustering network which integrates the fuzzy c-means (FCM) model into the learning rate and updating strategies of the Kohonen network. This yields an optimization problem related to FCM, and the numerical results show improved convergence as well as reduced labeling errors. It is proved that the proposed scheme is equivalent to the c-means algorithms. The new method can be viewed as a Kohonen type of FCM, but it is self-organizing, since the size of the update neighborhood and the learning rate in the competitive layer are automatically adjusted during learning. Anderson's IRIS data were used to illustrate this method. The results are compared with the standard Kohonen approach.
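As a standalone illustration of the fuzzy c-means (FCM) component that the fuzzy Kohonen network integrates, the following numpy sketch implements the classical FCM membership and center updates; it does not reproduce the authors' combined learning-rate scheme.

import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    # Classical FCM updates; the fuzzifier m and iteration count are illustrative choices.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]         # cluster centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=-1) + 1e-12
        U = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return U, V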
---
paper_title: Topology representing networks
paper_content:
A Hebbian adaptation rule with winner-take-all like competition is introduced. It is shown that this competitive Hebbian rule forms so-called Delaunay triangulations, which play an important role in computational geometry for efficiently solving proximity problems. Given a set of neural units i, i = 1,…, N, the synaptic weights of which can be interpreted as pointers wi, i = 1,…, N in RD, the competitive Hebbian rule leads to a connectivity structure between the units i that corresponds to the Delaunay triangulation of the set of pointers wi. Such a competitive Hebbian rule develops connections (Cij > 0) between neural units i, j with neighboring receptive fields (Voronoi polygons) Vi, Vj, whereas between all other units i, j no connections evolve (Cij = 0). Combined with a procedure that distributes the pointers wi over a given feature manifold M, for example, a submanifold M ⊂ RD, the competitive Hebbian rule provides a novel approach to the problem of constructing topology preserving feature maps and representing intricately structured manifolds. The competitive Hebbian rule connects only neural units, the receptive fields (Voronoi polygons) Vi, Vj of which are adjacent on the given manifold M. This leads to a connectivity structure that defines a perfectly topology preserving map and forms a discrete, path preserving representation of M, also in cases where M has an intricate topology. This makes this novel approach particularly useful in all applications where neighborhood relations have to be exploited or the shape and topology of submanifolds have to be taken into account.
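A minimal sketch of the competitive Hebbian rule described above: for every input, an edge is created between the two closest reference vectors, inducing the data-restricted Delaunay-like connectivity (function and variable names are illustrative).

import numpy as np

def competitive_hebbian_edges(data, w):
    # w: (N, D) array of reference vectors; returns a set of index pairs (i, j) with i < j.
    edges = set()
    for x in data:
        d = np.linalg.norm(w - x, axis=1)
        i, j = np.argsort(d)[:2]                 # winner and second-nearest unit
        edges.add((int(min(i, j)), int(max(i, j))))   # C_ij > 0 for adjacent receptive fields
    return edges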
---
paper_title: Neural networks and physical systems with emergent collective computational abilities
paper_content:
Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
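A minimal numpy sketch of the model described above, with Hebbian storage of bipolar patterns and asynchronous state updates; the number of sweeps and the scaling are illustrative assumptions.

import numpy as np

def store(patterns):
    # patterns: (num_patterns, num_neurons) array with entries in {-1, +1}.
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                     # no self-connections
    return W

def recall(W, probe, sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    s = np.array(probe, dtype=float)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):        # asynchronous, random-order updates
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s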
---
paper_title: A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine
paper_content:
A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional process—priming, gain control, vigilance, and intermodal competition—are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases due to an environmental disconfirmation, then the system automatically searches for and learns finer recognition categories. A new nonlinear matching law (the ⅔ Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All the rules describe emergent properties of parallel network interactions. The architecture circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
---
paper_title: Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps
paper_content:
A neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The architecture, called fuzzy ARTMAP, achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Four classes of simulation illustrated fuzzy ARTMAP performance in relation to benchmark backpropagation and genetic algorithm systems. These simulations include finding points inside versus outside a circle, learning to tell two spirals apart, incremental approximation of a piecewise-continuous function, and a letter recognition database. The fuzzy ARTMAP system is also compared with Salzberg's NGE systems and with Simpson's FMMC system.
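The hedged sketch below illustrates only the unsupervised Fuzzy ART category dynamics (complement coding, choice function, vigilance test, learning) on which fuzzy ARTMAP builds; the supervised map field and match tracking are omitted, and parameter values are illustrative.

import numpy as np

def fuzzy_art(X, rho=0.75, alpha=0.001, beta=1.0):
    # Inputs are assumed to lie in [0, 1]; rho, alpha, beta are illustrative settings.
    X = np.asarray(X, dtype=float)
    I = np.hstack([X, 1.0 - X])                          # complement coding
    W, labels = [], []
    for x in I:
        order = sorted(range(len(W)),
                       key=lambda j: -np.minimum(x, W[j]).sum() / (alpha + W[j].sum()))
        chosen = None
        for j in order:                                  # search categories by choice value
            if np.minimum(x, W[j]).sum() / x.sum() >= rho:   # vigilance (match) test
                chosen = j
                break
        if chosen is None:                               # no resonance: create a new category
            W.append(x.copy())
            chosen = len(W) - 1
        else:                                            # resonance: fast/slow learning update
            W[chosen] = beta * np.minimum(x, W[chosen]) + (1.0 - beta) * W[chosen]
        labels.append(chosen)
    return W, labels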
---
paper_title: ART 2: Self-Organization Of Stable Category Recognition Codes For Analog Input Patterns
paper_content:
Adaptive resonance architectures are neural networks that self-organize stable pattern recognition codes in real-time in response to arbitrary sequences of input patterns. This article introduces ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns. In order to cope with arbitrary sequences of analog input patterns, ART 2 architectures embody solutions to a number of design principles, such as the stability-plasticity tradeoff, the search-direct access tradeoff, and the match-reset tradeoff. In these architectures, top-down learned expectation and matching mechanisms are critical in self-stabilizing the code learning process. A parallel search scheme updates itself adaptively as the learning process unfolds, and realizes a form of real-time hypothesis discovery, testing, learning, and recognition. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time for familiar inputs does not increase with the complexity of the learned code. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. A parameter called the attentional vigilance parameter determines how fine the categories will be. If vigilance increases (decreases) due to environmental feedback, then the system automatically searches for and learns finer (coarser) recognition categories. Gain control parameters enable the architecture to suppress noise up to a prescribed level. The architecture's global design enables it to learn effectively despite the high degree of nonlinearity of such mechanisms.
---
paper_title: Signal and Image Processing with Neural Networks: A C++ Sourcebook
paper_content:
Contents: The Role of Neural Networks in Signal and Image Processing; Neurons in the Complex Domain; Data Preparation for Neural Networks; Frequency-Domain Techniques; Time/Frequency Localization; Time/Frequency Applications; Image Processing in the Frequency Domain; Moment-Based Image Features; Tone/Texture Descriptors; Using the MLFN Program; Appendix; Bibliography; Index.
---
paper_title: Maximum likelihood from incomplete data via the EM algorithm
paper_content:
The expectation-maximization (EM) algorithm is presented as a general iterative procedure for computing maximum likelihood estimates from incomplete data. Each iteration alternates an expectation (E) step, which computes the expected complete-data log-likelihood given the current parameter estimates, and a maximization (M) step, which re-estimates the parameters by maximizing that expectation. The likelihood is shown to increase monotonically over iterations, and the generality of the method is illustrated on a wide range of incomplete-data problems, including missing values, grouped, censored and truncated data, finite mixtures, and variance component estimation.
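A minimal numpy sketch of EM for a two-component, one-dimensional Gaussian mixture, illustrating the alternation of E and M steps; initialization and stopping rules are simplified assumptions.

import numpy as np

def em_gmm_1d(x, iters=50):
    # Two-component 1-D Gaussian mixture; initialization is a crude illustrative choice.
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from the responsibilities
        Nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-6
        pi = Nk / len(x)
    return pi, mu, var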
---
| Title: A Survey of Fuzzy Clustering Algorithms for Pattern Recognition - Part II
Section 1: FUZZY MEMBERSHIP AND PROBABILITY DENSITY FUNCTIONS
Description 1: This section presents a brief review of probabilistic and possibilistic fuzzy membership concepts and compares them with the Bayesian view of posterior probability and likelihood.
Section 2: ON-LINE VERSUS OFF-LINE MODEL ADAPTATION
Description 2: This section discusses the principles of batch (off-line) and on-line learning methods, their advantages, and issues like real-time response, noise sensitivity, and parameter updates.
Section 3: PROTOTYPE VECTOR EDITING SCHEME
Description 3: This section covers the clustering algorithms employing two main schemes for reference vector generation, namely clustering-by-selection and clustering-by-replacement.
Section 4: BEYOND ERROR GRADIENT DESCENT: ADVANCED TECHNIQUES FOR LEARNING FROM DATA
Description 4: This section presents advanced learning techniques developed to deal more efficiently with local minima and generalization issues than traditional gradient descent methods.
Section 5: TOPOLOGICALLY CORRECT MAPPING
Description 5: This section explains topologically correct mapping and the relevant methodologies like the competitive Hebbian rule to generate synaptic connections in self-organizing neural network models.
Section 6: ARTIFICIAL COGNITIVE SYSTEMS AND ECOLOGICAL NETS
Description 6: This section discusses the requirements for artificial cognitive systems and the need to model neural networks alongside their environments, evolving towards ecological networks.
Section 7: ON FUZZY CLUSTERING ALGORITHMS
Description 7: This section reviews the types and principles of fuzzy clustering algorithms, highlighting soft competitive learning as core to the fuzzification of clustering schemes.
Section 8: CONCLUSION
Description 8: This section summarizes the paper's interpretation of fuzzy clustering as synonymous with soft competitive parameter adaptation and outlines the key features compared in the survey. |
A Review of Vision-Based Gait Recognition Methods for Human Identification | 9 | ---
paper_title: The humanID gait challenge problem: data sets, performance, and analysis
paper_content:
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the HumanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding. It is this potential from the adoption of this challenge problem that represents a radical departure from traditional computer vision research methodology.
---
paper_title: A hidden Markov model based framework for recognition of humans from gait sequences
paper_content:
In this paper we propose a generic framework based on hidden Markov models (HMMs) for recognition of individuals from their gait. The HMM framework is suitable, because the gait of an individual can be visualized as his adopting postures from a set, in a sequence which has an underlying structured probabilistic nature. The postures that the individual adopts can be regarded as the states of the HMM and are typical to that individual and provide a means of discrimination. The framework assumes that, during gait, the individual transitions between N discrete postures or states but it is not dependent on the particular feature vector used to represent the gait information contained in the postures. The framework, thus, provides flexibility in the selection of the feature vector. The statistical nature of the HMM lends robustness to the model. In this paper we use the binarized background-subtracted image as the feature vector and use different distance metrics, such as those based on the L1 and L2 norms of the vector difference, and the normalized inner product of the vectors, to measure the similarity between feature vectors. The results we obtain are better than the baseline recognition rates reported before.
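A hedged sketch of one way to score a silhouette sequence against a subject-specific HMM whose states are exemplar postures, using affinities derived from a frame-to-exemplar distance and a scaled forward pass; this is a simplified illustration of the framework, not the authors' implementation, and the transition matrix A is assumed to be row-stochastic.

import numpy as np

def sequence_log_likelihood(frames, exemplars, A, sigma=1.0):
    # frames: (T, P) flattened silhouettes; exemplars: (N, P) state postures; A: (N, N) row-stochastic.
    F, E = np.asarray(frames, float), np.asarray(exemplars, float)
    B = np.exp(-np.linalg.norm(F[:, None, :] - E[None, :, :], axis=-1) / sigma)  # frame-to-state affinities (unnormalized)
    alpha = B[0] / len(E)                        # uniform initial state distribution
    loglik = np.log(alpha.sum() + 1e-300)
    alpha /= alpha.sum() + 1e-300
    for t in range(1, len(F)):
        alpha = B[t] * (alpha @ A)               # forward recursion
        s = alpha.sum() + 1e-300
        loglik += np.log(s)                      # scaling keeps the pass numerically stable
        alpha /= s
    return loglik                                # classify by the subject model with the highest score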
---
paper_title: Technology review - Biometrics-Technology, Application, Challenge, and Computational Intelligence Solutions
paper_content:
Biometrics is the science of the measurement of unique human characteristics, both physical and behavioral. Various biometric technologies are available for identifying or verifying an individual by measuring fingerprint, hand, face, signature, voice, or a combination of these traits. This paper aims to assist readers as they consider biometric solutions by examining common biometric technologies, introducing different biometric applications, and reviewing recent CI solutions presented at the 2006 IEEE WCCI
---
paper_title: Gait Analysis for Recognition and Classification
paper_content:
We describe a representation of gait appearance for the purpose of person identification and classification. This gait representation is based on simple features such as moments extracted from orthogonal view video silhouettes of human walking motion. Despite its simplicity, the resulting feature vector contains enough information to perform well on human identification and gender classification tasks. We explore the recognition behaviors of two different methods to aggregate features over time under different recognition tasks. We demonstrate the accuracy of recognition using gait video sequences collected over different days and times and under varying lighting environments. In addition, we show results for gender classification based on our gait appearance features using a support-vector machine.
---
paper_title: Silhouette Analysis-Based Gait Recognition for Human Identification
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on principal component analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.
---
paper_title: Individual recognition using gait energy image
paper_content:
In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches
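Computing a gait energy image is essentially pixel-wise averaging of size-normalized, horizontally aligned binary silhouettes over one gait cycle, as in the minimal sketch below (the alignment step and the statistical feature learning of the paper are omitted).

import numpy as np

def gait_energy_image(silhouettes):
    # silhouettes: (T, H, W) array of 0/1 values, assumed already normalized and aligned.
    S = np.asarray(silhouettes, dtype=float)
    return S.mean(axis=0)          # bright pixels = static body parts, grey pixels = moving parts

def gei_distance(g1, g2):
    # Simple template matching; the paper learns features (real and synthetic templates) instead.
    return np.linalg.norm(g1 - g2)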
---
paper_title: Motion-based recognition of people in EigenGait space
paper_content:
A motion-based, correspondence-free technique for human gait recognition in monocular video is presented. We contend that the planar dynamics of a walking person are encoded in a 2D plot consisting of the pairwise image similarities of the sequence of images of the person, and that gait recognition can be achieved via standard pattern classification of these plots. We use background modelling to track the person for a number of frames and extract a sequence of segmented images of the person. The self-similarity plot is computed via correlation of each pair of images in this sequence. For recognition, the method applies principal component analysis to reduce the dimensionality of the plots, then uses the k-nearest neighbor rule in this reduced space to classify an unknown person. This method is robust to tracking and segmentation errors, and to variation in clothing and background. It is also invariant to small changes in camera viewpoint and walking speed. The method is tested on outdoor sequences of 44 people with 4 sequences of each taken on two different days, and achieves a classification rate of 77%. It is also tested on indoor sequences of 7 people walking on a treadmill, taken from 8 different viewpoints and on 7 different days. A classification rate of 78% is obtained for near-fronto-parallel views, and 65% on average over all views.
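A simplified numpy sketch of the pipeline described above: build a self-similarity plot from pairwise frame differences, then project flattened plots onto their principal components before nearest-neighbour classification; it assumes all sequences have been resampled to the same number of frames, and the exact similarity measure and normalization of the paper are not reproduced.

import numpy as np

def self_similarity_plot(frames):
    # frames: (T, H, W) segmented person images; returns a (T, T) normalized distance plot.
    F = np.asarray(frames, dtype=float).reshape(len(frames), -1)
    sq = (F ** 2).sum(axis=1)
    D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * F @ F.T, 0.0))
    return D / (D.max() + 1e-12)

def eigengait_features(plots, k=10):
    # plots: list of same-sized self-similarity plots (one per sequence).
    X = np.stack([p.ravel() for p in plots])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)    # PCA via SVD
    return Xc @ Vt[:k].T                                 # project onto the top-k "eigen-plots"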
---
paper_title: Gait recognition using active shape model and motion prediction
paper_content:
This study presents a novel, robust gait recognition algorithm for human identification from a sequence of segmented noisy silhouettes in a low-resolution video. The proposed recognition algorithm enables automatic human recognition from model-based gait cycle extraction based on the prediction-based hierarchical active shape model (ASM). The proposed algorithm overcomes drawbacks of existing works by extracting a set of relative model parameters instead of directly analysing the gait pattern. The feature extraction function in the proposed algorithm consists of motion detection, object region detection and ASM, which alleviate problems in the baseline algorithm such as background generation and shadow removal, and lead to a higher recognition rate. Performance of the proposed algorithm has been evaluated by using the HumanID Gait Challenge data set, which is the largest gait benchmarking data set, with 122 subjects and different realistic parameters including viewpoint, shoe, surface, carrying condition and time.
---
paper_title: Frame difference energy image for gait recognition with incomplete silhouettes
paper_content:
The quality of human silhouettes has a direct effect on gait recognition performance. This paper proposes a robust dynamic gait representation scheme, frame difference energy image (FDEI), to suppress the influence of silhouette incompleteness. A gait cycle is first divided into clusters. The average image of each cluster is denoised and becomes the dominant energy image (DEI). FDEI representation of a frame is constructed by adding the corresponding cluster's DEI and the positive portion of the frame difference between the former frame and the current frame. FDEI representation can preserve the kinetic and static information of each frame, even when the silhouettes are incomplete. This proposed representation scheme is tested on the CMU Mobo gait database with synthesized occlusions and the CASIA gait database (dataset B). The frieze and wavelet features are adopted and hidden Markov model (HMM) is employed for recognition. Experimental results show the superiority of FDEI representation over binary silhouettes and some other algorithms when occlusion or body portion lost appears in the gait sequences.
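A minimal sketch of constructing FDEI frames as the cluster average (standing in for the denoised dominant energy image) plus the positive part of the backward frame difference, following the description above; the clustering and denoising steps of the paper are simplified here.

import numpy as np

def fdei_sequence(silhouettes, cluster_size=5):
    # silhouettes: (T, H, W) binary silhouettes of one gait cycle; cluster_size is illustrative.
    S = np.asarray(silhouettes, dtype=float)
    out = []
    for t in range(len(S)):
        c0 = (t // cluster_size) * cluster_size
        dei = S[c0:c0 + cluster_size].mean(axis=0)       # cluster average as a stand-in for the denoised DEI
        diff = np.clip(S[t - 1] - S[t], 0.0, None) if t > 0 else np.zeros_like(S[0])
        out.append(dei + diff)                           # positive frame difference restores lost body portions
    return np.stack(out)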
---
paper_title: Automatic extraction and description of human gait models for recognition purposes
paper_content:
Using gait as a biometric is of emerging interest. We describe a new model-based moving feature extraction analysis that automatically extracts and describes human gait for recognition. The gait signature is extracted directly from the evidence gathering process. This is possible by using a Fourier series to describe the motion of the upper leg and applying temporal evidence gathering techniques to extract the moving model from a sequence of images. Simulation results highlight potential performance benefits in the presence of noise. Classification uses the k-nearest neighbour rule applied to the Fourier components of the motion of the upper leg. Experimental analysis demonstrates that an improved classification rate is given by the phase-weighted Fourier magnitude information over the use of the magnitude information alone. The improved classification capability of the phase-weighted magnitude information is verified using statistical analysis of the separation of clusters in the feature space. Furthermore, the technique is shown to be able to handle high levels of occlusion, which is of especial importance in gait as the human body is self-occluding. As such, a new technique has been developed to automatically extract and describe a moving articulated shape, the human leg, and its potential in gait as a biometric has been shown.
---
paper_title: Background subtraction techniques: a review
paper_content:
Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.
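As a concrete example of the simplest parametric family covered by this review, the sketch below maintains a running-average background model and thresholds the per-pixel deviation; the learning rate and threshold are illustrative, and the selective (foreground-masked) update used by more careful variants is omitted.

import numpy as np

def foreground_masks(frames, alpha=0.02, thresh=30.0):
    # frames: (T, H, W) greyscale frames; alpha and thresh are illustrative parameters.
    frames = np.asarray(frames, dtype=float)
    bg = frames[0].copy()
    masks = []
    for f in frames:
        mask = np.abs(f - bg) > thresh            # pixels far from the model are foreground
        bg = (1.0 - alpha) * bg + alpha * f       # slowly adapt the background model
        masks.append(mask)
    return np.stack(masks)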
---
paper_title: Human gait recognition based on matching of body components
paper_content:
This paper presents a novel approach for gait recognition based on the matching of body components. The human body components are studied separately and are shown to have unequal discrimination power. Several approaches are presented for the combination of the results obtained from different body components into a common distance metric for the evaluation of similarity between gait sequences. A method is also proposed for the determination of the weighting of the various body components based on their contribution to recognition performance. Using the best performing of the proposed methods, improved recognition performance is achieved.
---
paper_title: Gait Components and Their Application to Gender Recognition
paper_content:
Human gait is a promising biometrics resource. In this paper, the information about gait is obtained from the motions of the different parts of the silhouette. The human silhouette is segmented into seven components, namely head, arm, trunk, thigh, front-leg, back-leg, and feet. The leg silhouettes for the front-leg and the back-leg are considered separately because, during walking, the left leg and the right leg are in front or at the back by turns. Each of the seven components and a number of combinations of the components are then studied with regard to two useful applications: human identification (ID) recognition and gender recognition. More than 500 different experiments on human ID and gender recognition are carried out under a wide range of circumstances. The effectiveness of the seven human gait components for ID and gender recognition is analyzed.
---
paper_title: Automated Human Recognition by Gait using Neural Network
paper_content:
We describe a new method for recognizing humans by their gait using back-propagation neural network. Here, the gait motion is described as rhythmic and periodic motion, and a 2D stick figure is extracted from gait silhouette by motion information with topological analysis guided by anatomical knowledge. A sequential set of 2D stick figures is used to represent the gait signature that is primitive data for the feature extraction based on motion parameters. Then, a back-propagation neural network algorithm is used to recognize humans by their gait patterns. In experiments, higher gait recognition performances have been achieved.
---
paper_title: Automated person recognition by walking and running via model-based approaches
paper_content:
Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is difficult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires specification of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution.
---
paper_title: Gait recognition using static, activity-specific parameters
paper_content:
A gait-recognition technique that recovers static body and stride parameters of subjects as they walk is presented. This approach is an example of an activity-specific biometric: a method of extracting identifying properties of an individual or of an individual's behavior that is applicable only when a person is performing that specific action. To evaluate our parameters, we derive an expected confusion metric (related to mutual information), as opposed to reporting a percent correct with a limited database. This metric predicts how well a given feature vector will filter identity in a large population. We test the utility of a variety of body and stride parameters recovered in different viewing conditions on a database consisting of 15 to 20 subjects walking at both an angled and frontal-parallel view with respect to the camera, both indoors and out. We also analyze motion-capture data of the subjects to discover whether confusion in the parameters is inherently a physical or a visual measurement error property.
---
paper_title: Fusion of static and dynamic body biometrics for gait recognition
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. This paper aims to propose a visual recognition algorithm based upon fusion of static and dynamic body biometrics. For each sequence involving a walking figure, pose changes of the segmented moving silhouettes are represented as an associated sequence of complex vector configurations, and are then analyzed using the Procrustes shape analysis method to obtain a compact appearance representation, called static information of body. Also, a model-based approach is presented under a condensation framework to track the walker and to recover joint-angle trajectories of lower limbs, called dynamic information of gait. Both static and dynamic cues are respectively used for recognition using the nearest exemplar classifier. They are also effectively fused on decision level using different combination rules to improve the performance of both identification and verification. Experimental results on a dataset including 20 subjects demonstrate the validity of the proposed algorithm.
---
paper_title: Gait recognition from time-normalized joint-angle trajectories in the walking plane
paper_content:
This paper demonstrates gait recognition using only the trajectories of lower body joint angles projected into the walking plane. For this work, we begin with the position of 3D markers as projected into the sagittal or walking plane. We show a simple method for estimating the planar offsets between the markers and the underlying skeleton and joints; given these offsets we compute the joint angle trajectories. To compensate for systematic temporal variations from one instance to the next-predominantly distance and speed of walk-we fix the number of footsteps and time-normalize the trajectories by a variance compensated time warping. We perform recognition on two walking databases of 18 people (over 150 walk instances) using simple nearest neighbor algorithm with Euclidean distance as a measurement criteria. We also use the expected confusion metric as a means to estimate how well joint-angle signals will perform in a larger population.
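A minimal sketch of time-normalizing joint-angle trajectories to a fixed length and matching them with a nearest-neighbour rule under Euclidean distance, in the spirit of the method above; the variance-compensated time warping and footstep fixing are omitted, and names are illustrative.

import numpy as np

def time_normalize(traj, length=100):
    # traj: (T, num_angles) joint-angle trajectory resampled to a common length.
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack([np.interp(t_new, t_old, traj[:, j]) for j in range(traj.shape[1])], axis=1)

def nearest_neighbour_id(probe, gallery, labels, length=100):
    p = time_normalize(probe, length).ravel()
    d = [np.linalg.norm(p - time_normalize(g, length).ravel()) for g in gallery]
    return labels[int(np.argmin(d))]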
---
paper_title: Stride and Cadence as a Biometric in Automatic Person Identification and Verification
paper_content:
Presents a correspondence-free method to automatically estimate the spatio-temporal parameters of gait (stride length and cadence) of a walking person from video. Stride and cadence are functions of body height, weight and gender, and we use these biometrics for identification and verification of people. The cadence is estimated using the periodicity of a walking person. Using a calibrated camera system, the stride length is estimated by first tracking the person and estimating their distance travelled over a period of time. By counting the number of steps (again using periodicity) and assuming constant-velocity walking, we are able to estimate the stride to within 1 cm for a typical outdoor surveillance configuration (under certain assumptions). With a database of 17 people and eight samples of each, we show that a person is verified with an equal error rate (EER) of 11%, and correctly identified with a probability of 40%. This method works with low-resolution images of people and is robust to changes in lighting, clothing and tracking errors. It is view-invariant, though performance is optimal in a near-fronto-parallel configuration.
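A simplified sketch of the idea described above: cadence is read off the dominant periodicity of a per-frame walking signal, and stride length follows from the calibrated distance travelled divided by the number of strides, assuming constant-velocity walking (inputs and calibration are assumed given; names are illustrative).

import numpy as np

def cadence_and_stride(width_signal, positions_m, fps):
    # width_signal: per-frame silhouette width or foreground sum;
    # positions_m: per-frame ground-plane position in metres from a calibrated camera.
    x = np.asarray(width_signal, dtype=float) - np.mean(width_signal)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    step_freq = freqs[1:][np.argmax(spec[1:])]            # dominant periodicity = steps per second
    cadence = step_freq * 60.0                            # steps per minute
    duration = len(x) / fps
    distance = np.linalg.norm(np.asarray(positions_m)[-1] - np.asarray(positions_m)[0])
    stride = 2.0 * distance / (step_freq * duration)      # one stride = two steps, constant velocity assumed
    return cadence, stride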
---
paper_title: Gait History Image: A Novel Temporal Template for Gait Recognition
paper_content:
In this paper, an improved temporal template called gait history image (GHI) is proposed for human gait representation and recognition. Compared with other temporal template methods, GHI models gait more comprehensively: static and dynamic characteristics, as well as spatial and temporal variations, can be represented. The time duration of the GHI template is controlled by a finer period resolution (1/4 of a gait cycle). We use a statistical approach to learn the discriminating features from GHIs, and gait recognition experiments were performed on the CASIA and USF gait databases. The methods using the GHI template presented better recognition performances than the baseline algorithm and the gait energy image (GEI) method.
---
paper_title: The humanID gait challenge problem: data sets, performance, and analysis
paper_content:
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding. It is this potential, arising from the adoption of the challenge problem, that represents a radical departure from traditional computer vision research methodology.
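To make the flavour of the baseline algorithm concrete, here is a minimal Python sketch of silhouette matching by temporal correlation; the silhouette extraction, gait-cycle partitioning and the exact similarity measure used in the challenge are assumed, and the frame similarity shown (intersection over union of binary silhouettes) is only one plausible choice.

    import numpy as np

    def frame_similarity(a, b):
        # Overlap of two aligned binary silhouettes: |A & B| / |A | B|.
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    def sequence_similarity(probe, gallery):
        # Slide the probe cycle over the gallery sequence and keep the best
        # mean frame-wise similarity (a simple temporal correlation score).
        n = len(probe)
        best = 0.0
        for offset in range(max(1, len(gallery) - n + 1)):
            window = gallery[offset:offset + n]
            sims = [frame_similarity(p, g) for p, g in zip(probe, window)]
            best = max(best, float(np.mean(sims)))
        return best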
---
paper_title: Silhouette Analysis-Based Gait Recognition for Human Identification
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on principal component analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.
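A minimal sketch of the unwrapping-plus-PCA step described above, assuming silhouette boundaries have already been extracted (e.g. with OpenCV); the sampling scheme and normalization here are illustrative simplifications, not the paper's exact procedure.

    import numpy as np
    from sklearn.decomposition import PCA

    def distance_signal(contour_xy, n_points=360):
        # Convert one silhouette boundary into a fixed-length vector of
        # centroid-to-boundary distances (a time-varying distance signal).
        c = np.asarray(contour_xy, dtype=float)
        centroid = c.mean(axis=0)
        d = np.linalg.norm(c - centroid, axis=1)
        idx = np.linspace(0, len(d) - 1, n_points).astype(int)
        return d[idx] / (d.max() + 1e-9)       # resample and scale-normalize

    # signals: (n_frames_total, n_points) matrix collected over training sequences
    # pca = PCA(n_components=20).fit(signals)
    # features = pca.transform(signals)        # inputs to a nearest-neighbour classifier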
---
paper_title: Silhouette quality quantification for gait sequence analysis and recognition
paper_content:
Most gait analysis and recognition algorithms are designed based on silhouette data. Their recognition performances may therefore be impaired by the poor quality silhouettes extracted from outdoor environments. In this paper, a silhouette quality quantification (SQQ) method is proposed to assess the quality of silhouette sequence. SQQ analyzes the sequence quality based on 1D foreground-sum signal modeling and signal processing technique. As an immediate application of SQQ, a general enhancement framework namely silhouette quality weighting (SQW) is designed toward improving most of the current gait recognition algorithms by taking into consideration sequence quality. The experiments are performed on the USF HumanID gait dataset v1.7 (with 71 subjects). Investigation using the SQQ criterion has revealed the baseline algorithm's mechanism of silhouette quality consideration. Two instantiations of the SQW algorithm based on gait energy image (GEI) are implemented. Improved recognition performances compared to the original GEI and baseline methods are obtained, which verifies the effectiveness of the proposed SQQ, SQW methods.
---
paper_title: Individual recognition using gait energy image
paper_content:
In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches
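The core of the GEI representation is just a per-pixel average of size-normalized, centre-aligned binary silhouettes over a gait cycle; a minimal sketch follows (alignment and gait-cycle detection are assumed to be done upstream).

    import numpy as np

    def gait_energy_image(aligned_silhouettes):
        # aligned_silhouettes: array of shape (n_frames, H, W) with values in {0, 1}.
        # The mean over time gives a grey-level template in [0, 1]: static body
        # parts stay bright, swinging limbs blur into intermediate intensities.
        return np.asarray(aligned_silhouettes, dtype=float).mean(axis=0)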
---
paper_title: Gait Recognition Using Wavelet Packet Silhouette Representation and Transductive Support Vector Machines
paper_content:
Gait is an idiosyncratic biometric that can be used for human identification at a distance and has, as a result, gained growing interest in intelligent visual surveillance. In this paper, an efficient gait recognition method based on describing the subject's outer body contour deformations using wavelet packets is proposed. Using the Matching Pursuit algorithm, the k bases of the wavelet packet tree that have maximum similarity to the signal are selected and the corresponding coefficients are used as features. Finally, Transductive Support Vector Machine (TSVM) classification is applied in the computed eigengait space for semi-supervised identification. The proposed feature selection method, which uses a complete orthogonal or near-orthogonal basis from a wavelet packet library of bases, together with an investigation of the correlational structure of each individual's gait features using TSVM, results in encouraging identification performance.
---
paper_title: Automatic gait recognition based on statistical shape analysis
paper_content:
Gait recognition has recently gained significant attention from computer vision researchers. This interest is strongly motivated by the need for automated person identification systems at a distance in visual surveillance and monitoring applications. The paper proposes a simple and efficient automatic gait recognition algorithm using statistical shape analysis. For each image sequence, an improved background subtraction procedure is used to extract moving silhouettes of a walking figure from the background. Temporal changes of the detected silhouettes are then represented as an associated sequence of complex vector configurations in a common coordinate frame, and are further analyzed using the Procrustes shape analysis method to obtain mean shape as gait signature. Supervised pattern classification techniques, based on the full Procrustes distance measure, are adopted for recognition. This method does not directly analyze the dynamics of gait, but implicitly uses the action of walking to capture the structural characteristics of gait, especially the shape cues of body biometrics. The algorithm is tested on a database consisting of 240 sequences from 20 different subjects walking at 3 viewing angles in an outdoor environment. Experimental results are included to demonstrate the encouraging performance of the proposed algorithm.
---
paper_title: Gait Recognition Using Radon Transform and Linear Discriminant Analysis
paper_content:
A new feature extraction process is proposed for gait representation and recognition. The new system is based on the Radon transform of binary silhouettes. For each gait sequence, the transformed silhouettes are used for the computation of a template. The set of all templates is subsequently subjected to linear discriminant analysis and subspace projection. In this manner, each gait sequence is described using a low-dimensional feature vector consisting of selected Radon template coefficients. Given a test feature vector, gait recognition and verification is achieved by appropriately comparing it to feature vectors in a reference gait database. By using the new system on the Gait Challenge database, very considerable improvements in recognition performance are seen in comparison to state-of-the-art methods for gait recognition
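A hedged sketch of the pipeline described above using scikit-image's Radon transform and scikit-learn's LDA; the template definition (here a simple per-sequence mean sinogram) and the angle set are simplifying assumptions.

    import numpy as np
    from skimage.transform import radon
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def radon_template(silhouettes, angles=None):
        # Average the Radon transforms of all silhouettes in one gait sequence
        # and flatten the result into a single feature vector.
        if angles is None:
            angles = np.arange(0.0, 180.0, 5.0)
        sinograms = [radon(s.astype(float), theta=angles, circle=False)
                     for s in silhouettes]
        return np.mean(sinograms, axis=0).ravel()

    # X = np.vstack([radon_template(seq) for seq in training_sequences])
    # lda = LinearDiscriminantAnalysis().fit(X, subject_labels)
    # gait_features = lda.transform(X)   # low-dimensional vectors for matching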
---
paper_title: Frame difference energy image for gait recognition with incomplete silhouettes
paper_content:
The quality of human silhouettes has a direct effect on gait recognition performance. This paper proposes a robust dynamic gait representation scheme, frame difference energy image (FDEI), to suppress the influence of silhouette incompleteness. A gait cycle is first divided into clusters. The average image of each cluster is denoised and becomes the dominant energy image (DEI). FDEI representation of a frame is constructed by adding the corresponding cluster's DEI and the positive portion of the frame difference between the former frame and the current frame. FDEI representation can preserve the kinetic and static information of each frame, even when the silhouettes are incomplete. This proposed representation scheme is tested on the CMU Mobo gait database with synthesized occlusions and the CASIA gait database (dataset B). The frieze and wavelet features are adopted and hidden Markov model (HMM) is employed for recognition. Experimental results show the superiority of FDEI representation over binary silhouettes and some other algorithms when occlusion or body portion lost appears in the gait sequences.
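A minimal sketch of the FDEI construction: each output frame is the cluster's denoised dominant energy image plus the positive part of the backward frame difference; the clustering and denoising that produce the DEI are assumed to be done elsewhere.

    import numpy as np

    def frame_difference_energy_images(silhouettes, dei):
        # silhouettes: (n_frames, H, W) binary frames from one cluster;
        # dei: the cluster's denoised dominant energy image of shape (H, W).
        frames = np.asarray(silhouettes, dtype=float)
        fdeis = []
        for t in range(1, len(frames)):
            diff = frames[t - 1] - frames[t]      # difference: former minus current frame
            diff[diff < 0.0] = 0.0                # keep only the positive portion
            fdeis.append(dei + diff)
        return np.asarray(fdeis)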
---
paper_title: Gait-based recognition of humans using continuous HMMs
paper_content:
Gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. In this paper, we propose a view-based approach to recognize humans through gait. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of stances or key frames that occur during the walk cycle of an individual is chosen. Euclidean distances of a given image from this stance set are computed and a lower-dimensional observation vector is generated. A continuous hidden Markov model (HMM) is trained using several such lower-dimensional vector sequences extracted from the video. This methodology serves to compactly capture structural and transitional features that are unique to an individual. The statistical nature of the HMM renders overall robustness to gait representation and recognition. The human identification performance of the proposed scheme is found to be quite good when tested in natural walking conditions.
---
paper_title: Identification of humans using gait
paper_content:
We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating what we call the frame to exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability B) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition. The performance of the methods is illustrated using several databases.
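A hedged sketch of the general train-one-HMM-per-subject recipe described above, using the hmmlearn library; the per-frame gait feature vectors (for example width-of-contour profiles or FED vectors) are hypothetical inputs, and the number of states and covariance type are illustrative choices rather than the paper's settings.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_subject_hmm(feature_sequences, n_states=5):
        # Fit one HMM on all training sequences of a single subject.
        X = np.vstack(feature_sequences)                 # (total_frames, n_features)
        lengths = [len(seq) for seq in feature_sequences]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        return model.fit(X, lengths)

    def identify(probe_sequence, subject_models):
        # Pick the subject whose model assigns the probe the highest log-likelihood.
        probe = np.asarray(probe_sequence)
        scores = {sid: m.score(probe) for sid, m in subject_models.items()}
        return max(scores, key=scores.get)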
---
paper_title: An Angular Transform of Gait Sequences for Gait Assisted Recognition
paper_content:
A new system is proposed for gait analysis and recognition applications. The new system is based on a denoising process and a new angular transform that are applied on binary silhouettes. Each human silhouette in a gait sequence is transformed into a low dimensional feature vector consisting of average pixel distances from the center of the silhouette. The sequence of feature vectors corresponding to a gait sequence is used for identification based on a minimum-distance criterion between test and reference sequences. By using the new system on the Gait Challenge database, improvements in recognition performance are seen in comparison to other methods of similar or higher complexity.
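A minimal sketch of an angular transform of this kind: the average pixel distance from the silhouette centroid inside each angular sector becomes one feature; the number of sectors and the normalization are assumptions for illustration.

    import numpy as np

    def angular_transform(silhouette, n_sectors=36):
        # silhouette: binary (H, W) image; returns one value per angular sector.
        ys, xs = np.nonzero(silhouette)
        cy, cx = ys.mean(), xs.mean()
        angles = np.arctan2(ys - cy, xs - cx)             # in [-pi, pi]
        dists = np.hypot(ys - cy, xs - cx)
        sector = ((angles + np.pi) / (2.0 * np.pi) * n_sectors).astype(int) % n_sectors
        feature = np.zeros(n_sectors)
        for s in range(n_sectors):
            mask = sector == s
            if mask.any():
                feature[s] = dists[mask].mean()           # average distance in sector
        return feature / (feature.max() + 1e-9)           # scale-normalize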
---
paper_title: The Recognition of Human Movement Using Temporal Templates
paper_content:
A view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template-a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two component version of the templates: The first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms.
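The recency-of-motion component of this kind of temporal template (commonly called a motion history image) can be sketched in a few lines; the per-frame motion masks would come from frame differencing or background subtraction, which is assumed here.

    import numpy as np

    def motion_history_image(motion_masks, tau=30):
        # motion_masks: list of binary (H, W) frames marking where motion occurred.
        # Pixels with motion are set to tau; elsewhere the value decays by 1 per frame.
        mhi = np.zeros(motion_masks[0].shape, dtype=float)
        for mask in motion_masks:
            mhi = np.where(mask, float(tau), np.maximum(mhi - 1.0, 0.0))
        return mhi / tau                                   # recency-of-motion map in [0, 1]

    # The binary presence-of-motion component is then simply: mei = mhi > 0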
---
paper_title: Infrared gait recognition based on wavelet transform and support vector machine
paper_content:
To detect the human body and remove noise caused by complex backgrounds, illumination variations and other objects, infrared thermal imaging was applied to collect gait video and an infrared thermal gait database was established in this paper. A multi-variable gait feature was extracted by a novel method combining an integral model and a simplified model. The wavelet transform, invariant moments and skeleton theory were also used to extract gait features, and a support vector machine was employed to classify gaits. The proposed method was applied to the infrared gait database and achieved a 78%-91% probability of correct recognition. The recognition rates were insensitive to holding a ball or carrying a package; however, wearing a heavy coat had a significant influence. Infrared thermal imaging shows potential for better description of the moving human body within image sequences.
---
paper_title: Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection
paper_content:
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
---
paper_title: Silhouette Analysis-Based Gait Recognition for Human Identification
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on principal component analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.
---
paper_title: Biologically inspired feature manifold for gait recognition
paper_content:
Using biometric resources to recognize a person has been a recent focus in computer vision. Previously, biometric research concentrated on utilizing the iris, fingerprint, palm print, and shoe print to authenticate and authorize a human. However, these conventional biometric resources suffer from obvious limitations, such as strict distance requirements, heavy demands on user cooperation, and so on. Compared with the difficulties of using conventional biometric resources, human gait can be easily acquired and utilized in many fields. An image of a human walking can reflect the walker's physical characteristics and psychological state, and therefore gait features can be used to recognize a person. To achieve better gait recognition performance, we represent the gait image using C1 units, which correspond to the complex cells in the human visual cortex, and use a maximum mechanism to keep only the maximum response of each local area of S1 units. To enhance the gait recognition rate, we take the label information into account and classify with the discriminative locality alignment (DLA) method, a discriminative manifold-learning-based subspace learning algorithm. Experiments on the University of South Florida (USF) dataset show that: (1) the proposed C1Gait+DLA algorithms can achieve better performance than state-of-the-art algorithms and (2) DLA can duly preserve both the local geometry and the discriminative information for recognition.
---
paper_title: Individual recognition using gait energy image
paper_content:
In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches
---
paper_title: Orthogonal Diagonal Projections for Gait Recognition
paper_content:
Gait has received much attention from researchers in the vision field due to its utility in walker identification. One of the key issues in gait recognition is how to extract discriminative shape features from 2D human silhouette images. This paper deals with the problem of gait-based walker recognition using statistical shape features. First, we normalize walkers' silhouettes (to facilitate gait feature comparison) into a square form and use the orthogonal projections in the positive and negative diagonal directions to draw personal signatures contained in gait patterns. Then principal component analysis (PCA) and linear discriminant analysis (LDA) are applied to reduce the dimensionality of original gait features and to improve the topological structure in the feature space. Finally, this paper accomplishes the recognition of unknown gait features based on the nearest neighbor rule, with the discussion of the effect of distance metrics and scales on discriminating performance. Experimental results justify the potential of our method.
---
paper_title: General Tensor Discriminant Analysis and Gabor Features for Gait Recognition
paper_content:
Traditional image representations are not suited to conventional classification methods such as the linear discriminant analysis (LDA) because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of the two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA, compared with existing preprocessing methods such as the principal components analysis (PCA) and 2DLDA, include the following: 1) the USP is reduced in subsequent classification by, for example, LDA, 2) the discriminative information in the training tensors is preserved, and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of GTDA converges, whereas that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) GaborD is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS, and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS, or GaborSD image representation, then using GDTA to extract features and, finally, using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the University of South Florida (USF) HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
---
paper_title: Active energy image plus 2DLPP for gait recognition
paper_content:
This paper proposes a novel active energy image (AEI) method for gait recognition. Existing human gait feature representation methods, however, usually suffer from low quality of human silhouettes and insufficient dynamic characteristics. To this end, we apply the proposed AEI for gait representation. Given a gait silhouette sequence, we first extract the active regions by calculating the difference of two adjacent silhouette images, and construct an AEI by accumulating these active regions. Then, we project each AEI to a low-dimensional feature subspace via the newly proposed two-dimensional locality preserving projections (2DLPP) method to further improve the discriminative power of the extracted features. Experimental results on the CASIA gait database (dataset B and C) demonstrate the effectiveness of the proposed method.
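The active energy image itself is straightforward to compute: accumulate the differences of adjacent silhouettes over a sequence. A minimal sketch is below; the 2DLPP projection step is a separate learned transform and is not shown.

    import numpy as np

    def active_energy_image(silhouettes):
        # silhouettes: (n_frames, H, W) aligned binary frames.
        frames = np.asarray(silhouettes, dtype=float)
        active_regions = np.abs(frames[1:] - frames[:-1])  # adjacent-frame differences
        return active_regions.mean(axis=0)                 # accumulate into one template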
---
paper_title: Gait Feature Subset Selection by Mutual Information
paper_content:
Feature subset selection is an important preprocessing step for pattern recognition, to discard irrelevant and redundant information, as well as to identify the most important attributes. In this paper, we investigate a computationally efficient solution to select the most important features for gait recognition. The specific technique applied is based on mutual information (MI), which evaluates the statistical dependence between two random variables and has an established relation with the Bayes classification error. Extending our earlier research, we show that a sequential selection method based on MI can provide an effective solution for high-dimensional human gait data. To assess the performance of the approach, experiments are carried out based on a 73-dimensional model-based gait features set and on a 64 by 64 pixels model-free gait symmetry map on the Southampton HiD Gait database. The experimental results confirm the effectiveness of the method, removing about 50% of the model-based features and 95% of the symmetry map's pixels without significant loss in recognition capability, which outperforms correlation and analysis-of-variance-based methods.
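A simplified, non-sequential stand-in for the MI-based selection described above, using scikit-learn's mutual information estimator to rank gait features against the identity label; the paper's sequential criterion, which also accounts for redundancy between features, is not reproduced here.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def select_by_mutual_information(X, y, k=30):
        # X: (n_samples, n_features) gait feature matrix; y: subject labels.
        mi = mutual_info_classif(X, y, random_state=0)     # MI of each feature with y
        keep = np.argsort(mi)[::-1][:k]                    # indices of the top-k features
        return keep, X[:, keep]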
---
paper_title: Recursive spatiotemporal subspace learning for gait recognition
paper_content:
In this paper, we propose a new gait recognition method using recursive spatiotemporal subspace learning. In the first stage, periodic dynamic feature of gait over time is extracted by Principal Component Analysis (PCA) and gait sequences are represented in the form of Periodicity Feature Vector (PFV). In the second stage, shape feature of gait over space is extracted by Discriminative Locality Alignment (DLA) based on the PFV representation of gait sequences. After the recursive subspace learning, gait sequence data is compressed into a very compact vector named Gait Feature Vector (GFV) which is used for individual recognition. Compared to other gait recognition methods, GFV is an effective representation of gait because the recursive spatiotemporal subspace learning technique extracts both the shape features and the dynamic features. And at the same time, representing gait sequences in PFV form is an efficient way to save storage space and computational time. Experimental result shows that the proposed method achieves highly competitive performance with respect to the published gait recognition approaches on the USF HumanID gait database.
---
paper_title: Silhouette-based human identification from body shape and gait
paper_content:
Our goal is to establish a simple baseline method for human identification based on body shape and gait. This baseline recognition method provides a lower bound against which to evaluate more complicated procedures. We present a viewpoint-dependent technique based on template matching of body silhouettes. Cyclic gait analysis is performed to extract key frames from a test sequence. These frames are compared to training frames using normalized correlation, and subject classification is performed by nearest-neighbor matching among correlation scores. The approach implicitly captures biometric shape cues such as body height, width, and body-part proportions, as well as gait cues such as stride length and amount of arm swing. We evaluate the method on four databases with varying viewing angles, background conditions (indoors and outdoors), walking styles and pixels on target.
---
paper_title: Gait Recognition Using Wavelet Packet Silhouette Representation and Transductive Support Vector Machines
paper_content:
Gait is an idiosyncratic biometric that can be used for human identification at a distance and has, as a result, gained growing interest in intelligent visual surveillance. In this paper, an efficient gait recognition method based on describing the subject's outer body contour deformations using wavelet packets is proposed. Using the Matching Pursuit algorithm, the k bases of the wavelet packet tree that have maximum similarity to the signal are selected and the corresponding coefficients are used as features. Finally, Transductive Support Vector Machine (TSVM) classification is applied in the computed eigengait space for semi-supervised identification. The proposed feature selection method, which uses a complete orthogonal or near-orthogonal basis from a wavelet packet library of bases, together with an investigation of the correlational structure of each individual's gait features using TSVM, results in encouraging identification performance.
---
paper_title: Automatic extraction and description of human gait models for recognition purposes
paper_content:
Using gait as a biometric is of emerging interest. We describe a new model-based moving feature extraction analysis that automatically extracts and describes human gait for recognition. The gait signature is extracted directly from the evidence gathering process. This is possible by using a Fourier series to describe the motion of the upper leg and applying temporal evidence gathering techniques to extract the moving model from a sequence of images. Simulation results highlight potential performance benefits in the presence of noise. Classification uses the k-nearest neighbour rule applied to the Fourier components of the motion of the upper leg. Experimental analysis demonstrates that the phase-weighted Fourier magnitude information gives an improved classification rate over the magnitude information alone. The improved classification capability of the phase-weighted magnitude information is verified using statistical analysis of the separation of clusters in the feature space. Furthermore, the technique is shown to handle high levels of occlusion, which is of special importance in gait as the human body is self-occluding. In summary, a new technique has been developed to automatically extract and describe a moving articulated shape, the human leg, and its potential for gait as a biometric has been shown.
---
paper_title: Infrared gait recognition based on wavelet transform and support vector machine
paper_content:
To detect the human body and remove noise caused by complex backgrounds, illumination variations and other objects, infrared thermal imaging was applied to collect gait video and an infrared thermal gait database was established in this paper. A multi-variable gait feature was extracted by a novel method combining an integral model and a simplified model. The wavelet transform, invariant moments and skeleton theory were also used to extract gait features, and a support vector machine was employed to classify gaits. The proposed method was applied to the infrared gait database and achieved a 78%-91% probability of correct recognition. The recognition rates were insensitive to holding a ball or carrying a package; however, wearing a heavy coat had a significant influence. Infrared thermal imaging shows potential for better description of the moving human body within image sequences.
---
paper_title: The humanID gait challenge problem: data sets, performance, and analysis
paper_content:
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding. It is this potential, arising from the adoption of the challenge problem, that represents a radical departure from traditional computer vision research methodology.
---
paper_title: Gait analysis for human identification in frequency domain
paper_content:
In this paper, we analyze the spatio-temporal human characteristic of moving silhouettes in frequency domain, and find key Fourier descriptors that have better discriminatory capability for recognition than the other Fourier descriptors. A large number of experimental results and analysis show that the proposed algorithm based on the key Fourier descriptors can not only greatly reduce the gait data dimensionality, but also lighten the computation cost, with a satisfactory CCR. Besides that, classification performance can be further improved using feature fusion.
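A minimal sketch of computing Fourier descriptors from a per-frame 1-D silhouette signal and keeping only the low-order components; which descriptors are the "key" discriminative ones is an empirical finding of the paper and is not encoded here.

    import numpy as np

    def fourier_descriptors(distance_signals, n_keep=15):
        # distance_signals: (n_frames, n_points) centroid-to-boundary signals.
        spectra = np.abs(np.fft.rfft(distance_signals, axis=-1))
        spectra = spectra / (spectra[..., :1] + 1e-9)      # normalize by the DC term
        return spectra[..., 1:n_keep + 1]                  # low-order descriptors only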
---
paper_title: Gait Analysis for Recognition and Classification
paper_content:
We describe a representation of gait appearance for the purpose of person identification and classification. This gait representation is based on simple features such as moments extracted from orthogonal view video silhouettes of human walking motion. Despite its simplicity, the resulting feature vector contains enough information to perform well on human identification and gender classification tasks. We explore the recognition behaviors of two different methods to aggregate features over time under different recognition tasks. We demonstrate the accuracy of recognition using gait video sequences collected over different days and times and under varying lighting environments. In addition, we show results for gender classification based our gait appearance features using a support-vector machine.
---
paper_title: Silhouette Analysis-Based Gait Recognition for Human Identification
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on principal component analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.
---
paper_title: The gait identification challenge problem: data sets and baseline algorithm
paper_content:
Recognition of people through gait analysis is an important research topic, with potential applications in video surveillance, tracking, and monitoring. Recognizing the importance of evaluating and comparing possible competing solutions to this problem, we previously introduced the HumanID challenge problem consisting of a set of experiments of increasing difficulty, a baseline algorithm, and a large set of video sequences (about 300 GB of data related to 452 sequences from 74 subjects) acquired to investigate important dimensions to this problem, such as variations due to viewpoint, footwear and walking surface. In this paper we present a detailed investigation of the baseline algorithm, quantify the dependence of the various covariates on gait-based identification, and update the previous baseline performance with optimized ones. We establish that the performance of the baseline algorithm is robust with respect to its various parameters. The overall identification performance is also stable with respect to the quality of the silhouettes. We find that the approximately lower 20% of the silhouette accounts for most of the recognition achieved. Viewpoint has barely statistically significant effect on identification rates, whereas footwear and surface-type does have significant effects with the effect due to surface-type being approximately 5 times that of shoe-type.
---
paper_title: Matching shape sequences in video with applications in human movement analysis
paper_content:
We present an approach for comparing two sequences of deforming shapes using both parametric models and nonparametric methods. In our approach, Kendall's definition of shape is used for feature extraction. Since the shape feature rests on a non-Euclidean manifold, we propose parametric models like the autoregressive model and autoregressive moving average model on the tangent space and demonstrate the ability of these models to capture the nature of shape deformations using experiments on gait-based human recognition. The nonparametric model is based on dynamic time-warping. We suggest a modification of the dynamic time-warping algorithm to include the nature of the non-Euclidean space in which the shape deformations take place. We also show the efficacy of this algorithm by its application to gait-based human recognition. We exploit the shape deformations of a person's silhouette as a discriminating feature and provide recognition results using the nonparametric model. Our analysis leads to some interesting observations on the role of shape and kinematics in automated gait-based person authentication.
---
paper_title: Statistical Motion Model Based on the Change of Feature Relationships: Human gait-based recognition
paper_content:
We offer a novel representation scheme for view-based motion analysis using just the change in the relational statistics among the detected image features, without the need for object models, perfect segmentation, or part-level tracking. We model the relational statistics using the probability that a random group of features in an image would exhibit a particular relation. To reduce the representational combinatorics of these relational distributions, we represent them in a Space of Probability Functions (SoPF), where the Euclidean distance is related to the Bhattacharya distance between probability functions. Different motion types sweep out different traces in this space. We demonstrate and evaluate the effectiveness of this representation in the context of recognizing persons from gait. In particular, on outdoor sequences: (1) we demonstrate the possibility of recognizing persons from not only walking gait, but running and jogging gaits as well; (2) we study recognition robustness with respect to view-point variation; and (3) we benchmark the recognition performance on a database of 71 subjects walking on soft grass surface, where we achieve around 90 percent recognition rates in the presence of viewpoint variation.
---
paper_title: A hidden Markov model based framework for recognition of humans from gait sequences
paper_content:
In this paper we propose a generic framework based on hidden Markov models (HMMs) for recognition of individuals from their gait. The HMM framework is suitable, because the gait of an individual can be visualized as his adopting postures from a set, in a sequence which has an underlying structured probabilistic nature. The postures that the individual adopts can be regarded as the states of the HMM and are typical to that individual and provide a means of discrimination. The framework assumes that, during gait, the individual transitions between N discrete postures or states but it is not dependent on the particular feature vector used to represent the gait information contained in the postures. The framework, thus, provides flexibility in the selection of the feature vector. The statistical nature of the HMM lends robustness to the model. In this paper we use the binarized background-subtracted image as the feature vector and use different distance metrics, such as those based on the L/sub 1/ and L/sub 2/ norms of the vector difference, and the normalized inner product of the vectors, to measure the similarity between feature vectors. The results we obtain are better than the baseline recognition rates reported before.
---
paper_title: Discriminative feature selection for hidden Markov models using Segmental Boosting
paper_content:
We address the feature selection problem for hidden Markov models (HMMs) in sequence classification. Temporal correlation in sequences often causes difficulty in applying feature selection techniques. Inspired by segmental k-means segmentation (SKS) [B. Juang and L. Rabiner, 1990], we propose Segmentally Boosted HMMs (SBHMMs), where the state-optimized features are constructed in a segmental and discriminative manner. The contributions are twofold. First, we introduce a novel feature selection algorithm, where the temporal dynamics are decoupled from the static learning procedure by assuming that the sequential data are piecewise independent and identically distributed. Second, we show that the SBHMM consistently improves traditional HMM recognition in various domains. The reduction of error compared to traditional HMMs ranges from 17% to 70% in American Sign Language recognition, human gait identification, lip reading, and speech recognition.
---
paper_title: HMM based hand gesture recognition: A review on techniques and approaches
paper_content:
Gesture is one of the most natural and expressive ways of communication between human and computer in a virtual reality system. We naturally use various gestures to express our intentions in everyday life. The hand gesture is one of the important methods of non-verbal communication for human beings, as the hand is freer in its movements and much more expressive than other body parts. Hand gesture recognition has a number of potential applications in human-computer interaction, machine vision, virtual reality, machine control in industry, and so on. As a gesture is a continuous motion over a sequential time series, HMMs (Hidden Markov Models) are a prominent recognition tool. The most important issue in hand gesture recognition is which input features best represent the characteristics of the moving hand gesture. This paper presents part of a literature review of ongoing research and findings on different techniques and approaches to vision-based gesture recognition using HMMs.
---
paper_title: Gait Analysis For Human Identification Through Manifold Learning and HMM
paper_content:
With the increasing demands of visual surveillance systems, human identification at a distance has gained more interest. Gait is often used as an unobtrusive biometric, offering the possibility of identifying individuals at a distance without any interaction with or cooperation from the subject. This paper presents a novel and effective method for automatic viewpoint and person identification using only the sequence of gait silhouettes. The gait silhouettes are nonlinearly transformed into a low-dimensional embedding, and the dynamics in the time-series images are modeled by an HMM in the corresponding embedding space. The experimental results demonstrate that the proposed algorithm represents encouraging progress toward automatic human identification.
---
paper_title: Improved gait recognition by gait dynamics normalization
paper_content:
Potential sources for gait biometrics can be seen to derive from two aspects: gait shape and gait dynamics. We show that improved gait recognition can be achieved after normalization of dynamics and focusing on the shape information. We normalize for gait dynamics using a generic walking model, as captured by a population hidden Markov model (pHMM) defined for a set of individuals. The states of this pHMM represent gait stances over one gait cycle and the observations are the silhouettes of the corresponding gait stances. For each sequence, we first use Viterbi decoding of the gait dynamics to arrive at one dynamics-normalized, averaged, gait cycle of fixed length. The distance between two sequences is the distance between the two corresponding dynamics-normalized gait cycles, which we quantify by the sum of the distances between the corresponding gait stances. Distances between two silhouettes from the same generic gait stance are computed in the linear discriminant analysis space so as to maximize the discrimination between persons, while minimizing the variations of the same subject under different conditions. The distance computation is constructed so that it is invariant to dilations and erosions of the silhouettes. This helps us handle variations in silhouette shape that can occur with changing imaging conditions. We present results on three different, publicly available, data sets. First, we consider the HumanID gait challenge data set, which is the largest gait benchmarking data set that is available (122 subjects), exercising five different factors, i.e., viewpoint, shoe, surface, carrying condition, and time. We significantly improve the performance across the hard experiments involving surface change and briefcase carrying conditions. Second, we also show improved performance on the UMD gait data set that exercises time variations for 55 subjects. Third, on the CMU Mobo data set, we show results for matching across different walking speeds. It is worth noting that there was no separate training for the UMD and CMU data sets.
---
paper_title: Factorial HMM and Parallel HMM for Gait Recognition
paper_content:
Information fusion offers a promising solution to the development of a high-performance classification system. In this paper, the problem of multiple gait features fusion is explored with the framework of the factorial hidden Markov model (FHMM). The FHMM has a multiple-layer structure and provides an alternative to combine several gait features without concatenating them into a single augmented feature. Besides, the feature concatenation is used to directly concatenate the features and the parallel HMM (PHMM) is introduced as a decision-level fusion scheme, which employs traditional fusion rules to combine the recognition results at decision level. To evaluate the recognition performances, McNemar's test is employed to compare the FHMM feature-level fusion scheme with the feature concatenation and the PHMM decision-level fusion scheme. Statistical numerical experiments are carried out on the Carnegie Mellon University motion of body and the Institute of Automation of the Chinese Academy of Sciences gait databases. The experimental results demonstrate that the FHMM feature-level fusion scheme and the PHMM decision-level fusion scheme outperform feature concatenation. The FHMM feature-level fusion scheme tends to perform better than the PHMM decision-level fusion scheme when only a few gait cycles are available for recognition.
---
paper_title: A tutorial on hidden Markov models and selected applications in speech recognition
paper_content:
This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. >
---
paper_title: Individual recognition from periodic activity using hidden Markov models
paper_content:
We present a method for recognizing individuals from their walking and running gait. The method is based on Hu moments of the motion segmentation in each frame. Periodicity is detected in such a sequence of feature vectors by minimizing the sum of squared differences, and the individual is recognized from the feature vector sequence using hidden Markov models. Comparisons are made to earlier periodicity detection approaches and to earlier individual recognition approaches. Experiments show the successful recognition of individuals (and their gait) in frontoparallel sequences.
---
paper_title: Gait-based recognition of humans using continuous HMMs
paper_content:
Gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. In this paper, we propose a view-based approach to recognize humans through gait. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of stances or key frames that occur during the walk cycle of an individual is chosen. Euclidean distances of a given image from this stance set are computed and a lower-dimensional observation vector is generated. A continuous hidden Markov model (HMM) is trained using several such lower-dimensional vector sequences extracted from the video. This methodology serves to compactly capture structural and transitional features that are unique to an individual. The statistical nature of the HMM renders overall robustness to gait representation and recognition. The human identification performance of the proposed scheme is found to be quite good when tested in natural walking conditions.
---
paper_title: Analyzing Human Movements from Silhouettes Using Manifold Learning
paper_content:
A novel method for learning and recognizing sequential image data is proposed, and promising applications to vision-based human movement analysis are demonstrated. To find more compact representations of high-dimensional silhouette data, we exploit locality preserving projections (LPP) to achieve low-dimensional manifold embedding. Further, we present two kinds of methods to analyze and recognize learned motion manifolds. One is correlation matching based on the Hausdorff distance, and the other is a probabilistic method using continuous hidden Markov models (HMMs). Encouraging results are obtained in two representative experiments in the areas of human activity recognition and gait-based human identification.
---
paper_title: The humanID gait challenge problem: data sets, performance, and analysis
paper_content:
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding. It is this potential, arising from the adoption of the challenge problem, that represents a radical departure from traditional computer vision research methodology.
---
paper_title: Identification of humans using gait
paper_content:
We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating what we call the frame to exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability B) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition. The performance of the methods is illustrated using several databases.
---
paper_title: Silhouette Analysis-Based Gait Recognition for Human Identification
paper_content:
Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on principal component analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.
---
paper_title: General Tensor Discriminant Analysis and Gabor Features for Gait Recognition
paper_content:
Traditional image representations are not suited to conventional classification methods such as the linear discriminant analysis (LDA) because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of the two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA, compared with existing preprocessing methods such as the principal components analysis (PCA) and 2DLDA, include the following: 1) the USP is reduced in subsequent classification by, for example, LDA, 2) the discriminative information in the training tensors is preserved, and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of GTDA converges, whereas that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) GaborD is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS, and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS, or GaborSD image representation, then using GTDA to extract features and, finally, using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the University of South Florida (USF) HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
---
paper_title: Active energy image plus 2DLPP for gait recognition
paper_content:
This paper proposes a novel active energy image (AEI) method for gait recognition. Existing human gait feature representation methods, however, usually suffer from low quality of human silhouettes and insufficient dynamic characteristics. To this end, we apply the proposed AEI for gait representation. Given a gait silhouette sequence, we first extract the active regions by calculating the difference of two adjacent silhouette images, and construct an AEI by accumulating these active regions. Then, we project each AEI to a low-dimensional feature subspace via the newly proposed two-dimensional locality preserving projections (2DLPP) method to further improve the discriminative power of the extracted features. Experimental results on the CASIA gait database (dataset B and C) demonstrate the effectiveness of the proposed method.
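The AEI construction itself is simple enough to sketch directly; the numpy fragment below accumulates the absolute differences of consecutive aligned silhouettes (the 2DLPP projection step of the paper is omitted).

```python
# Sketch of the active energy image (AEI): accumulate absolute differences of
# consecutive silhouettes so that mostly the moving (active) regions contribute.
import numpy as np

def active_energy_image(silhouettes):
    """silhouettes: (T, H, W) array of aligned binary silhouettes (0/1)."""
    seq = np.asarray(silhouettes, dtype=float)
    active = np.abs(np.diff(seq, axis=0))        # per-frame active regions
    return active.mean(axis=0)                   # accumulate and normalise

# toy usage
rng = np.random.default_rng(2)
sils = (rng.random((20, 128, 88)) > 0.5).astype(np.uint8)
aei = active_energy_image(sils)
print(aei.shape, aei.min(), aei.max())
```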
---
paper_title: The humanID gait challenge problem: data sets, performance, and analysis
paper_content:
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed is the possible meta-analysis and the greater is the understanding. It is this potential, arising from the adoption of this challenge problem, that represents a radical departure from traditional computer vision research methodology.
---
paper_title: Adaptive Fusion of Gait and Face for Human Identification in Video
paper_content:
Most work on multi-biometric fusion is based on static fusion rules which cannot respond to the changes of the environment and the individual users. This paper proposes adaptive multi-biometric fusion, which dynamically adjusts the fusion rules to suit the real-time external conditions. As a typical example, the adaptive fusion of gait and face in video is studied. Two factors that may affect the relationship between gait and face in the fusion are considered, i.e., the view angle and the subject-to-camera distance. Together they determine the way gait and face are fused at an arbitrary time. Experimental results show that the adaptive fusion performs significantly better than not only single biometric traits, but also those widely adopted static fusion rules including SUM, PRODUCT, MIN, and MAX.
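As a rough illustration of adaptive (rather than static) fusion, the sketch below makes the fusion weights a function of the subject-to-camera distance; the linear weighting function, the thresholds d_near/d_far and the score normalisation are assumptions for illustration, not the rule learned in the paper.

```python
# Conceptual sketch of adaptive score-level fusion: trust the face matcher
# more when the subject is close, and the gait matcher more at a distance.
def fused_score(gait_score, face_score, distance_m, d_near=3.0, d_far=15.0):
    """Scores are assumed to be normalised similarity values in [0, 1]."""
    t = min(max((distance_m - d_near) / (d_far - d_near), 0.0), 1.0)
    w_face = 1.0 - t          # close subject -> face weight high
    w_gait = t                # far subject   -> gait weight high
    return w_gait * gait_score + w_face * face_score

print(fused_score(0.7, 0.9, distance_m=4.0))   # face-dominated
print(fused_score(0.7, 0.4, distance_m=14.0))  # gait-dominated
```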
---
paper_title: Uncorrelated discriminant simplex analysis for view-invariant gait signal computing
paper_content:
Human gait is a useful biometric signature and has recently gained growing interest from computer vision researchers. This interest is strongly driven by the need for automatic human identification and gender recognition at a distance in many surveillance applications. Existing human gait analysis methods, however, are sensitive to the view of the gait sequences, and their performances are poor when the view of the training gait sequences is different from that of the testing ones. In this paper, we propose a new supervised manifold learning algorithm, called uncorrelated discriminant simplex analysis (UDSA), for view-invariant gait signal computing. The aim of UDSA is to seek a mapping to project human gait sequences collected from different views into a low-dimensional feature subspace, such that intraclass geometrical structures are preserved and interclass distances of gait sequences are maximized simultaneously. Moreover, we impose an uncorrelated constraint to make the extracted features statistically uncorrelated. Experimental results are presented to demonstrate the efficacy of the proposed approach.
---
paper_title: Infrared gait recognition based on wavelet transform and support vector machine
paper_content:
To detect the human body and remove noise caused by complex backgrounds, illumination variations and other objects, infrared thermal imaging was applied to collect gait video and an infrared thermal gait database was established in this paper. A multi-variable gait feature was extracted using a novel method that combines an integral model with a simplified model. In addition, the wavelet transform, invariant moments and skeleton theory were used to extract gait features. A support vector machine was employed to classify gaits. The proposed method was applied to the infrared gait database and achieved a correct recognition probability of 78%-91%. The recognition rates were insensitive to subjects holding a ball or carrying a package, but wearing a heavy coat had a significant influence. Infrared thermal imaging thus shows potential for a better description of the moving human body within image sequences.
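A small sketch of the classification stage under stated assumptions: sub-band energies from a one-level 2-D wavelet decomposition (PyWavelets) serve as features for a scikit-learn SVM; the paper's full multi-variable features (invariant moments, skeleton) are not reproduced.

```python
# Wavelet sub-band energies + SVM, as a stand-in for the paper's feature set.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(image):
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), "haar")
    return np.array([np.sum(b ** 2) for b in (cA, cH, cV, cD)])

# toy data: random images standing in for segmented infrared silhouettes
rng = np.random.default_rng(3)
X = np.array([wavelet_energy_features(rng.random((64, 48))) for _ in range(40)])
y = np.repeat([0, 1], 20)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```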
---
| Title: A Review of Vision-Based Gait Recognition Methods for Human Identification
Section 1: Introduction
Description 1: Introduce the importance of individual identification, various biometric technologies, and the unique advantages of gait recognition. Outline the general framework of automatic gait recognition and the scope of the paper.
Section 2: Model-based Approaches
Description 2: Discuss model-based approaches for gait recognition, which involve obtaining static or dynamic body parameters through modeling or tracking body components. Highlight their advantages, challenges, and primary methods discussed in this section.
Section 3: Model-Free Approaches
Description 3: Explain model-free approaches that focus on silhouettes or overall motion rather than modeling specific body parts. Discuss their relative computational efficiency and robustness, as well as detailed methods within this category.
Section 4: Feature Dimensionality Reduction
Description 4: Address the necessity of reducing the high dimensionality of gait features. Review various linear and non-linear dimensionality reduction methods typically used in gait recognition.
Section 5: Direct Classification
Description 5: Present methods that classify gait based on single representations or key frames extracted from gait sequences. Introduce various classifiers such as nearest neighbor and support vector machines.
Section 6: Similarity of Temporal Sequences
Description 6: Discuss classification methods that measure the similarity of temporal gait sequences. Include methods like dynamic time warping (DTW) and frequency analysis.
Section 7: State-space Model: HMM
Description 7: Elaborate on the use of Hidden Markov Models (HMM) in gait recognition. Explain how HMMs model gait as a sequence of states and review various HMM-based recognition methods.
Section 8: Public Gait Datasets
Description 8: Describe standard publicly available gait datasets used for comparing and evaluating gait recognition algorithms. Provide details about notable datasets such as the USF Dataset, CMU Mobo Dataset, Southampton Dataset, and CASIA Gait Dataset.
Section 9: Conclusions and Future work
Description 9: Summarize the key points of the paper, the major issues in gait recognition, and the comparative strengths of different approaches. Discuss future research directions and potential improvements in the field. |
As time goes by: Constraint Handling Rules - A survey of CHR research from 1998 to 2007 | 10 | ---
paper_title: Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) is both a theoretical formalism based on logic and a practical programming language based on rules. This book, written by the creator of CHR, describes the theory of CHR and how to use it in practice. It is supported by a website containing teaching materials, online demos, and free downloads of the language. After a basic tutorial, the author describes in detail the CHR language and discusses guaranteed properties of CHR programs. The author then compares CHR with other formalisms and languages and illustrates how it can capture their essential features. Finally, larger programs are introduced and analyzed in detail. The book is ideal for graduate students and lecturers, and for more experienced programmers and researchers, who can use it for self-study. Exercises with selected solutions, and bibliographic remarks are included at the ends of chapters. The book is the definitive reference on the subject.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: Projection in adaptive Constraint Handling
paper_content:
Constraint solving in dynamic environments requires an immediate adaptation of the solutions when the constraint problems change. Constraint solving with Constraint Handling Rules (CHR) is extended with incremental algorithms, thus supporting the solution of dynamic constraint satisfaction problems (DCSPs). Unfortunately, constraint processing with CHR introduces a lot of new variables which require additional memory space and reduce run-time performance. Most of the variables may be eliminated without any loss of information. Thus, memory may be kept rather small and run-time performance may be improved. This paper describes the use of projection with CHR in order to eliminate irrelevant variable bindings and maintain the constraint store quite small. In detail, some projection algorithms are presented to eliminate variables which are introduced during constraint processing with CHR. Projection is called early projection if it is applied together with each rule application, thus eliminating recently introduced irrelevant variable bindings while keeping the derived constraint store quite small. This kind of projection is well-suited when solving Dynamic Constraint Satisfaction Problems, especially after constraint deletion, when many superfluous variable bindings have to be deleted as well. Consequently, the modifications that are required for an adaptation are reduced. This may result in an improved performance of the adaptation algorithms; better performance for non-adaptive constraint processing with CHR is also expected.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: Essentials of Constraint Programming
paper_content:
Table of contents: 1. Introduction. Part I, Constraint Programming: 2. Algorithm = Logic + Control; 3. Preliminaries of Syntax and Semantics; 4. Logic Programming; 5. Constraint Logic Programming; 6. Concurrent Constraint Logic Programming; 7. Constraint Handling Rules. Part II, Constraint Systems: 8. Constraint Systems and Constraint Solvers; 9. Boolean Algebra B; 10. Rational Trees RT; 11. Linear Polynomial Equations R; 12. Finite Domains FD; 13. Non-linear Equations I. Part III, Applications: 14. Market Overview; 15. Optimal Sender Placement for Wireless Communication; 16. The Munich Rent Advisor; 17. University Course Timetabling. Part IV, Appendix: A. Foundations from Logic; A.1 First-Order Logic: Syntax and Semantics; A.2 Basic Calculi and Normal Forms (A.2.1 Substitutions; A.2.2 Negation Normal Form and Prenex Form; A.2.3 Skolemization; A.2.4 Clauses; A.2.5 Resolution). List of Figures. References.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: A linear-logic semantics for constraint handling rules
paper_content:
One of the attractive features of the Constraint Handling Rules (CHR) programming language is its declarative semantics where rules are read as formulae in first-order predicate logic. However, the more CHR is used as a general-purpose programming language, the more the limitations of that kind of declarative semantics in modelling change become apparent. We propose an alternative declarative semantics based on (intuitionistic) linear logic, establishing strong theorems on both soundness and completeness of the new declarative semantics w.r.t. operational semantics.
---
paper_title: A Unified Semantics for Constraint Handling Rules in Transaction Logic
paper_content:
Reasoning on Constraint Handling Rules (CHR) programs and their executional behaviour is often ad-hoc and outside of a formal system. This is a pity, because CHR subsumes a wide range of important automated reasoning services. Mapping CHR to Transaction Logic (TR) combines CHR rule specification, CHR rule application, and reasoning on CHR programs and CHR derivations inside one formal system which is executable. This new TR semantics obviates the need for disjoint declarative and operational semantics.
---
paper_title: Unfolding in CHR
paper_content:
Program transformation is an appealing technique which allows to improve run-time efficiency, space-consumption and more generally to optimize a given program. Essentially it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. One of the basic operations which is used by most program transformation systems is unfolding which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages and, to the best of our knowledge, no one has considered unfolding of CHR programs. This paper is a first attempt to define a correct unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning.
---
paper_title: Aggregates for Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) [2,3,4] is a general-purpose programming language based on committed-choice, multi-headed, guarded multiset rewrite rules. As the head of each CHR rule only considers a fixed number of constraints, any form of aggregation over unbounded parts of the constraint store necessarily requires explicit encoding, using auxiliary constraints and rules.
---
paper_title: Abstract Critical Pairs and Confluence of Arbitrary Binary Relations
paper_content:
In a seminal paper, Huet introduced abstract properties of term rewriting systems, and the confluence analysis of terminating term rewriting systems by critical pairs computation. In this paper, we provide an abstract notion of critical pair for arbitrary binary relations and context operators. We show how this notion applies to the confluence analysis of various transition systems, ranging from classical term rewriting systems to production rules with constraints and partial control strategies, such as the Constraint Handling Rules language CHR. Interestingly, we show in all these cases that some classical critical pairs can be disregarded. The crux of these analyses is the ability to compute critical pairs between states built with general context operators, on which a bounded, not necessarily well-founded, ordering is assumed.
---
paper_title: Probabilistic Constraint Handling Rules
paper_content:
Classical Constraint Handling Rules (CHR) provide a powerful tool for specifying and implementing constraint solvers and programs. The rules of CHR rewrite constraints (non-deterministically) into simpler ones until they are solved. In this paper we introduce an extension of Constraint Handling Rules (CHR), namely Probabilistic CHRs (PCHR). These allow the probabilistic “weighting” of rules, specifying the probability of their application. In this way we are able to formalise various randomised algorithms such as for example Simulated Annealing. The implementation is based on source-to-source transformation (STS). Using a recently developed prototype for STS for CHR, we could implement probabilistic CHR in a concise way with a few lines of code in less than one hour.
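The rule-selection aspect of probabilistic weighting can be mimicked in a few lines: among the rules whose guards hold, one is committed to with probability proportional to its weight. The sketch below only illustrates this weighted choice; the rule names and weights are made up, and no CHR matching or source-to-source transformation is modelled.

```python
# Toy weighted committed choice among applicable rules.
import random

def pick_rule(applicable, weights, rng=random.Random(0)):
    """applicable: list of rule names; weights: matching list of weights."""
    return rng.choices(applicable, weights=weights, k=1)[0]

# e.g. a random-walk style program: "step_left" and "step_right" with 1:3 odds
counts = {"step_left": 0, "step_right": 0}
for _ in range(1000):
    counts[pick_rule(["step_left", "step_right"], [1, 3])] += 1
print(counts)   # roughly a 1:3 split
```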
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: A compositional semantics for CHR
paper_content:
Constraint Handling Rules (CHR) are a committed-choice declarative language which has been designed for writing constraint solvers. A CHR program consists of multi-headed guarded rules which allow one to rewrite constraints into simpler ones until a solved form is reached. CHR has received considerable attention, both from the practical and from the theoretical side. Nevertheless, due to the use of multi-headed clauses, there are several aspects of the CHR semantics which have not been clarified yet. In particular, no compositional semantics for CHR has been defined so far. In this paper we introduce a fix-point semantics which characterizes the input/output behavior of a CHR program and which is and-compositional, that is, which allows one to retrieve the semantics of a conjunctive query from the semantics of its components. Such a semantics can be used as a basis to define incremental and modular analysis and verification tools.
---
paper_title: Observable confluence for constraint handling rules
paper_content:
Constraint Handling Rules (CHR) are a powerful rule based language for specifying constraint solvers. Critical for any rule based language is the notion of confluence, and for terminating CHR programs there is a decidable test for confluence. But many CHR programs that are in practice confluent fail this confluence test. The problem is that the states that illustrate non-confluence are not observable from the initial goals of interest. In this paper we introduce the notion of observable confluence, a more general notion of confluence which takes into account whether states are observable. We devise a test for observable confluence which allows us to verify observable confluence for a range of CHR programs dealing with agents, type systems, and the union-find algorithm.
---
paper_title: Abstract interpretation for constraint handling rules
paper_content:
Program analysis is essential for the optimized compilation of Constraint Handling Rules (CHRs) as well as the inference of behavioral properties such as confluence and termination. Up to now all program analyses for CHRs have been developed in an ad hoc fashion.In this work we bring the general program analysis methodology of abstract interpretation to CHRs: we formulate an abstract interpretation framework over the call-based operational semantics of CHRs. The abstract interpretation framework is non-obvious since it needs to handle the highly non-deterministic execution of CHRs. The use of the framework is illustrated with two instantiations: the CHR-specific late storage analysis and the more generally known groundness analysis. In addition, we discuss optimizations based on these analyses and present experimental results.
---
paper_title: Compiling Constraint Handling Rules for Efficient Tabled Evaluation
paper_content:
Tabled resolution, which alleviates some of Prolog's termination problems, makes it possible to create practical applications from high-level declarative specifications. Constraint Handling Rules (CHR) is an elegant framework for implementing constraint solvers from high-level specifications, and is available in many Prolog systems. However, applications combining the power of these two declarative paradigms have been impractical since traditional CHR implementations interact poorly with tabling. In this paper we present a new (set-based) semantics for CHR which enables efficient integration with tabling. The new semantics coincides with the traditional (multi-set-based) semantics for a large class of CHR programs. We describe CHRd, an implementation based on the new semantics. CHRd uses a distributed constraint store that can be directly represented in tables. Although motivated by tabling, CHRd works well also on non-tabled platforms. We present experimental results which show that, relative to traditional implementations, CHRd performs significantly better on tabled programs, and yet shows comparable results on non-tabled benchmarks.
---
paper_title: The Refined Operational Semantics of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHRs) are a high-level rule-based programming language commonly used to write constraint solvers. The theoretical operational semantics for CHRs is highly non-deterministic and relies on writing confluent programs to have a meaningful behaviour. Implementations of CHRs use an operational semantics which is considerably finer than the theoretical operational semantics, but is still non-deterministic (from the user’s perspective). This paper formally defines this refined operational semantics and proves it implements the theoretical operational semantics. It also shows how to create a (partial) confluence checker capable of detecting programs which are confluent under this semantics, but not under the theoretical operational semantics. This supports the use of new idioms in CHR programs.
---
paper_title: Guard and Continuation Optimization for Occurrence Representations of CHR
paper_content:
Constraint Handling Rules (CHR) is a high-level rule-based language extension, commonly embedded in Prolog. We introduce a new occurrence representation of CHR programs, and a new operational semantics for occurrence representations, equivalent to the widely implemented refined operational semantics. The occurrence representation allows in a natural way to express guard and continuation optimizations, which remove redundant guards and eliminate redundant code for subsumed occurrences. These optimizations allow CHR programmers to write self-documented rules with a clear logical reading. We show correctness of both optimizations, present an implementation in the K.U.Leuven CHR compiler, and discuss speedup measurements.
---
paper_title: User-definable rule priorities for CHR
paper_content:
This paper introduces CHRrp: Constraint Handling Rules with user-definable rule priorities. CHRrp offers flexible execution control which is lacking in CHR. A formal operational semantics for the extended language is given and is shown to be an instance of the theoretical operational semantics of CHR. It is discussed how the CHRrp semantics influences confluence results. A translation scheme for CHRrp programs with static rule priorities into (regular) CHR is presented. The translation is proven correct and benchmark results are given. CHRrp is related to priority systems in other constraint programming and rule-based languages.
---
paper_title: On Completion of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) is a high-level language for writing constraint solvers either from scratch or by modifying existing solvers. An important property of any constraint solver is confluence: The result of a computation should be independent from the order in which constraints arrive and in which rules are applied. In previous work [1], a sufficient and necessary condition for the confluence of terminating CHR programs was given by adapting and extending results about conditional term rewriting systems. In this paper we investigate so-called completion methods that make a non-confluent CHR program confluent by adding new rules. As it turns out, completion can also exhibit inconsistency of a CHR program. Moreover, as shown in this paper, completion can be used to define new constraints in terms of already existing constraints and to derive constraint solvers for them.
---
paper_title: Abstract Critical Pairs and Confluence of Arbitrary Binary Relations
paper_content:
In a seminal paper, Huet introduced abstract properties of term rewriting systems, and the confluence analysis of terminating term rewriting systems by critical pairs computation. In this paper, we provide an abstract notion of critical pair for arbitrary binary relations and context operators. We show how this notion applies to the confluence analysis of various transition systems, ranging from classical term rewriting systems to production rules with constraints and partial control strategies, such as the Constraint Handling Rules language CHR. Interestingly, we show in all these cases that some classical critical pairs can be disregarded. The crux of these analyses is the ability to compute critical pairs between states built with general context operators, on which a bounded, not necessarily well-founded, ordering is assumed.
---
paper_title: Integration and Optimization of Rule-based Constraint Solvers
paper_content:
One lesson learned from practical constraint solving applications is that constraints are often heterogeneous. Solving such constraints requires a collaboration of constraint solvers. In this paper, we introduce a methodology for the tight integration of CHR constraint programs into one such program. CHR is a high-level rule-based language for writing constraint solvers and reasoning systems. A constraint solver is well-behaved if it is terminating and confluent. When merging constraint solvers, this property may be lost. Based on previous results on CHR program analysis and transformation we show how to utilize completion to regain well-behavedness. We identify a class of solvers whose union is always confluent and we show that for preserving termination such a class is hard to find. The merged and completed constraint solvers may contain redundant rules. Utilizing the notion of operational equivalence, which is decidable for well-behaved CHR programs, we present a method to detect redundant rules in a CHR program.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: Proving termination of CHR in Prolog : A transformational approach
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice, logic programming language. It is constraint-based and has guarded rules that rewrite multisets of atomic formulas [1]. Its simple syntax and semantics make it well-suited for implementing custom constraint solvers. Despite the amount of work directed to CHR, not much has been done on termination analysis. To the best of our knowledge, there were no attempts made to automate termination proofs. The first and, until now, only contribution to termination analysis of CHR is reported in [2] and shows termination of CHR programs under the theoretical semantics [1] of CHR. Termination is shown, using a ranking function, mapping sets of constraints to a well-founded order. A ranking condition, on the level of the rules, implies termination. Although termination conditions in CHR take a different form than in Logic Programs (LP) and Term-Rewrite Systems (TRS), [2] shows that achievements from the work on termination of LP and TRS are relevant and adaptable to the CHR context. In this paper, we present a termination-preserving transformation of CHR to Prolog. This allows the direct reuse of termination proof methods from LP and TRS for CHR, yielding the first fully automatic termination proving for CHR. We implemented the transformation and used existing termination tools for LP and TRS on a set of CHR programs to demonstrate the usefulness of our approach. In [3], we formalize the transformation and prove soundness w.r.t. termination.
---
paper_title: A new approach to termination analysis of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice constraint programming language (see [2]). It is a rule-based language, in which multisets of atomic constraints are rewritten using guarded rules. It has a simple syntax and declarative semantics, and is very suitable for implementing constraint solvers. Although the language is strongly related to Logic Programming (LP) and to a lesser extent also to Term-Rewrite Systems (TRS), termination analysis of CHR has received little attention. To the best of our knowledge, the only contribution so far is reported in [3]. This study is limited to CHR programs with only one type of rule: the so-called 'simplification rules'. The work shows that, for this class of programs, termination analysis techniques developed for LP (see [1]) can be adapted to CHR. In this paper, we present a new approach to termination analysis of CHR which is applicable to a much larger class of CHR programs. We propose a new termination condition and show its applicability to CHR programs with rules that are not only of the simplification type. We have successfully tested the condition on a benchmark of programs, using a prototype analyser.
---
paper_title: The computational power and complexity of constraint handling rules
paper_content:
Constraint Handling Rules (CHR) is a high-level rule-based programming language which is increasingly used for general-purpose programming. We introduce the CHR machine, a model of computation based on the operational semantics of CHR. Its computational power and time complexity properties are compared to those of the well-understood Turing machine and Random Access Memory machine. This allows us to prove the interesting result that every algorithm can be implemented in CHR with the best known time and space complexity. We also investigate the practical relevance of this result and the constant factors involved. Finally we expand the scope of the discussion to other (declarative) programming languages.
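To make the notion of a CHR-style computation concrete, here is a toy committed-choice multiset-rewriting loop run on the classic gcd program; it is a naive illustrative sketch in Python (no indexing, no attention to the complexity results of the paper).

```python
# A toy committed-choice multiset-rewriting loop in the spirit of a "CHR
# machine", run on the classic gcd program
#   gcd(0) <=> true.
#   gcd(N), gcd(M) <=> 0 < N, N =< M | gcd(M mod N), gcd(N).
def gcd_store(numbers):
    store = [n for n in numbers if n != 0]          # gcd(0) <=> true
    changed = True
    while changed:
        changed = False
        for i in range(len(store)):
            for j in range(len(store)):
                n, m = store[i], store[j]
                if i != j and 0 < n <= m:
                    store[j] = m % n                # gcd(M) becomes gcd(M mod N)
                    store = [x for x in store if x != 0]
                    changed = True
                    break
            if changed:
                break
    return store

print(gcd_store([12, 18, 30]))   # -> [6]
```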
---
paper_title: Logical Algorithms
paper_content:
Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language Lollimon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic.
---
paper_title: Memory Reuse for CHR
paper_content:
Two Constraint Handling Rules compiler optimizations that drastically reduce the memory footprint of CHR programs are introduced. The reduction is the result of reusing suspension terms, the internal CHR constraint representation, and avoiding the overhead of constraint removal followed by insertion. The optimizations are defined formally and their correctness is proved. Both optimizations were implemented in the K.U.Leuven CHR system. Significant memory savings and speedups were measured on classical and well-known benchmarks.
---
paper_title: Complexity of a CHR solver for existentially quantified conjunctions of equations over trees
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice, rule-based language. One of the first CHR programs is the classic constraint solver for syntactic equality of rational trees that performs unification. We first prove its exponential complexity in time and space for non-flat equations and deduce from this proof a quadratic complexity for flat equations. We then present an extended CHR solver for solving existentially quantified conjunctions of non-flat equations in the theory of finite or infinite trees. We reach a quadratic complexity by first flattening the equations and introducing new existentially quantified variables, then using the classic solver, and finally eliminating particular equations and quantified variables.
---
paper_title: As Time Goes By II: More Automatic Complexity Analysis of Concurrent Rule Programs
paper_content:
In previous papers we showed that from a suitable termination order (called ranking) one can automatically compute the worst-case time complexity of a CHR constraint simplification rule program from its program text. We combined the worst-case derivation length of a query predicted from its ranking with a worst-case estimate of the number and cost of rule application attempts and the cost of rule applications to obtain the desired meta-theorem. Here we generalize the approach presented in these papers and use it to analyse several non-trivial rule-based constraint solver programs. These results also hold for naive CHR implementations. We also present empirical evidence through test runs that the actual run-time of a state-of-the-art CHR implementation is much better due to optimizations like indexing.
---
paper_title: Logical rules for a lexicographic order constraint solver
paper_content:
We give an executable specification of the global constraint of lexicographic order in the Constraint Handling Rules (CHR) language. In contrast to previous approaches, the implementation is short and concise without giving up on linear worst-case time complexity. It is incremental and concurrent by the nature of CHR. It is provably correct and confluent. It is independent of the underlying constraint system, and therefore not restricted to finite domains. We also show completeness of constraint propagation, i.e. that all possible consequences of the constraint are generated by the implementation. Our algorithm is encoded by three pairs of rules, two corresponding to base cases, two performing the obvious traversal of the sequences to be compared and two covering a not so obvious special case when the lexicographic constraint has a unique solution.
---
paper_title: Dijkstra’s algorithm with Fibonacci heaps: An executable description
paper_content:
We construct a readable, compact and efficient implementation of Dijkstra’s shortest path algorithm and Fibonacci heaps using Constraint Handling Rules (CHR), which is increasingly used as a high-level rule-based general-purpose programming language. We measure its performance in different CHR systems, investigating both the theoretical asymptotic complexity and the constant factors realized in practice.
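For comparison with the CHR formulation, here is a compact imperative Dijkstra in Python using the standard heapq module (a binary heap with lazy deletion rather than a Fibonacci heap; the asymptotics therefore differ from the paper's, but the algorithmic structure is the same).

```python
# Dijkstra's shortest path algorithm with a binary heap and lazy deletion.
import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbour, weight); returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 7), ("c", 3)], "c": [("b", 2), ("d", 8)], "b": [("d", 1)]}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 5, 'c': 3, 'd': 6}
```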
---
paper_title: Optimizing compilation of CHR with rule priorities
paper_content:
Constraint Handling Rules were recently extended with user-definable rule priorities. This paper shows how this extended language can be efficiently compiled into the underlying host language. It extends previous work by supporting rules with dynamic priorities and by introducing various optimizations. The effects of the optimizations are empirically evaluated and the new compiler is compared with the state-of-the-art K.U. Leuven CHR system.
---
paper_title: A prolog constraint handling rules compiler and runtime system
paper_content:
The most recent and advanced implementation of constraint handling rules (CHR) is introduced in a logic programming language. The Prolog implementation consists of a runtime system and a compiler. The runtime system utilizes attributed variables for the realization of the constraint store with efficient retrieval and update mechanisms. Rules describing the interactions between constraints are compiled into Prolog clauses by a compiler, the core of which comprises a small number of compact code generating templates in the form of definite clause grammar rules.
---
paper_title: Constraint Handling Rules and tabled execution
paper_content:
Both Constraint Handling Rules (CHR) and tabling – as implemented in XSB – are powerful enhancements of Prolog systems, based on fix point computation. Until now they have only been implemented in separate systems. This paper presents the work involved in porting a CHR system to XSB and in particular the technical issues related to the integration of CHR with tabled resolution. These issues include call abstraction, answer projection, entailment checking, answer combination and tabled constraint store representations. Different optimizations related to tabling constraints are evaluated empirically. The integration requires no changes to the tabling engine. We also show that the performance of CHR programs without tabling is not affected. Now, with the combined power of CHR and tabling, it is possible to easily introduce constraint solvers in applications using tabling, or to use tabling in constraint solvers.
---
paper_title: The K.U.Leuven CHR system: implementation and application
paper_content:
We present the K.U.Leuven CHR system: what started out as a validation of a new attributed variables implementation, has become a part of three different Prolog systems with an increasing userbase. In this paper we highlight the particular implementation aspects of the K.U.Leuven CHR system, and a few CHR applications that we have built with our system.
---
paper_title: CHR for XSB
paper_content:
XSB is a highly declarative programming system consisting of Prolog extended with tabled resolution. It is useful for many tasks, some of which require constraint solving. Thus flexible and high-level support for constraint systems is required. Constraint Handling Rules is exactly such a high-level language embedded in Prolog for writing application-tailored constraint solvers. In this paper we present the integration of a CHR system in the XSB system and especially our findings on how to integrate CHR with tabled resolution, such as how to deal with issues like call abstraction of constraints, constraint store merging, answer store projection and constraint store representations for tabling. We illustrate the power of the XSB-CHR combination with two examples in the field of model checking. It is indeed possible to quickly write application-specific constraint solvers, experiment with them and achieve a reasonable performance and high readability. The combination of XSB’s goal-driven fixpoint execution model with CHR’s committed-choice bottom-up approach has proven not only feasible, but considerably useful as well.
---
paper_title: Optimizing compilation of constraint handling rules in HAL
paper_content:
In this paper we discuss the optimizing compilation of Constraint Handling Rules (CHRs). CHRs are a multi-headed committed choice constraint language, commonly applied for writing incremental constraint solvers. CHRs are usually implemented as a language extension that compiles to the underlying language. In this paper we show how we can use different kinds of information in the compilation of CHRs to obtain access efficiency, and a better translation of the CHR rules into the underlying language, which in this case is HAL. The kinds of information used include the types, modes, determinism, functional dependencies and symmetries of the CHR constraints. We also show how to analyze CHR programs to determine this information about functional dependencies, symmetries and other kinds of information supporting optimizations.
---
paper_title: Adaptive Constraint Handling with CHR in Java
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) is introduced in the imperative object-oriented programming language Java. The presented Java implementation consists of a compiler and a run-time system, all implemented in Java. The run-time system implements data structures like sparse bit vectors, logical variables and terms as well as an adaptive unification and an adaptive entailment algorithm. Approved technologies like attributed variables for constraint storage and retrieval as well as code generation for each head constraint are used. Also implemented are theoretically sound algorithms for adapting of rule derivations and constraint stores after arbitrary constraint deletions. The presentation is rounded off with some novel applications of CHR in constraint processing: simulated annealing for the n queens problem and intelligent backtracking for some SAT benchmark problems.
---
paper_title: Intelligent search strategies based on adaptive Constraint Handling Rules
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSP). This presentation compares an improved version of conflict-directed backjumping and two variants of dynamic backtracking with respect to chronological backtracking on some of the AIM instances which are a benchmark set of random 3-SAT problems. A CHR implementation of a Boolean constraint solver combined with these different search strategies in Java is thus being compared with a CHR implementation of the same Boolean constraint solver combined with chronological backtracking in SICStus Prolog. This comparison shows that the addition of “intelligence” to the search process may reduce the number of search steps dramatically. Furthermore, the runtime of their Java implementations is in most cases faster than the implementations of chronological backtracking. More specifically, conflict-directed backjumping is even faster than the SICStus Prolog implementation of chronological backtracking, although our Java implementation of CHR lacks the optimisations made in the SICStus Prolog system.
---
paper_title: CHR for imperative host languages
paper_content:
In this paper, we address the different conceptual and technical difficulties encountered when embedding CHR into an imperative host language. We argue that a tight, natural integration leads to a powerful programming language extension, intuitive to both CHR and imperative programmers. We show how to compile CHR to highly optimized imperative code. To this end, we first review the well-established CHR compilation scheme, and survey the large body of possible optimizations. We then show that this scheme, when used for compilation to imperative target languages, leads to stack overflows. We therefore introduce new optimizations that considerably improve the performance of recursive CHR programs. Rules written using tail calls are even guaranteed to run in constant space. We implemented systems for both Java and C, following the language design principles and compilation scheme presented in this paper, and show that our implementations outperform other state-of-the-art CHR compilers by several orders of magnitude.
---
paper_title: Adaptive Constraint Handling with CHR in Java
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) is introduced in the imperative object-oriented programming language Java. The presented Java implementation consists of a compiler and a run-time system, all implemented in Java. The run-time system implements data structures like sparse bit vectors, logical variables and terms as well as an adaptive unification and an adaptive entailment algorithm. Approved technologies like attributed variables for constraint storage and retrieval as well as code generation for each head constraint are used. Also implemented are theoretically sound algorithms for adapting of rule derivations and constraint stores after arbitrary constraint deletions. The presentation is rounded off with some novel applications of CHR in constraint processing: simulated annealing for the n queens problem and intelligent backtracking for some SAT benchmark problems.
---
paper_title: The Refined Operational Semantics of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHRs) are a high-level rule-based programming language commonly used to write constraint solvers. The theoretical operational semantics for CHRs is highly non-deterministic and relies on writing confluent programs to have a meaningful behaviour. Implementations of CHRs use an operational semantics which is considerably finer than the theoretical operational semantics, but is still non-deterministic (from the user’s perspective). This paper formally defines this refined operational semantics and proves it implements the theoretical operational semantics. It also shows how to create a (partial) confluence checker capable of detecting programs which are confluent under this semantics, but not under the theoretical operational semantics. This supports the use of new idioms in CHR programs.
---
paper_title: Projection in adaptive Constraint Handling
paper_content:
Constraint solving in dynamic environments requires an immediate adaptation of the solutions when the constraint problems change. Constraint solving with Constraint Handling Rules (CHR) is extended with incremental algorithms, thus supporting the solution of dynamic constraint satisfaction problems (DCSPs). Unfortunately, constraint processing with CHR introduces a lot of new variables which require additional memory space and reduce run-time performance. Most of the variables may be eliminated without any loss of information. Thus, memory may be kept rather small and run-time performance may be improved. This paper describes the use of projection with CHR in order to eliminate irrelevant variable bindings and maintain the constraint store quite small. In detail, some projection algorithms are presented to eliminate variables which are introduced during constraint processing with CHR. Projection is called early projection if it is applied together with each rule application, thus eliminating recently introduced irrelevant variable bindings while keeping the derived constraint store quite small. This kind of projection is well-suited when solving Dynamic Constraint Satisfaction Problems, especially after constraint deletion, when many superfluous variable bindings have to be deleted as well. Consequently, the modifications that are required for an adaptation are reduced. This may result in an improved performance of the adaptation algorithms; better performance for non-adaptive constraint processing with CHR is also expected.
---
paper_title: Extending arbitrary solvers with constraint handling rules
paper_content:
Constraint Handling Rules (CHRs) are a high-level committed choice programming language commonly used to write constraint solvers. While the semantic basis of CHRs allows them to extend arbitrary underlying constraint solvers, in practice, all current implementations only extend Herbrand equation solvers. In this paper we show how to define CHR programs that extend arbitrary solvers and fully interact with them. In the process, we examine how to compile such programs to perform as little recomputation as possible, and describe how to build index structures for CHR constraints that are modified automatically when variables in the underlying solver change. We report on the implementation of these techniques in the HAL compiler, and give empirical results illustrating their benefits.
---
paper_title: Compiling Constraint Handling Rules into Prolog with Attributed Variables
paper_content:
We introduce the most recent and advanced implementation of constraint handling rules (CHR) in a logic programming language, which improves both on previous implementations (in terms of completeness, flexibility and efficiency) and on the principles that should guide such a Prolog implementation consisting of a runtime system and a compiler. The runtime system utilizes attributed variables for the realization of the constraint store with efficient retrieval and update mechanisms. Rules describing the interactions between constraints are compiled into Prolog clauses by a multi-phase compiler, the core of which comprises a small number of compact code generating templates in the form of definite clause grammar rules.
---
paper_title: Translating constraint handling rules into action rules
paper_content:
CHR is a popular high-level language for implementing constraint solvers and other general-purpose applications. It has a well-established operational semantics and quite a number of different implementations, prominently in Prolog. However, there is still much room for exploring the compilation of CHR to Prolog. Nearly all implementations rely on attributed variables. In this paper, we explore a different implementation target for CHR: B-Prolog’s Action Rules (ARs). As a rule-based language, it is a good match for particular aspects of CHR. However, the strict adherence to CHR’s refined operational semantics poses some difficulty. We report on our work in progress: a novel compilation schema, required changes to the AR language and the preliminary benchmarks and experiences.
---
paper_title: CHR for imperative host languages
paper_content:
In this paper, we address the different conceptual and technical difficulties encountered when embedding CHR into an imperative host language. We argue that a tight, natural integration leads to a powerful programming language extension, intuitive to both CHR and imperative programmers. We show how to compile CHR to highly optimized imperative code. To this end, we first review the well-established CHR compilation scheme, and survey the large body of possible optimizations. We then show that this scheme, when used for compilation to imperative target languages, leads to stack overflows. We therefore introduce new optimizations that considerably improve the performance of recursive CHR programs. Rules written using tail calls are even guaranteed to run in constant space. We implemented systems for both Java and C, following the language design principles and compilation scheme presented in this paper, and show that our implementations outperform other state-of-the-art CHR compilers by several orders of magnitude.
---
paper_title: Optimizing compilation of CHR with rule priorities
paper_content:
Constraint Handling Rules were recently extended with user-definable rule priorities. This paper shows how this extended language can be efficiently compiled into the underlying host language. It extends previous work by supporting rules with dynamic priorities and by introducing various optimizations. The effects of the optimizations are empirically evaluated and the new compiler is compared with the state-of-the-art K.U. Leuven CHR system.
---
paper_title: User-definable rule priorities for CHR
paper_content:
This paper introduces CHRrp: Constraint Handling Rules with user-definable rule priorities. CHRrp offers flexible execution control which is lacking in CHR. A formal operational semantics for the extended language is given and is shown to be an instance of the theoretical operational semantics of CHR. It is discussed how the CHRrp semantics influences confluence results. A translation scheme for CHRrp programs with static rule priorities into (regular) CHR is presented. The translation is proven correct and benchmark results are given. CHRrp is related to priority systems in other constraint programming and rule-based languages.
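For a flavour of the extension, here is a hedged sketch of the classic leq solver with rule priorities, where lower numbers are assumed to fire first; the "p ::" prefix notation is borrowed from the CHRrp literature and may differ in detail from the paper's concrete syntax:

    1 :: antisymmetry @ leq(X,Y), leq(Y,X) <=> X = Y.
    1 :: idempotence  @ leq(X,Y) \ leq(X,Y) <=> true.
    2 :: transitivity @ leq(X,Y), leq(Y,Z) ==> leq(X,Z).

The priorities ensure that the contracting rules are exhausted before the generative transitivity rule is allowed to add new constraints.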
---
paper_title: The computational power and complexity of constraint handling rules
paper_content:
Constraint Handling Rules (CHR) is a high-level rule-based programming language which is increasingly used for general-purpose programming. We introduce the CHR machine, a model of computation based on the operational semantics of CHR. Its computational power and time complexity properties are compared to those of the well-understood Turing machine and Random Access Memory machine. This allows us to prove the interesting result that every algorithm can be implemented in CHR with the best known time and space complexity. We also investigate the practical relevance of this result and the constant factors involved. Finally we expand the scope of the discussion to other (declarative) programming languages.
---
paper_title: Memory Reuse for CHR
paper_content:
Two Constraint Handling Rules compiler optimizations that drastically reduce the memory footprint of CHR programs are introduced. The reduction is the result of reusing suspension terms, the internal CHR constraint representation, and avoiding the overhead of constraint removal followed by insertion. The optimizations are defined formally and their correctness is proved. Both optimizations were implemented in the K.U.Leuven CHR system. Significant memory savings and speedups were measured on classical and well-known benchmarks.
---
paper_title: Guard and Continuation Optimization for Occurrence Representations of CHR
paper_content:
Constraint Handling Rules (CHR) is a high-level rule-based language extension, commonly embedded in Prolog. We introduce a new occurrence representation of CHR programs, and a new operational semantics for occurrence representations, equivalent to the widely implemented refined operational semantics. The occurrence representation allows in a natural way to express guard and continuation optimizations, which remove redundant guards and eliminate redundant code for subsumed occurrences. These optimizations allow CHR programmers to write self-documented rules with a clear logical reading. We show correctness of both optimizations, present an implementation in the K.U.Leuven CHR compiler, and discuss speedup measurements.
---
paper_title: Optimizing compilation of constraint handling rules in HAL
paper_content:
In this paper we discuss the optimizing compilation of Constraint Handling Rules (CHRs). CHRs are a multi-headed committed choice constraint language, commonly applied for writing incremental constraint solvers. CHRs are usually implemented as a language extension that compiles to the underlying language. In this paper we show how we can use different kinds of information in the compilation of CHRs to obtain access efficiency, and a better translation of the CHR rules into the underlying language, which in this case is HAL. The kinds of information used include the types, modes, determinism, functional dependencies and symmetries of the CHR constraints. We also show how to analyze CHR programs to determine this information about functional dependencies, symmetries and other kinds of information supporting optimizations.
---
paper_title: CHR for imperative host languages
paper_content:
In this paper, we address the different conceptual and technical difficulties encountered when embedding CHR into an imperative host language. We argue that a tight, natural integration leads to a powerful programming language extension, intuitive to both CHR and imperative programmers. We show how to compile CHR to highly optimized imperative code. To this end, we first review the well-established CHR compilation scheme, and survey the large body of possible optimizations. We then show that this scheme, when used for compilation to imperative target languages, leads to stack overflows. We therefore introduce new optimizations that considerably improve the performance of recursive CHR programs. Rules written using tail calls are even guaranteed to run in constant space. We implemented systems for both Java and C, following the language design principles and compilation scheme presented in this paper, and show that our implementations outperform other state-of-the-art CHR compilers by several orders of magnitude.
---
paper_title: Dijkstra’s algorithm with Fibonacci heaps: An executable description
paper_content:
We construct a readable, compact and efficient implementation of Dijkstra’s shortest path algorithm and Fibonacci heaps using Constraint Handling Rules (CHR), which is increasingly used as a high-level rule-based general-purpose programming language. We measure its performance in different CHR systems, investigating both the theoretical asymptotic complexity and the constant factors realized in practice.
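The paper's implementation relies on rule priorities and a Fibonacci-heap encoding; as a rough illustration of the rule-based style only, the following naive single-source shortest-path sketch (SWI-Prolog CHR syntax assumed, no heap, no priorities) is already a complete program:

    :- use_module(library(chr)).
    :- chr_constraint dist/2, edge/3.

    % keep only the smaller of two distance labels for the same node
    keep_min @ dist(V,D1) \ dist(V,D2) <=> D1 =< D2 | true.
    % relax every outgoing edge of a labelled node
    relax    @ dist(V,D), edge(V,U,C) ==> D1 is D + C, dist(U,D1).

A query such as edge(a,b,1), edge(b,c,2), edge(a,c,5), dist(a,0) leaves dist(a,0), dist(b,1), dist(c,3) in the store.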
---
paper_title: Unfolding in CHR
paper_content:
Program transformation is an appealing technique which allows to improve run-time efficiency, space-consumption and more generally to optimize a given program. Essentially it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. One of the basic operations which is used by most program transformation systems is unfolding which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages and, to the best of our knowledge, no one has considered unfolding of CHR programs. This paper is a first attempt to define a correct unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning.
---
paper_title: Aggregates for CHR through program transformation
paper_content:
We propose an extension of Constraint Handling Rules (CHR) with aggregates such as sum , count , findall , and min . This new feature significantly improves the conciseness and expressiveness of the language. In this paper, we describe an implementation based on source-to-source transformations to CHR (extended with some low-level compiler directives). We allow user-defined aggregates and nested aggregate expressions over arbitrary guarded conjunctions of constraints. Both an on-demand and an incremental aggregate computation strategy are supported.
---
paper_title: Specialization of Concurrent Guarded Multi-Set Transformation Rules
paper_content:
Program transformation and in particular partial evaluation are appealing techniques for declarative programs to improve not only their performance. This paper presents the first step towards developing program transformation techniques for a concurrent constraint programming language where guarded rules rewrite and augment multi-sets of atomic formulae, called Constraint Handling Rules (CHR). We study the specialization of rules with regard to a given goal (query). We show the correctness of this program transformation: Adding and removing specialized rules in a program does not change the program's operational semantics. Furthermore, termination and confluence of the program are shown to be preserved.
---
paper_title: A type system for CHR
paper_content:
The language of Constraint Handling Rules (CHR) of T. Fruhwirth [1] is a successful rule-based language for implementing constraint solvers in a wide variety of domains. It is an extension of a host language, such as Prolog [2], Java or Haskell [3], allowing the introduction of new constraints in a declarative way. One peculiarity of CHR is that it allows multiple heads in rules. For the sake of simplicity, we consider only simplification rules, since the distinction of propagation and simpagation rules [1] is not needed for typing purposes. A simplification rule is of the form $H_{1},...,H_{i} \Leftrightarrow G_{1},...,G_{j} | B_{1},...,B_{k}$, where $H_{1},...,H_{i}$ is a nonempty sequence of CHR constraints, the guard $G_{1},...,G_{j}$ is a sequence of native constraints and the body $B_{1},...,B_{k}$ is a sequence of CHR and native constraints.
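As a concrete instance of this rule format (not taken from the paper), the antisymmetry rule of the usual leq solver is a two-headed simplification rule with a trivial guard:

    :- chr_constraint leq/2.
    antisymmetry @ leq(X,Y), leq(Y,X) <=> true | X = Y.

Here the head constraints are leq(X,Y) and leq(Y,X), the guard is the native constraint true, and the body is the native equation X = Y.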
---
paper_title: Probabilistic Constraint Handling Rules
paper_content:
Classical Constraint Handling Rules (CHR) provide a powerful tool for specifying and implementing constraint solvers and programs. The rules of CHR rewrite constraints (non-deterministically) into simpler ones until they are solved. In this paper we introduce an extension of Constraint Handling Rules (CHR), namely Probabilistic CHRs (PCHR). These allow the probabilistic “weighting” of rules, specifying the probability of their application. In this way we are able to formalise various randomised algorithms such as for example Simulated Annealing. The implementation is based on source-to-source transformation (STS). Using a recently developed prototype for STS for CHR, we could implement probabilistic CHR in a concise way with a few lines of code in less than one hour.
---
paper_title: Constraint Handling Rules and tabled execution
paper_content:
Both Constraint Handling Rules (CHR) and tabling – as implemented in XSB – are powerful enhancements of Prolog systems, based on fixpoint computation. Until now they have only been implemented in separate systems. This paper presents the work involved in porting a CHR system to XSB and in particular the technical issues related to the integration of CHR with tabled resolution. These issues include call abstraction, answer projection, entailment checking, answer combination and tabled constraint store representations. Different optimizations related to tabling constraints are evaluated empirically. The integration requires no changes to the tabling engine. We also show that the performance of CHR programs without tabling is not affected. Now, with the combined power of CHR and tabling, it is possible to easily introduce constraint solvers in applications using tabling, or to use tabling in constraint solvers.
---
paper_title: Adaptive Constraint Handling with CHR in Java
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) is introduced in the imperative object-oriented programming language Java. The presented Java implementation consists of a compiler and a run-time system, all implemented in Java. The run-time system implements data structures like sparse bit vectors, logical variables and terms as well as an adaptive unification and an adaptive entailment algorithm. Approved technologies like attributed variables for constraint storage and retrieval as well as code generation for each head constraint are used. Also implemented are theoretically sound algorithms for adapting of rule derivations and constraint stores after arbitrary constraint deletions. The presentation is rounded off with some novel applications of CHR in constraint processing: simulated annealing for the n queens problem and intelligent backtracking for some SAT benchmark problems.
---
paper_title: Projection in adaptive Constraint Handling
paper_content:
Constraint solving in dynamic environments requires an immediate adaptation of the solutions if the constraint problems are changing. Constraint solving with Constraint Handling Rules (CHR) is extended with incremental algorithms, thus supporting the solution of dynamic constraint satisfaction problems (DCSPs). Unfortunately, constraint processing with CHR introduces a lot of new variables which require additional memory space and reduce run-time performance. Most of the variables may be eliminated without any loss of information. Thus, memory may be kept rather small and run-time performance may be improved. This paper describes the use of projection with CHR in order to eliminate irrelevant variable bindings and maintain the constraint store quite small. In detail, some projection algorithms are presented to eliminate variables which are introduced during constraint processing with CHR. Projection is called early projection if it is applied together with each rule application, thus eliminating recently introduced irrelevant variable bindings while keeping the derived constraint store quite small. This kind of projection is well-suited when solving Dynamic Constraint Satisfaction Problems, especially after constraint deletion, when many superfluous variable bindings have to be deleted as well. Consequently, the modifications that are required for an adaptation are reduced. This may result in an improved performance of the adaptation algorithms, and a better performance for non-adaptive constraint processing with CHR is also expected.
---
paper_title: CHR: A Flexible Query Language
paper_content:
We show how the language Constraint Handling Rules (CHR), a high-level logic language for the implementation of constraint solvers, can be slightly extended to become a general-purpose logic programming language with an expressive power subsuming the expressive power of Horn clause programs with SLD resolution. The extended language, called “CHR∀”, retains, however, the extra features of CHR, e.g., committed choice and matching, which are important for other purposes, especially for efficiently solving constraints. CHR∀ turns out to be a very flexible query language in the sense that it supports several (constraint) logic programming paradigms and allows mixing them in a single program. In particular, it supports top-down query evaluation and also bottom-up evaluation as it is frequently used in (disjunctive) deductive databases.
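To illustrate the bottom-up, deductive-database style of evaluation mentioned here (a plain CHR sketch, not CHR∀-specific syntax), reachability can be computed by saturating the store with path/2 facts:

    :- use_module(library(chr)).
    :- chr_constraint edge/2, path/2.

    dup  @ path(X,Y) \ path(X,Y) <=> true.   % keep each derived fact only once
    base @ edge(X,Y) ==> path(X,Y).
    step @ edge(X,Y), path(Y,Z) ==> path(X,Z).

Posting a set of edge/2 constraints leaves their transitive closure in the store as path/2 constraints.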
---
paper_title: Constraint Programming Architectures : Review and a New Proposal
paper_content:
Most automated reasoning tasks with practical applications can be automatically reformulated into a constraint solving task. A constraint programming platform can thus act as a unique, underlying engine to be reused for multiple automated reasoning tasks in intelligent agents and systems. We identify six key requirements for such a platform: expressive task modeling language, rapid solving method customization and combination, adaptive solving method, user-friendly solution explanation, efficient execution, and seamless integration within larger systems and practical applications. We then propose a novel, model-driven, component and rule-based architecture for such a platform that, taken as a whole, satisfies this set of requirements better than currently available platforms do.
---
paper_title: Intelligent search strategies based on adaptive Constraint Handling Rules
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSP). This presentation compares an improved version of conflict-directed backjumping and two variants of dynamic backtracking with respect to chronological backtracking on some of the AIM instances which are a benchmark set of random 3-SAT problems. A CHR implementation of a Boolean constraint solver combined with these different search strategies in Java is thus being compared with a CHR implementation of the same Boolean constraint solver combined with chronological backtracking in SICStus Prolog. This comparison shows that the addition of “intelligence” to the search process may reduce the number of search steps dramatically. Furthermore, the runtime of their Java implementations is in most cases faster than the implementations of chronological backtracking. More specifically, conflict-directed backjumping is even faster than the SICStus Prolog implementation of chronological backtracking, although our Java implementation of CHR lacks the optimisations made in the SICStus Prolog system.
---
paper_title: Adaptive CHR meets CHR ∨ : An extended refined operational semantics for CHR ∨ based on justifications
paper_content:
---
paper_title: Aggregates for CHR through program transformation
paper_content:
We propose an extension of Constraint Handling Rules (CHR) with aggregates such as sum , count , findall , and min . This new feature significantly improves the conciseness and expressiveness of the language. In this paper, we describe an implementation based on source-to-source transformations to CHR (extended with some low-level compiler directives). We allow user-defined aggregates and nested aggregate expressions over arbitrary guarded conjunctions of constraints. Both an on-demand and an incremental aggregate computation strategy are supported.
---
paper_title: Aggregates for Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) [2,3,4] is a general-purpose programming language based on committed-choice, multi-headed, guarded multiset rewrite rules. As the head of each CHR rule only considers a fixed number of constraints, any form of aggregation over unbounded parts of the constraint store necessarily requires explicit encoding, using auxiliary constraints and rules.
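The explicit encoding referred to here can be sketched as follows (illustrative constraint names; note that this encoding consumes the aggregated constraints, one of the drawbacks that native aggregate support removes):

    :- use_module(library(chr)).
    :- chr_constraint item/1, count_items/0, count/1.

    % count_items triggers the aggregate; count/1 is the auxiliary accumulator
    init @ count_items <=> count(0).
    step @ count(N), item(_) <=> N1 is N + 1, count(N1).

After posting item(a), item(b), item(c), the goal count_items leaves count(3) in the store.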
---
paper_title: Extending arbitrary solvers with constraint handling rules
paper_content:
Constraint Handling Rules (CHRs) are a high-level committed choice programming language commonly used to write constraint solvers. While the semantic basis of CHRs allows them to extend arbitrary underlying constraint solvers, in practice, all current implementations only extend Herbrand equation solvers. In this paper we show how to define CHR programs that extend arbitrary solvers and fully interact with them. In the process, we examine how to compile such programs to perform as little recomputation as possible, and describe how to build index structures for CHR constraints that are modified automatically when variables in the underlying solver change. We report on the implementation of these techniques in the HAL compiler, and give empirical results illustrating their benefits.
---
paper_title: A Unified Semantics for Constraint Handling Rules in Transaction Logic
paper_content:
Reasoning on Constraint Handling Rules (CHR) programs and their executional behaviour is often ad-hoc and outside of a formal system. This is a pity, because CHR subsumes a wide range of important automated reasoning services. Mapping CHR to Transaction Logic (τR) combines CHR rule specification, CHR rule application, and reasoning on CHR programs and CHR derivations inside one formal system which is executable. This new τR semantics obviates the need for disjoint declarative and operational semantics.
---
paper_title: A linear-logic semantics for constraint handling rules
paper_content:
One of the attractive features of the Constraint Handling Rules (CHR) programming language is its declarative semantics where rules are read as formulae in first-order predicate logic. However, the more CHR is used as a general-purpose programming language, the more the limitations of that kind of declarative semantics in modelling change become apparent. We propose an alternative declarative semantics based on (intuitionistic) linear logic, establishing strong theorems on both soundness and completeness of the new declarative semantics w.r.t. operational semantics.
---
paper_title: ACD Term Rewriting
paper_content:
In this paper we introduce Associative Commutative Distributive Term Rewriting (ACDTR), a rewriting language for rewriting logical formulae. ACDTR extends AC term rewriting by adding distribution of conjunction over other operators. Conjunction is vital for expressive term rewriting systems since it allows us to require that multiple conditions hold for a term rewriting rule to be used. ACDTR uses the notion of a conjunctive context, which is the conjunction of constraints that must hold in the context of a term, to enable the programmer to write very expressive and targeted rewriting rules. ACDTR can be seen as a general logic programming language that extends Constraint Handling Rules and AC term rewriting. In this paper we define the semantics of ACDTR and describe our prototype implementation.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: Logical Algorithms
paper_content:
Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language Lollimon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic.
---
paper_title: Observable confluence for constraint handling rules
paper_content:
Constraint Handling Rules (CHR) are a powerful rule based language for specifying constraint solvers. Critical for any rule based language is the notion of confluence, and for terminating CHR programs there is a decidable test for confluence. But many CHR programs that are in practice confluent fail this confluence test. The problem is that the states that illustrate non-confluence are not observable from the initial goals of interest. In this paper we introduce the notion of observable confluence, a more general notion of confluence which takes into account whether states are observable. We devise a test for observable confluence which allows us to verify observable confluence for a range of CHR programs dealing with agents, type systems, and the union-find algorithm.
---
paper_title: Optimal placement of base stations in wireless indoor telecommunication
paper_content:
Planning of local wireless communication networks is about installing base stations (small radio transmitters) to provide wireless devices with strong enough signals. POPULAR is an advanced industrial prototype that allows to compute the minimal number of base stations and their location given a blue-print of the installation site and information about the materials used for walls and ceilings. It does so by simulating the propagation of radio-waves using ray tracing and by subsequent optimization of the number of base stations needed to cover the whole building. Taking advantage of state-of-the-art techniques for programmable application-oriented constraint solving, POPULAR is among the first practical tools that can optimally plan wireless communication networks.
---
paper_title: The Munich Rent Advisor: A Success for Logic Programming on the Internet
paper_content:
Most cities in Germany regularly publish a booklet called the Mietspiegel. It basically contains a verbal description of an expert system. It allows the calculation of the estimated fair rent for a flat. By hand, one may need a weekend to do this task. With our computerized version, the Munich Rent Advisor, the user just fills in a form in a few minutes, and the rent is calculated immediately. We also extended the functionality and applicability of the Mietspiegel so that the user need not answer all questions on the form. The key to computing with partial information using high-level programming was to use constraint logic programming. We rely on the Internet, and more specifically the World Wide Web, to provide this service to a broad user group, the citizens of Munich and the people who are planning to move to Munich. To process the answers from the questionnaire and return its result, we wrote a small simple stable special-purpose web server directly in ECLiPSe. More than 10,000 people have used our service in the last three years. This article describes the experiences in implementing and using the Munich Rent Advisor. Our results suggest that logic programming with constraints can be an important ingredient in intelligent internet systems.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
---
paper_title: A constraint solver for sequences and its applications
paper_content:
Constraint programming techniques are successfully used in various areas of software engineering for industry, commerce, transport, finance etc. Constraint solvers for different data types are applied in validation and verification of programs containing data elements of these types. A general constraint solver for sequences is necessary to take into account this data type in the existing validation and verification tools. In this work, we present an original constraint solver for sequences implemented in CHR and based on T. Fruhwirth's solver for lists with the propagation of two constraints: generalized concatenation and size. The applications of the solver (with the validation and verification tool BZTT) to different software engineering problems are illustrated by the example of a waiting room model.
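A representative propagation rule in this style of solver (the constraint names concat/3 and size/2 are illustrative, not necessarily the paper's):

    :- use_module(library(chr)).
    :- chr_constraint concat/3, size/2.

    % the size of a concatenation is the sum of the sizes of its parts,
    % propagated once both part sizes are known numbers
    size_concat @ concat(X,Y,Z), size(X,N), size(Y,M) ==>
        number(N), number(M) | K is N + M, size(Z,K).

A full solver would of course also need rules that propagate in the other directions.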
---
paper_title: Toward a first-order extension of Prolog's unification using CHR: a CHR first-order constraint solver over finite or infinite trees
paper_content:
Prolog, which stands for PROgramming in LOGic, is the most widely used language in the logic programming paradigm. One of its main concepts is unification. It represents the mechanism of binding the contents of variables and can be seen as solving conjunctions of equations over finite or infinite trees. We present in this paper an idea of a first-order extension of Prolog's unification by giving a general algorithm for solving any first-order constraint in the theory T of finite or infinite trees, extended by a relation which allows one to distinguish between finite and infinite trees. The algorithm is given in the form of 16 rewriting rules which transform any first-order formula φ into an equivalent disjunction D of simple formulas in which the solutions of the free variables are expressed in a clear and explicit way. We end this paper by describing a CHR implementation of our algorithm. CHR (Constraint Handling Rules) has originally been developed for writing constraint solvers, but the constraints here go much beyond implicitly quantified conjunctions of atomic constraints and are considered as arbitrary first-order formulas built on the signature of T. We discuss how we implement nested local constraint stores and what programming patterns and language features we found useful in the CHR implementation of our algorithm.
---
paper_title: Complexity of a CHR solver for existentially quantified conjunctions of equations over trees
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice, rule-based language. One of the first CHR programs is the classic constraint solver for syntactic equality of rational trees that performs unification. We first prove its exponential complexity in time and space for non-flat equations and deduce from this proof a quadratic complexity for flat equations. We then present an extended CHR solver for solving existentially quantified conjunctions of non-flat equations in the theory of finite or infinite trees. We reach a quadratic complexity by first flattening the equations and introducing new existentially quantified variables, then using the classic solver, and finally eliminating particular equations and quantified variables.
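The classic solver mentioned here consists of only a handful of rules over an equation constraint; a condensed sketch (prefix eq/2 instead of the usual infix notation, and omitting the term-order guards the published solver adds to guarantee termination):

    :- use_module(library(chr)).
    :- chr_constraint eq/2.

    reflexivity   @ eq(X,X) <=> var(X) | true.
    orientation   @ eq(T,X) <=> var(X), nonvar(T) | eq(X,T).
    decomposition @ eq(T1,T2) <=> nonvar(T1), nonvar(T2) |
                        T1 =.. [F|As1], T2 =.. [F|As2], eq_args(As1,As2).
    confrontation @ eq(X,T1) \ eq(X,T2) <=> var(X) | eq(T1,T2).

    eq_args([], []).
    eq_args([A|As], [B|Bs]) :- eq(A,B), eq_args(As,Bs).

A functor clash makes the decomposition body fail, signalling that the equations have no solution.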
---
paper_title: Adaptive Constraint Handling with CHR in Java
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) is introduced in the imperative object-oriented programming language Java. The presented Java implementation consists of a compiler and a run-time system, all implemented in Java. The run-time system implements data structures like sparse bit vectors, logical variables and terms as well as an adaptive unification and an adaptive entailment algorithm. Approved technologies like attributed variables for constraint storage and retrieval as well as code generation for each head constraint are used. Also implemented are theoretically sound algorithms for adapting of rule derivations and constraint stores after arbitrary constraint deletions. The presentation is rounded off with some novel applications of CHR in constraint processing: simulated annealing for the n queens problem and intelligent backtracking for some SAT benchmark problems.
---
paper_title: On Incremental Adaptation of CHR Derivations
paper_content:
Constraint-solving in dynamic environments requires the immediate adaptation of solutions of constraint satisfaction problems if these problems are changing. After any change, an adapted solution is preferred which is stable, i.e., as close as possible to the original solution. A wide range of incremental constraint-solving methods for dynamic, especially finite-domain, constraint satisfaction problems (DCSPs) are known, which more or less satisfy this additional requirement. Adaptation of DCSPs after constraint additions is generally simple and successfully solved, but adaptation after arbitrary constraint deletions is not. How constraint handling rules (CHRs), a high level language extension for implementing constraint solvers, can be improved in this respect is investigated. A new incremental algorithm is presented, which adapts an arbitrary CHR derivation after eliminations of constraints in the processed DCSPs. Thus, the existing solvers - there are several dozens of them - for various kinds of const...
---
paper_title: Soft Constraint Propagation and Solving in Constraint Handling Rules
paper_content:
Soft constraints are a generalization of classical constraints, which allow for the description of preferences rather than strict requirements. In soft constraints, constraints and partial assignments are given preference or importance levels, and constraints are combined according to combinators which express the desired optimization criteria. On the other hand, constraint handling rules (CHR) constitute a high-level natural formalism to specify constraint solvers and propagation algorithms. We present a framework to design and specify soft constraint solvers by using CHR. In this way, we extend the range of applicability of CHR to soft constraints rather than just classical ones, and we provide a straightforward implementation for soft constraint solvers.
---
paper_title: Qualitative Spatial Reasoning: Theory and Practice: Application to Robot Navigation
paper_content:
With the aim of automatically reasoning with spatial aspects in a cognitive way, several qualitative models have been developed recently in the field of qualitative spatial reasoning. However, most of these models simplify spatial objects to points, and, to date, there is no model to reason with several spatial aspects in a uniform way. This work provides an approach for integrating the qualitative concepts of orientation, distance, and cardinal directions, using points as well as extended objects as primitives of reasoning, based on Constraint Logic Programming. The resulting model has been applied to build a qualitative Navigation Simulator in the structured environment of the city of Castellon.
---
paper_title: A Framework Based on CLP Extended with CHRs for Reasoning with Qualitative Orientation and Positional Information
paper_content:
Several qualitative models have been developed in the last years with the aim of simulating the spatial reasoning process used by humans. However, up to now no model has been developed to represent and reason with several aspects of space (such as orientation, distance and cardinal directions, for instance) in a uniform way, i.e. by referring to the same spatial objects. An approach for the integration of several aspects of space into the same model, based on constraint logic programming extended with constraint handling rules, is proposed in this article. As an example of this approach, the integration of orientation and positional information in the same model is explained.
---
paper_title: Modeling Motion by the Integration of Topology and Time
paper_content:
A qualitative representational model and the corresponding reasoning process for integrating time and topological information are developed in this paper. In the calculus presented, topological information, as a function of the point in time at which it is true, is represented as an instance of the Constraint Satisfaction Problem. The resulting method can be applied to qualitative navigation of autonomous agents. The model presented in this paper helps during the path-planning task by describing the sequence of topological situations that the agent should encounter on its way to the target objective. A preliminary result of this application has been obtained by using a qualitative representation of such spatial aspects for the autonomous simulated navigation of a Nomad-200 robot in a structured environment consisting of a simple corridor in a building.
---
paper_title: Reasoning about Actions with CHRs and Finite Domain Constraints
paper_content:
We present a CLP-based approach to reasoning about actions in the presence of incomplete states. Constraints expressing negative and disjunctive state knowledge are processed by a set of special Constraint Handling Rules. In turn, these rules reduce to standard finite domain constraints when handling variable arguments of single state components. Correctness of the approach is proved against the general action theory of the Fluent Calculus. The constraint solver is used as the kernel of a high-level programming language for agents that reason and plan. Experiments have shown that the constraint solver exhibits excellent computational behavior and scales up well.
---
paper_title: M.: Fluxplayer: A successful general game player
paper_content:
General Game Playing (GGP) is the art of designing programs that are capable of playing previously unknown games of a wide variety by being told nothing but the rules of the game. This is in contrast to traditional computer game players like Deep Blue, which are designed for a particular game and can't adapt automatically to modifications of the rules, let alone play completely different games. General Game Playing is intended to foster the development of integrated cognitive information processing technology. In this article we present an approach to General Game Playing using a novel way of automatically constructing a position evaluation function from a formal game description. Our system is being tested with a wide range of different games. Most notably, it is the winner of the AAAI GGP Competition 2006.
---
paper_title: Observable confluence for constraint handling rules
paper_content:
Constraint Handling Rules (CHR) are a powerful rule based language for specifying constraint solvers. Critical for any rule based language is the notion of confluence, and for terminating CHR programs there is a decidable test for confluence. But many CHR programs that are in practice confluent fail this confluence test. The problem is that the states that illustrate non-confluence are not observable from the initial goals of interest. In this paper we introduce the notion of observable confluence, a more general notion of confluence which takes into account whether states are observable. We devise a test for observable confluence which allows us to verify observable confluence for a range of CHR programs dealing with agents, type systems, and the union-find algorithm.
---
paper_title: Semantic Web Reasoning for Ontology-Based Integration of Resources
paper_content:
The Semantic Web should enhance the current World Wide Web with reasoning capabilities for enabling automated processing of possibly distributed information. In this paper we describe an architecture for Semantic Web reasoning and query answering in a very general setting involving several heterogeneous information sources, as well as domain ontologies needed for offering a uniform and source-independent view on the data. Since querying a Web source is very costly in terms of response time, we focus mainly on the query planner of such a system, as it may allow avoiding the access to query-irrelevant sources or combinations of sources based on knowledge about the domain and the sources.
---
paper_title: Information Integration Using Contextual Knowledge and Ontology Merging
paper_content:
With the advances in telecommunications, and the introduction of the Internet, information systems achieved physical connectivity, but have yet to establish logical connectivity. Lack of logical connectivity often invites disaster, as in the case of the Mars Orbiter, which was lost because one team used metric units, the other English, while exchanging critical maneuver data. In this Thesis, we focus on the two intertwined subproblems of logical connectivity, namely data extraction and data interpretation in the domain of heterogeneous information systems. The first challenge, data extraction, is about making it possible to easily exchange data among semi-structured and structured information systems. We describe the design and implementation of a general-purpose, regular-expression-based Cameleon wrapper engine with an integrated capabilities-aware planner/optimizer/executioner. The second challenge, data interpretation, deals with the existence of heterogeneous contexts, whereby each source of information and potential receiver of that information may operate with a different context, leading to large-scale semantic heterogeneity. We extend the existing formalization of the COIN framework with new logical formalisms and features to handle a larger set of heterogeneities between data sources. This extension, named Extended Context Interchange (ECOIN), is motivated by our analysis of financial information systems that indicates that there are three fundamental types of heterogeneities in data sources: contextual, ontological, and temporal. While the COIN framework was able to deal with the contextual heterogeneities, the ECOIN framework expands the scope to include ontological heterogeneities as well. In particular, we are able to deal with equational ontological conflicts (EOC), which refer to the heterogeneity in the way data items are calculated from other data items in terms of definitional equations. ECOIN provides a context-based solution to the EOC problem based on a novel approach that integrates abductive reasoning and symbolic equation solving techniques in a unified framework. Furthermore, we address the merging of independently built ECOIN applications, which involves merging disparate ontologies and contextual knowledge. The relationship between ECOIN and the Semantic Web is also discussed. Finally, we demonstrate the feasibility and features of our integration approach with a prototype implementation that provides mediated access to heterogeneous information systems.
---
paper_title: APPLICATION-SPECIFIC CONSTRAINTS FOR MULTIMEDIA PRESENTATION GENERATION
paper_content:
A multimedia presentation can be viewed as a collection of multimedia items (such as image, text, video and audio), along with detailed information that describes the spatial and temporal placement of the items as part of the presentation. Manual multimedia authoring involves explicitly stating the placement of each media item in the spatial and temporal dimensions. The drawback of this approach is that resulting presentations are hard to adapt to different target platforms, network resources, and user preferences. An approach to solving this problem is to abstract from the low-level presentation details, for example by specifying the high-level semantic relations between the media items. The presentation itself can then be generated from the semantic relations along with a generic set of transformation rules, specifying how each semantic relation can be conveyed using multimedia constructs. These constructs may differ depending on the target platform, current network conditions or user preferences. We are thus able to automatically adapt the presentation to a wide variety of different circumstances while ensuring that the underlying message of the presentation remains the same. This approach requires an execution environment in which transformation rules, resulting in a set of constraints, are derived from a given semantic description. The resulting set of constraints can then be solved to create a final multimedia presentation. The paper describes the design and implementation of such a system. It explains the advantages of using constraint logic programming to realize the implementation of both the transformation rules and the constraints system. It also demonstrates the need for two different types of constraints. Quantitative constraints are needed to verify whether the final form presentation meets all the numeric constraints that are required by the environment. Qualitative constraints are needed to facilitate high-level reasoning and presentation encoding. While the quantitative constraints can be handled by off-the-shelf constraint solvers, the qualitative constraints needed are specific to the multimedia domain and need to be defined explicitly.
---
paper_title: Answering Queries in Context
paper_content:
The emergence of the Internet as the de facto Global Information Infrastructure enables the construction of decision support systems that leverage the panoply of on-line information sources. This highly dynamic environment presents a critical need for a flexible and scalable strategy for integrating the disparate information sources while respecting their autonomy.
---
paper_title: ARM: Automatic Rule Miner
paper_content:
Rule-based formalisms are ubiquitous in computer science. However, a difficulty that arises frequently when specifying or programming the rules is to determine which effects should be propagated by these rules. In this paper, we present a tool called ARM (Automatic Rule Miner) that generates rules for relations over finite domains. ARM offers a rich functionality to provide the user with the possibility of specifying the admissible syntactic forms of the rules. Furthermore, we show that our approach performs well on various examples, e.g. generation of firewall rules or generation of rule-based constraint solvers. Thus, it is suitable for users from different fields.
---
paper_title: Automatic generation of rule-based constraint solvers over finite domains
paper_content:
A general approach to implement propagation and simplification of constraints consists of applying rules over these constraints. However, a difficulty that arises frequently when writing a constraint solver is to determine the constraint propagation algorithm. In this article, we propose a method for generating propagation and simplification rules for constraints over finite domains defined extensionally by, for example, a truth table or their tuples. The generation of rules is performed in two steps. First, propagation rules are generated. Propagation rules do not rewrite constraints but add new ones. Thus, the constraint store may contain superfluous constraints. Removing these constraints not only allows saving of space but also decreases the cost of constraint solving. Constraints can be removed using simplification rules. Thus, in a second step, some propagation rules are transformed into simplification rules.Furthermore, we show that our approach performs well on various examples, including Boolean constraints, multivalued logic, and Allen's qualitative approach to temporal logic. Moreover, an application taken from the field of digital circuit design shows that our approach is of practical use.
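The rules such a generator produces for, e.g., the Boolean conjunction constraint and(X,Y,Z) (read Z = X AND Y) look much like the hand-written rules of the classic CHR Boolean solver; a plausible rendering:

    :- use_module(library(chr)).
    :- chr_constraint and/3.

    and(0,_,Z) <=> Z = 0.          % if either input is 0, the output is 0
    and(_,0,Z) <=> Z = 0.
    and(1,Y,Z) <=> Y = Z.          % if one input is 1, the output equals the other input
    and(X,1,Z) <=> X = Z.
    and(X,Y,1) <=> X = 1, Y = 1.   % an output of 1 forces both inputs to 1
    and(X,X,Z) <=> X = Z.          % idempotence

These are simplification rules; the second step described in the abstract turns some of the generated propagation rules into exactly this kind of simplification rule.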
---
paper_title: Constraint Programming viewed as Rule-based Programming
paper_content:
We study here a natural situation when constraint programming can be entirely reduced to rule-based programming. To this end we explain first how one can compute on constraint satisfaction problems using rules represented by simple first-order formulas. Then we consider constraint satisfaction problems that are based on predefined, explicitly given constraints. To solve them we first derive rules from these explicitly given constraints and limit the computation process to a repeated application of these rules, combined with labeling. We consider two types of rule here. The first type, that we call equality rules, leads to a new notion of local consistency, called rule consistency that turns out to be weaker than arc consistency for constraints of arbitrary arity (called hyper-arc consistency in Marriott & Stuckey (1998)). For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. The second type of rules, that we call membership rules, yields a rule-based characterization of arc consistency. To show feasibility of this rule-based approach to constraint programming, we show how both types of rules can be automatically generated, as CHR rules of Fruhwirth (1995). This yields an implementation of this approach to programming by means of constraint logic programming. We illustrate the usefulness of this approach to constraint programming by discussing various examples, including Boolean constraints, two typical examples of many valued logics, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.
---
paper_title: The computational power and complexity of constraint handling rules
paper_content:
Constraint Handling Rules (CHR) is a high-level rule-based programming language which is increasingly used for general-purpose programming. We introduce the CHR machine, a model of computation based on the operational semantics of CHR. Its computational power and time complexity properties are compared to those of the well-understood Turing machine and Random Access Memory machine. This allows us to prove the interesting result that every algorithm can be implemented in CHR with the best known time and space complexity. We also investigate the practical relevance of this result and the constant factors involved. Finally we expand the scope of the discussion to other (declarative) programming languages.
---
paper_title: Dijkstra’s algorithm with Fibonacci heaps: An executable description
paper_content:
We construct a readable, compact and efficient implementation of Dijkstra’s shortest path algorithm and Fibonacci heaps using Constraint Handling Rules (CHR), which is increasingly used as a high-level rule-based general-purpose programming language. We measure its performance in different CHR systems, investigating both the theoretical asymptotic complexity and the constant factors realized in practice.
---
paper_title: Principal type inference for GHC-style multi-parameter type classes
paper_content:
We observe that the combination of multi-parameter type classes with existential types and type annotations leads to a loss of principal types and undecidability of type inference. This may be a surprising fact for users of these popular features. We conduct a concise investigation of the problem and are able to give a type inference procedure which, if successful, computes principal types under the conditions imposed by the Glasgow Haskell Compiler (GHC). Our results provide new insights on how to perform type inference for advanced type extensions.
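Type inference in this line of work is commonly reduced to CHR solving, with class and instance declarations translated into rules; a hedged sketch of that well-known encoding (not necessarily the paper's exact formulation, constraint names invented):

    :- use_module(library(chr)).
    :- chr_constraint eq_c/1, ord_c/1.

    % instance Eq Int          ~>  eq_c(int) <=> true.
    % instance Eq a => Eq [a]  ~>  eq_c(list(A)) <=> eq_c(A).
    % class Eq a => Ord a      ~>  ord_c(A) ==> eq_c(A).
    eq_c(int) <=> true.
    eq_c(list(A)) <=> eq_c(A).
    ord_c(A) ==> eq_c(A).

Solving the constraint eq_c(list(list(int))) then reduces to true, mirroring instance resolution.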
---
paper_title: Polymorphic algebraic data type reconstruction
paper_content:
One of the disadvantages of statically typed languages is the programming overhead caused by writing all the necessary type information: Both type declarations and type definitions are typically required. Traditional type inference aims at relieving the programmer from the former. We present a rule-based constraint rewriting algorithm that reconstructs both type declarations and type definitions, allowing the programmer to effectively program type-less in a strictly typed language. This effectively combines strong points of dynamically typed languages (rapid prototyping) and statically typed ones (documentation, optimized compilation). Moreover it allows to quickly port code from a statically untyped to a statically typed setting. Our constraint-based algorithm reconstructs uniform polymorphic definitions of algebraic data types and simultaneously infers the types of all expressions and functions (supporting polymorphic recursion) in the program. The declarative nature of the algorithm allows us to easily show that it has a number of highly desirable properties such as soundness, completeness and various optimality properties. Moreover, we show how to easily extend and adapt it to suit a number of different language constructs and type system features.
---
paper_title: A flow-based approach for variant parametric types
paper_content:
A promising approach for type-safe generic codes in the object-oriented paradigm is variant parametric type, which allows covariant and contravariant subtyping on fields where appropriate. Previous approaches formalise variant type as a special case of the existential type system. In this paper, we present a new framework based on flow analysis and modular type checking to provide a simple but accurate model for capturing generic types. Our scheme stands to benefit from past (and future) advances in flow analysis and subtyping constraints. Furthermore, it fully supports casting for variant types with a special reflection mechanism, called cast capture, to handle objects with unknown types. We have built a constraint-based type checker and have proven its soundness. We have also successfully annotated a suite of Java libraries and client code with our flow-based variant type system.
---
paper_title: Type inference and type error diagnosis for Hindley/Milner with extensions
paper_content:
---
paper_title: Interpreting Abduction in CLP
paper_content:
Constraint Logic Programming (CLP) and Abductive Logic Programming (ALP) share the important concept of conditional answer. We exploit their deep similarities to implement an efficient abductive solver where abducibles are treated as constraints. We propose two possible implementations, in which integrity constraints are exploited either (i) as the definition of a CLP solver on an abductive domain, or (ii) as constraints à la CLP. Both the solvers are implemented on top of CLP(Bool), that typically have impressively efficient propagation engines.
---
paper_title: An Experimental CLP Platform for Integrity Constraints and Abduction
paper_content:
Integrity constraints and abduction are important in query-answering systems for enhanced query processing and for expressing knowledge in databases. A straightforward characterization of the two is given in a subset of the language CHR∨, originally intended for writing constraint solvers to be applied for CLP languages. This subset has a strikingly simple computational model that can be executed using existing, Prolog-based technology. Together with earlier results, this confirms CHR∨ as a multiparadigm platform for experimenting with combinations of top-down and bottom-up evaluation, disjunctive databases and, as shown here, integrity constraints and abduction.
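The characterization is direct because an integrity constraint can be written as a CHR propagation rule whose body fails, while abducibles become CHR constraints collected in the store. An illustrative fragment (predicate names invented for the example):

    :- use_module(library(chr)).
    :- chr_constraint works/1, holiday/1.

    % integrity constraint: nobody both works and is on holiday on the same day
    ic @ works(D), holiday(D) ==> fail.

Abducing works(monday) after holiday(monday) has already been assumed then fails immediately, pruning that abductive branch.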
---
paper_title: HYPROLOG: A New Logic Programming Language with Assumptions and Abduction
paper_content:
We present HYPROLOG, a novel integration of Prolog with assumptions and abduction which is implemented in and partly borrows syntax from Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraints solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the “pure” sublanguages, while for the full HYPROLOG language, it must be taken as definition. The implementation is straightforward and seems to provide for abduction, the most efficient of known implementations; the price, however, is a limited use of negations. The main difference wrt. previous implementations of abduction is that we avoid any level of metainterpretation by having Prolog execute the deductive steps directly and by treating abducibles (and assumptions as well) as CHR constraints.
---
paper_title: Balanced parentheses in NL texts : a useful cue in the syntax/semantics interface
paper_content:
Balanced parentheses on text sentences can be obtained from information on particular morphemes -- the introducers -- and on inflected verbal forms. From balanced parentheses, a partial graph of the sentence in the semantics interface can be deduced, along with other information. The hypothesis and its expression with CHR constraints are presented.
---
paper_title: Coordination Revisited – A Constraint Handling Rule Approach
paper_content:
Coordination in natural language (as in “Tom and Jerry”, “John built but Mary painted the cupboard”, “publish or perish”) is one of the most difficult computational linguistic problems. Not only can it affect any type of constituent, but it often involves “guessing” material left implicit. We introduce a CHR methodology for extending a user’s grammar not including coordination with a metagrammatical treatment of same category coordination. Our methodology relies on the input grammar describing its semantics compositionally. It involves reifying grammar symbols into arguments of a generic symbol constituent, and adding three more rules to the user’s grammar. These three rules can coordinate practically any kind of constituent while reconstructing any missing material at the appropriate point. With respect to previous work, this is powerfully laconic as well as surprisingly minimal in the transformations and overhead required.
---
paper_title: Semantic Property Grammars for Knowledge Extraction from Biomedical Text
paper_content:
We present Semantic Property Grammars, designed to extract concepts and relations from biomedical texts. The implementation adapts a CHRG parser we designed for Property Grammars [1], which views linguistic constraints as properties between sets of categories, solves them by constraint satisfaction, can handle incomplete or erroneous text, and extracts phrases of interest selectively. We endow it with concept and relation extraction abilities as well.
---
paper_title: Meaning in Context
paper_content:
A model for context-dependent natural language semantics is proposed and formalized in terms of possible worlds. The meaning of a sentence depends on context and at the same time affects that context representing the knowledge about the world collected from a discourse. The model fits well with a “flat” semantic representation as first proposed by Hobbs (1985), consisting basically of a conjunction of atomic predications in which all variables are existentially quantified with the widest possible scope; in our framework, this provides very concise semantic terms as compared with other representations. There is a natural correspondence between the possible worlds semantics and a constraint solver, and it is shown how such a semantics can be defined using the programming language of Constraint Handling Rules (Fruhwirth, 1995). Discourse analysis is clearly a process of abduction in this framework, and it is shown that the mentioned constraint solvers serve as effective and efficient abductive engines for the purpose.
---
paper_title: Chart Parsing And Constraint Programming
paper_content:
In this paper, parsing-as-deduction and constraint programming are brought together to outline a procedure for the specification of constraint-based chart parsers. Following the proposal in Shieber et al. (1995), we show how to directly realize the inference rules for deductive parsers as Constraint Handling Rules (Fruhwirth, 1998) by viewing the items of a chart parser as constraints and the constraint base as a chart. This allows the direct use of constraint resolution to parse sentences.
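As a rough illustration of the parsing-as-deduction idea (not the paper's CHR implementation), the Python sketch below treats chart items as a growing constraint store and applies a two-premise completion rule until a fixpoint is reached. The toy grammar, lexicon and function names are invented for the example.

from itertools import product

# Hypothetical toy grammar in Chomsky normal form: a rule "B C -> A" is stored as {(B, C): A}.
GRAMMAR = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

def parse(words):
    # An item (X, i, j) states that category X spans words[i:j]; lexical lookup seeds
    # the store, and the completion rule combines adjacent items, much like a
    # two-headed propagation rule adding a new constraint to the store.
    chart = {(LEXICON[w], i, i + 1) for i, w in enumerate(words)}
    changed = True
    while changed:                                     # naive fixpoint iteration
        changed = False
        for (b, i, k), (c, k2, j) in product(list(chart), repeat=2):
            if k == k2 and (b, c) in GRAMMAR:
                item = (GRAMMAR[(b, c)], i, j)
                if item not in chart:
                    chart.add(item)
                    changed = True
    return ("S", 0, len(words)) in chart

print(parse("the dog saw the cat".split()))            # True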
---
paper_title: A constraint parser for contextual rules
paper_content:
In this paper we describe a constraint analyser for contextual rules. Contextual rules constitute a rule-based formalism that allows rewriting of terminal and/or non-terminal sequences taking into account their context. The formalism also allows referring to portions of text by means of exclusion zones, that is, patterns that are only specified by a maximum length and a set of disallowed categories. The constraint approach proves particularly useful for this type of rule, as decisions can be taken under the hypothesis of non-existence of the excluded categories. If these categories are finally deduced, all other categories inferred upon the non-existence of the former ones are automatically eliminated. The parser has been implemented using Constraint Handling Rules. Some results with a set of rules oriented to the segmentation of texts in propositions are shown.
---
paper_title: CHR: A Flexible Query Language
paper_content:
We show how the language Constraint Handling Rules (CHR), a high-level logic language for the implementation of constraint solvers, can be slightly extended to become a general-purpose logic programming language with an expressive power subsuming the expressive power of Horn clause programs with SLD resolution. The extended language, called "CHR∀", retains however the extra features of CHR, e.g., committed choice and matching, which are important for other purposes, especially for efficiently solving constraints. CHR∀ turns out to be a very flexible query language in the sense that it supports several (constraint) logic programming paradigms and allows mixing them in a single program. In particular, it supports top-down query evaluation and also bottom-up evaluation as it is frequently used in (disjunctive) deductive databases.
---
paper_title: Constraint Based Methods for Biological Sequence Analysis
paper_content:
The need for processing biological information is rapidly growing, owing to the masses of new information in digital form being produced at this time. Old methodologies for processing it can no longer keep up with this rate of growth. The methods of Artificial Intelligence (AI) in general and of language processing in particular can offer much towards solving this problem. However, interdisciplinary research between language processing and molecular biology is not yet widespread, partly because of the effort needed for each specialist to understand the other one's jargon. We argue that by looking at the problems of molecular biology from a language processing perspective, and using constraint based logic methodologies, we can shorten the gap and make interdisciplinary collaborations more effective. We shall discuss several sequence analysis problems in terms of constraint based formalisms such as Concept Formation Rules, Constraint Handling Rules (CHR) and their grammatical counterpart, CHRG. We postulate that genetic structure analysis can also benefit from these methods, for instance to reconstruct, from a given RNA secondary structure, a nucleotide sequence that folds into it. Our proposed methodologies lend direct executability to high level descriptions of the problems at hand and thus contribute to rapid while efficient prototyping.
---
paper_title: An abductive treatment of long distance dependencies in CHR
paper_content:
We propose a CHR treatment for long distance dependencies which abduces the missing elements while relating the constituents that must be related. We discuss our ideas both from a classical analysis point of view, and from a property-grammar based perspective. We exemplify them through relative clauses first, and next through the more challenging case of natural language coordination. We present an abductive rule schema for conjunction from which appropriate rule instances materialize through the normal process of unification in a given sentence's parse, and which fills in (abduces) any information that may be missing in one conjunct through semantic and syntactic information found in the other. Thus semantic criteria, which in most previous approaches was understressed in favour of syntactic criteria, regains its due importance, within our encompassing while retaining economic abductive formulation.
---
paper_title: Extracting Selected Phrases through Constraint Satisfaction
paper_content:
We present in this paper a CHR based parsing methodology for parsing Property Grammars. This approach constitutes a flexible parsing technology in which the notions of derivation and hierarchy give way to the more flexible notion of constraint satisfaction between categories. It becomes then possible to describe the syntactic characteristics of a category in terms of satisfied and violated constraints.Different applications can take advantage of such flexibility, in particular in the case where information comes from part of the input and requires the identification of selected phrases such as NP, PP, etc. Our method presents two main advantages: first, there is no need to build an entire syntactic structure, only the selected phrases can be extracted. Moreover, such extraction can be done even from incomplete or erroneous text: indication of possible kinds of error or incompleteness can be given together with the proposed analysis for the phrases being sought.
---
paper_title: Logic Grammars for Diagnosis and Repair
paper_content:
We propose an abductive model based on Constraint Handling Rule Grammars (CHRGs) for detecting and correcting errors in problem domains that can be described in terms of strings of words accepted by a logic grammar. We provide a proof of concept for the specific problem of detecting and repairing natural language errors, in particular, those concerning feature agreement. Our methodology relies on grammar and string transformation in accordance with a user-defined dictionary of possible repairs. This transformation also serves as top-down guidance for our essentially bottom-up parser. With respect to previous approaches to error detection and repair, including those that also use constraints and/or abduction, our methodology is surprisingly simple while far-reaching and efficient.
---
paper_title: JmmSolve: A Generative Java Memory Model Implemented in Prolog and CHR
paper_content:
The memory model of a programming language specifies the interaction between multiple threads and main memory. Basically, the model says, for every value obtained by a read operation in a program, by what write operation it has been produced. In a multi-threaded unsynchronized program this need not be a deterministic linking from reads to writes. For a multi-platform language such as Java, a memory model is essential to guarantee portability of programs.
---
paper_title: Model-based testing for real
paper_content:
Model-based testing relies on abstract behavior models for test case generation. These models are abstractions, i.e., simplifications. For deterministic reactive systems, test cases are sequences of input and expected output. To bridge the different levels of abstraction, input must be concretized before being applied to the system under test. The system's output must then be abstracted before being compared to the output of the model. The concepts are discussed along the lines of a feasibility study, an in-house smart card case study. We describe the modeling concepts of the CASE tool AutoFocus and an approach to model-based test case generation that is based on symbolic execution with Constraint Logic Programming. Different search strategies and algorithms for test case generation are discussed. Besides validating the model itself, generated test cases were used to verify the actual hardware with respect to these traces.
---
paper_title: Using CHRs to generate functional test cases for the Java Card Virtual Machine
paper_content:
Automated functional testing consists in deriving test cases from the specification model of a program to detect faults within an implementation. In our work, we investigate using Constraint Handling Rules (CHRs) to automate the test cases generation process of functional testing. Our case study is a formal model of the Java Card Virtual Machine (JCVM) written in a sub-language of the Coq proof assistant. In this paper we define an automated translation from this formal model into CHRs and propose to generate test cases for each bytecode definition of the JCVM. The originality of our approach resides in the use of CHRs to faithfully model the formally specified operational semantics of the JCVM. The approach has been implemented in Eclipse Prolog and a full set of test cases have been generated for testing the JCVM.
---
paper_title: Security Policy Consistency
paper_content:
With the advent of wide security platforms able to express simultaneously all the policies comprising an organization's global security policy, the problem of inconsistencies within security policies becomes harder and more relevant. We have defined a tool based on the CHR language which is able to detect several types of inconsistencies within and between security policies and other specifications, namely workflow specifications. Although the problem of security conflicts has been addressed by several authors, to our knowledge none has addressed the general problem of security inconsistencies, in its several definitions and target specifications.
---
paper_title: Theory and Practice of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR.
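To make the rule format concrete, here is a minimal Python sketch of CHR-style multiset rewriting, using the gcd program that is a standard example in the CHR literature. It illustrates the operational idea (guarded rules rewriting a constraint store to a fixpoint) and is not a CHR system; the store representation and function name are assumptions made for the example.

def gcd_solver(store):
    # The store is a multiset of integers, one per gcd/1 constraint.
    store = list(store)
    while True:
        if 0 in store:                           # rule 1: gcd(0) <=> true
            store.remove(0)
            continue
        for i, n in enumerate(store):
            # rule 2: gcd(N) \ gcd(M) <=> M >= N | gcd(M - N)   (N > 0 holds: zeros were removed)
            partner = next((j for j, m in enumerate(store) if j != i and m >= n), None)
            if partner is not None:
                store[partner] -= n              # replace gcd(M) by gcd(M - N), keep gcd(N)
                break
        else:
            return store                         # no rule applies: final constraint store

print(gcd_solver([12, 18, 30]))                  # [6] -- the gcd of the initial constraints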
---
paper_title: Proving termination of CHR in Prolog : A transformational approach
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice, logic programming language. It is constraint-based and has guarded rules that rewrite multisets of atomic formulas [1]. Its simple syntax and semantics make it well-suited for implementing custom constraint solvers. Despite the amount of work directed to CHR, not much has been done on termination analysis. To the best of our knowledge, there were no attempts made to automate termination proofs. The first and, until now, only contribution to termination analysis of CHR is reported in [2] and shows termination of CHR programs under the theoretical semantics [1] of CHR. Termination is shown, using a ranking function, mapping sets of constraints to a well-founded order. A ranking condition, on the level of the rules, implies termination. Although termination conditions in CHR take a different form than in Logic Programs (LP) and Term-Rewrite Systems (TRS), [2] shows that achievements from the work on termination of LP and TRS are relevant and adaptable to the CHR context. In this paper, we present a termination-preserving transformation of CHR to Prolog. This allows the direct reuse of termination proof methods from LP and TRS for CHR, yielding the first fully automatic termination proving for CHR. We implemented the transformation and used existing termination tools for LP and TRS on a set of CHR programs to demonstrate the usefulness of our approach. In [3], we formalize the transformation and prove soundness w.r.t. termination.
---
paper_title: Adaptive Constraint Handling with CHR in Java
paper_content:
The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) is introduced in the imperative object-oriented programming language Java. The presented Java implementation consists of a compiler and a run-time system, all implemented in Java. The run-time system implements data structures like sparse bit vectors, logical variables and terms as well as an adaptive unification and an adaptive entailment algorithm. Approved technologies like attributed variables for constraint storage and retrieval as well as code generation for each head constraint are used. Also implemented are theoretically sound algorithms for adapting rule derivations and constraint stores after arbitrary constraint deletions. The presentation is rounded off with some novel applications of CHR in constraint processing: simulated annealing for the n-queens problem and intelligent backtracking for some SAT benchmark problems.
---
paper_title: Unfolding in CHR
paper_content:
Program transformation is an appealing technique which allows one to improve run-time efficiency and space consumption and, more generally, to optimize a given program. Essentially it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. One of the basic operations used by most program transformation systems is unfolding, which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages and, to the best of our knowledge, no one has considered unfolding of CHR programs. This paper is a first attempt to define a correct unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning.
---
paper_title: Abstract interpretation for constraint handling rules
paper_content:
Program analysis is essential for the optimized compilation of Constraint Handling Rules (CHRs) as well as the inference of behavioral properties such as confluence and termination. Up to now all program analyses for CHRs have been developed in an ad hoc fashion.In this work we bring the general program analysis methodology of abstract interpretation to CHRs: we formulate an abstract interpretation framework over the call-based operational semantics of CHRs. The abstract interpretation framework is non-obvious since it needs to handle the highly non-deterministic execution of CHRs. The use of the framework is illustrated with two instantiations: the CHR-specific late storage analysis and the more generally known groundness analysis. In addition, we discuss optimizations based on these analyses and present experimental results.
---
paper_title: On Incremental Adaptation of CHR Derivations
paper_content:
Constraint-solving in dynamic environments requires the immediate adaptation of solutions of constraint satisfaction problems if these problems are changing. After any change, an adapted solution is preferred which is stable, i.e., as close as possible to the original solution. A wide range of incremental constraint-solving methods for dynamic, especially finite-domain, constraint satisfaction problems (DCSPs) are known, which more or less satisfy this additional requirement. Adaptation of DCSPs after constraint additions is generally simple and successfully solved, but adaptation after arbitrary constraint deletions is not. How constraint handling rules (CHRs), a high level language extension for implementing constraint solvers, can be improved in this respect is investigated. A new incremental algorithm is presented, which adapts an arbitrary CHR derivation after eliminations of constraints in the processed DCSPs. Thus, the existing solvers - there are several dozens of them - for various kinds of const...
---
paper_title: Extending arbitrary solvers with constraint handling rules
paper_content:
Constraint Handling Rules (CHRs) are a high-level committed choice programming language commonly used to write constraint solvers. While the semantic basis of CHRs allows them to extend arbitrary underlying constraint solvers, in practice, all current implementations only extend Herbrand equation solvers. In this paper we show how to define CHR programs that extend arbitrary solvers and fully interact with them. In the process, we examine how to compile such programs to perform as little recomputation as possible, and describe how to build index structures for CHR constraints that are modified automatically when variables in the underlying solver change. We report on the implementation of these techniques in the HAL compiler, and give empirical results illustrating their benefits.
---
paper_title: Integration and Optimization of Rule-based Constraint Solvers
paper_content:
One lesson learned from practical constraint solving applications is that constraints are often heterogeneous. Solving such constraints requires a collaboration of constraint solvers. In this paper, we introduce a methodology for the tight integration of CHR constraint programs into one such program. CHR is a high-level rule-based language for writing constraint solvers and reasoning systems. A constraint solver is well-behaved if it is terminating and confluent. When merging constraint solvers, this property may be lost. Based on previous results on CHR program analysis and transformation we show how to utilize completion to regain well-behavedness. We identify a class of solvers whose union is always confluent and we show that for preserving termination such a class is hard to find. The merged and completed constraint solvers may contain redundant rules. Utilizing the notion of operational equivalence, which is decidable for well-behaved CHR programs, we present a method to detect redundant rules in a CHR program.
---
paper_title: A new approach to termination analysis of Constraint Handling Rules
paper_content:
Constraint Handling Rules (CHR) is a concurrent, committed-choice constraint programming language (see [2]). It is a rule-based language, in which multisets of atomic constraints are rewritten using guarded rules. It has a simple syntax and declarative semantics, and is very suitable for implementing constraint solvers. Although the language is strongly related to Logic Programming (LP) and to a lesser extent also to Term-Rewrite Systems (TRS), termination analysis of CHR has received little attention. To the best of our knowledge, the only contribution so far is reported in [3]. This study is limited to CHR programs with only one type of rules: the so-called 'simplification rules'. The work shows that, for this class of programs, termination analysis techniques developed for LP (see [1]) can be adapted to CHR. In this paper, we present a new approach to termination analysis of CHR which is applicable to a much larger class of CHR programs. We propose a new termination condition and show its applicability to CHR programs with rules that are not only of the simplification type. We have successfully tested the condition on a benchmark of programs, using a prototype analyser.
---
paper_title: Specialization of Concurrent Guarded Multi-Set Transformation Rules
paper_content:
Program transformation and in particular partial evaluation are appealing techniques for declarative programs to improve not only their performance. This paper presents the first step towards developing program transformation techniques for a concurrent constraint programming language where guarded rules rewrite and augment multi-sets of atomic formulae, called Constraint Handling Rules (CHR). We study the specialization of rules with regard to a given goal (query). We show the correctness of this program transformation: Adding and removing specialized rules in a program does not change the program's operational semantics. Furthermore termination and confluence of the program are shown to be preserved.
---
paper_title: A compositional semantics for CHR
paper_content:
Constraint Handling Rules (CHR) are a committed-choice declarative language which has been designed for writing constraint solvers. A CHR program consists of multi-headed guarded rules which allow one to rewrite constraints into simpler ones until a solved form is reached. CHR has received considerable attention, both from the practical and from the theoretical side. Nevertheless, due to the use of multi-headed clauses, there are several aspects of the CHR semantics which have not been clarified yet. In particular, no compositional semantics for CHR has been defined so far. In this paper we introduce a fix-point semantics which characterizes the input/output behavior of a CHR program and which is and-compositional, that is, which allows one to retrieve the semantics of a conjunctive query from the semantics of its components. Such a semantics can be used as a basis to define incremental and modular analysis and verification tools.
---
paper_title: Observable confluence for constraint handling rules
paper_content:
Constraint Handling Rules (CHR) are a powerful rule based language for specifying constraint solvers. Critical for any rule based language is the notion of confluence, and for terminating CHR programs there is a decidable test for confluence. But many CHR programs that are in practice confluent fail this confluence test. The problem is that the states that illustrate non-confluence are not observable from the initial goals of interest. In this paper we introduce the notion of observable confluence, a more general notion of confluence which takes into account whether states are observable. We devise a test for observable confluence which allows us to verify observable confluence for a range of CHR programs dealing with agents, type systems, and the union-find algorithm.
---
| Title: As time goes by: Constraint Handling Rules - A survey of CHR research from 1998 to 2007
Section 1: Introduction
Description 1: Provide an introduction to the survey, detailing the foundation and evolution of Constraint Handling Rules (CHR) over the specified period.
Section 2: Historical Overview
Description 2: Outline the development and important milestones of CHR research from its early stages to 2007.
Section 3: Constraint Handling Rules
Description 3: Introduce the syntax and informal semantics of CHR to make the survey self-contained for readers who may not be familiar with CHR.
Section 4: Semantics
Description 4: Discuss both the logical (declarative) semantics and the operational semantics of CHR, including classical, linear, and transaction logic semantics.
Section 5: Program Analysis
Description 5: Explore the important properties and analyses of CHR programs such as confluence, termination, and complexity.
Section 6: Systems and Implementation
Description 6: Provide an overview of the numerous CHR systems, their implementation, and the compilation techniques, including important optimizations to enhance performance.
Section 7: Extensions and Variants
Description 7: Discuss various extensions and variants of CHR that address identified weaknesses and limitations, including probabilistic CHR, adaptive CHR, and CHR with disjunctions.
Section 8: Relation to Other Formalisms
Description 8: Draw connections between CHR and other formalisms, comparing capabilities and commonalities with different logic and term rewriting systems, production rules, and graph-based formalisms.
Section 9: Applications
Description 9: Review significant applications of CHR across various domains such as constraint solvers, spatio-temporal reasoning, multi-agent systems, and programming language development.
Section 10: Conclusions
Description 10: Summarize the survey findings, assess the coverage of CHR research, reflect on past research topics from earlier surveys, and identify grand challenges for future CHR research. |
A Review on Biometric Recognition | 7 | ---
paper_title: A feature-based approach to automatic injection mold generation
paper_content:
This paper presents a feature-based approach to the automatic generation of an injection mold. In the approach, all the molding features of a molded part are recognized first using a universal hint-based feature recognition algorithm, and the optimal parting direction is determined based on the feature model. Then, all the faces related with the parting line are classified into three groups according to the determined parting direction, and two kinds of parting lines are automatically determined by extracting the largest profile of each face group. Finally, the parting surface is generated based on the optimal parting line, thereafter the mold core and mold cavity are automatically set up by splitting the mold box of the molded part with the generated parting surface. By using recognized molding features, the approach makes it possible to identify all undercuts, to determine internal parting lines and internal parting surfaces, and to generate the mold core and cavity of a complex molded part with undercuts as well.
---
paper_title: Evaluation of multiclass support vector machine classifiers using optimum threshold-based pruning technique
paper_content:
Support vector machine (SVM) is the state-of-the-art classifier used in real world pattern recognition applications. One of the design objectives of SVM classifiers using non-linear kernels is reducing the number of support vectors without compromising the classification accuracy. To meet this objective, decision-tree approach and pruning techniques are proposed in the literature. In this study, optimum threshold (OT)-based pruning technique is applied to different decision-tree-based SVM classifiers and their performances are compared. In order to assess the performance, SVM-based isolated digit recognition system is implemented. The performances are evaluated by conducting various experiments using speaker-dependent and multispeaker-dependent TI46 database of isolated digits. Based on this study, it is found that the application of OT technique reduces the minimum time required for recognition by a factor of 1.54 and 1.31, respectively, for speaker-dependent and multispeaker-dependent cases. The proposed approach is also applicable for other SVM-based multiclass pattern recognition systems such as target recognition, fingerprint classification, character recognition and face recognition.
---
paper_title: Local Feature Analysis: A general statistical theory for object representation
paper_content:
Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal component analysis (PCA) has been used in the past to derive practically useful compact representations for different classes of objects. One major objection to the applicability of PCA is that it invariably leads to global, non-topographic representations that are not amenable to further processing and are not biologically plausible. In this paper we present a new mathematical construction, local feature analysis (LFA), for deriving local topographic representations for any class of objects. The LFA representations are sparse-distributed and, hence, are effectively low-dimensional and retain all the advantages of the compact representations of the PCA. But, unlike the global eigenmodes, they give a description of objects in terms of statistically derived local features and their positions. We illustrate the theory by using it to extract local features for th...
---
paper_title: Generalized Discriminant Analysis Using a Kernel Approach
paper_content:
We present a new method that we call generalized discriminant analysis (GDA) to deal with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to the support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize the classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as an eigenvalue problem resolution. Using a different kernel, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results, as well as the shape of the decision function. The results are confirmed using real data to perform seed classification.
---
paper_title: Biometrics Verification: a Literature Survey
paper_content:
Biometric verification refers to an automatic verification of a person based on some specific biometric features derived from his/her physiological and/or behavioral characteristics. A biometric verification system has more capability to reliably distinguish between an authorized person and an imposter than the traditional systems that use a card or a password. In biometrics, a person could be recognized based on who he/she is rather than what he/she has (ID card) or what he/she knows (password). Currently, biometrics finds use in ATMs, computers, security installations, mobile phones, credit cards, health and social services. The future in biometrics seems to belong to the multimodal biometrics (a biometric system using more than one biometric feature) as a unimodal biometric system (biometric system using single biometric feature) has to contend with a number of problems. In this paper, a survey of some of the unimodal biometrics will be presented that are either currently in use across a range of environments or those still in limited use or under development, or still in the research realm.
---
paper_title: Face Recognition : A Convolutional Neural Network Approach
paper_content:
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
---
paper_title: Eigenfaces for Recognition
paper_content:
We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
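A minimal sketch of the eigenface pipeline described above, written with NumPy; the array shapes, the number of components k and the nearest-neighbour decision are illustrative assumptions rather than the exact procedure of the paper.

import numpy as np

def fit_eigenfaces(train, k=20):
    # train: (n_images, n_pixels) matrix of flattened, aligned face images (assumed given).
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data yields the principal components ("eigenfaces") in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                        # (k, n_pixels)
    weights = centered @ eigenfaces.T          # each training face as k projection weights
    return mean, eigenfaces, weights

def identify(probe, mean, eigenfaces, weights, labels):
    w = (probe - mean) @ eigenfaces.T          # project the probe face into face space
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(dists))]       # nearest stored face by weight distance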
---
paper_title: A Human Identification Technique Using Images of the Iris and Wavelet Transform
paper_content:
A new approach for recognizing the iris of the human eye is presented. Zero-crossings of the wavelet transform at various resolution levels are calculated over concentric circles on the iris, and the resulting one-dimensional (1-D) signals are compared with model features using different dissimilarity functions.
---
paper_title: Image understanding for iris biometrics : A survey
paper_content:
This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.
---
paper_title: Use of one-dimensional iris signatures to rank iris pattern similarities
paper_content:
A one-dimensional approach to iris recognition is presented. It is translation-, rotation-, illumination-, and scale-invariant. Traditional iris recognition systems typically use a two-dimensional iris signature that requires circular rotation for pattern matching. The new approach uses the Du measure as a matching mechanism, and generates a set of the most probable matches (ranks) instead of only the best match. Since the method generates one-dimensional signatures that are rotation-invariant, the system could work with eyes that are tilted. Moreover, the system will work with less of the iris than commercial systems, and thus could enable partial-iris recognition. In addition, this system is more tolerant of noise. Finally, this method is simple to implement, and its computational complexity is relatively low.
---
paper_title: Iris recognition: an emerging biometric technology
paper_content:
In this paper, iris recognition, one of the important methods for biometrics-based identification, and an iris recognition algorithm are described. As technology advances, information and intellectual property are increasingly targeted by unauthorized personnel, so many organizations have been searching for more secure user-authentication methods. In network security there is a vital emphasis on automatic personal identification. Due to its inherent advantages, biometric verification, and iris identification in particular, is gaining a lot of attention. Iris recognition uses iris patterns for personal identification. The system steps are: capturing iris patterns; determining the location of the iris boundaries; converting the iris boundary to the stretched polar coordinate system; and extracting the iris code based on texture analysis. The system has been implemented and tested using a dataset of iris images of varying contrast quality. The developed algorithm performs satisfactorily on these images and provides 93% accuracy. Experimental results show that the proposed method has an encouraging performance.
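The matching stage of such a pipeline is commonly a normalized Hamming distance between binary iris codes, with circular shifts to absorb eye rotation. The sketch below assumes the codes and occlusion masks have already been extracted as 2-D boolean arrays (radial rows, angular columns); the shift range and the 0.32 acceptance threshold are illustrative values, not taken from this paper.

import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b                    # ignore bits hidden by eyelids/eyelashes
    if valid.sum() == 0:
        return 1.0
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

def match(code_a, code_b, mask_a, mask_b, max_shift=8, threshold=0.32):
    # Try a few circular shifts along the angular axis to compensate for eye rotation.
    best = min(
        hamming_distance(np.roll(code_a, s, axis=1), code_b,
                         np.roll(mask_a, s, axis=1), mask_b)
        for s in range(-max_shift, max_shift + 1)
    )
    return best, best < threshold              # accept if the best distance is small enough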
---
paper_title: Improving Iris Recognition Accuracy via Cascaded Classifiers
paper_content:
As a reliable approach to human identification, iris recognition has received increasing attention in recent years. The most distinguishing feature of an iris image comes from the fine spatial changes of the image structure. So iris pattern representation must characterize the local intensity variations in iris signals. However, the measurements from minutiae are easily affected by noise, such as occlusions by eyelids and eyelashes, iris localization error, nonlinear iris deformations, etc. This greatly limits the accuracy of iris recognition systems. In this paper, an elastic iris blob matching algorithm is proposed to overcome the limitations of local feature based classifiers (LFC). In addition, in order to recognize various iris images efficiently a novel cascading scheme is proposed to combine the LFC and an iris blob matcher. When the LFC is uncertain of its decision, poor quality iris images are usually involved in intra-class comparison. Then the iris blob matcher is resorted to determine the input iris' identity because it is capable of recognizing noisy images. Extensive experimental results demonstrate that the cascaded classifiers significantly improve the system's accuracy with negligible extra computational cost.
---
paper_title: Iris Recognition System Using Fractal Dimensions of Haar Patterns
paper_content:
Classification of iris templates based on their texture patterns is one of the most effective methods in iris recognition systems. In this paper, a novel algorithm for automatic iris classification based on fractal dimensions of Haar wavelet transforms is presented. Fractal dimensions obtained from multiple-scale features are used to characterize the textures completely. The Haar wavelet is applied in order to extract the multiple-scale features at different resolutions from the iris image. Fractal dimensions are estimated from these patterns and a classifier is used to recognize the given image from a database. A performance comparison was made among different classifiers.
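A rough sketch of the two ingredients named above: a one-level Haar detail subband and a box-counting estimate of its fractal dimension. The binarization rule and box sizes are assumptions made for the example; the paper's exact multi-scale feature set may differ.

import numpy as np

def haar_hh(img):
    # Diagonal (HH) detail of a one-level 2-D Haar transform, computed on 2x2 blocks.
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a - b - c + d) / 2.0

def box_counting_dimension(pattern):
    # Binarize the detail pattern, then count occupied boxes at dyadic box sizes.
    binary = np.abs(pattern) > np.abs(pattern).mean()
    sizes, counts = [], []
    s = min(binary.shape) // 2
    while s >= 1:
        occupied = sum(
            binary[i:i + s, j:j + s].any()
            for i in range(0, binary.shape[0], s)
            for j in range(0, binary.shape[1], s)
        )
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    # Slope of log(count) against log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope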
---
paper_title: Iris verification using correlation filters
paper_content:
Iris patterns are believed to be an important class of biometrics suitable for subject verification and identification applications. Earlier methods proposed for iris recognition were based on generating iris codes from features generated by applying Gabor wavelet processing to iris images. Another approach to image recognition is the use of correlation filters. Correlation filter methods differ from many image-based recognition approaches in that two-dimensional Fourier transforms of the images are used in this approach. In correlation filter methods, normal variations in an authentic iris image can be accommodated by designing a frequency-domain array (called a correlation filter) that captures the consistent part of iris images while deemphasizing the varying parts. Correlation filters also offer other benefits such as shift-invariance, graceful degradation and closed-form solutions. In this paper, we discuss the basics of correlation filters and show how they can be used for iris verification.
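The sketch below illustrates the frequency-domain flavour of correlation-filter matching: a filter built from training spectra and a peak-to-sidelobe ratio on the correlation plane. The simple averaged-spectrum filter is a stand-in, not one of the filter designs used in the paper, and the exclusion window size is an arbitrary choice.

import numpy as np

def design_filter(train_images):
    # Average of the conjugate spectra of the training images (illustrative filter only).
    spectra = [np.fft.fft2(im.astype(float)) for im in train_images]
    return np.conj(np.mean(spectra, axis=0))

def peak_to_sidelobe_ratio(test_image, filt, exclude=5):
    corr = np.real(np.fft.ifft2(np.fft.fft2(test_image.astype(float)) * filt))
    corr = np.fft.fftshift(corr)
    peak = corr.max()
    py, px = np.unravel_index(corr.argmax(), corr.shape)
    sidelobe = corr.copy()
    # Mask out the region around the peak, then compare the peak to the sidelobe statistics.
    sidelobe[max(py - exclude, 0):py + exclude, max(px - exclude, 0):px + exclude] = np.nan
    mu, sigma = np.nanmean(sidelobe), np.nanstd(sidelobe)
    return (peak - mu) / (sigma + 1e-12)       # high PSR suggests authentic, low suggests impostor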
---
paper_title: Localized Iris Image Quality Using 2-D Wavelets
paper_content:
The performance of an iris recognition system can be undermined by poor quality images and result in high false reject rates (FRR) and failure to enroll (FTE) rates. In this paper, a wavelet-based quality measure for iris images is proposed. The merit of the this approach lies in its ability to deliver good spatial adaptivity and determine local quality measures for different regions of an iris image. Our experiments demonstrate that the proposed quality index can reliably predict the matching performance of an iris recognition system. By incorporating local quality measures in the matching algorithm, we also observe a relative matching performance improvement of about 20% and 10% at the equal error rate (EER), respectively, on the CASIA and WVU iris databases.
---
paper_title: Text-Independent Writer Identification Based on Fusion of Dynamic and Static Features
paper_content:
Handwriting recognition is a traditional and natural approach for personal authentication. Compared to signature verification, text-independent writer identification has gained more attention in recent years for its advantage of denying imposters. Dynamic features and static features of the handwriting are usually adopted for writer identification separately. For text-independent writer identification, by using a single classifier with the dynamic or the static feature, the accuracy is low, and many characters are required (more than 150 characters on average). In this paper, we developed a writer identification method that combines the matching results of two classifiers which employ the static feature (texture) and dynamic features individually. Sum-Rule, Common Weighted Sum-Rule and User-specific Sum-Rule are applied as the fusion strategy. In particular, we improved the user-specific Sum-Rule algorithm by using an error-score. Experiments were conducted on the NLPR handwriting database involving 55 persons. The results show that the combination methods can improve the identification accuracy and reduce the number of characters required.
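For concreteness, a small sketch of score-level fusion rules of the kind discussed above, assuming each classifier already outputs scores normalized to [0, 1]; the user-specific weighting shown is a generic inverse-error variant, not the paper's exact error-score scheme.

def sum_rule(static_score, dynamic_score):
    # Plain Sum-Rule: average the two matchers' normalized scores.
    return (static_score + dynamic_score) / 2.0

def weighted_sum_rule(static_score, dynamic_score, w_static):
    # Common Weighted Sum-Rule with a single global weight for the static matcher.
    return w_static * static_score + (1.0 - w_static) * dynamic_score

def user_specific_weight(static_error_rate, dynamic_error_rate):
    # Generic user-specific weighting: trust each matcher in inverse proportion to its
    # per-user error rate (the paper's error-score refinement is not reproduced here).
    inv_static = 1.0 / (static_error_rate + 1e-6)
    inv_dynamic = 1.0 / (dynamic_error_rate + 1e-6)
    return inv_static / (inv_static + inv_dynamic)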
---
paper_title: ECG to identify individuals
paper_content:
The electrocardiogram (ECG also called EKG) trace expresses cardiac features that are unique to an individual. The ECG processing followed a logical series of experiments with quantifiable metrics. Data filters were designed based upon the observed noise sources. Fiducial points were identified on the filtered data and extracted digitally for each heartbeat. From the fiducial points, stable features were computed that characterize the uniqueness of an individual. The tests show that the extracted features are independent of sensor location, invariant to the individual's state of anxiety, and unique to an individual.
---
paper_title: ECG analysis: a new approach in human identification
paper_content:
A new approach in human identification is investigated. For this purpose, a standard 12-lead electrocardiogram (ECG) recorded during rest is used. Selected features extracted from the ECG are used to identify a person in a predetermined group. Multivariate analysis is used for the identification task. Experiments show that it is possible to identify a person by features extracted from one lead only. Hence, only three electrodes have to be attached on the person to be identified. This makes the method applicable without too much effort.
---
paper_title: ECG Biometric Recognition Without Fiducial Detection
paper_content:
Security concerns increase as the technology for falsification advances. There is strong evidence that a difficult-to-falsify biometric, the human heartbeat, can be used for identity recognition. Existing approaches address the problem by using electrocardiogram (ECG) data and the fiducials of the different parts of the heartbeat. However, the current fiducial detection tools are inadequate for this application since the boundaries of waveforms are difficult to detect, locate and define. In this paper, an ECG biometric recognition method that does not require any waveform detection is introduced, based on classification of coefficients from the discrete cosine transform (DCT) of the autocorrelation (AC) sequence of ECG data segments. Low false negative rates, low false positive rates and a 100% subject recognition rate for healthy subjects can be achieved for parameters that are suitable for the database.
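A sketch of the AC/DCT idea, assuming a fixed-length ECG window is given: take the autocorrelation of the mean-removed window, then a DCT, and keep the first coefficients as an alignment-free feature vector. The window length, number of lags and number of coefficients are illustrative choices.

import numpy as np
from scipy.fft import dct

def ac_dct_features(ecg_window, n_lags=100, n_coeffs=20):
    x = ecg_window - ecg_window.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # keep non-negative lags only
    ac = ac[:n_lags] / (ac[0] + 1e-12)                  # normalize by the zero-lag value
    return dct(ac, norm="ortho")[:n_coeffs]             # compact, fiducial-free feature vector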
---
paper_title: ECG signal based human identification method using features in temporal and wavelet domains
paper_content:
This paper presents an effective method for human identification using temporal and wavelet domain features extracted from electrocardiogram (ECG) signal. Instead of directly using the ECG data of a person as the feature, first, it is shown that a few number of reflection coefficients extracted from the autocorrelation function of the data can efficiently perform the recognition task. Next, the discrete wavelet transform (DWT) coefficients are utilized as features, which offer even a better recognition performance. A combination of these two features is found to demonstrate a high within class compactness and between class separation. A pre-processing scheme is incorporated prior to feature extraction to reduce the effect of different noises and artefacts. In the recognition phase, a linear discriminant based classifier is employed, where the two features are jointly utilized. The proposed human identification method has been tested on a standard ECG database and high recognition accuracy is achieved with a low feature dimension.
---
paper_title: User-Customized Password Speaker Verification Using Multiple Reference and Background Models
paper_content:
This paper discusses and optimizes an HMM/GMM based User-Customized Password Speaker Verification (UCP-SV) system. Unlike text-dependent speaker verification, in UCP-SV systems, customers can choose their own passwords with no lexical constraints. The password has to be pronounced a few times during the enrollment step to create a customer dependent model. Although potentially more "user-friendly", such systems are less understood and actually exhibit several practical issues, including automatic HMM inference, speaker adaptation, and efficient likelihood normalization. In our case, HMM inference (HMM topology) is performed using hybrid HMM/MLP systems, while the parameters of the inferred model, as well as their adaptation, will use GMMs. However, the evaluation of a UCP-SV baseline system shows that the background model used for likelihood normalization is the main difficulty. Therefore, to circumvent this problem, the main contribution of the paper is to investigate the use of multiple reference models for customer acoustic modeling and multiple background models for likelihood normalization. In this framework, several scoring techniques are investigated, such as Dynamic Model Selection (DMS) and fusion techniques. Results on two different experimental protocols show that an appropriate selection criterion for customer and background models can improve significantly the UCP-SV performance, making the UCP-SV system quite competitive with a text-dependent SV system. Finally, as customers' passwords are short, a comparative experiment using the conventional GMM-UBM text-independent approach is also conducted.
---
paper_title: Text-Dependent Speaker Recognition
paper_content:
Text-dependent speaker recognition characterizes a speaker recognition task, such as verification or identification, in which the set of words (or lexicon) used during the testing phase is a subset of the ones present during the enrollment phase. The restricted lexicon enables very short enrollment (or registration) and testing sessions to deliver an accurate solution but, at the same time, represents scientific and technical challenges. Because of the short enrollment and testing sessions, text-dependent speaker recognition technology is particularly well suited for deployment in large-scale commercial applications. These are the bases for presenting an overview of the state of the art in text-dependent speaker recognition as well as emerging research avenues. In this chapter, we will demonstrate the intrinsic dependence that the lexical content of the password phrase has on the accuracy. Several research results will be presented and analyzed to show key techniques used in text-dependent speaker recognition systems from different sites. Among these, we mention multichannel speaker model synthesis and continuous adaptation of speaker models with threshold tracking. Since text-dependent speaker recognition is the most widely used voice biometric in commercial deployments, several
---
paper_title: A new approach to utterance verification based on neighborhood information in model space
paper_content:
We propose to use neighborhood information in model space to perform utterance verification (UV). First, we present a nested-neighborhood structure for each underlying model in model space and assume the underlying model's competing models sit in one of these neighborhoods, which is used to model the alternative hypothesis in UV. Bayes factors (BF) are first introduced to UV and used as a major tool to calculate confidence measures based on the above idea. Experimental results in the Bell Labs communicator system show that the new method dramatically improves verification performance when verifying correct words against mis-recognized words in the recognizer's output, with a relative reduction of more than 20% in equal error rate (EER) compared with the standard approach based on likelihood ratio testing and anti-models.
---
paper_title: A Tutorial on Principal Component Analysis
paper_content:
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
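As a companion to the tutorial, here is a minimal sketch of PCA by eigendecomposition of the covariance matrix, assuming NumPy; the variable names and the toy data are illustrative only.

import numpy as np

def pca(X, n_components):
    """Project rows of X (n_samples, n_features) onto the top principal components."""
    X_centered = X - X.mean(axis=0)                    # remove the mean
    cov = np.cov(X_centered, rowvar=False)             # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]   # pick the largest ones
    components = eigvecs[:, order]
    return X_centered @ components, eigvals[order]

scores, explained = pca(np.random.default_rng(0).normal(size=(100, 5)), 2)
print(scores.shape, explained)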
---
paper_title: Palmprint identification using feature-level fusion
paper_content:
In this paper, we propose a feature-level fusion approach for improving the efficiency of palmprint identification. Multiple elliptical Gabor filters with different orientations are employed to extract the phase information on a palmprint image, which is then merged according to a fusion rule to produce a single feature called the Fusion Code. The similarity of two Fusion Codes is measured by their normalized Hamming distance. A dynamic threshold is used for the final decisions. A database containing 9599 palmprint images from 488 different palms is used to validate the performance of the proposed method. Compared with our previous non-fusion approach, the proposed method yields improvements in both verification and identification.
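The normalized Hamming distance used to compare two codes can be sketched generically as follows (assuming NumPy and optional validity masks; the Gabor-phase Fusion Code extraction itself is not reproduced here).

import numpy as np

def normalized_hamming(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of differing bits over the bits that are valid in both codes."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones(code_a.shape, bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

a = np.array([1, 0, 1, 1, 0, 1], dtype=bool)
b = np.array([1, 1, 1, 0, 0, 1], dtype=bool)
print(normalized_hamming(a, b))   # 2 differing bits out of 6 -> 0.333...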
---
paper_title: Palmprint Matching Using LBP
paper_content:
Automatic person verification has become a crucial problem in networked society. Providing authorized access to a person is a challenging task. To resolve all these issues, biometrics has emerged as an important and effective solution. Although many biometrics have been proposed, public acceptability and privacy concerns remain a block to implementation in public services. Another concern is computational overhead in real-time scenarios. This paper discusses a new method and its related analysis for palm print verification using local binary patterns (LBP) to capture the palm print texture. Experimental results demonstrate that this technique is simple, highly accurate and takes less time to process the palm print image.
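A minimal sketch of the basic 3x3 LBP operator used to capture palm print texture is shown below, assuming NumPy: each interior pixel is compared against its eight neighbours, encoded as one byte, and the codes are pooled into a histogram descriptor.

import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit neighbourhood code."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes, usable as a texture descriptor."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

print(lbp_histogram(np.random.default_rng(0).integers(0, 256, (64, 64))).shape)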
---
paper_title: Biometrics Verification: a Literature Survey
paper_content:
Biometric verification refers to an automatic verification of a person based on some specific biometric features derived from his/her physiological and/or behavioral characteristics. A biometric verification system has more capability to reliably distinguish between an authorized person and an imposter than the traditional systems that use a card or a password. In biometrics, a person could be recognized based on who he/she is rather than what he/she has (ID card) or what he/she knows (password). Currently, biometrics finds use in ATMs, computers, security installations, mobile phones, credit cards, health and social services. The future in biometrics seems to belong to the multimodal biometrics (a biometric system using more than one biometric feature) as a unimodal biometric system (biometric system using single biometric feature) has to contend with a number of problems. In this paper, a survey of some of the unimodal biometrics will be presented that are either currently in use across a range of environments or those still in limited use or under development, or still in the research realm.
---
paper_title: Competitive coding scheme for palmprint verification
paper_content:
There is increasing interest in the development of reliable, rapid and non-intrusive security control systems. Among the many approaches, biometrics such as palmprints provide highly effective automatic mechanisms for use in personal identification. This paper presents a new method for extracting features from palmprints using the competitive coding scheme and angular matching. The competitive coding scheme uses multiple 2-D Gabor filters to extract orientation information from palm lines. This information is then stored in a feature vector called the competitive code. The angular matching with an effective implementation is then defined for comparing the proposed codes, which can make over 9,000 comparisons within 1s. In our testing database of 7,752 palmprint samples from 386 palms, we can achieve a high genuine acceptance rate of 98.4% and a low false acceptance rate of 3/spl times/10/sup -6/%. The execution time for the whole process of verification, including preprocessing, feature extraction and final matching, is 1s.
---
paper_title: Palm vein recognition with Local Binary Patterns and Local Derivative Patterns
paper_content:
Palm vein feature extraction from near infrared images is a challenging problem in hand pattern recognition. In this paper, a promising new approach based on local texture patterns is proposed. First, operators and histograms of multi-scale Local Binary Patterns (LBPs) are investigated in order to identify new efficient descriptors for palm vein patterns. Novel higher-order local pattern descriptors based on Local Derivative Pattern (LDP) histograms are then investigated for palm vein description. Both feature extraction methods are compared and evaluated in the framework of verification and identification tasks. Extensive experiments on CASIA Multi-Spectral Palmprint Image Database V1.0 (CASIA database) identify the LBP and LDP descriptors which are better adapted to palm vein texture. Tests on the CASIA datasets also show that the best adapted LDP descriptors consistently outperform their LBP counterparts in both palm vein verification and identification.
---
| Title: A Review on Biometric Recognition
Section 1: Introduction
Description 1: Introduce the concept of biometric identification, its importance over traditional methods, and how a biometric system operates.
Section 2: Face Recognition
Description 2: Discuss various methods and algorithms used for facial recognition, their advancements over time, and their respective advantages and disadvantages.
Section 3: Iris Recognition
Description 3: Explain the process of iris recognition and review various techniques and historical developments in this field.
Section 4: ECG Recognition System
Description 4: Describe the use of electrocardiogram (ECG) signals in biometric recognition, including feature extraction methods and classification techniques.
Section 5: Voice Recognition
Description 5: Cover the principles of voice recognition, various system types (isolated word, connected word, continuous speech), and different feature extraction and classification techniques.
Section 6: Palm Recognition
Description 6: Explain palm recognition techniques, highlight different approaches for palm verification, and discuss the challenges and advancements in palm print and palm vein recognition.
Section 7: Conclusion
Description 7: Summarize the importance of various biometric recognition systems and discuss their applications in enhancing security and identification processes. |
Ubiquitous HealthCare in Wireless Body Area Networks - A Survey | 12 | ---
paper_title: Body Area Networks for Ubiquitous Healthcare Applications: Opportunities and Challenges
paper_content:
Body Area Networks integrated into mHealth systems are becoming a mature technology with unprecedented opportunities for personalized health monitoring and management. Potential applications include early detection of abnormal conditions, supervised rehabilitation, and wellness management. Such integrated mHealth systems can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Automatic integration of collected information and user's inputs into research databases can provide medical community with opportunity to search for personalized trends and group patterns, allowing insights into disease evolution, the rehabilitation process, and the effects of drug therapy. A new generation of personalized monitoring systems will allow users to customize their systems and user interfaces and to interact with their social networks. With emergence of first commercial body area network systems, a number of system design issues are still to be resolved, such as seamless integration of information and ad-hoc interaction with ambient sensors and other networks, to enable their wider acceptance. In this paper we present state of technology, discuss promising new trends, opportunities and challenges of body area networks for ubiquitous health monitoring applications.
---
paper_title: Ubiquitous Mobile Health Monitoring System for Elderly (UMHMSE)
paper_content:
Recent research in ubiquitous computing uses technologies of Body Area Networks (BANs) to monitor the person's kinematics and physiological parameters. In this paper we propose a real-time mobile health system for monitoring elderly patients from indoor or outdoor environments. The system uses a bio-signal sensor worn by the patient and a Smartphone as a central node. The sensor data is collected and transmitted to the intelligent server through GPRS/UMTS to be analyzed. The prototype (UMHMSE) monitors the elderly person's mobility, location and vital signs such as SpO2 and Heart Rate. Remote users (family and medical personnel) might have real-time access to the collected information through a web application.
---
paper_title: A Distributed Algorithm for Coverage Management in Wireless Sensor Networks
paper_content:
Lifetime management of sensors is one of the most critical concepts in target coverage for all kinds of wireless sensor networks. One of the most popular methods for increasing a network's lifetime is dividing the sensors of the network into temporary subsets, each of which can independently cover all the targets while keeping connectivity with the sink node. A sensor network can then use an activated subset of its sensors to cover its targets while letting the other sensors sleep or stand by. Moreover, a network's lifetime can be efficiently managed by controlling its coverage. In this paper, an improved distributed algorithm is proposed for the problem of connected partial target coverage. Our experimental results demonstrate that the lifetime of a wireless sensor network is significantly increased when the proposed method is used.
---
paper_title: DESIGN AND EVALUATION OF NEW INTELLIGENT SENSOR PLACEMENT ALGORITHM TO IMPROVE COVERAGE PROBLEM IN WIRELESS SENSOR NETWORKS
paper_content:
Adequate coverage is one of the main problems for sensor networks. The effectiveness of distributed wireless sensor networks highly depends on the sensor deployment scheme. Given a finite number of sensors, optimizing the sensor deployment will provide sufficient sensor coverage and save the cost of placing sensors at grid points. In many working environments, achieving good coverage requires the ability to place sensors in adequate locations. In this article we apply simulated annealing, genetic algorithms and learning automata as intelligent methods for solving sensor placement in distributed sensor networks. Sensor placement is an NP-complete problem for arbitrary sensor fields and one of the most important issues in this research field, so the proposed algorithm addresses it by considering two factors: complete coverage and minimum cost. The proposed method is implemented in C and examined in different areas. The results not only confirm the success of the new method for sensor placement, but also show that it is more efficient in large areas compared to other methods such as PBIL.
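To make the optimization idea concrete, here is a minimal simulated-annealing sketch for placing k sensors on a grid so as to maximize the number of covered targets. The grid size, sensing radius and cooling schedule are illustrative assumptions, not the paper's exact formulation.

import math, random

def covered(targets, sensors, radius):
    """Number of targets within sensing radius of at least one sensor."""
    return sum(any(math.dist(t, s) <= radius for s in sensors) for t in targets)

def anneal_placement(targets, k, grid, radius, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    sensors = rng.sample(cells, k)
    best, best_cov = list(sensors), covered(targets, sensors, radius)
    cov = best_cov
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9          # linear cooling
        cand = list(sensors)
        cand[rng.randrange(k)] = rng.choice(cells)      # move one sensor
        cand_cov = covered(targets, cand, radius)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cand_cov >= cov or rng.random() < math.exp((cand_cov - cov) / temp):
            sensors, cov = cand, cand_cov
            if cov > best_cov:
                best, best_cov = list(sensors), cov
    return best, best_cov

rng = random.Random(1)
targets = [(rng.uniform(0, 20), rng.uniform(0, 20)) for _ in range(50)]
print(anneal_placement(targets, k=6, grid=20, radius=4)[1], "targets covered")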
---
paper_title: Body Area Networks for Ubiquitous Healthcare Applications: Opportunities and Challenges
paper_content:
Body Area Networks integrated into mHealth systems are becoming a mature technology with unprecedented opportunities for personalized health monitoring and management. Potential applications include early detection of abnormal conditions, supervised rehabilitation, and wellness management. Such integrated mHealth systems can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Automatic integration of collected information and user's inputs into research databases can provide medical community with opportunity to search for personalized trends and group patterns, allowing insights into disease evolution, the rehabilitation process, and the effects of drug therapy. A new generation of personalized monitoring systems will allow users to customize their systems and user interfaces and to interact with their social networks. With emergence of first commercial body area network systems, a number of system design issues are still to be resolved, such as seamless integration of information and ad-hoc interaction with ambient sensors and other networks, to enable their wider acceptance. In this paper we present state of technology, discuss promising new trends, opportunities and challenges of body area networks for ubiquitous health monitoring applications.
---
paper_title: Body Area Networks for Ubiquitous Healthcare Applications: Opportunities and Challenges
paper_content:
Body Area Networks integrated into mHealth systems are becoming a mature technology with unprecedented opportunities for personalized health monitoring and management. Potential applications include early detection of abnormal conditions, supervised rehabilitation, and wellness management. Such integrated mHealth systems can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Automatic integration of collected information and user's inputs into research databases can provide medical community with opportunity to search for personalized trends and group patterns, allowing insights into disease evolution, the rehabilitation process, and the effects of drug therapy. A new generation of personalized monitoring systems will allow users to customize their systems and user interfaces and to interact with their social networks. With emergence of first commercial body area network systems, a number of system design issues are still to be resolved, such as seamless integration of information and ad-hoc interaction with ambient sensors and other networks, to enable their wider acceptance. In this paper we present state of technology, discuss promising new trends, opportunities and challenges of body area networks for ubiquitous health monitoring applications.
---
paper_title: Ubiquitous Mobile Health Monitoring System for Elderly (UMHMSE)
paper_content:
Recent research in ubiquitous computing uses technologies of Body Area Networks (BANs) to monitor the person's kinematics and physiological parameters. In this paper we propose a real-time mobile health system for monitoring elderly patients from indoor or outdoor environments. The system uses a bio-signal sensor worn by the patient and a Smartphone as a central node. The sensor data is collected and transmitted to the intelligent server through GPRS/UMTS to be analyzed. The prototype (UMHMSE) monitors the elderly person's mobility, location and vital signs such as SpO2 and Heart Rate. Remote users (family and medical personnel) might have real-time access to the collected information through a web application.
---
paper_title: P-rake receivers in different measured WBAN hospital channels
paper_content:
In wireless applications, power consumption has been, and will be, one of the important characteristics when designing any wireless device. This is the case especially in sensor networks where a single sensor may be functioning, hopefully for very long time, without external power source. Generally, the architecture complexity reduces the battery life, but the performance increases with complexity. The best performance is achieved with the most complex devices which, however, consume a lot of power. Rake receivers can offer a good tradeoff between complexity and performance. In the near future, due to the aging of population, personal medical applications are most likely increasing in number and gaining more attention in industry. This paper presents simulation results for IEEE 802.15.4a ultra wideband (UWB) rake receivers in measured hospital channel. Oulu University Hospital in Oulu, Finland, was the location of the wireless body area network (WBAN) channel model measurements.
---
| Title: Ubiquitous HealthCare in Wireless Body Area Networks - A Survey
Section 1: INTRODUCTION
Description 1: This section introduces the need for a comprehensive healthcare system for elderly people and the role of Wireless Body Area Networks (WBAN) in providing ubiquitous healthcare (UHC).
Section 2: RELATED WORK
Description 2: This section discusses previous research related to path loss models, UHC architectures, energy-efficient algorithms, and sensor placement techniques in WBAN.
Section 3: MOST FREQUENTLY USED STANDARDS FOR WBAN COMMUNICATION
Description 3: This section covers the various standards adopted for WBAN communication, such as Bluetooth, ZigBee, MICS, and UWB, and their applications.
Section 4: PATH LOSS IN WBAN
Description 4: This section details the factors contributing to path loss in WBAN, including free space impairments and the effects of reflection, diffraction, and refraction.
Section 5: WBAN
Description 5: This section outlines the different types of nodes in WBAN and the characterization of electromagnetic wave propagation for In-Body, On-Body, and External node communications.
Section 6: RAKE RECEIVER
Description 6: This section discusses the use of Rake receivers in WBAN, their types, and their performance in terms of power consumption and complexity.
Section 7: Effect of WBAN Antennas
Description 7: This section describes the influence of antenna placement on the surface or inside the body, and the different types of antennas suitable for WBAN communication.
Section 8: Characteristics of Human Body
Description 8: This section explains the impact of the human body's dielectric constants, thickness, and impedance on wireless communication in WBAN.
Section 9: SCENARIOS OF PATH LOSS IN WBAN
Description 9: This section examines various scenarios of path loss in WBAN, including In-Body and On-Body communications, and presents simulations to analyze these effects.
Section 10: WBAN Channel Model
Description 10: This section describes the WBAN channel model, including the fading and path losses experienced by propagation paths and the overall channel characteristics.
Section 11: Evaluation of M-ary Modulations through a WBAN Channel
Description 11: This section evaluates the error rate link performance of different M-ary modulation schemes in WBAN channels through simulations.
Section 12: Conclusion
Description 12: This section summarizes the findings of the survey, highlighting the importance of WBAN in UHC and discussing the results of simulations for path loss and modulation schemes in WBAN. |
A survey on learning from data streams: current and future trends | 19 | ---
paper_title: C4.5: Programs for Machine Learning
paper_content:
From the Publisher: Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses.
---
paper_title: Data Streams: Algorithms and Applications
paper_content:
In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].
---
paper_title: Probabilistic Counting Algorithms for Data Base Applications
paper_content:
This paper introduces a class of probabilistic counting algorithms with which one can estimate the number of distinct elements in a large collection of data (typically a large file stored on disk) in a single pass using only a small additional storage (typically less than a hundred binary words) and only a few operations per element scanned. The algorithms are based on statistical observations made on bits of hashed values of records. They are by construction totally insensitive to the replicative structure of elements in the file; they can be used in the context of distributed systems without any degradation of performance and prove especially useful in the context of database query optimisation.
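A minimal sketch of the underlying idea, estimating the number of distinct elements from bit patterns of hashed values, is given below; the single-bitmap estimator and the 0.77351 correction constant follow the standard basic Flajolet-Martin presentation and are assumptions of this illustration.

import hashlib

def _rho(h, nbits=32):
    """Position (0-based) of the least significant 1-bit of h."""
    for i in range(nbits):
        if (h >> i) & 1:
            return i
    return nbits

def fm_estimate(stream, nbits=32):
    """Basic single-bitmap Flajolet-Martin distinct-count estimate."""
    bitmap = 0
    for item in stream:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:4], "big")
        bitmap |= 1 << _rho(h, nbits)
    r = 0
    while (bitmap >> r) & 1:       # smallest bit index that is still zero
        r += 1
    return (2 ** r) / 0.77351      # phi correction factor

stream = [i % 1000 for i in range(100000)]   # 1000 distinct values, many repeats
print(fm_estimate(stream))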
---
paper_title: Models and issues in data stream systems
paper_content:
In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream query languages, new requirements and challenges in query processing, and algorithmic issues.
---
paper_title: Continuous queries over data streams
paper_content:
Continuous queries (CQs) represent a new paradigm for interacting with dynamically-changing data. Unlike traditional one-time queries, a CQ is registered with a data management system and provides continuous results as data and updates stream into the system. Applications include tracking real-time trends in stock market data, monitoring the health of a computer network, and online processing of sensor data. ::: This thesis addresses several fundamental challenges in building a system for processing declaratively-specified continuous queries. We first present a new language---an intuitive and natural extension of a traditional database query language---for specifying CQs. The language has been implemented in a comprehensive, publicly-available research prototype called STREAM (for STanford stREam datA Manager). Since CQs are long-running, potentially requiring large amounts of memory, we next present a precise characterization of the amount of memory required for any query in the language. For an important class of queries that require unbounded memory, we describe algorithms that trade off answer accuracy for a lower memory requirement. Finally, we describe techniques for sharing resources such as computation and state across multiple CQs, enabling scalability to a very large number of concurrent CQs.
---
paper_title: Data Streams: Models and Algorithms
paper_content:
This book primarily discusses issues related to the mining aspects of data streams and it is unique in its primary focus on the subject. This volume covers mining aspects of data streams comprehensively: each contributed chapter contains a survey on the topic, the key ideas in the field for that particular topic, and future research directions. The book is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for advanced-level students in computer science.
---
paper_title: Data Streams: Algorithms and Applications
paper_content:
In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].
---
paper_title: Estimating Entropy and Entropy Norm on Data Streams
paper_content:
We consider the problem of computing information theoretic functions such as entropy on a data stream, using sublinear space. Our first result deals with a measure we call the entropy norm of an input stream: it is closely related to entropy but is structurally similar to the well-studied notion of frequency moments. We give a polylogarithmic space one-pass algorithm for estimating this norm under certain conditions on the input stream. We also prove a lower bound that rules out such an algorithm if these conditions do not hold. Our second group of results are for estimating the empirical entropy of an input stream. We first present a sublinear space one-pass algorithm for this problem. For a stream of m items and a given real parameter α, our algorithm uses space O(m^(2α)) and provides an approximation factor of 1/α in the worst case and (1 + ε) in most cases. We then present a two-pass polylogarithmic space (1 + ε)-approximation algorithm. All our algorithms are quite simple.
---
paper_title: Randomized Algorithms
paper_content:
For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both. This book introduces the basic concepts in the design and analysis of randomized algorithms. The first part of the text presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications. Algorithmic examples are also given to illustrate the use of each tool in a concrete setting. In the second part of the book, each chapter focuses on an important area to which randomized algorithms can be applied, providing a comprehensive and representative selection of the algorithms that might be used in each of these areas. Although written primarily as a text for advanced undergraduates and graduate students, this book should also prove invaluable as a reference for professionals and researchers.
---
paper_title: Approximate Frequency Counts over Data Streams
paper_content:
Research in data stream algorithms has blossomed since late 90s. The talk will trace the history of the Approximate Frequency Counts paper, how it was conceptualized and how it influenced data stream research. The talk will also touch upon a recent development: analysis of personal data streams for improving our quality of lives.
---
paper_title: Conquering the Divide: Continuous Clustering of Distributed Data Streams
paper_content:
Data is often collected over a distributed network, but in many cases, is so voluminous that it is impractical and undesirable to collect it in a central location. Instead, we must perform distributed computations over the data, guaranteeing high quality answers even as new data arrives. In this paper, we formalize and study the problem of maintaining a clustering of such distributed data that is continuously evolving. In particular, our goal is to minimize the communication and computational cost, still providing guaranteed accuracy of the clustering. We focus on the k-center clustering, and provide a suite of algorithms that vary based on which centralized algorithm they derive from, and whether they maintain a single global clustering or many local clusterings that can be merged together. We show that these algorithms can be designed to give accuracy guarantees that are close to the best possible even in the centralized case. In our experiments, we see clear trends among these algorithms, showing that the choice of algorithm is crucial, and that we can achieve a clustering that is as good as the best centralized clustering, with only a small fraction of the communication required to collect all the data in a single location.
---
paper_title: Mining association rules between sets of items in large databases
paper_content:
We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm.
---
paper_title: Maintaining Stream Statistics over Sliding Windows
paper_content:
We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using $O(\frac{1}{\epsilon} \log^2 N)$ bits of memory, we can estimate the number of 1's to within a factor of $1 + \epsilon$. We also give a matching lower bound of $\Omega(\frac{1}{\epsilon}\log^2 N)$ memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ($p \in [1,2]$) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of $O(\frac{1}{\epsilon}\log N)$ in memory and a $1 +\epsilon$ factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.
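A compact sketch of the exponential-histogram idea behind this result (often presented as the DGIM algorithm) is shown below: 1-bits are grouped into buckets whose sizes are powers of two, a bounded number of buckets of each size is kept, and the count over the window is approximated from the surviving buckets, counting only half of the oldest one. Keeping at most two buckets per size gives roughly a 50% worst-case error; keeping more buckets per size tightens it.

import random

class DGIMCounter:
    """Approximate count of 1s among the last `window` bits of a 0/1 stream."""
    def __init__(self, window, max_per_size=2):
        self.window = window
        self.max_per_size = max_per_size
        self.time = 0
        self.buckets = []   # (timestamp of most recent 1, bucket size), newest first

    def add(self, bit):
        self.time += 1
        # Drop buckets whose most recent 1 has slid out of the window.
        while self.buckets and self.buckets[-1][0] <= self.time - self.window:
            self.buckets.pop()
        if bit:
            self.buckets.insert(0, (self.time, 1))
            self._merge()

    def _merge(self):
        size = 1
        while True:
            idxs = [i for i, (_, s) in enumerate(self.buckets) if s == size]
            if len(idxs) <= self.max_per_size:
                return
            oldest, second = idxs[-1], idxs[-2]
            merged = (self.buckets[second][0], 2 * size)  # keep the newer timestamp
            del self.buckets[oldest]
            del self.buckets[second]
            self.buckets.insert(second, merged)
            size *= 2

    def count(self):
        if not self.buckets:
            return 0
        sizes = [s for _, s in self.buckets]
        return sum(sizes[:-1]) + sizes[-1] // 2   # oldest bucket counted by half

rng = random.Random(0)
bits = [int(rng.random() < 0.3) for _ in range(5000)]
counter = DGIMCounter(window=1000)
for b in bits:
    counter.add(b)
print("approx:", counter.count(), "exact:", sum(bits[-1000:]))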
---
paper_title: Sampling from a moving window over streaming data
paper_content:
We introduce the problem of sampling from a moving window of recent items from a data stream and develop two algorithms for this problem. The first algorithm, "chain-sample", extends reservoir sampling to deal with the expiration of data elements from the sample. The expected memory usage of our algorithm is O(k) when maintaining a sample of size k over a window of the n most recent elements from the data stream, and with high probability the algorithm requires no more than O(k log n) memory.When the number of elements in the window is variable, as is the case when the size of the window is defined as a time duration rather than as a fixed number of data elements, the sampling problem becomes harder. Our second algorithm, "priority-sample", works even when the number of elements in the window can vary dynamically over time. With high probability, the "priority-sample" algorithm uses no more than O(k log n) memory.
---
paper_title: Kalman Filters and Adaptive Windows for Learning in Data Streams
paper_content:
We study the combination of a Kalman filter and a recently proposed algorithm for dynamically maintaining a sliding window, for learning from streams of examples. We integrate this idea into two well-known learning algorithms, the Naive Bayes algorithm and the k-means clusterer. We show on synthetic data that the new algorithms never do worse, and in some cases do much better, than the algorithms using only memoryless Kalman filters or sliding windows with no filtering.
---
paper_title: On Biased Reservoir Sampling in the Presence of Stream Evolution
paper_content:
The method of reservoir based sampling is often used to pick an unbiased sample from a data stream. A large portion of the unbiased sample may become less relevant over time because of evolution. An analytical or mining task (eg. query estimation) which is specific to only the sample points from a recent time-horizon may provide a very inaccurate result. This is because the size of the relevant sample reduces with the horizon itself. On the other hand, this is precisely the most important case for data stream algorithms, since recent history is frequently analyzed. In such cases, we show that an effective solution is to bias the sample with the use of temporal bias functions. The maintenance of such a sample is non-trivial, since it needs to be dynamically maintained, without knowing the total number of points in advance. We prove some interesting theoretical properties of a large class of memory-less bias functions, which allow for an efficient implementation of the sampling algorithm. We also show that the inclusion of bias in the sampling process introduces a maximum requirement on the reservoir size. This is a nice property since it shows that it may often be possible to maintain the maximum relevant sample with limited storage requirements. We not only illustrate the advantages of the method for the problem of query estimation, but also show that the approach has applicability to broader data mining problems such as evolution analysis and classification.
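For reference, the unbiased reservoir sampling baseline that this paper extends with temporal bias functions can be sketched as follows (classic Algorithm R; the bias functions themselves are not reproduced here).

import random

def reservoir_sample(stream, k, seed=0):
    """Maintain a uniform sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for t, item in enumerate(stream):          # t is the 0-based arrival index
        if t < k:
            reservoir.append(item)
        else:
            j = rng.randrange(t + 1)           # replace with probability k/(t+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(10000), 5))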
---
paper_title: On Biased Reservoir Sampling in the Presence of Stream Evolution
paper_content:
The method of reservoir based sampling is often used to pick an unbiased sample from a data stream. A large portion of the unbiased sample may become less relevant over time because of evolution. An analytical or mining task (eg. query estimation) which is specific to only the sample points from a recent time-horizon may provide a very inaccurate result. This is because the size of the relevant sample reduces with the horizon itself. On the other hand, this is precisely the most important case for data stream algorithms, since recent history is frequently analyzed. In such cases, we show that an effective solution is to bias the sample with the use of temporal bias functions. The maintenance of such a sample is non-trivial, since it needs to be dynamically maintained, without knowing the total number of points in advance. We prove some interesting theoretical properties of a large class of memory-less bias functions, which allow for an efficient implementation of the sampling algorithm. We also show that the inclusion of bias in the sampling process introduces a maximum requirement on the reservoir size. This is a nice property since it shows that it may often be possible to maintain the maximum relevant sample with limited storage requirements. We not only illustrate the advantages of the method for the problem of query estimation, but also show that the approach has applicability to broader data mining problems such as evolution analysis and classification.
---
paper_title: Maintaining Stream Statistics over Sliding Windows
paper_content:
We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using $O(\frac{1}{\epsilon} \log^2 N)$ bits of memory, we can estimate the number of 1's to within a factor of $1 + \epsilon$. We also give a matching lower bound of $\Omega(\frac{1}{\epsilon}\log^2 N)$ memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ($p \in [1,2]$) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of $O(\frac{1}{\epsilon}\log N)$ in memory and a $1 +\epsilon$ factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.
---
paper_title: An Improved Data Stream Summary: The Count-Min Sketch and Its Applications
paper_content:
We introduce a new sublinear space data structure—the Count-Min Sketch—for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known: typically from 1/ε² to 1/ε in factor.
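A minimal working sketch of the Count-Min structure for point queries is given below. The width and depth follow the usual w = ceil(e/epsilon), d = ceil(ln(1/delta)) rule and the hash functions are simple seeded digests; both are assumptions of this illustration rather than details from the paper.

import hashlib, math

class CountMinSketch:
    """Approximate point-frequency queries with one-sided (over-)estimation error."""
    def __init__(self, epsilon=0.01, delta=0.01):
        self.width = math.ceil(math.e / epsilon)
        self.depth = math.ceil(math.log(1.0 / delta))
        self.table = [[0] * self.width for _ in range(self.depth)]

    def _index(self, item, row):
        digest = hashlib.sha1(f"{row}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch(epsilon=0.005, delta=0.01)
for i in range(100000):
    cms.add(i % 100)          # each of 100 items appears 1000 times
print(cms.estimate(7))        # >= 1000, and close to it with high probability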
---
paper_title: The space complexity of approximating the frequency moments
paper_content:
The frequency moments of a sequence containing $m_i$ elements of type $i$, for $1 \le i \le n$, are the numbers $F_k = \sum_{i=1}^{n} m_i^k$. We consider the space complexity of randomized algorithms that approximate the numbers $F_k$, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers $F_0$, $F_1$ and $F_2$ can be approximated in logarithmic space, whereas the approximation of $F_k$ for $k \ge 6$ requires $n^{\Omega(1)}$ space. Applications to databases are mentioned as well.
---
paper_title: Data Streams: Algorithms and Applications
paper_content:
In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].
---
paper_title: Surfing Wavelets on Streams: One-Pass Summaries for Approximate Aggregate Queries
paper_content:
We present techniques for computing small space representations of massive data streams. These are inspired by traditional wavelet-based approximations that consist of specific linear projections of the underlying data. We present general “sketch” based methods for capturing various linear projections of the data and use them to provide pointwise and rangesum estimation of data streams. These methods use small amounts of space and per-item time while streaming through the data, and provide accurate representation as our experiments with real data streams show.
---
paper_title: Learning decision trees from dynamic data streams
paper_content:
This paper presents a system for induction of a forest of functional trees from data streams able to detect concept drift. The Ultra Fast Forest of Trees (UFFT) is an incremental algorithm that works online, processing each example in constant time, and performing a single scan over the training examples. It uses analytical techniques to choose the splitting criteria, and the information gain to estimate the merit of each possible splitting-test. For multi-class problems the algorithm grows a binary tree for each possible pair of classes, leading to a forest of trees. Decision nodes and leaves contain naive-Bayes classifiers playing different roles during the induction process. Naive-Bayes in leaves are used to classify test examples, naive-Bayes in inner nodes can be used as multivariate splitting-tests if chosen by the splitting criteria, and used to detect drift in the distribution of the examples that traverse the node. When a drift is detected, the whole sub-tree rooted at that node will be pruned. The use of naive-Bayes classifiers at leaves to classify test examples, the use of splitting-tests based on the outcome of naive-Bayes, and the use of naive-Bayes classifiers at decision nodes to detect drift are directly obtained from the sufficient statistics required to compute the splitting criteria, with no additional computations. This aspect is a main advantage in the context of high-speed data streams. This methodology was tested with artificial and real-world data sets. The experimental results show a very good performance in comparison to a batch decision tree learner, and high capacity to detect and react to drift.
---
paper_title: Decision trees for mining data streams
paper_content:
In this paper we study the problem of constructing accurate decision tree models from data streams. Data streams are incremental tasks that require incremental, online, and any-time learning algorithms. One of the most successful algorithms for mining data streams is VFDT. We have extended VFDT in three directions: the ability to deal with continuous data; the use of more powerful classification techniques at tree leaves, and the ability to detect and react to concept drift. The VFDTc system can incorporate and classify new information online, with a single scan of the data, in constant time per example. The most relevant property of our system is the ability to obtain a performance similar to a standard decision tree algorithm even for medium size datasets. This is relevant due to the any-time property. We also extend VFDTc with the ability to deal with concept drift, by continuously monitoring differences between two class-distributions of the examples: the distribution when a node was built and the distribution in a time window of the most recent examples. We study the sensitivity of VFDTc with respect to drift, noise, the order of examples, and the initial parameters in different problems and demonstrate its utility in large and medium data sets.
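The statistical test at the core of VFDT-style learners, deciding from the Hoeffding bound whether the currently best split is reliably better than the second best, can be sketched as follows. Here R is the range of the split-evaluation measure (log2 of the number of classes for information gain) and the tie-breaking threshold tau is an illustrative assumption.

import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that the true mean is within epsilon of the observed mean
    with probability 1 - delta, after n independent observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, n, n_classes, delta=1e-7, tau=0.05):
    """VFDT-style decision: split when the observed advantage of the best
    attribute exceeds the Hoeffding bound, or when the bound drops below tau (tie)."""
    eps = hoeffding_bound(math.log2(n_classes), delta, n)
    return (best_gain - second_gain) > eps or eps < tau

# Example: after 3000 examples at a leaf of a 2-class problem.
print(should_split(best_gain=0.30, second_gain=0.22, n=3000, n_classes=2))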
---
paper_title: Improving Adaptive Bagging Methods for Evolving Data Streams
paper_content:
We propose two new improvements for bagging methods on evolving data streams. Recently, two new variants of Bagging were proposed: ADWIN Bagging and Adaptive-Size Hoeffding Tree (ASHT) Bagging. ASHT Bagging uses trees of different sizes, and ADWIN Bagging uses ADWIN as a change detector to decide when to discard underperforming ensemble members. We improve ADWIN Bagging using Hoeffding Adaptive Trees, trees that can adaptively learn from data streams that change over time. To speed up the time for adapting to change of Adaptive-Size Hoeffding Tree (ASHT) Bagging, we add an error change detector for each classifier. We test our improvements by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.
---
paper_title: Accurate decision trees for mining high-speed data streams
paper_content:
In this paper we study the problem of constructing accurate decision tree models from data streams. Data streams are incremental tasks that require incremental, online, and any-time learning algorithms. One of the most successful algorithms for mining data streams is VFDT. In this paper we extend the VFDT system in two directions: the ability to deal with continuous data and the use of more powerful classification techniques at tree leaves. The proposed system, VFDTc, can incorporate and classify new information online, with a single scan of the data, in constant time per example. The most relevant property of our system is the ability to obtain a performance similar to a standard decision tree algorithm even for medium size datasets. This is relevant due to the any-time property. We study the behaviour of VFDTc in different problems and demonstrate its utility in large and medium data sets. Under a bias-variance analysis we observe that VFDTc in comparison to C4.5 is able to reduce the variance component.
---
paper_title: Leveraging bagging for evolving data streams
paper_content:
Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called leveraging bagging. This method combines the simplicity of bagging with adding more randomization to the input, and output of the classifiers. We test our method by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.
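A minimal sketch of the online bagging idea underlying these ensembles is shown below: each incoming example is presented to each base learner a Poisson-distributed number of times, with lambda = 1 for plain online bagging and lambda > 1 for the leveraging variant. The incremental base learner here is a trivial stub, and the drift detectors (ADWIN) used by the actual methods are omitted.

import math, random
from collections import Counter

class MajorityClassLearner:
    """Stub incremental learner: predicts the most frequent class seen so far."""
    def __init__(self):
        self.counts = Counter()
    def partial_fit(self, x, y):
        self.counts[y] += 1
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

class OnlineBagging:
    def __init__(self, n_models=10, lam=1.0, seed=0):
        self.rng = random.Random(seed)
        self.lam = lam
        self.models = [MajorityClassLearner() for _ in range(n_models)]

    def _poisson(self, lam):
        # Knuth's method; adequate for small lambda.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= self.rng.random()
            if p <= L:
                return k
            k += 1

    def partial_fit(self, x, y):
        for m in self.models:
            for _ in range(self._poisson(self.lam)):   # example weight ~ Poisson(lambda)
                m.partial_fit(x, y)

    def predict(self, x):
        votes = Counter(m.predict(x) for m in self.models)
        return votes.most_common(1)[0][0]

bag = OnlineBagging(n_models=5, lam=6.0)       # lambda > 1 as in leveraging bagging
for i in range(1000):
    bag.partial_fit(x=[i], y=i % 2)
print(bag.predict([0]))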
---
paper_title: Hierarchical Clustering of Time-Series Data Streams
paper_content:
This paper presents and analyzes an incremental system for clustering streaming time series. The Online Divisive-Agglomerative Clustering (ODAC) system continuously maintains a tree-like hierarchy of clusters that evolves with data, using a top-down strategy. The splitting criterion is a correlation-based dissimilarity measure among time series, splitting each node by the farthest pair of streams. The system also uses a merge operator that reaggregates a previously split node in order to react to changes in the correlation structure between time series. The split and merge operators are triggered in response to changes in the diameters of existing clusters, assuming that in stationary environments, expanding the structure leads to a decrease in the diameters of the clusters. The system is designed to process thousands of data streams that flow at a high rate. The main features of the system include update time and memory consumption that do not depend on the number of examples in the stream. Moreover, the time and memory required to process an example decreases whenever the cluster structure expands. Experimental results on artificial and real data assess the processing qualities of the system, suggesting a competitive performance on clustering streaming time series, exploring also its ability to deal with concept drift.
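The correlation-based dissimilarity used for splitting can be maintained incrementally from a few sufficient statistics per pair of streams. The sketch below uses the common sqrt((1 - corr) / 2) mapping from correlation to distance; treat it as an illustration of the idea rather than a verbatim reproduction of the ODAC formulas.

import math

class PairwiseDissimilarity:
    """Incrementally tracks the correlation-based distance between two streams."""
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y

    def correlation(self):
        cov = self.sxy - self.sx * self.sy / self.n
        vx = self.sxx - self.sx ** 2 / self.n
        vy = self.syy - self.sy ** 2 / self.n
        return cov / math.sqrt(vx * vy)

    def distance(self):
        # Maps correlation in [-1, 1] to a dissimilarity in [0, 1].
        return math.sqrt((1.0 - self.correlation()) / 2.0)

d = PairwiseDissimilarity()
for t in range(1000):
    d.update(math.sin(t / 10.0), math.sin(t / 10.0 + 0.3))
print(round(d.distance(), 3))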
---
paper_title: Learning model trees from evolving data streams
paper_content:
The problem of real-time extraction of meaningful patterns from time-changing data streams is of increasing importance for the machine learning and data mining communities. Regression in time-changing data streams is a relatively unexplored topic, despite the apparent applications. This paper proposes an efficient and incremental stream mining algorithm which is able to learn regression and model trees from possibly unbounded, high-speed and time-changing data streams. The algorithm is evaluated extensively in a variety of settings involving artificial and real data. To the best of our knowledge there is no other general purpose algorithm for incremental learning regression/model trees able to perform explicit change detection and informed adaptation. The algorithm performs online and in real-time, observes each example only once at the speed of arrival, and maintains at any-time a ready-to-use model tree. The tree leaves contain linear models induced online from the examples assigned to them, a process with low complexity. The algorithm has mechanisms for drift detection and model adaptation, which enable it to maintain accurate and updated regression models at any time. The drift detection mechanism exploits the structure of the tree in the process of local change detection. As a response to local drift, the algorithm is able to update the tree structure only locally. This approach improves the any-time performance and greatly reduces the costs of adaptation.
---
paper_title: Mining time-changing data streams
paper_content:
Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them changed during this time, sometimes radically. Although a number of algorithms have been proposed for learning time-changing concepts, they generally do not scale well to very large databases. In this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner. This algorithm, called CVFDT, stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate. CVFDT learns a model which is similar in accuracy to the one that would be learned by reapplying VFDT to a moving window of examples every time a new example arrives, but with O(1) complexity per example, as opposed to O(w), where w is the size of the window. Experiments on a set of large time-changing data streams demonstrate the utility of this approach.
---
paper_title: BIRCH: an efficient data clustering method for very large databases
paper_content:
Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs. This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle "noise" (data points that are not part of the underlying pattern) effectively. We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparison of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.
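The clustering feature (CF) at the heart of BIRCH, the triple (N, LS, SS) of point count, linear sum and sum of squared norms, supports constant-time insertion, merging and centroid/radius computation. A minimal sketch follows, assuming NumPy.

import numpy as np

class ClusteringFeature:
    """BIRCH-style CF triple: count N, linear sum LS, sum of squared norms SS."""
    def __init__(self, dim):
        self.n = 0
        self.ls = np.zeros(dim)
        self.ss = 0.0

    def insert(self, x):
        x = np.asarray(x, float)
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # Root mean squared distance of the points to the centroid.
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

cf = ClusteringFeature(dim=2)
for p in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    cf.insert(p)
print(cf.centroid(), round(cf.radius(), 3))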
---
paper_title: A Framework for Clustering Evolving Data Streams
paper_content:
The clustering problem is a difficult problem for the data stream domain. This is because the large volumes of data arriving in a stream render most traditional algorithms too inefficient. In recent years, a few one-pass clustering algorithms have been developed for the data stream problem. Although such methods address the scalability issues of the clustering problem, they are generally blind to the evolution of the data and do not address the following issues: (1) The quality of the clusters is poor when the data evolves considerably over time. (2) A data stream clustering algorithm requires much greater functionality in discovering and exploring clusters over different portions of the stream. The widely used practice of viewing data stream clustering algorithms as a class of one-pass clustering algorithms is not very useful from an application point of view. For example, a simple one-pass clustering algorithm over an entire data stream of a few years is dominated by the outdated history of the stream. The exploration of the stream over different time windows can provide the users with a much deeper understanding of the evolving behavior of the clusters. At the same time, it is not possible to simultaneously perform dynamic clustering over all possible time horizons for a data stream of even moderately large volume. This paper discusses a fundamentally different philosophy for data stream clustering which is guided by application-centered requirements. The idea is to divide the clustering process into an online component which periodically stores detailed summary statistics and an offline component which uses only these summary statistics. The offline component is utilized by the analyst who can use a wide variety of inputs (such as time horizon or number of clusters) in order to provide a quick understanding of the broad clusters in the data stream. The problems of efficient choice, storage, and use of this statistical data for a fast data stream turn out to be quite tricky. For this purpose, we use the concepts of a pyramidal time frame in conjunction with a microclustering approach. Our performance experiments over a number of real and synthetic data sets illustrate the effectiveness, efficiency, and insights provided by our approach.
---
paper_title: On Biased Reservoir Sampling in the Presence of Stream Evolution
paper_content:
The method of reservoir based sampling is often used to pick an unbiased sample from a data stream. A large portion of the unbiased sample may become less relevant over time because of evolution. An analytical or mining task (eg. query estimation) which is specific to only the sample points from a recent time-horizon may provide a very inaccurate result. This is because the size of the relevant sample reduces with the horizon itself. On the other hand, this is precisely the most important case for data stream algorithms, since recent history is frequently analyzed. In such cases, we show that an effective solution is to bias the sample with the use of temporal bias functions. The maintenance of such a sample is non-trivial, since it needs to be dynamically maintained, without knowing the total number of points in advance. We prove some interesting theoretical properties of a large class of memory-less bias functions, which allow for an efficient implementation of the sampling algorithm. We also show that the inclusion of bias in the sampling process introduces a maximum requirement on the reservoir size. This is a nice property since it shows that it may often be possible to maintain the maximum relevant sample with limited storage requirements. We not only illustrate the advantages of the method for the problem of query estimation, but also show that the approach has applicability to broader data mining problems such as evolution analysis and classification.
---
paper_title: Mining Frequent Patterns in Data Streams at Multiple Time Granularities
paper_content:
Although frequent-pattern mining has been widely studied and used, it is challenging to extend it to data streams. Compared to mining from a static transaction data set, the streaming case has far more information to track and far greater complexity to manage. Infrequent items can become frequent later on and hence cannot be ignored. The storage structure needs to be dynamically adjusted to reflect the evolution of itemset frequencies over time. In this paper, we propose computing and maintaining all the frequent patterns (which is usually more stable and smaller than the streaming data) and dynamically updating them with the incoming data streams. We extended the framework to mine time-sensitive patterns with approximate support guarantee. We incrementally maintain tilted-time windows for each pattern at multiple time granularities. Interesting
---
paper_title: Mining frequent patterns without candidate generation
paper_content:
Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods.
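As a quick illustration of the data structure behind FP-growth, the sketch below (a didactic reconstruction, not the authors' implementation; the recursive mining step is omitted) builds the compressed prefix tree: transactions are pruned to globally frequent items, sorted by descending frequency, and inserted so that shared prefixes share counted paths, with a header table linking the nodes of each item.

```python
from collections import Counter

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_fp_tree(transactions, min_support):
    """Construct an FP-tree and its header table from a list of transactions."""
    freq = Counter(item for t in transactions for item in set(t))
    frequent = {i: c for i, c in freq.items() if c >= min_support}
    root, header = FPNode(None, None), {}
    for t in transactions:
        # keep only frequent items, ordered by descending global frequency
        items = sorted((i for i in set(t) if i in frequent),
                       key=lambda i: (-frequent[i], i))
        node = root
        for item in items:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
                header.setdefault(item, []).append(child)   # header table for mining
            child.count += 1
            node = child
    return root, header
```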
---
paper_title: Mining association rules between sets of items in large databases
paper_content:
We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm.
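The generate-and-test idea can be made concrete with a small level-wise sketch in the spirit of Apriori (purely didactic; it ignores the buffer management and the estimation and pruning techniques the paper actually contributes): frequent itemsets of size k-1 are joined into size-k candidates, candidates with an infrequent subset are pruned, and the survivors are counted against the transactions.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets whose support (absolute count) is >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    support = lambda itemset: sum(1 for t in transactions if itemset <= t)
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    all_frequent, k = set(frequent), 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        candidates = {c for c in candidates                      # subset-based pruning
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        all_frequent |= frequent
        k += 1
    return all_frequent
```

Association rules are then read off the frequent itemsets by splitting each into antecedent and consequent and keeping the splits whose confidence exceeds a chosen threshold.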
---
paper_title: A framework for resource-aware knowledge discovery in data streams: a holistic approach with its application to clustering
paper_content:
Mining data streams is a field of increasing interest due to the importance of its applications and the dissemination of data stream generators. Most of the streaming techniques developed so far have not addressed the need for resource-aware computing in data stream analysis. The fact that streaming information is often generated or received onboard resource-constrained computational devices such as sensors and mobile devices motivates the need for resource-awareness in data stream processing systems. In this paper, we propose a generic framework that enables resource-awareness in streaming computation using algorithm granularity settings in order to change the resource consumption patterns periodically. This generic framework is applied to a novel threshold-based micro-clustering algorithm to test its validity and feasibility. We have termed this algorithm RA-Cluster. RA-Cluster is the first stream clustering algorithm that can adapt to the changing availability of different resources. The experimental results showed the applicability of the framework and the algorithm in terms of resource-awareness and accuracy.
---
paper_title: Cost-Efficient Mining Techniques for Data Streams
paper_content:
A data stream is a continuous and high-speed flow of data items. High speed refers to the phenomenon that the data rate is high relative to the computational power. The increasing focus of applications that generate and receive data streams stimulates the need for online data stream analysis tools. Mining data streams is a real time process of extracting interesting patterns from high-speed data streams. Mining data streams raises new problems for the data mining community in terms of how to mine continuous high-speed data items that you can only have one look at. In this paper, we propose algorithm output granularity as a solution for mining data streams. Algorithm output granularity is the amount of mining results that fits in main memory before any incremental integration. We show the application of the proposed strategy to build efficient clustering, frequent items and classification techniques. The empirical results for our clustering algorithm are presented and discussed which demonstrate acceptable accuracy coupled with efficiency in running time.
---
paper_title: Detecting Change in Data Streams
paper_content:
Detecting changes in a data stream is an important area of research with many applications. In this paper, we present a novel method for the detection and estimation of change. In addition to providing statistical guarantees on the reliability of detected changes, our method also provides meaningful descriptions and quantification of these changes. Our approach assumes that the points in the stream are independently generated, but otherwise makes no assumptions on the nature of the generating distribution. Thus our techniques work for both continuous and discrete data. In an experimental study we demonstrate the power of our techniques.
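A common way to operationalise this kind of non-parametric change detection is to hold a reference window, compare it against the most recent window with a distribution distance, and raise an alarm when the distance crosses a threshold. The sketch below is only a schematic variant of that idea: the two-sample Kolmogorov-Smirnov statistic and the fixed threshold are stand-ins chosen here for illustration, not the bounded-discrepancy machinery analysed in the paper.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    cdf = lambda xs, v: sum(1 for x in xs if x <= v) / len(xs)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in set(a) | set(b))

def detect_changes(stream, window=200, threshold=0.2):
    """Flag positions where the recent window drifts away from the reference window."""
    reference, current, alarms = [], [], []
    for i, x in enumerate(stream):
        if len(reference) < window:
            reference.append(x)
            continue
        current.append(x)
        if len(current) == window:
            if ks_statistic(reference, current) > threshold:
                alarms.append(i)
                reference, current = current, []   # restart from the new regime
            else:
                current.pop(0)                     # slide the recent window
    return alarms
```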
---
paper_title: Orthogonal decision trees
paper_content:
This paper introduces orthogonal decision trees that offer an effective way to construct a redundancy-free, accurate, and meaningful representation of large decision-tree-ensembles often created by popular techniques such as bagging, boosting, random forests, and many distributed and data stream mining algorithms. Orthogonal decision trees are functionally orthogonal to each other and they correspond to the principal components of the underlying function space. This paper offers a technique to construct such trees based on the Fourier transformation of decision trees and eigen-analysis of the ensemble in the Fourier representation. It offers experimental results to document the performance of orthogonal trees on the grounds of accuracy and model complexity
---
paper_title: Data Mining: Next Generation Challenges and Future Directions
paper_content:
Data mining, or knowledge discovery, has become an indispensable technology for businesses and researchers in many fields. Drawing on work in such areas as statistics, machine learning, pattern recognition, databases, and high performance computing, data mining extracts useful information from the large data sets now available to industry and science. This collection surveys the most recent advances in the field and charts directions for future research. The first part looks at pervasive, distributed, and stream data mining, discussing topics that include distributed data mining algorithms for new application areas, several aspects of next-generation data mining systems and applications, and detection of recurrent patterns in digital media. The second part considers data mining, counter-terrorism, and privacy concerns, examining such topics as biosurveillance, marshalling evidence through data mining, and link discovery. The third part looks at scientific data mining; topics include mining temporally-varying phenomena, data sets using graphs, and spatial data mining. The last part considers web, semantics, and data mining, examining advances in text mining algorithms and software, semantic webs, and other subjects.
---
paper_title: Mining decision trees from data streams in a mobile environment
paper_content:
This paper presents a novel Fourier analysis-based technique to aggregate, communicate and visualize decision trees in a mobile environment. A Fourier representation of a decision tree has several useful properties that are particularly useful for mining continuous data streams from small mobile computing devices. This paper presents algorithms to compute the Fourier spectrum of a decision tree and vice versa. It offers a framework to aggregate decision trees in their Fourier representations. It also describes a touchpad/ticker-based approach to visualize decision trees using their Fourier spectrum and an implementation for PDAs.
---
paper_title: A geometric approach to monitoring threshold functions over distributed data streams
paper_content:
Monitoring data streams in a distributed system is the focus of much research in recent years. Most of the proposed schemes, however, deal with monitoring simple aggregated values, such as the frequency of appearance of items in the streams. More involved challenges, such as the important task of feature selection (e.g., by monitoring the information gain of various features), still require very high communication overhead using naive, centralized algorithms. We present a novel geometric approach by which an arbitrary global monitoring task can be split into a set of constraints applied locally on each of the streams. The constraints are used to locally filter out data increments that do not affect the monitoring outcome, thus avoiding unnecessary communication. As a result, our approach enables monitoring of arbitrary threshold functions over distributed data streams in an efficient manner. We present experimental results on real-world data which demonstrate that our algorithms are highly scalable, and considerably reduce communication load in comparison to centralized algorithms.
---
paper_title: Mining adaptively frequent closed unlabeled rooted trees in data streams
paper_content:
Closed patterns are powerful representatives of frequent patterns, since they eliminate redundant information. We propose a new approach for mining closed unlabeled rooted trees adaptively from data streams that change over time. Our approach is based on an efficient representation of trees and a low complexity notion of relaxed closed trees, and leads to an on-line strategy and an adaptive sliding window technique for dealing with changes over time. More precisely, we first present a general methodology to identify closed patterns in a data stream, using Galois Lattice Theory. Using this methodology, we then develop three closed tree mining algorithms: an incremental one IncTreeNat, a sliding-window based one, WinTreeNat, and finally one that mines closed trees adaptively from data streams, AdaTreeNat. To the best of our knowledge this is the first work on mining frequent closed trees in streaming data varying with time. We give a first experimental evaluation of the proposed algorithms.
---
paper_title: Adaptive XML Tree Classification on evolving data streams
paper_content:
We propose a new method to classify patterns, using closed and maximal frequent patterns as features. Generally, classification requires a previous mapping from the patterns to classify to vectors of features, and frequent patterns have been used as features in the past. Closed patterns maintain the same information as frequent patterns using less space and maximal patterns maintain approximate information. We use them to reduce the number of classification features. We present a new framework for XML tree stream classification. For the first component of our classification framework, we use closed tree mining algorithms for evolving data streams. For the second component, we use state of the art classification methods for data streams. To the best of our knowledge this is the first work on tree classification in streaming data varying with time. We give a first experimental evaluation of the proposed classification method.
---
paper_title: Issues in evaluation of stream learning algorithms
paper_content:
Learning from data streams is a research area of increasing importance. Nowadays, several stream learning algorithms have been developed. Most of them learn decision models that continuously evolve over time, run in resource-aware environments, detect and react to changes in the environment generating data. One important issue, not yet conveniently addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. There are no golden standards for assessing performance in non-stationary environments. This paper proposes a general framework for assessing predictive stream learning algorithms. We defend the use of Predictive Sequential methods for error estimate - the prequential error. The prequential error allows us to monitor the evolution of the performance of models that evolve over time. Nevertheless, it is known to be a pessimistic estimator in comparison to holdout estimates. To obtain more reliable estimators we need some forgetting mechanism. Two viable alternatives are: sliding windows and fading factors. We observe that the prequential error converges to an holdout estimator when estimated over a sliding window or using fading factors. We present illustrative examples of the use of prequential error estimators, using fading factors, for the tasks of: i) assessing performance of a learning algorithm; ii) comparing learning algorithms; iii) hypothesis testing using McNemar test; and iv) change detection using Page-Hinkley test. In these tasks, the prequential error estimated using fading factors provide reliable estimators. In comparison to sliding windows, fading factors are faster and memory-less, a requirement for streaming applications. This paper is a contribution to a discussion in the good-practices on performance assessment when learning dynamic models that evolve over time.
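The fading-factor prequential estimate advocated above has a very compact recursive form: with a factor alpha slightly below one, E_i = S_i / N_i where S_i = loss_i + alpha * S_{i-1} and N_i = 1 + alpha * N_{i-1}. The few lines below illustrate it (a generic sketch; the 0/1 loss and the value of alpha are arbitrary choices, not prescriptions from the paper).

```python
def prequential_error(losses, alpha=0.995):
    """Fading-factor prequential error: each example is first tested, then used
    for training; older losses are exponentially down-weighted so the estimate
    tracks the current performance of a model that evolves over time."""
    s = n = 0.0
    history = []
    for loss in losses:           # e.g. loss = 1 if the prediction was wrong, else 0
        s = loss + alpha * s
        n = 1.0 + alpha * n
        history.append(s / n)
    return history
```

With alpha = 1 this reduces to the plain prequential error over the whole stream, which is the pessimistic estimator the authors set out to correct.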
---
paper_title: The New Frontier of Web Search Technology: Seven Challenges
paper_content:
The classic Web search experience, consisting of returning "ten blue links" in response to a short user query, is powered today by a mature technology where progress has become incremental and expensive. Furthermore, the "ten blue links" represent only a fractional part of the total Web search experience: today, what users expect and receive in response to a "web query" is a plethora of multi-media information extracted and synthesized from numerous sources on and off the Web. In consequence, we argue that the major technical challenges in Web search are now driven by the quest to satisfy the implicit and explicit needs of users, continuing a long evolutionary trend in commercial Web search engines going back more than fifteen years, moving from relevant document selection towards satisfactory task completion. We identify seven of these challenges and discuss them in some detail.
---
paper_title: The need for low bias algorithms in classification learning from large data sets
paper_content:
This paper reviews the appropriateness for application to large data sets of standard machine learning algorithms, which were mainly developed in the context of small data sets. Sampling and parallelisation have proved useful means for reducing computation time when learning from large data sets. However, such methods assume that algorithms that were designed for use with what are now considered small data sets are also fundamentally suitable for large data sets. It is plausible that optimal learning from large data sets requires a different type of algorithm to optimal learning from small data sets. This paper investigates one respect in which data set size may affect the requirements of a learning algorithm - the bias plus variance decomposition of classification error. Experiments show that learning from large data sets may be more effective when using an algorithm that places greater emphasis on bias management, rather than variance management.
---
paper_title: Algorithms for Next Generation Networks
paper_content:
Data networking now plays a major role in everyday life and new applications continue to appear at a blinding pace. Yet we still do not have a sound foundation for designing, evaluating and managing these networks. This book covers topics at the intersection of algorithms and networking. It builds a complete picture of the current state of research on Next Generation Networks and the challenges for the years ahead. Particular focus is given to evolving research initiatives and the architecture they propose and implications for networking. Topics: Network design and provisioning, hardware issues, layer-3 algorithms and MPLS, BGP and Inter AS routing, packet processing for routing, security and network management, load balancing, oblivious routing and stochastic algorithms, network coding for multicast, overlay routing for P2P networking and content delivery. This timely volume will be of interest to a broad readership from graduate students to researchers looking to survey recent research and its open questions.
---
paper_title: The Catalog Archive Server Database Management System
paper_content:
The multiterabyte Sloan Digital Sky Survey's (SDSS's) catalog data is stored in a commercial relational database management system with SQL query access and a built-in query optimizer. The SDSS catalog archive server adds advanced data mining features to the DBMS to provide fast online access to the data.
---
| Title: A survey on learning from data streams: current and future trends
Section 1: Introduction
Description 1: Provide an overview of machine learning goals, the transition from batch learning to data stream learning, and the challenges posed by streaming data.
Section 2: Machine learning and data streams
Description 2: Explain how machine learning extracts knowledge from data streams, distinguishing between different models and discussing research issues in data stream management systems.
Section 3: Approximation and randomization
Description 3: Discuss the importance of approximation techniques and randomization in data stream processing, and provide examples of problems these techniques address.
Section 4: Time windows
Description 4: Describe the use of time windows in computing statistics over data streams, including models like sliding windows, and discuss strategies for maintaining recent information relevance.
Section 5: Sampling
Description 5: Elaborate on sampling techniques for selecting data subsets in streams, highlighting the challenges of obtaining representative samples.
Section 6: Synopsis, sketches and summaries
Description 6: Introduce data summarization techniques, such as wavelets, exponential histograms, and count-min sketches, used for querying large data streams efficiently.
Section 7: Algorithms for learning from data streams
Description 7: Review various streaming algorithms for tasks like decision trees, clustering, and frequent pattern mining, discussing their methodologies and applications.
Section 8: Predictive learning from data streams
Description 8: Detail methods for predictive learning from data streams, focusing on reducing learners' loss and minimizing example usage while maintaining model accuracy.
Section 9: Clustering data streams
Description 9: Explain the clustering process for data streams, touching on concepts like cluster features and structures for generating and evolving clusters over time.
Section 10: Frequent pattern mining
Description 10: Describe techniques for mining frequent itemsets and patterns from data streams, addressing challenges posed by evolving data and one-scan constraints.
Section 11: Algorithm issues in learning from data streams
Description 11: Discuss the need for learning algorithms that adapt to new data while forgetting outdated information, addressing concept drift and non-stationary data distributions.
Section 12: Cost-performance management
Description 12: Explore the balance between update costs and performance gains in learning algorithms, focusing on incremental and decremental learning, and strategies for efficient model maintenance.
Section 13: Monitoring learning
Description 13: Highlight the importance of monitoring the evolution of learning algorithms and decision models, concentrating on handling concept drift and maintaining model accuracy.
Section 14: Novelty detection
Description 14: Examine the capability of learning algorithms to identify and adapt to new concepts in dynamic environments.
Section 15: Distributed streams
Description 15: Discuss methods for learning from distributed data streams, and the advantages and challenges of decentralized processing.
Section 16: Structured data
Description 16: Cover the challenges of mining structured data like sequences, trees, and graphs from data streams.
Section 17: Evolving feature spaces
Description 17: Address the need for algorithms that can handle changing schemas in dynamic data stream environments.
Section 18: Evaluation methods and metrics
Description 18: Present evaluation criteria for learning algorithms in data streams, emphasizing how metrics should evolve over time.
Section 19: Emerging challenges and future issues
Description 19: Identify current and future challenges in data stream management and mining, including adaptive algorithms, resource management, and developing intelligent, self-diagnosing systems. |
Music Data Analysis: A State-of-the-art Survey | 16 | ---
paper_title: A Machine Learning Approach to Musical Style Recognition
paper_content:
Much of the work on perception and understanding of music by computers has focused on low-level perceptual features such as pitch and tempo. Our work demonstrates that machine learning can be used to build effective style classifiers for interactive performance systems. We also present an analysis explaining why these techniques work so well when hand-coded approaches have consistently failed. We also describe a reliable real-time performance style classifier.
---
paper_title: Genre classification of symbolic music with SMBGT
paper_content:
Automatic music genre classification is a task that has attracted the interest of the music community for more than two decades. Music can be of high importance within the area of assistive technologies as it can be seen as an assistive technology with high therapeutic and educational functionality for children and adults with disabilities. Several similarity methods and machine learning techniques have been applied in the literature to deal with music genre classification, and as a result data mining and Music Information Retrieval (MIR) are strongly interconnected. In this paper, we deal with music genre classification for symbolic music, and specifically MIDI, by combining the recently proposed novel similarity measure for sequences, SMBGT, with the k-Nearest Neighbor (k-NN) classifier. For all MIDI songs we first extract all of their channels and then transform each channel into a sequence of 2D points, providing information for pitch and duration of their music notes. The similarity between two songs is found by computing the SMBGT for all pairs of the songs' channels and getting the maximum pairwise channel score as their similarity. Each song is treated as a query to which k-NN is applied, and the returned genre of the classifier is the one with the majority of votes in the k neighbors. Classification accuracy results indicate that there is room for improvement, especially due to the ambiguous definitions of music genres that make it hard to clearly discriminate them. Using this framework can also help us analyze and understand potential disadvantages of SMBGT, and thus identify how it can be improved when used for classification of real-time sequences.
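The classification wrapper described here, maximum pairwise channel similarity fed into a majority-vote k-NN, is easy to separate from the SMBGT measure itself. In the sketch below `smbgt` is a placeholder for that similarity function (not reproduced here); songs are represented as lists of channels, each channel a sequence of (pitch, duration) points.

```python
from collections import Counter

def song_similarity(song_a, song_b, smbgt):
    """Similarity of two songs = best SMBGT score over all pairs of their channels."""
    return max(smbgt(ch_a, ch_b) for ch_a in song_a for ch_b in song_b)

def knn_genre(query, training, k, smbgt):
    """Majority vote over the k most similar training songs.
    `training` is a list of (song, genre) pairs."""
    ranked = sorted(training,
                    key=lambda pair: song_similarity(query, pair[0], smbgt),
                    reverse=True)
    votes = Counter(genre for _, genre in ranked[:k])
    return votes.most_common(1)[0][0]
```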
---
paper_title: Capturing the Temporal Domain in Echonest Features for Improved Classification Effectiveness
paper_content:
This paper proposes Temporal Echonest Features to harness the information available from the beat-aligned vector sequences of the features provided by The Echo Nest. Rather than aggregating them via simple averaging approaches, the statistics of temporal variations are analyzed and used to represent the audio content. We evaluate the performance on four traditional music genre classification test collections and compare them to state of the art audio descriptors. Experiments reveal that the exploitation of temporal variability from beat-aligned vector sequences and combinations of different descriptors leads to an improvement of classification accuracy. Comparing the results of Temporal Echonest Features to those of approved conventional audio descriptors used as benchmarks, these approaches perform well, often significantly outperforming their predecessors, and can be effectively used for large scale music genre classification.
---
paper_title: Enriching music mood annotation by semantic association reasoning
paper_content:
Mood annotation of music is challenging as it concerns not only audio content but also extra-musical information. It is a representative research topic about how to traverse the well-known semantic gap. In this paper, we propose a new music-mood-specific ontology. Novel ontology-based semantic reasoning methods are applied to effectively bridge content-based information with web-based resources. Also, the system can automatically discover closely relevant semantics for music mood and thus a novel weighting method is proposed for mood propagation. Experiments show that the proposed method outperforms purely content-based methods and significantly enhances the mood prediction accuracy. Furthermore, evaluations show the system's accuracy could be promisingly increased with the enrichment of metadata.
---
paper_title: Machine Learning Approaches for Mood Classification of Songs toward Music Search Engine
paper_content:
People often want to listen to music that best fits their current emotion. A grasp of the emotions in songs might be a great help for us to effectively discover music. In this paper, we aimed at automatically classifying the moods of songs based on lyrics and metadata, and proposed several methods for supervised learning of classifiers. In the future, we plan to use automatically identified moods of songs as metadata in our music search engine. Mood categories from a well-known contest on Audio Music Mood Classification (MIREX 2007) are applied in our system. The training data is collected from a LiveJournal blog site in which each blog entry is tagged with a mood and a song. Then three kinds of machine learning algorithms are applied for training classifiers: SVM, Naive Bayes and graph-based methods. The experiments showed that artist, sentiment words, and putting more weight on words in the chorus and title parts are effective for mood classification. The graph-based method promises a good improvement if we have rich relationship information among songs.
---
paper_title: A Neural Probabilistic Model for Predicting Melodic Sequences
paper_content:
We present an approach for modelling melodic sequences using Restricted Boltzmann Machines, with an application to folk melody classification. Results show that this model's predictive performance is slightly better in our experiment than that of previously evaluated n-gram models (7). The model has a simple structure and in our evaluation it scaled linearly in the number of free parameters with the length of the modelled context. A set of these models is used to classify 7 different styles of folk melodies with an accuracy of 61.74%.
---
paper_title: Enhanced peak picking for onset detection with recurrent neural networks
paper_content:
We present a new neural network based peak-picking algorithm for common onset detection functions. Compared to existing hand-crafted methods it yields a better performance and leads to a much lower number of false negative detections. The performance is evaluated on the basis of a huge dataset with over 25k annotated onsets and shows a significant improvement over existing methods in cases of signals with previously unknown levels.
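For orientation, the hand-crafted baselines such a neural peak picker is measured against usually select onsets as local maxima of the detection function that exceed an adaptive threshold and keep a minimum distance from the previous onset. A generic version of that baseline is sketched below (parameter values are arbitrary placeholders; this is not the network-based method of the paper).

```python
def pick_peaks(odf, pre=3, post=3, delta=0.1, min_gap=5):
    """Return frame indices of an onset detection function `odf` that are local
    maxima, exceed the local mean by `delta`, and are at least `min_gap` apart."""
    onsets, last = [], -min_gap
    for i in range(pre, len(odf) - post):
        window = odf[i - pre:i + post + 1]
        local_mean = sum(window) / len(window)
        if odf[i] == max(window) and odf[i] >= local_mean + delta and i - last >= min_gap:
            onsets.append(i)
            last = i
    return onsets
```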
---
paper_title: Dance Hit Song Science
paper_content:
With annual investments of several billions of dollars worldwide, record companies can benefit tremendously by gaining insight into what actually makes a hit song. This question is tackled in this research by focussing on the dance hit song prediction problem. A database of dance hit songs from 1985 until 2013 is built, including basic musical features, as well as more advanced features that capture a temporal aspect. Different classifiers are used to build and test dance hit prediction models. The resulting model has a good performance when predicting whether a song is a "top 10" dance hit versus a lower listed position.
---
paper_title: Automatic Music Classification with jMIR
paper_content:
Automatic music classification is a wide-ranging and multidisciplinary area of inquiry that offers significant benefits from both academic and commercial perspectives. This dissertation focuses on the development of jMIR, a suite of powerful, flexible, accessible and original software tools that can be used to design, share and apply a wide range of automatic music classification technologies. jMIR permits users to extract meaningful information from audio recordings, symbolic musical representations and cultural information available on the Internet; to use machine learning technologies to automatically build classification models; to automatically collect profiling statistics and detect metadata errors in musical collections; to perform experiments on large, stylistically diverse and well-labelled collections of music in both audio and symbolic formats; and to store and distribute information that is essential to automatic music classification in expressive and flexible standardised file formats. In order to have as diverse a range of applications as possible, care was taken to avoid tying jMIR to any particular types of music classification. Rather, it is designed to be a general-purpose toolkit that can be applied to arbitrary types of music classification. Each of the jMIR components is also designed to be accessible not only by users with a high degree of expertise in computer-based research technologies, but also by researchers with valuable musical expertise, but perhaps less of a background in computational research. Moreover, although the jMIR software can certainly be used as a set of ready-to-use tools for solving music classification problems directly, it is also designed to serve as an open-source platform for developing and testing original algorithms. This dissertation also describes several experiments that were performed with jMIR. These experiments were intended not only to verify the effectiveness of the software, but also to investigate the utility of combining information from different types of musical data, an approach with the potential to significantly advance the performance of automatic music classification in general.
---
paper_title: Exploring the music similarity space on the web
paper_content:
This article comprehensively addresses the problem of similarity measurement between music artists via text-based features extracted from Web pages. To this end, we present a thorough evaluation of different term-weighting strategies, normalization methods, aggregation functions, and similarity measurement techniques. In large-scale genre classification experiments carried out on real-world artist collections, we analyze several thousand combinations of settings/parameters that influence the similarity calculation process, and investigate in which way they impact the quality of the similarity estimates. Accurate similarity measures for music are vital for many applications, such as automated playlist generation, music recommender systems, music information systems, or intelligent user interfaces to access music collections by means beyond text-based browsing. Therefore, by exhaustively analyzing the potential of text-based features derived from artist-related Web pages, this article constitutes an important contribution to context-based music information research.
---
paper_title: Combining sources of description for approximating music similarity ratings
paper_content:
In this paper, we compare the effectiveness of basic acoustic features and genre annotations when adapting a music similarity model to user ratings. We use the Metric Learning to Rank algorithm to learn a Mahalanobis metric from comparative similarity ratings in the MagnaTagATune database. Using common formats for feature data, our approach can easily be transferred to other existing databases. Our results show that genre data allow more effective learning of a metric than simple audio features, but a combination of both feature sets clearly outperforms either individual set.
---
paper_title: A Filter-and-Refine Indexing Method for Fast Similarity Search in Millions of Music Tracks
paper_content:
We present a filter-and-refine method to speed up acoustic audio similarity queries which use the Kullback-Leibler divergence as similarity measure. The proposed method rescales the divergence and uses a modified FastMap [1] implementation to accelerate nearest-neighbor queries. The search for similar music pieces is accelerated by a factor of 10-30 compared to a linear scan but still offers high recall values (relative to a linear scan) of 95-99%. We show how the proposed method can be used to query several million songs for their acoustic neighbors very fast while producing almost the same results that a linear scan over the whole database would return. We present a working prototype implementation which is able to process similarity queries on a 2.5 million songs collection in about half a second on a standard CPU.
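The filter-and-refine query pattern itself is compact enough to sketch: a cheap approximate distance (standing in for the rescaled-divergence/FastMap projection) shortlists candidates, and only the shortlist is re-ranked with the expensive Kullback-Leibler-based distance. Both distance functions and the shortlist factor below are assumed inputs for illustration, not the paper's concrete choices.

```python
import heapq

def filter_and_refine(query, collection, cheap_dist, exact_dist, k=10, factor=20):
    """Return the k nearest items: pre-filter with `cheap_dist`, refine with `exact_dist`."""
    shortlist = heapq.nsmallest(k * factor, collection,
                                key=lambda item: cheap_dist(query, item))
    return heapq.nsmallest(k, shortlist,
                           key=lambda item: exact_dist(query, item))
```

The recall figures quoted above are exactly the price of this trade-off: the refine step can only recover neighbours that survived the filter step.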
---
paper_title: An Approach to Automatic Music Band Member Detection Based on Supervised Learning
paper_content:
Automatically extracting factual information about musical entities, such as detecting the members of a band, helps building advanced browsing interfaces and recommendation systems. In this paper, a supervised approach to learning to identify and to extract the members of a music band from related Web documents is proposed. While existing methods utilize manually optimized rules for this purpose, the presented technique learns from automatically labelled examples, making therefore also manual annotation obsolete. The presented approach is compared against existing rule-based methods for band-member extraction by performing systematic evaluation on two different test sets.
---
paper_title: Multidisciplinary Perspectives on Music Emotion Recognition: Implications for Content and Context-Based Models
paper_content:
The prominent status of music in human culture and everyday life is due in large part to its striking ability to elicit emotions, which may manifest from slight variation in mood to changes in our physical condition and actions. In this paper, we first review state of the art studies on music and emotions from different disciplines including psychology, musicology and music information retrieval. Based on these studies, we then propose new insights to enhance automated music emotion recognition models.
---
paper_title: Automated Music Emotion Recognition: A Systematic Evaluation
paper_content:
Abstract Automated music emotion recognition (MER) is a challenging task in Music Information Retrieval with wide-ranging applications. Some recent studies pose MER as a continuous regression problem in the Arousal-Valence (AV) plane. These consist of variations on a common architecture having a universal model of emotional response, a common repertoire of low-level audio features, a bag-of-frames approach to audio analysis, and relatively small data sets. These approaches achieve some success at MER and suggest that further improvements are possible with current technology. Our contribution to the state of the art is to examine just how far one can go within this framework, and to investigate what the limitations of this framework are. We present the results of a systematic study conducted in an attempt to maximize the prediction performance of an automated MER system using the architecture described. We begin with a carefully constructed data set, emphasizing quality over quantity. We address affect ind...
---
paper_title: Music Emotion Recognition: The Importance of Melodic Features
paper_content:
We study the importance of a melodic audio (MA) feature set in music emotion recognition (MER) and compare its performance to an approach using only standard audio (SA) features. We also analyse the fusion of both types of features. Employing only SA features, the best attained performance was 46.3%, while using only MA features the best outcome was 59.1% (F-measure). A combination of SA and MA features improved results to 64%. These results might have an important impact to help break the so-called glass ceiling in MER, as most current approaches are based on SA features.
---
paper_title: Essentia: An Audio Analysis Library for Music Information Retrieval
paper_content:
Paper presented at the 14th International Society for Music Information Retrieval Conference, held in Curitiba (Brazil), 4-8 November 2013.
---
paper_title: Contextual Music Information Retrieval and Recommendation: State of the Art and Challenges
paper_content:
Abstract Increasing amount of online music content has opened new opportunities for implementing new effective information access services–commonly known as music recommender systems–that support music navigation, discovery, sharing, and formation of user communities. In the recent years a new research area of contextual (or situational) music recommendation and retrieval has emerged. The basic idea is to retrieve and suggest music depending on the user’s actual situation, for instance emotional state, or any other contextual conditions that might influence the user’s perception of music. Despite the high potential of such idea, the development of real-world applications that retrieve or recommend music depending on the user’s context is still in its early stages. This survey illustrates various tools and techniques that can be used for addressing the research challenges posed by context-aware music retrieval and recommendation. This survey covers a broad range of topics, starting from classical music information retrieval (MIR) and recommender system (RS) techniques, and then focusing on context-aware music applications as well as the newer trends of affective and social computing applied to the music domain.
---
paper_title: Music Search and Recommendation
paper_content:
In the last ten years, our ways to listen to music have drastically changed: In earlier times, we went to record stores or had to use low bit-rate audio coding to get some music and to store it on PCs. Nowadays, millions of songs are within reach via on-line distributors. Some music lovers already have terabytes of music on their hard disc. Users are now no longer desperate to get music, but to select, to find the music they love. A number of technologies have been developed to address these new requirements. There are techniques to identify music and ways to search for music. Recommendation today is a hot topic, as is organizing music into playlists.
---
paper_title: SoCo: a social network aided context-aware recommender system
paper_content:
Contexts and social network information have been proven to be valuable information for building accurate recommender system. However, to the best of our knowledge, no existing works systematically combine diverse types of such information to further improve recommendation quality. In this paper, we propose SoCo, a novel context-aware recommender system incorporating elaborately processed social network information. We handle contextual information by applying random decision trees to partition the original user-item-rating matrix such that the ratings with similar contexts are grouped. Matrix factorization is then employed to predict missing preference of a user for an item using the partitioned matrix. In order to incorporate social network information, we introduce an additional social regularization term to the matrix factorization objective function to infer a user's preference for an item by learning opinions from his/her friends who are expected to share similar tastes. A context-aware version of Pearson Correlation Coefficient is proposed to measure user similarity. Real datasets based experiments show that SoCo improves the performance (in terms of root mean square error) of the state-of-the-art context-aware recommender system and social recommendation model by 15.7% and 12.2% respectively.
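The way a social term is typically attached to a matrix-factorization objective can be written down directly. The loss below is a generic sketch of that idea (the symbols, weights and form of the similarity-weighted penalty are illustrative choices, not lifted from SoCo): squared rating error, plus Frobenius regularization, plus a term pulling each user's latent factors towards those of similar friends.

```python
def social_mf_loss(ratings, P, Q, friends, sim, lam=0.1, beta=0.5):
    """ratings: iterable of (user, item, r); P[u], Q[i]: latent factor lists;
    friends[u]: iterable of u's friends; sim(u, f): similarity weight in [0, 1]."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sq = lambda v: sum(x * x for x in v)
    error = sum((r - dot(P[u], Q[i])) ** 2 for u, i, r in ratings)
    frob = lam * (sum(sq(p) for p in P.values()) + sum(sq(q) for q in Q.values()))
    social = beta * sum(sim(u, f) * sq([a - b for a, b in zip(P[u], P[f])])
                        for u in P for f in friends.get(u, ()))
    return error + frob + social
```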
---
paper_title: Implementing situation-aware and user-adaptive music recommendation service in semantic web and real-time multimedia computing environment
paper_content:
With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user's situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music contents semantically, and a user adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user's musical preferences and contexts, and supporting reasoning about the user's desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts on the general properties of music such as titles, artists and genres. In addition, it provides functionality for adding domain-specific ontologies, such as music features, moods and situations, in a hierarchical manner, for extensibility. Using this context ontology, we believe that logical reasoning rules can be inferred based on high-level (implicit) knowledge such as situations from low-level (explicit) knowledge. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case-study for music recommendation.
---
paper_title: dbrec — Music Recommendations Using DBpedia
paper_content:
This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm.
---
paper_title: Leveraging Social Media Sources to Generate Personalized Music Playlists
paper_content:
This paper presents MyMusic, a system that exploits social media sources for generating personalized music playlists. This work is based on the idea that information extracted from social networks, such as Facebook and Last.fm, might be effectively exploited for personalization tasks. Indeed, information related to music preferences of users can be easily gathered from social platforms and used to define a model of user interests. The use of social media is a very cheap and effective way to overcome the classical cold start problem of recommender systems. In this work we enriched social media-based playlists with new artists related to those the user already likes. Specifically, we compare two different enrichment techniques: the first leverages the knowledge stored on DBpedia, the structured version of Wikipedia, while the second is based on the content-based similarity between descriptions of artists. The final playlist is ranked and finally presented to the user that can listen to the songs and express her feedbacks. A prototype version of MyMusic was made available online in order to carry out a preliminary user study to evaluate the best enrichment strategy. The preliminary results encouraged keeping on this research.
---
paper_title: Contextualize Your Listening: The Playlist as Recommendation Engine
paper_content:
It is not hyperbole to note that a revolution has occurred in the way that we as a society distribute data and information. This revolution has come about through the confluence of Web-related technologies and the approaching universal adoption of internet connectivity. Add to this mix the normalised use of lossy compression in digital music and the increase in digital music download and streaming services; the result is an environment where nearly anyone can listen to nearly any piece of music nearly anywhere. This is in many respects the pinnacle in music access and availability. Yet, a listener is now faced with a dilemma of choice. Without being familiar with the ever-expanding millions of songs available, how does a listener know what to listen to? If a near-complete collection of recorded music is available what does one listen to next? While the world of music distribution underwent a revolution, the ubiquitous access and availability it created brought new problems in recommendation and discovery. In this thesis, a solution to these problems of recommendation and discovery is presented. We begin with an introduction to the core concepts around the playlist (i.e. sequential ordering of musical works). Next, we examine the history of the playlist as a recommendation technique, starting from before the invention of audio recording and moving through to modern automatic methods. This leads to an awareness that the creation of suitable playlists requires a high degree of knowledge of the relation between songs in a collection (e.g. song similarity). To better inform our base of knowledge of the relationships between songs we explore the use of social network analysis in combination with content-based music information retrieval. In an effort to show the promise of this more complex relational space, a fully automatic interactive radio system is proposed, using audio-content and social network data as a backbone. The implementation of the system is detailed. The creation of this system presents another problem in the area of evaluation. To that end, a novel distance metric between playlists is specified and tested. We then conclude with a discussion of what has been shown and what future work remains.
---
| Title: Music Data Analysis: A State-of-the-art Survey
Section 1: Introduction
Description 1: Introduce the importance of music in online activities and the potential for data science research in music data analysis.
Section 2: Prediction and Recognition of Musical Aspects
Description 2: Discuss the use of music data analysis for predicting and recognizing various musical aspects such as style, genre, mood, emotion, onset detection, and song hit prediction.
Section 3: Style
Description 3: Explore techniques and datasets used for recognizing and classifying musical styles using supervised and semi-supervised learning algorithms.
Section 4: Genre
Description 4: Review state-of-the-art techniques for genre detection and classification in music data analysis, including commonly used datasets and classification methods.
Section 5: Mood
Description 5: Cover methodologies and techniques used for mood classification in music, including semantic web technologies and supervised learning algorithms.
Section 6: Melodic Sequence
Description 6: Discuss machine learning approaches for modeling melodic sequences in music, primarily focusing on monophonic melodies.
Section 7: Onset Detection
Description 7: Detail the use of algorithms for detecting the onset of musical events in audio streams.
Section 8: Song Hit Prediction
Description 8: Investigate methods and machine learning techniques used to predict the potential success of songs before their release.
Section 9: Music Classification
Description 9: Examine various approaches and tools used in music classification, including the jMIR software for standardized MIR research.
Section 10: Similarity
Description 10: Analyze techniques used for measuring music similarity, including acoustic features and text-based features extracted from web pages.
Section 11: Emotion
Description 11: Discuss methods for music emotion recognition using melodic and standard audio features, and the potential of semantic web ontology in this field.
Section 12: Audio Analysis
Description 12: Explore the use of open-source libraries like Essentia for audio analysis and their contributions to music information retrieval.
Section 13: Music Recommendation
Description 13: Review various music recommendation systems, including context-aware and social network-based recommendation approaches.
Section 14: Playlist Recommendation
Description 14: Investigate systems and methodologies for generating personalized music playlists using social media data and content-based similarity measures.
Section 15: Discussion
Description 15: Summarize the identified aspects of music data analysis, common datasets used, and additional data types that present opportunities for future research.
Section 16: Conclusions
Description 16: Provide concluding remarks on the state-of-the-art in music data analysis, highlighting technological approaches and opportunities for further exploration. |
Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview | 12 | ---
paper_title: The Fokker-Planck equation
paper_content:
An arrangement for retaining a heart valve in a passageway of a cardiovascular system, and a heart valve useable therewith. The retaining arrangement includes an annular arrangement surrounding the circular heart valve and engaging it so as to be radially expandable and contractable relative thereto while securely holding the heart valve in place. The annular arrangement may include a single overlapping loop or a plurality of arcuate segments, all of which could slidingly engage a circumferentially extending flange on the outside of the heart valve. The heart valve includes a plurality of circular elements interfitting in a single plane to close the valve and extending axially relative to each other to open the valve and hence permit fluid flow through the passageway.
---
paper_title: The Algorithmic Beauty of Plants
paper_content:
1 Graphical modeling using L-systems.- 1.1 Rewriting systems.- 1.2 DOL-systems.- 1.3 Turtle interpretation of strings.- 1.4 Synthesis of DOL-systems.- 1.4.1 Edge rewriting.- 1.4.2 Node rewriting.- 1.4.3 Relationship between edge and node rewriting.- 1.5 Modeling in three dimensions.- 1.6 Branching structures.- 1.6.1 Axial trees.- 1.6.2 Tree OL-systems.- 1.6.3 Bracketed OL-systems.- 1.7 Stochastic L-systems.- 1.8 Context-sensitive L-systems.- 1.9 Growth functions.- 1.10 Parametric L-systems.- 1.10.1 Parametric OL-systems.- 1.10.2 Parametric 2L-systems.- 1.10.3 Turtle interpretation of parametric words.- 2 Modeling of trees.- 3 Developmental models of herbaceous plants.- 3.1 Levels of model specification.- 3.1.1 Partial L-systems.- 3.1.2 Control mechanisms in plants.- 3.1.3 Complete models.- 3.2 Branching patterns.- 3.3 Models of inflorescences.- 3.3.1 Monopodial inflorescences.- 3.3.2 Sympodial inflorescences.- 3.3.3 Polypodial inflorescences.- 3.3.4 Modified racemes.- 4 Phyllotaxis.- 4.1 The planar model.- 4.2 The cylindrical model.- 5 Models of plant organs.- 5.1 Predefined surfaces.- 5.2 Developmental surface models.- 5.3 Models of compound leaves.- 6 Animation of plant development.- 6.1 Timed DOL-systems.- 6.2 Selection of growth functions.- 6.2.1 Development of nonbranching filaments.- 6.2.2 Development of branching structures.- 7 Modeling of cellular layers.- 7.1 Map L-systems.- 7.2 Graphical interpretation of maps.- 7.3 Microsorium linguaeforme.- 7.4 Dryopteris thelypteris.- 7.5 Modeling spherical cell layers.- 7.6 Modeling 3D cellular structures.- 8 Fractal properties of plants.- 8.1 Symmetry and self-similarity.- 8.2 Plant models and iterated function systems.- Epilogue.- Appendix A Software environment for plant modeling.- A.1 A virtual laboratory in botany.- A.2 List of laboratory programs.- Appendix B About the figures.- Turtle interpretation of symbols.
---
paper_title: Cellerator: extending a computer algebra system to include biochemical arrows for signal transduction simulations.
paper_content:
Summary: Cellerator describes single and multi-cellular signal transduction networks (STN) with a compact, optionally palette-driven, arrow-based notation to represent biochemical reactions and transcriptional activation. Multicompartment systems are represented as graphs with STNs embedded in each node. Interactions include mass-action, enzymatic, allosteric and connectionist models. Reactions are translated into differential equations and can be solved numerically to generate predictive time courses or output as systems of equations that can be read by other programs. Cellerator simulations are fully extensible and portable to any operating system that supports Mathematica, and can be indefinitely nested within larger data structures to produce highly scaleable models. Availability: Cellerator can be licensed free of charge to noncommercial academic, U.S. government, and nonprofit users. Details and sample notebooks are available at http:
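The core translation alluded to here, turning reaction arrows into ordinary differential equations, is easy to illustrate for plain mass-action kinetics: every reaction contributes a rate k times the product of its reactant concentrations, subtracted from each reactant and added to each product. The sketch below is a generic Python illustration of that rule, not Cellerator's Mathematica implementation.

```python
def mass_action_rhs(reactions):
    """reactions: list of (reactants, products, k), each species given by name.
    Returns a function mapping a concentration dict to d[species]/dt."""
    def rhs(conc):
        d = {s: 0.0 for r, p, _ in reactions for s in r + p}
        for reactants, products, k in reactions:
            rate = k
            for s in reactants:
                rate *= conc[s]
            for s in reactants:
                d[s] -= rate
            for s in products:
                d[s] += rate
        return d
    return rhs

# Example: the enzymatic scheme E + S <-> ES -> E + P written as three reactions.
rhs = mass_action_rhs([(["E", "S"], ["ES"], 1.0),
                       (["ES"], ["E", "S"], 0.5),
                       (["ES"], ["E", "P"], 0.1)])
```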
---
paper_title: Tracking Cell Signals in Fluorescent Images
paper_content:
In this paper we present the techniques for tracking cell signal in GFP (Green Fluorescent Protein) images of growing cell colonies. We use such tracking for both data extraction and dynamic modeling of intracellular processes. The techniques are based on optimization of energy functions, which simultaneously determines cell correspondences, while estimating the mapping functions. In addition to spatial mappings such as affine and Thin-Plate Spline mapping, the cell growth and cell division histories must be estimated as well. Different levels of joint optimization are discussed. The most unusual tracking feature addressed in this paper is the possibility of one-to-two correspondences caused by cell division. A novel extended softassign algorithm for solutions of one-to-many correspondences is detailed in this paper. The techniques are demonstrated on three sets of data: growing Bacillus subtilis and E. coli colonies and a developing plant shoot apical meristem. The techniques are currently used by biologists for data extraction and hypothesis formation.
---
| Title: Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview
Section 1: Introduction
Description 1: Write an introduction about probabilistic models and how they can be unified within the framework of Stochastic Parameterized Grammars (SPGs) and Dynamical Grammars (DGs).
Section 2: Syntax Definition
Description 2: Describe the formal syntax of Stochastic Parameterized Grammars, including rule structure, parameter types, and probability rate functions.
Section 3: Semantic Maps
Description 3: Explain the semantics function Ψ for continuous-time and discrete-time stochastic processes derived from the grammar.
Section 4: Operator Algebra
Description 4: Detail the operator algebra used to define the semantics, including creation and annihilation operators and their implications for probability conservation.
Section 5: Continuous-time Semantics
Description 5: Define the continuous-time semantics of the grammar, particularly focusing on the time evolution operator and probability inflow/outflow.
Section 6: Discrete-time SPG Semantics
Description 6: Elaborate on the discrete-time semantics, discussing the operator H and how it conditions probability distribution over discrete rule-firing events.
Section 7: Time-ordered Product Expansion
Description 7: Introduce the time-ordered product expansion tool and its application in generating simulation algorithms and Feynman diagram expansions.
Section 8: Relation between Semantic Maps
Description 8: Discuss the relation between continuous-time and discrete-time semantic maps and the conditions under which they converge.
Section 9: Discussion: Transformations of SPGs
Description 9: Explore transformations of SPGs that preserve semantic properties, with a focus on syntactic transformations and inference algorithms.
Section 10: Examples and Reductions
Description 10: Provide examples of other frameworks that can be expressed or reduced to SPGs, including biochemical reaction networks, logic programs, graph grammars, and differential equations.
Section 11: Discussion: Relevance to Artificial Intelligence and Computational Science
Description 11: Discuss the relevance of SPGs and DGs to AI and computational science, including their applications in machine learning, multiscale modeling, and the construction of mathematical models.
Section 12: Conclusion
Description 12: Summarize the established syntax and semantics for the probabilistic modeling language and its applicability to various fields and models. |
A Review of the Evidence for the Temporal Transferability of Mode-Destination Models | 12 | ---
paper_title: A validation test of a disaggregate mode choice model
paper_content:
Abstract A model of work trip mode choice was developed on a sample of workers taken before Bay Area Rapid Transit (BART) opened for service. Validation tests of the model were performed on a sample of workers taken after BART service began. Two validation methods were used: (1) the actual mode shares in the post-BART sample were compared to the mode shares predicted by the models estimated on the pre-BART sample, and (2) the parameters of models estimated on the post-BART sample were compared with the parameters of the models estimated pre-BART. Three possible reasons were explored for the differences in actual and predicted shares and in the pre- and post-BART model parameters: (1) failure of the independence from irrelevant alternatives (IIA) property of the multinomial logit model, (2) non-genericity of certain model variables, and (3) incorrect data for walk times. It was found that non-genericity and incorrect data contributed substantially to the mispredictions, while failure of the IIA property contributed less. The present study concerns only one model and one transportation environment. The results of this test, however, can be viewed along with the results of other validation studies to obtain a sense of the predictive ability of disaggregate mode choice models.
---
paper_title: A COMPARISON OF THE PREDICTIVE ABILITY OF MODE CHOICE MODELS WITH VARIOUS LEVELS OF COMPLEXITY
paper_content:
Abstract Models of mode choice have recently been developed which include a large number of explanatory variables. The inclusion of some of these variables is obviously the result of trial-and-error analysis of various model specifications: the researcher tries various specifications until he obtains a specification which is consistent with a priori beliefs and fits the data fairly well. This method of model specification allows one to “learn” from the data, but is also open to the criticism that the resultant model simply reflects relations which happen to exist in the sample, rather than true, behavioral relations. This paper examines this question. A complex model is presented which was developed after attempting a wide variety of specifications. The predictive ability of this model is compared with that of models with fewer variables, each of which could be included on the basis of a priori ideas. It is found that the complex model predicts best, indicating that the behavioral content of the model which was developed through “learning” from the data is greater than that of models which were specified on a priori beliefs.
---
paper_title: SOME STUDIES OF THE TEMPORAL STABILITY OF PERSON TRIP GENERATION MODELS
paper_content:
Abstract In an attempt to assess the temporal stability of one type of person trip generation model (the category analysis model with the individual as the behavioural unit), some comparative studies based on the Reading Travel Surveys of 1962 and 1971 are described. Categories are defined with respect to a subset of the variables employment status, socio-economic group, household structure, car availability and household car ownership and particular attention is given to optional trips made by persons in certain employment status groups notably housewives and retired persons. Differences in trip rates between the two years are tested for statistical significance. In an introductory section the critical nature of assumptions concerning the stability of trip generation models which are to be used in situations where changes in mobility levels in the population are anticipated is emphasised. Following this the specification of the tested models is presented and the data sources are described, with particular attention to the synthesis of car availability data. It is tentatively concluded that the trip rates of certain groups in the population are susceptible to variation in response to changes in the level of accessibility.
---
paper_title: TESTS OF THE TEMPORAL STABILITY OF TRAVEL SIMULATION MODELS IN SOUTHEASTERN WISCONSIN
paper_content:
The assumption of the stability of travel simulation models over time is an essential element of the urban transportation planning process. This assumption was tested using travel simulation models developed with data from an origin and destination survey conducted in 1963 and travel inventory data from a similar study conducted in 1972. Both surveys were conducted by the Southeastern Wisconsin Regional Planning Commission; the travel models tested were those that had been used in the preparation of a regional land use and transportation plan for southeastern Wisconsin that was completed in 1966. The testing, performed as a part of the reappraisal of the land use and transportation recommendations of 1966, covered the temporal stability of the three major travel simulation models - trip generation, modal split, and trip distribution - and indicated that 1972 trip generation, transit use, and trip length characteristics within southeastern Wisconsin were predicted with adequate accuracy through the application of the original 1963 models. /Author/
---
| Title: A Review of the Evidence for the Temporal Transferability of Mode-Destination Models
Section 1: INTRODUCTION
Description 1: Discuss the importance of developing travel behaviour models that can replicate real-world travel choices and generate long-term forecasts. Introduce the concept of temporal transferability and highlight the need for evaluating it in the context of mode-destination models.
Section 2: Defining Transferability
Description 2: Define transferability with reference to existing literature and discuss its broad and narrow interpretations. Emphasize the focus on model transferability over underlying behavioural theories.
Section 3: Temporal and Spatial Transferability
Description 3: Differentiate between temporal and spatial transferability, outlining how each applies in the context of mode-destination choice models. Discuss the challenges and considerations involved in assessing these types of transferability.
Section 4: Conditions for Transferability
Description 4: Present the conditions under which models are considered transferable, citing prior research. Explore hypotheses and empirical tests from the early literature about disaggregate and aggregate models.
Section 5: Impacts of Violation of the Transferability Assumption
Description 5: Analyze the potential implications for forecasting when temporal transferability does not hold, emphasizing the added uncertainty to model predictions and the importance of understanding error sources.
Section 6: Assessing Transferability
Description 6: Discuss the methodologies used to test for temporal transferability, including statistical tests of parameter equality and predictive measures. Explain their significance and application in real-world model validation.
Section 7: LITERATURE REVIEW
Description 7: Review and synthesize key findings from the literature on temporal transferability of travel models, breaking studies down into sub-sections on mode choice, validation studies, and other relevant transferability research.
Section 8: Mode Choice Transferability Studies
Description 8: Detail studies that have investigated the direct temporal transferability of mode choice models, including both findings that support and challenge model parameter stability over time.
Section 9: Mode Choice Validation Studies
Description 9: Summarize research that has validated mode choice models by comparing predictions against observed mode shares, analyzing the significance of varying model specifications and their impact on transferability.
Section 10: Other Transferability Studies
Description 10: Present additional studies on the transferability of other model types, primarily focusing on trip generation models, and discuss their general findings about temporal stability and predictive accuracy.
Section 11: Summary and Critique
Description 11: Summarize the key findings from the literature on temporal transferability, noting gaps in research and limitations of existing studies. Critically assess the adequacy of current evidence in supporting long-term forecasting.
Section 12: DIRECTIONS FOR FUTURE RESEARCH
Description 12: Outline recommendations for future research aimed at improving the understanding of the temporal transferability of mode-destination models. Suggest methodologies and datasets that could provide new insights, and propose exploring the transferability of more advanced models. |
A Survey for Load Balancing in Mobile WiMAX Networks | 12 | ---
paper_title: WiMAX: Standards and Security
paper_content:
As the demand for broadband services continues to grow worldwide, traditional solutions, such as digital cable and fiber optics, are often difficult and expensive to implement, especially in rural and remote areas. The emerging WiMAX system satisfies the growing need for high data-rate applications such as voiceover IP, video conferencing, interactive gaming, and multimedia streaming. WiMAX deployments not only serve residential and enterprise users but can also be deployed as a backhaul for Wi-Fi hotspots or 3G cellular towers. By providing affordable wireless broadband access, the technology of WiMAX will revolutionize broadband communications in the developed world and bridge the digital divide in developing countries. Part of the WiMAX Handbook, this volume focuses on the standards and security issues of WiMAX. The book examines standardized versus proprietary solutions for wireless broadband access, reviews the core medium access control protocol of WiMAX systems, and presents carriers' perspectives on wireless services. It also discusses the main mobility functions of the IEEE 802.16e standard, describes how to speed up WiMAX handover procedures, presents the 802.16 mesh protocol, and surveys the testing and certification processes used for WiMAX products. In addition, the book reviews the security features of both IEEE 802.16 and WiMAX. With the revolutionary technology of WiMAX, the lives of many will undoubtedly improve, thereby leading to greater economic empowerment.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
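As a rough illustration of the kind of base-station-initiated decision logic this abstract discusses, the sketch below moves a mobile station from an overloaded BS to the least-loaded neighbour and uses a hysteresis margin plus a minimum dwell time to avoid ping-pong handovers. The load threshold, margin, dwell time and data structures are assumptions made for illustration, not the scheme actually proposed in the paper.

```python
# Illustrative sketch of BS-initiated load-balancing handover with ping-pong
# avoidance. Threshold, margin and dwell time are assumed values.
import time

LOAD_THRESHOLD = 0.8   # serving BS considered overloaded above this utilisation
MARGIN = 0.15          # target must be at least this much less loaded (hysteresis)
MIN_DWELL_S = 10.0     # do not move an MS again within this many seconds

def pick_load_balancing_handover(serving_bs, neighbour_loads, ms_list, now):
    """Return (ms_id, target_bs) if a directed handover should be triggered."""
    if serving_bs["load"] <= LOAD_THRESHOLD or not neighbour_loads:
        return None
    target_bs, target_load = min(neighbour_loads.items(), key=lambda kv: kv[1])
    if serving_bs["load"] - target_load < MARGIN:
        return None                      # not enough imbalance: avoid ping-pong
    for ms in ms_list:                   # prefer an MS that has not just moved
        if now - ms["last_handover"] >= MIN_DWELL_S:
            return ms["id"], target_bs
    return None

# Example usage with made-up loads.
serving = {"id": "BS1", "load": 0.92}
neighbours = {"BS2": 0.55, "BS3": 0.70}
mobiles = [{"id": "MS7", "last_handover": 0.0}]
print(pick_load_balancing_handover(serving, neighbours, mobiles, now=time.time()))
```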
---
paper_title: Improved access point selection
paper_content:
This paper presents Virgil, an automatic access point discovery and selection system. Unlike existing systems that select access points based entirely on received signal strength, Virgil scans for all available APs at a location, quickly associates to each, and runs a battery of tests to estimate the quality of each AP's connection to the Internet. Virgil also probes for blocked or redirected ports, to guide AP selection in favor of preserving application services that are currently in use. Results of our evaluation across five neighborhoods in three cities show Virgil finds a usable connection from 22% to 100% more often than selecting based on signal strength alone. By caching AP test results, Virgil both improves performance and success rate. Our overhead is acceptable and is shown to be faster than manually selecting an AP with Windows XP.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks
paper_content:
Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. The key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.
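The additive-increase/multiplicative-decrease rule analysed in this paper is easy to state concretely. The sketch below is a generic AIMD window update with two users sharing a fixed capacity; the increase step, decrease factor and the synthetic congestion signal are illustrative assumptions rather than values from the paper.

```python
# Generic AIMD (additive increase, multiplicative decrease) window update.
# The parameters a and b and the binary congestion signal are assumptions.

def aimd_step(window, congested, a=1.0, b=0.5, w_min=1.0):
    """One control interval: grow additively, shrink multiplicatively."""
    if congested:
        return max(w_min, window * b)
    return window + a

# Two users converging toward a fair share of a capacity of 20 units.
w1, w2, capacity = 2.0, 14.0, 20.0
for _ in range(40):
    congested = (w1 + w2) > capacity      # binary feedback from the network
    w1 = aimd_step(w1, congested)
    w2 = aimd_step(w2, congested)
print(round(w1, 2), round(w2, 2))         # the two loads end up close together
```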
---
paper_title: Load balancing in overlapping wireless LAN cells
paper_content:
We propose a load-balancing scheme for overlapping wireless LAN cells. Agents running in each access point periodically broadcast the local load level via the Ethernet backbone and determine whether the access point is overloaded, balanced or under-loaded by comparing it with the received reports. The load metric is the access point throughput. Overloaded access points force the handoff of some stations to balance the load. Only the under-loaded access points accept the roaming stations, which minimizes the number of handoffs. We show via experimental evaluation that our balancing scheme increases the total wireless network throughput and decreases the cell delay.
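The balancing loop described above - each access point learns the others' throughput over the wired backbone, classifies itself as overloaded, balanced or under-loaded, and overloaded APs push stations only toward under-loaded APs - can be sketched roughly as follows. The classification thresholds and the rule for which station to move are illustrative assumptions.

```python
# Rough sketch of AP load balancing over overlapping WLAN cells.
# Thresholds and the choice of station to move are assumptions.

def classify(load, mean_load, delta=0.2):
    if load > mean_load * (1 + delta):
        return "overloaded"
    if load < mean_load * (1 - delta):
        return "underloaded"
    return "balanced"

def balance(aps):
    """aps: {ap_id: {"load": Mbps, "stations": {sta_id: Mbps}}} -> forced handoffs."""
    mean_load = sum(ap["load"] for ap in aps.values()) / len(aps)
    states = {ap_id: classify(ap["load"], mean_load) for ap_id, ap in aps.items()}
    accepting = [ap_id for ap_id, s in states.items() if s == "underloaded"]
    handoffs = []
    for ap_id, ap in aps.items():
        if states[ap_id] == "overloaded" and accepting and ap["stations"]:
            # one possible policy: move the station carrying the least traffic
            sta = min(ap["stations"], key=ap["stations"].get)
            handoffs.append((sta, ap_id, accepting[0]))
    return handoffs

aps = {
    "AP1": {"load": 30.0, "stations": {"sta1": 12.0, "sta2": 18.0}},
    "AP2": {"load": 8.0,  "stations": {"sta3": 8.0}},
}
print(balance(aps))   # e.g. [('sta1', 'AP1', 'AP2')]
```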
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: Adaptive handoff algorithms for dynamic traffic load distribution in 4G mobile networks
paper_content:
In 4G mobile networks, which are packet-based cellular systems, resources are shared among all users and the amount of available resources is determined by the traffic load. If traffic load is concentrated in a cell, that cell becomes a hotspot cell. A hotspot cell can cause calls to be blocked or dropped and consequently degrades the service quality, even though available resources remain in neighboring cells. Therefore, it is essential to distribute the traffic load of the hotspot cell in order to effectively use the remaining resources and maintain acceptable service quality. We propose adaptive handoff algorithms for dynamic traffic load distribution in the hotspot cell. In the simulations, we find that these algorithms can reduce the call drop rate of new and handoff calls and thus enhance the service quality.
---
paper_title: A dynamic load balancing strategy for channel assignment using selective borrowing in cellular mobile environment
paper_content:
We propose a dynamic load balancing scheme for the channel assignment problem in a cellular mobile environment. As an underlying approach, we start with a fixed assignment scheme where each cell is initially allocated a set of channels, each to be assigned on demand to a user in the cell. A cell is classified as 'hot', if the degree of coldness of a cell (defined as the ratio of the number of available channels to the total number of channels for that cell), is less than or equal to some threshold value. Otherwise the cell is 'cold'. Our load balancing scheme proposes to migrate unused channels from underloaded cells to an overloaded one. This is achieved through borrowing a fixed number of channels from cold cells to a hot one according to a channel borrowing algorithm. A channel assignment strategy is also proposed based on dividing the users in a cell into three broad types – 'new', 'departing', 'others' – and forming different priority classes of channel demands from these three types of users. Assignment of the local and borrowed channels are performed according to the priority classes. Next, a Markov model for an individual cell is developed, where the state is determined by the number of occupied channels in the cell. The probability for a cell being hot and the call blocking probability in a hot cell are derived, and a method to estimate the value of the threshold is also given. Detailed simulation experiments are carried out in order to evaluate our proposed methodology. The performance of our load balancing scheme is compared with the fixed channel assignment, simple borrowing, and two existing strategies with load balancing (e.g., directed retry and CBWL), and a significant improvement of the system behavior is noted in all cases.
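The hot/cold classification used in this scheme compares the 'degree of coldness' - the ratio of available channels to total channels in a cell - against a threshold, and then migrates unused channels from cold cells to hot ones. A minimal sketch of that classification and a naive borrowing pass follows; the threshold value, the cell data and the fixed borrow quota are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch of threshold-based hot/cold classification and channel borrowing.
# Threshold, cell data and the per-donor borrow quota are illustrative assumptions.

THRESHOLD = 0.2          # a cell is "hot" if available/total <= THRESHOLD
BORROW_QUOTA = 2         # fixed number of channels borrowed from each cold donor

def coldness(cell):
    return cell["available"] / cell["total"]

def borrow_channels(cells):
    hot = [c for c in cells if coldness(c) <= THRESHOLD]
    cold = [c for c in cells if coldness(c) > THRESHOLD]
    transfers = []
    for h in hot:
        for donor in sorted(cold, key=coldness, reverse=True):   # coldest first
            give = min(BORROW_QUOTA, donor["available"])
            if give > 0:
                donor["available"] -= give
                h["available"] += give
                transfers.append((donor["id"], h["id"], give))
    return transfers

cells = [
    {"id": "C1", "total": 20, "available": 2},    # hot  (coldness 0.10)
    {"id": "C2", "total": 20, "available": 12},   # cold (coldness 0.60)
    {"id": "C3", "total": 20, "available": 9},    # cold (coldness 0.45)
]
print(borrow_channels(cells))   # [('C2', 'C1', 2), ('C3', 'C1', 2)]
```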
---
paper_title: A Novel Inter-FA Handover Scheme for Load Balancing in IEEE 802.16e System
paper_content:
As the demand for high data rates increases, load balancing across cells is highly desirable. A handover (HO) among frequency assignments (FAs) is theoretically possible for load balancing when a serving FA is too crowded to accommodate an additional bandwidth request in a system such as IEEE 802.16e, where multiple FAs are possible. However, this handover is impossible in the current specification because an indication of the target FA is not defined. It would also require an unnecessarily long scanning and network re-entry procedure. In this paper, we propose a novel inter-FA handover, i.e. a handover among FAs for load balancing, by modifying the general handover algorithm. It also satisfies the HO user's quality of service (QoS) by reducing HO latency, unnecessary scanning, and network re-entry. We compare it with previous HO schemes for load balancing by simulation and show that this HO scheme achieves significantly better performance in terms of HO user QoS and HO delay.
---
paper_title: Load Balance for Multi-Layer Reuse Scenarios on Mobile WiMAX System
paper_content:
This paper proposes a novel handover algorithm to balance the load of the layers in a multi-reuse scenario. Each layer works independently of the others and has its own scheduling process, coverage and attached mobile stations. The proposed algorithm balances the resources among layers by moving mobile stations from one layer to another according to their QoS requirements and channel conditions. Results show that it can increase the cell coverage while still keeping a satisfactory quality for the VoIP calls. Effective spectral efficiency also increased when compared with a single tri-sectorized reuse-1 scenario. The algorithm is also designed to avoid ping-pong handovers and to work for any number of layers.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: An effective traffic management scheme using adaptive handover time in next-generation cellular networks
paper_content:
Next-generation cellular networks are expected to support various multimedia services over IP networks with high spectral efficiency. In these networks, hotspot cells can occur when available wireless resources at some location are not sufficient to sustain the needs of users. The hotspot cell can potentially lead to blocked or dropped calls, which can deteriorate the service quality for users. A group of users enjoying multimedia services could move around in the networks generating heavy flows of traffic. This situation can generate a hotspot cell which has a short lifespan of only a few minutes. If there is a steady increase in the number of these users, the hotspot cell which has a short lifespan will occur more frequently in the overall service area. In this paper, we propose a handover-based traffic management scheme which can effectively deal with hotspot cells in next-generation cellular networks. With our scheme, the current serving cell can recognize the traffic load status of the target cell in advance, before handover execution. Then, according to the load status, it adaptively controls the handover time. The handover-based traffic management scheme can effectively and flexibly handle hotspot cells in the networks. Acceptable service quality can also be supported as users continuously maintain communication links. In the simulation results, we find that our scheme generates a smaller number of hotspot cells and supports higher service quality than the schemes compared.
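The central idea here - the serving cell learns the target cell's load status before handover execution and adapts the handover time accordingly - can be approximated with a simple rule that shifts the handover trigger depending on the target load, as in the sketch below. The load bands and hysteresis offsets are assumptions made for illustration; the paper's actual control rule may differ.

```python
# Illustrative sketch: adapt the handover trigger to the target cell's load.
# Load bands and hysteresis offsets are assumed values.

BASE_HYSTERESIS_DB = 3.0

def handover_hysteresis(target_load):
    """Heavier target load -> larger hysteresis -> the handover happens later."""
    if target_load < 0.5:
        return BASE_HYSTERESIS_DB - 1.0    # under-loaded target: hand over earlier
    if target_load < 0.8:
        return BASE_HYSTERESIS_DB
    return BASE_HYSTERESIS_DB + 2.0        # hotspot target: delay the handover

def should_hand_over(serving_rssi_dbm, target_rssi_dbm, target_load):
    return target_rssi_dbm >= serving_rssi_dbm + handover_hysteresis(target_load)

print(should_hand_over(-80.0, -76.0, target_load=0.4))   # True: handover allowed early
print(should_hand_over(-80.0, -76.0, target_load=0.9))   # False: handover deferred
```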
---
paper_title: Adaptive handoff algorithms for dynamic traffic load distribution in 4G mobile networks
paper_content:
In 4G mobile networks, which are packet-based cellular systems, resources are shared among all users and the amount of available resources is determined by the traffic load. If traffic load is concentrated in a cell, that cell becomes a hotspot cell. A hotspot cell can cause calls to be blocked or dropped and consequently degrades the service quality, even though available resources remain in neighboring cells. Therefore, it is essential to distribute the traffic load of the hotspot cell in order to effectively use the remaining resources and maintain acceptable service quality. We propose adaptive handoff algorithms for dynamic traffic load distribution in the hotspot cell. In the simulations, we find that these algorithms can reduce the call drop rate of new and handoff calls and thus enhance the service quality.
---
paper_title: A Vertical Handoff Decision Algorithm (VHDA) and a Call Admission Control (CAC) policy in integrated network between WiMax and UMTS
paper_content:
In this paper, we propose to provide ubiquitous access anywhere, anytime by covering remote areas using Worldwide Interoperability for Microwave Access (WiMAX), complementing the existing Universal Mobile Telecommunications System (UMTS). In this proposed system integrating WiMAX and UMTS, certain areas are covered by both systems. The overlapping areas are modeled as WiMAX cells overlaid with UMTS cells due to the larger coverage of WiMAX. In order to facilitate interworking communications between the two different systems, we design and develop a vertical handoff algorithm and a call admission control policy for the interworking system in the overlaid service areas, taking into consideration the quality of service (QoS), the cost of handoff and other parameters. Typical QoS parameters may be load balancing, handoff latency, packet loss ratio (PLR) and transmission rates. For the cost of handoff, we consider the mobile velocity, power consumption, available bandwidth, etc. The call admission control (CAC) scheme mainly depends on the wideband power, the throughput and the priority of newly arriving calls and handoff calls.
---
paper_title: A Novel Inter-FA Handover Scheme for Load Balancing in IEEE 802.16e System
paper_content:
As the demand for high data rates increases, load balancing across cells is highly desirable. A handover (HO) among frequency assignments (FAs) is theoretically possible for load balancing when a serving FA is too crowded to accommodate an additional bandwidth request in a system such as IEEE 802.16e, where multiple FAs are possible. However, this handover is impossible in the current specification because an indication of the target FA is not defined. It would also require an unnecessarily long scanning and network re-entry procedure. In this paper, we propose a novel inter-FA handover, i.e. a handover among FAs for load balancing, by modifying the general handover algorithm. It also satisfies the HO user's quality of service (QoS) by reducing HO latency, unnecessary scanning, and network re-entry. We compare it with previous HO schemes for load balancing by simulation and show that this HO scheme achieves significantly better performance in terms of HO user QoS and HO delay.
---
paper_title: Multi-service load balancing in a heterogeneous network
paper_content:
In this paper, we investigate load balancing mechanisms in a heterogeneous WiMAX/WLAN network. By taking both the service characteristics of the two networks and the service requirements of two kinds of applications into consideration, we distribute all streaming applications to WiMAX with preemptive service priority. Then the remaining capacity of WiMAX and the entire capacity of the WLAN are utilized for serving elastic applications. Accordingly, we propose two size thresholds based on which elastic applications are dispatched to WiMAX or the WLAN. Numerical results indicate that dispatching elastic applications of smaller size to the WLAN performs fairly well compared with other dispatching mechanisms. It is also found that the size distribution has a significant impact on the performance of load balancing mechanisms.
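The dispatching rule evaluated in this paper - streaming flows always go to WiMAX, while elastic flows are split between WiMAX and the WLAN by their size, with the smaller ones sent to the WLAN - can be written down directly, as in the sketch below. The concrete threshold value is an assumption for illustration.

```python
# Sketch of size-threshold dispatching between WiMAX and WLAN.
# The 1 MB threshold is an assumed value for illustration.

SIZE_THRESHOLD_BYTES = 1_000_000

def dispatch(flow):
    """flow: {"kind": "streaming"|"elastic", "size": bytes} -> target network."""
    if flow["kind"] == "streaming":
        return "WiMAX"                       # streaming always served by WiMAX
    if flow["size"] <= SIZE_THRESHOLD_BYTES:
        return "WLAN"                        # small elastic flows go to the WLAN
    return "WiMAX"                           # large elastic flows stay on WiMAX

flows = [
    {"kind": "streaming", "size": 50_000_000},
    {"kind": "elastic",   "size": 200_000},
    {"kind": "elastic",   "size": 8_000_000},
]
print([dispatch(f) for f in flows])          # ['WiMAX', 'WLAN', 'WiMAX']
```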
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: Dual Handover Procedures for FA Load Balancing in WiBro Systems
paper_content:
WiBro is defined as a subset of the IEEE 802.16 standard and is developed based on mobile WiMAX. WiBro delivers wireless data at a high transfer rate to provide mobile broadband services. However, it is necessary to consider radio resource efficiency due to the limited resources. In this paper, we propose a frequency assignment (FA) load balancing algorithm in order to use resources efficiently, by controlling two types of handover procedure and scheduling. When resource unbalancing is detected in an FA, we adaptively choose BS (base station) initiated handover or MS (mobile station) initiated handover to disperse entering users and guarantee QoS (quality of service) in real-time services. The difference between call admission control (CAC) and the FA load balancing scheme is that the FA load balancing scheme makes the MS connect with other FAs without disconnection. The proposed algorithm shows a substantial performance improvement over the existing algorithm in user acceptance and QoS for real-time services.
---
paper_title: WiMAX: Standards and Security
paper_content:
As the demand for broadband services continues to grow worldwide, traditional solutions, such as digital cable and fiber optics, are often difficult and expensive to implement, especially in rural and remote areas. The emerging WiMAX system satisfies the growing need for high data-rate applications such as voiceover IP, video conferencing, interactive gaming, and multimedia streaming. WiMAX deployments not only serve residential and enterprise users but can also be deployed as a backhaul for Wi-Fi hotspots or 3G cellular towers. By providing affordable wireless broadband access, the technology of WiMAX will revolutionize broadband communications in the developed world and bridge the digital divide in developing countries. Part of the WiMAX Handbook, this volume focuses on the standards and security issues of WiMAX. The book examines standardized versus proprietary solutions for wireless broadband access, reviews the core medium access control protocol of WiMAX systems, and presents carriers' perspectives on wireless services. It also discusses the main mobility functions of the IEEE 802.16e standard, describes how to speed up WiMAX handover procedures, presents the 802.16 mesh protocol, and surveys the testing and certification processes used for WiMAX products. In addition, the book reviews the security features of both IEEE 802.16 and WiMAX. With the revolutionary technology of WiMAX, the lives of many will undoubtedly improve, thereby leading to greater economic empowerment.
---
paper_title: Handover in Mobile WiMAX Networks: The State of Art and Research Issues
paper_content:
The next-generation Wireless Metropolitan Area Networks, using the Worldwide Interoperability for Microwave Access (WiMAX) as the core technology based on the IEEE 802.16 family of standards, is evolving as a Fourth-Generation (4G) technology. With the recent introduction of mobility management frameworks in the IEEE 802.16e standard, WiMAX is now in competition with the existing and forthcoming generations of wireless technologies for providing ubiquitous computing solutions. However, the success of a good mobility framework largely depends on the capability of performing fast and seamless handovers irrespective of the deployed architectural scenario. Now that the IEEE has defined the Mobile WiMAX (IEEE 802.16e) MAC-layer handover management framework, the Network Working Group (NWG) of the WiMAX Forum is working on the development of the upper layers. However, the path to commercialization of a full-fledged WiMAX mobility framework is full of research challenges. This article focuses on potential handover-related research issues in the existing and future WiMAX mobility framework. A survey of these issues in the MAC, Network and Cross-Layer scenarios is presented along with discussion of the different solutions to those challenges. A comparative study of the proposed solutions, coupled with some insights to the relevant issues, is also included.
---
paper_title: Seamless high-velocity handover support in mobile WiMAX networks
paper_content:
The IEEE 802.16e standard (i.e., mobile WiMAX) has been proposed to provide connectivity in wireless networks for mobile users (including users at a vehicular speed). It is shown in our analysis that the probability of a successful handover decreases significantly when the user moves at a higher speed. Then, we propose a scheme that combines adaptive forward error correction (FEC) with retransmission to offer extra protection for handover signaling messages to enhance the probability of a successful handover, especially at a higher velocity. It is demonstrated by computer simulation that the proposed scheme provides a higher successful handover probability at various velocities.
---
paper_title: Efficient Authentication Schemes for Handover in Mobile WiMAX
paper_content:
Mobile WiMAX is the next generation of broadband wireless networks. It allows users to roam over the network at vehicular speeds. However, when a mobile station changes from one base station to another, it must be authenticated again. This may lead to delays in communication, especially for real-time applications such as VoIP and Pay-TV systems. In this paper, we propose two efficient schemes to enhance the performance of authentication during handover in mobile WiMAX. The first scheme adopts an efficient shared key-based EAP method instead of the standard EAP method used in handover authentication. The second one skips the standard EAP method and performs the authentication in the SA-TEK three-way handshake of the PKMv2 process. In addition, the security proofs of our schemes are provided in this paper.
---
paper_title: Fast Intra-Network and Cross-Layer Handover (FINCH) for WiMAX and Mobile Internet
paper_content:
To support fast and efficient handovers in mobile WiMAX, we propose fast intra-network and cross-layer handover (FINCH) for intradomain (intra-CSN) mobility management. FINCH is a complementary protocol to mobile IP (MIP), which deals with interdomain (inter-CSN) mobility management in mobile WiMAX. FINCH can reduce not only the handover latency but also the end-to-end latency for MIP. Paging extension for FINCH is also proposed to enhance the energy efficiency. The proposed FINCH is especially suitable for real-time services in frequent handover environment, which is important for future mobile WiMAX networks. In addition, FINCH is a generic protocol for other IEEE 802-series standards. This is especially beneficial for the integration of heterogeneous networks, for instance, the integration of WiMAX and WiFi networks. Both mathematical analysis and simulation are developed to analyze and compare the performance of FINCH with other protocols. The results show that FINCH can support fast and efficient link layer and intradomain handovers. The numerical results can also be used to select proper network configurations.
---
paper_title: A structured channel borrowing scheme for dynamic load balancing in cellular networks
paper_content:
We propose an efficient dynamic load balancing scheme in cellular networks for managing a teletraffic hot spot in which channel demand exceeds a certain threshold. A hot spot, depicted as a stack of hexagonal 'ring' of cells, is classified as complete if all cells within it are hot. The rings containing only cold cells outside the hot spot are called 'peripheral rings'. Our load balancing scheme migrates channels through a structured borrowing mechanism from the cold cells within the 'rings' or 'peripheral rings' to the hot cells in the hot spot. For the more general case of an incomplete hot spot, a cold cell is further classified as cold safe, cold semi-safe or cold unsafe, and a demand graph is constructed from the channel demand of each hot cell from its adjacent cells in the next outer ring. The channel borrowing algorithm works on the demand graph in a bottom up fashion, satisfying the demands of the cells in each subsequent inner ring. Markov chain models are developed for a hot cell and detailed simulation experiments are conducted to evaluate the performance of our load balancing scheme. Comparison with an existing load balancing strategy under moderate and heavy teletraffic conditions, shows a performance improvement of 12% in terms of call blockade by our load balancing scheme.
---
paper_title: Load balancing in overlapping wireless LAN cells
paper_content:
We propose a load-balancing scheme for overlapping wireless LAN cells. Agents running in each access point periodically broadcast the local load level via the Ethernet backbone and determine whether the access point is overloaded, balanced or under-loaded by comparing it with the received reports. The load metric is the access point throughput. Overloaded access points force the handoff of some stations to balance the load. Only the under-loaded access points accept the roaming stations, which minimizes the number of handoffs. We show via experimental evaluation that our balancing scheme increases the total wireless network throughput and decreases the cell delay.
---
paper_title: Adaptive traffic-load shedding and its capacity gain in CDMA cellular systems
paper_content:
In a situation such as a traffic accident on a highway, the active mobiles in an affected cell may easily outnumber the capacity, and an excessive increase in CDMA noise may, in the worst case, block all calls in the cell if the users insist on calling through the same cell site. A scheme to alleviate such congestion in CDMA cellular systems is proposed, so that traffic load is adaptively shed from affected cells by forced hand-offs of the mobiles farthest away from current cell site. The capacity with adaptive load shedding is studied by computer simulations considering two scenarios: a single congested cell with six neighbouring cells partially loaded, and three consecutive congested cells with four neighbouring cells partially loaded. The results show that the scheme offers higher CDMA cellular capacity gain than the normal power-up control adopted by the current CDMA cellular standard.
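The shedding rule described here - when a CDMA cell is congested, force handoffs of the mobiles farthest from the current cell site until the load is acceptable again - reduces to a very small selection procedure. The capacity figure and distance data in the sketch are assumed example values.

```python
# Sketch of load shedding by forced handoff of the farthest mobiles.
# Cell capacity and the mobile distances are assumed example values.

def shed_load(mobiles, capacity):
    """mobiles: list of {"id", "distance_m"} -> ids to force-hand-off."""
    shed = []
    active = sorted(mobiles, key=lambda m: m["distance_m"])   # nearest kept first
    while len(active) > capacity:
        victim = active.pop()            # farthest remaining mobile
        shed.append(victim["id"])
    return shed

mobiles = [{"id": f"MS{i}", "distance_m": d}
           for i, d in enumerate([120, 450, 900, 300, 1500, 700])]
print(shed_load(mobiles, capacity=4))    # the two farthest mobiles are shed
```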
---
paper_title: Adaptive handoff algorithms for dynamic traffic load distribution in 4G mobile networks
paper_content:
In 4G mobile networks, which are packet-based cellular systems, resources are shared among all users and the amount of available resources is determined by the traffic load. If traffic load is concentrated in a cell, that cell becomes a hotspot cell. A hotspot cell can cause calls to be blocked or dropped and consequently degrades the service quality, even though available resources remain in neighboring cells. Therefore, it is essential to distribute the traffic load of the hotspot cell in order to effectively use the remaining resources and maintain acceptable service quality. We propose adaptive handoff algorithms for dynamic traffic load distribution in the hotspot cell. In the simulations, we find that these algorithms can reduce the call drop rate of new and handoff calls and thus enhance the service quality.
---
paper_title: WLC17-4: Load-Balancing QoS-Guaranteed Handover in the IEEE 802.16e OFDMA Network
paper_content:
In this paper we propose a load-balancing QoS- guaranteed handover algorithm in the IEEE 802.16e OFDMA network. We describe our system load model for the IEEE 802.16e OFDMA network and propose the optimal and fast handover algorithms. We evaluate our fast handover algorithm using system level simulation of the IEEE 802.16e OFDMA network. Our fast load-balancing handover algorithm eliminates overloading, thus guarantees meeting the QoS requirements of the users. It also provides a considerable throughput gain over the traditional SINR-based handover algorithm.
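A crude version of the comparison made in this paper - a purely SINR-based handover target choice versus a load-aware choice restricted to BSs that still meet the user's requirement - is sketched below. The SINR floor and the per-BS figures are assumed values; the paper's own load model for the IEEE 802.16e OFDMA system is more detailed.

```python
# Sketch contrasting SINR-based and load-aware handover target selection.
# The SINR requirement and per-BS figures are assumed example values.

MIN_SINR_DB = 6.0   # assumed QoS requirement for the connection

def sinr_based_target(candidates):
    return max(candidates, key=lambda c: c["sinr_db"])["id"]

def load_balancing_target(candidates):
    eligible = [c for c in candidates if c["sinr_db"] >= MIN_SINR_DB]
    if not eligible:
        return sinr_based_target(candidates)      # fall back to the best signal
    return min(eligible, key=lambda c: c["load"])["id"]

candidates = [
    {"id": "BS_A", "sinr_db": 14.0, "load": 0.95},
    {"id": "BS_B", "sinr_db": 9.0,  "load": 0.40},
    {"id": "BS_C", "sinr_db": 4.0,  "load": 0.10},
]
print(sinr_based_target(candidates))       # BS_A: strongest signal but overloaded
print(load_balancing_target(candidates))   # BS_B: meets the SINR floor, lightly loaded
```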
---
paper_title: A Novel Inter-FA Handover Scheme for Load Balancing in IEEE 802.16e System
paper_content:
As the demand for high data rates increases, load balancing across cells is highly desirable. A handover (HO) among frequency assignments (FAs) is theoretically possible for load balancing when a serving FA is too crowded to accommodate an additional bandwidth request in a system such as IEEE 802.16e, where multiple FAs are possible. However, this handover is impossible in the current specification because an indication of the target FA is not defined. It would also require an unnecessarily long scanning and network re-entry procedure. In this paper, we propose a novel inter-FA handover, i.e. a handover among FAs for load balancing, by modifying the general handover algorithm. It also satisfies the HO user's quality of service (QoS) by reducing HO latency, unnecessary scanning, and network re-entry. We compare it with previous HO schemes for load balancing by simulation and show that this HO scheme achieves significantly better performance in terms of HO user QoS and HO delay.
---
paper_title: Load Balance for Multi-Layer Reuse Scenarios on Mobile WiMAX System
paper_content:
This paper proposes a novel handover algorithm to balance the load of the layers in a multi-reuse scenario. Each layer works independently of the others and has its own scheduling process, coverage and attached mobile stations. The proposed algorithm balances the resources among layers by moving mobile stations from one layer to another according to their QoS requirements and channel conditions. Results show that it can increase the cell coverage while still keeping a satisfactory quality for the VoIP calls. Effective spectral efficiency also increased when compared with a single tri-sectorized reuse-1 scenario. The algorithm is also designed to avoid ping-pong handovers and to work for any number of layers.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: WLC17-4: Load-Balancing QoS-Guaranteed Handover in the IEEE 802.16e OFDMA Network
paper_content:
In this paper we propose a load-balancing QoS- guaranteed handover algorithm in the IEEE 802.16e OFDMA network. We describe our system load model for the IEEE 802.16e OFDMA network and propose the optimal and fast handover algorithms. We evaluate our fast handover algorithm using system level simulation of the IEEE 802.16e OFDMA network. Our fast load-balancing handover algorithm eliminates overloading, thus guarantees meeting the QoS requirements of the users. It also provides a considerable throughput gain over the traditional SINR-based handover algorithm.
---
paper_title: A new distributed uplink packet scheduling algorithm in WiMAX networks
paper_content:
Worldwide Interoperability for Microwave Access (WiMAX) is designed to support a wide range of Quality of Service (QoS) requirements for different applications. Unlike single-service networks, a priority-based mechanism is needed for specifying the transmission order of packets belonging to different traffic sources according to their QoS characteristics, which is called packet scheduling. In the WiMAX standard, the packet scheduling algorithm is not defined and its design is left to researchers. In this work, we propose a distributed uplink packet scheduling algorithm. In the proposed algorithm, when the uplink capacity cannot satisfy the required resources of the connections, the traffic of one or more user terminals located in the overlapping cells is selected, according to the traffic characteristics and QoS requirements of the connections, for transfer to the neighboring under-loaded cells. The simulation results show that the proposed algorithm increases the overall throughput of the network.
---
paper_title: Load Balance for Multi-Layer Reuse Scenarios on Mobile WiMAX System
paper_content:
This paper proposes a novel handover algorithm to balance the load of the layers in a multi-reuse scenario. Each layer works independently of the others and has its own scheduling process, coverage and attached mobile stations. The proposed algorithm balances the resources among layers by moving mobile stations from one layer to another according to their QoS requirements and channel conditions. Results show that it can increase the cell coverage while still keeping a satisfactory quality for the VoIP calls. Effective spectral efficiency also increased when compared with a single tri-sectorized reuse-1 scenario. The algorithm is also designed to avoid ping-pong handovers and to work for any number of layers.
---
paper_title: Multi-service load balancing in a heterogeneous network
paper_content:
In this paper, we investigate load balancing mechanisms in a heterogeneous WiMAX/WLAN network. By taking both the service characteristics of the two networks and the service requirements of two kinds of applications into consideration, we distribute all streaming applications to WiMAX with preemptive service priority. Then the remaining capacity of WiMAX and the entire capacity of the WLAN are utilized for serving elastic applications. Accordingly, we propose two size thresholds based on which elastic applications are dispatched to WiMAX or the WLAN. Numerical results indicate that dispatching elastic applications of smaller size to the WLAN performs fairly well compared with other dispatching mechanisms. It is also found that the size distribution has a significant impact on the performance of load balancing mechanisms.
---
paper_title: Base Station Controlled Load Balancing with Handovers in Mobile WiMAX
paper_content:
In this paper we examine base station controlled handover based load balancing in the mobile WiMAX system and study the framework that mobile WiMAX offers for conducting load balancing and directed handovers. We also propose and evaluate a BS initiated load balancing scheme for mobile WiMAX. The simulations show that the proposed scheme can balance load efficiently within the system but also avoid the so-called handover "ping-pong" effect, especially harmful e.g. for VoIP connections. We also propose an addition to the mobile WiMAX specification to distinguish load balancing based directed handovers and signal quality based rescue handovers conducted by moving terminals. This way the more important rescue handovers can be prioritized in a target Base Station.
---
paper_title: Dual Handover Procedures for FA Load Balancing in WiBro Systems
paper_content:
WiBro is defined as a subset of the IEEE 802.16 standard and is developed based on mobile WiMAX. WiBro delivers wireless data at a high transfer rate to provide mobile broadband services. However, it is necessary to consider radio resource efficiency due to the limited resources. In this paper, we propose a frequency assignment (FA) load balancing algorithm in order to use resources efficiently, by controlling two types of handover procedure and scheduling. When resource unbalancing is detected in an FA, we adaptively choose BS (base station) initiated handover or MS (mobile station) initiated handover to disperse entering users and guarantee QoS (quality of service) in real-time services. The difference between call admission control (CAC) and the FA load balancing scheme is that the FA load balancing scheme makes the MS connect with other FAs without disconnection. The proposed algorithm shows a substantial performance improvement over the existing algorithm in user acceptance and QoS for real-time services.
---
| Title: A Survey for Load Balancing in Mobile WiMAX Networks
Section 1: INTRODUCTION
Description 1: This section introduces Mobile WiMAX, its advantages, and the challenges of providing Quality of Service (QoS) with the focus on load balancing and handover processes in mobile environments.
Section 2: Load Balancing Process
Description 2: This section describes the general process of load balancing including the essential elements such as load definition, load measurement, and load balancing mechanisms.
Section 3: Load Definition
Description 3: This section explains the role of load metrics in load balancing algorithms and how they measure the balance state of the system in mobile WiMAX networks.
Section 4: Load Measurements
Description 4: This section covers the criteria and methods used for evaluating the load status of the system in load balancing algorithms.
Section 5: Load Balancing Mechanisms
Description 5: This section details the schemes to solve the problem in overloaded cells, divided into resource allocation and load distribution schemes.
Section 6: Resource Allocation Schemes
Description 6: This section discusses the schemes of load balancing through resource allocation, including fixed channel allocation (FCA) and dynamic channel allocation (DCA).
Section 7: Load Distribution Schemes
Description 7: This section describes the schemes of load balancing through load distribution, focusing on handover-based algorithms.
Section 8: HANDOVER IN MOBILE WIMAX
Description 8: This section gives an overview of the handover process in mobile WiMAX, highlighting its importance for balancing load and ensuring QoS.
Section 9: Types of Handover in Mobile WiMAX
Description 9: This section classifies types of handovers based on technology, structural aspect, initiation, and execution mechanisms.
Section 10: LOAD BALANCING ALGORITHMS IN WIRELESS NETWORKS
Description 10: This section provides a review of load balancing algorithms in traditional wireless networks, emphasizing resource allocation and load distribution schemes.
Section 11: LOAD BALANCING ALGORITHMS IN WIMAX NETWORKS
Description 11: This section examines existing load balancing algorithms specifically designed for mobile WiMAX networks, focusing on load distribution methods.
Section 12: CONCLUSION
Description 12: This section summarizes the survey, discussing the effectiveness of different load balancing algorithms and suggesting the best approaches for mobile WiMAX networks. |
A Survey on Human Emotion Recognition Approaches, Databases and Applications | 11 | ---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications
paper_content:
This survey describes recent progress in the field of Affective Computing (AC), with a focus on affect detection. Although many AC researchers have traditionally attempted to remain agnostic to the different emotion theories proposed by psychologists, the affective technologies being developed are rife with theoretical assumptions that impact their effectiveness. Hence, an informed and integrated examination of emotion theories from multiple areas will need to become part of computing practice if truly effective real-world systems are to be achieved. This survey discusses theoretical perspectives that view emotions as expressions, embodiments, outcomes of cognitive appraisal, social constructs, products of neural circuitry, and psychological interpretations of basic feelings. It provides meta-analyses on existing reviews of affect detection systems that focus on traditional affect detection modalities like physiology, face, and voice, and also reviews emerging research on more novel channels such as text, body language, and complex multimodal systems. This survey explicitly explores the multidisciplinary foundation that underlies all AC applications by describing how AC researchers have incorporated psychological theories of emotion and how these theories affect research questions, methods, results, and their interpretations. In this way, models and methods can be compared, and emerging insights from various disciplines can be more expertly integrated.
---
paper_title: Affective computing: Challenges
paper_content:
A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of "affective computing." This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in human-computer interaction.
---
paper_title: Handbook of Emotions
paper_content:
Part 1. Interdisciplinary Foundations. R.C. Solomon, The Philosophy of Emotions. P.N. Stearns, History of Emotions: Issues of Change and Impact. J.E. Stets, J.H. Turner, The Sociology of Emotions. J. Panksepp, The Affective Brain and Core Consciousness: How Does Neural Activity Generate Emotional Feelings? N.H. Frijda, The Psychologist's Point of View. L.S. Greenberg, The Clinical Application of Emotion in Psychotherapy. P.N. Johnson-Laird, K. Oatley, Emotions, Music, and Literature. J. Tooby, L. Cosmides, The Evolutionary Psychology of the Emotions and Their Relationship to Internal Regulatory Variables. R. Loewenstein, G. Loewenstein, The Role of Emotion in Economic Behavior. Part 2. Biological and Neurophysiological Approaches to Emotion. J.E. LeDoux, E.A. Phelps, Emotional Networks in the Brain. J.T. Larsen, G.G. Berntson, K.M. Poehlmann, T.A. Ito, J.T. Cacioppo, The Psychophysiology of Emotion. J. Bachorowski, M.J. Owren, Vocal Expressions of Emotion. D. Matsumoto, D. Keltner, M.N. Shiota, M. O'Sullivan, M. Frank, Facial Expressions of Emotion. J.M. Haviland-Jones, P.J. Wilson, A "Nose" for Emotion: Emotional Information and Challenges in Odors and Semiochemicals. T.D. Wager, L. Feldman Barrett, E. Bliss-Moreau, K. Lindquist, S. Duncan, H. Kober, J. Joseph, M. Davidson, J. Mize, The Neuroimaging of Emotion. A.D. Craig, Interoception and Emotion: A Neuroanatomical Perspective. Part 3. Developmental Changes. L.A. Camras, S.S. Fatani, The Development of Facial Expressions: Current Perspectives on Infant Emotions. M. Lewis, The Emergence of Human Emotions. P.L. Harris, Children's Understanding of Emotion. C. Saarni, The Interface of Emotional Development with Social Context. S.C. Widen, J.A. Russell, Young Children's Understanding of Others' Emotions. A.S. Walker-Andrews, Intermodal Emotional Processes in Infancy. C. Magai, Long-Lived Emotions: A Lifecourse Perspective on Emotional Development. Part 4. Social Perspectives. L.R. Brody, J.A. Hall, Gender and Emotion in Context. R.A. Shweder, J. Haidt, R. Horton, C. Joseph, The Cultural Psychology of the Emotions: Ancient and Renewed. E.R. Smith, D.M. Mackie, Intergroup Emotions. M.L. Hoffman, Empathy and Prosocial Behavior. A.H. Fischer, A.S.R. Manstead, Social Functions of Emotion. Part 5. Personality Issues. R.E. Lucas, E. Diener, Subjective Well-Being. J.E. Bates, J.A. Goodnight, J.E. Fite, Temperament and Emotion. J.J. Gross, Emotion Regulation. K.A. Lindquist, L. Feldman Barrett, Emotional Complexity. Part 6. Cognitive Factors. P. Salovey, B.T. Detweiler-Bedell, J.B. Detweiler-Bedell, J.D. Mayer, Emotional Intelligence. A.M. Isen, Some Ways in which Positive Affect Influences Decision Making and Problem Solving. N.L. Stein, M.W. Hernandez, T. Trabasso, Advances in Modeling Emotion and Thought: The Importance of Development, On-Line and Multilevel Analyses. P.M. Niedenthal, Emotion Concepts. E.A. Kensinger, D.L. Schacter, Memory and Emotion. M. Minsky, A Framework for Representing Emotional States. G.L. Clore, A. Ortony, Appraisal Theories: How Cognition Shapes Affect into Emotion. Part 7. Health and Emotions. M.A. Diefenbach, S.M. Miller, M. Porter, E. Peters, M. Stefanek, H. Leventhal, Emotions and Health Behavior: A Self-Regulation Perspective. M.E. Kemeny, A. Shestyuk, Emotions, the Neuroendocrine and Immune Systems, and Health. N.S. Consedine, Emotions and Health. A.M. Kring, Emotion Disturbances as Transdiagnostic Processes in Psychopathology. Part 8. Select Emotions. A. Ohman, Fear and Anxiety: Overlaps and Dissociations. E.A. 
Lemerise, K.A. Dodge, The Development of Anger and Hostile Interactions. M. Lewis, Self-Conscious Emotions: Embarrassment, Pride, Shame, and Guilt. P. Rozin, J. Haidt, C.R. McCauley, Disgust. B.L. Fredrickson, M.A. Cohn, Positive Emotions. G.A. Bonanno, L. Goorin, K.G. Coifman, Sadness and Grief.
---
paper_title: An affective computing approach to physiological emotion specificity: Toward subject-independent and stimulus-independent classification of film-induced emotions
paper_content:
The hypothesis of physiological emotion specificity has been tested using pattern classification analysis (PCA). To address limitations of prior research using PCA, we studied effects of feature selection (sequential forward selection, sequential backward selection), classifier type (linear and quadratic discriminant analysis, neural networks, k-nearest neighbors method), and cross-validation method (subject- and stimulus-(in)dependence). Analyses were run on a data set of 34 participants watching two sets of three 10-min film clips (fearful, sad, neutral) while autonomic, respiratory, and facial muscle activity were assessed. Results demonstrate that the three states can be classified with high accuracy by most classifiers, with the sparsest model having only five features, even for the most difficult task of identifying the emotion of an unknown subject in an unknown situation (77.5%). Implications for choosing PCA parameters are discussed. Descriptors: Emotion, Pattern classification, Feature selection, Autonomic nervous system, Cardiovascular system, Respiration, Electrodermal system, Affective neuroscience, Affective computing. There exists a long tradition of psychophysiological research on differential physiological responding among emotions, originally based on James’ (1884) peripheral perception theory and revived by basic emotion theory (e.g., Ekman, Levenson, & Friesen, 1983; see Friedman, 2010, for a review). These theories hold as a central tenet that basic human emotions have distinct physiological patterns. In an influential series of publications, Fridlund and colleagues (Fridlund & Izard, 1983; Fridlund, Schwartz, & Fowler, 1984) argued for the advantages of a formal pattern-classification approach for the study of such physiological patterns of emotion. This approach recognizes the interactive and
---
paper_title: Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions
paper_content:
Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation
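As a rough, illustrative sketch of the LBP-TOP idea described above (not the authors' implementation), the snippet below computes basic 8-neighbour LBP histograms on the XY, XT and YT planes of a grey-scale video volume with NumPy and concatenates them into a single descriptor. The neighbourhood radius, sampling scheme and block division are simplified relative to the paper.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for every interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(np.int32) << bit
    return code

def lbp_top(volume, n_bins=256):
    """Concatenate LBP histograms from the three orthogonal planes
    (XY, XT, YT) of a video volume shaped (T, H, W)."""
    hists = []
    planes = [volume,                      # XY slices over time
              volume.transpose(1, 0, 2),   # XT slices, one per image row
              volume.transpose(2, 0, 1)]   # YT slices, one per image column
    for plane_stack in planes:
        codes = np.concatenate([lbp_image(s).ravel() for s in plane_stack])
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        hists.append(hist / max(hist.sum(), 1))  # normalised histogram
    return np.concatenate(hists)

# Toy usage: a random 20-frame, 64x64 "video"
video = np.random.randint(0, 256, (20, 64, 64), dtype=np.uint8)
descriptor = lbp_top(video)
print(descriptor.shape)  # (768,) = 3 planes x 256 bins
```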
---
paper_title: Facial-component-based bag of words and PHOG descriptor for facial expression recognition
paper_content:
A novel framework of facial appearance and shape information extraction for facial expression recognition is proposed. For appearance extraction, a facial-component-based bag of words method is presented. We segment face images into 4 component regions, and sub-divide them into 4×4 sub-regions. Dense SIFT (Scale-Invariant Feature Transform) features are calculated over the sub-regions and vector quantized into 4×4 sets of codeword distributions. For shape extraction, PHOG (Pyramid Histogram of Orientated Gradient) descriptors are computed on the 4 facial component regions to obtain the spatial distribution of edges. Our framework provides holistic characteristics for the local texture and shape features by enhancing the structure-based spatial information, and makes it possible for the local descriptors to be used in facial expression recognition for the first time. The recognition rate achieved by the fusion of appearance and shape features at decision level using the Cohn-Kanade database is 96.33%, which outperforms the state of the art.
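The following sketch illustrates the bag-of-words appearance pipeline in simplified form: dense SIFT descriptors on a regular grid, a k-means codebook, and a codeword histogram per image. It assumes opencv-python with SIFT support (version 4.4 or later) and scikit-learn, and uses a single region rather than the paper's 4 components × 4×4 sub-regions; it is not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dense_sift(gray, step=8, size=8):
    """Dense SIFT descriptors on a regular grid over one face region."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    _, desc = sift.compute(gray, kps)
    return desc  # (n_keypoints, 128)

def build_codebook(descriptor_list, n_words=64):
    """Vector-quantise pooled descriptors into a visual vocabulary."""
    stacked = np.vstack(descriptor_list)
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(stacked)

def bow_histogram(desc, codebook):
    """Codeword histogram (bag of words) for one region."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random images standing in for cropped face regions
faces = [np.random.randint(0, 256, (96, 96), dtype=np.uint8) for _ in range(10)]
all_desc = [dense_sift(f) for f in faces]
codebook = build_codebook(all_desc)
features = np.array([bow_histogram(d, codebook) for d in all_desc])
print(features.shape)  # (10, 64): one appearance histogram per image
```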
---
paper_title: Facial expression recognition using tracked facial actions: Classifier performance analysis
paper_content:
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study classifiers performance that exploit head pose independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously provides the 3D head pose and facial actions. The use of such tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models temporal signatures associated with facial actions with fixed length feature vectors (observations), and uses some machine learning algorithms in order to recognize the displayed expression. Experiments quantified the performance of different schemes. These were carried out on CMU video sequences and home-made video sequences. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%.
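To make the first scheme concrete, here is a minimal dynamic time warping sketch: it aligns a query sequence of facial-action parameters against per-expression template sequences and picks the nearest template. It is a generic DTW implementation in NumPy, not the paper's system, and the feature dimensionality is only illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two multivariate sequences
    a: (Ta, d), b: (Tb, d). Classic O(Ta*Tb) recursion, no band constraint."""
    ta, tb = len(a), len(b)
    cost = np.full((ta + 1, tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[ta, tb]

def classify_by_dtw(query, templates):
    """Nearest-template classification; templates: {label: (T, d) array}."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))

# Toy usage: 6 facial-action channels, variable-length sequences
rng = np.random.default_rng(0)
templates = {"happiness": rng.normal(size=(40, 6)),
             "surprise":  rng.normal(size=(55, 6))}
query = rng.normal(size=(48, 6))
print(classify_by_dtw(query, templates))
```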
---
paper_title: An Automatic Framework for Textured 3D Video-Based Facial Expression Recognition
paper_content:
Most of the existing research on 3D facial expression recognition has been done using static 3D meshes. 3D videos of a face are believed to contain more information in terms of the facial dynamics which are very critical for expression recognition. This paper presents a fully automatic framework which exploits the dynamics of textured 3D videos for recognition of six discrete facial expressions. Local video-patches of variable lengths are extracted from numerous locations of the training videos and represented as points on the Grassmannian manifold. An efficient graph-based spectral clustering algorithm is used to separately cluster these points for every expression class. Using a valid Grassmannian kernel function, the resulting cluster centers are embedded into a Reproducing Kernel Hilbert Space (RKHS) where six binary SVM models are learnt. Given a query video, we extract video-patches from it, represent them as points on the manifold and match these points with the learnt SVM models followed by a voting based strategy to decide about the class of the query video. The proposed framework is also implemented in parallel on 2D videos and a score level fusion of 2D & 3D videos is performed for performance improvement of the system. The experimental results on BU4DFE data set show that the system achieves a very high classification accuracy for facial expression recognition from 3D videos.
---
paper_title: Facial Expression Recognition Using Facial Movement Features
paper_content:
Facial expression is an important channel for human communication and can be applied in many real applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches on FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent static and dynamic, as well as geometric and appearance characteristics of facial expressions. This paper proposes an approach to solve this limitation using "salient” distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the "salient” patches, and performing patch matching operations. The experimental results demonstrate high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. Comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
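A small sketch of patch-based Gabor feature extraction is given below: a bank of Gabor kernels over a few scales and orientations is applied to local patches (for instance around facial landmarks), and the mean filter-response magnitude per kernel is kept. The kernel parameters and patch placement are illustrative guesses, not the settings of the paper, and OpenCV is assumed to be available.

```python
import cv2
import numpy as np

def gabor_bank(ksize=15, scales=(4.0, 6.0, 8.0), n_orient=6):
    """A small bank of Gabor kernels over several scales and orientations."""
    kernels = []
    for lambd in scales:
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=lambd / 2,
                                              theta=theta, lambd=lambd,
                                              gamma=0.5, psi=0))
    return kernels

def patch_gabor_features(gray, top_left, patch=24, kernels=None):
    """Mean Gabor magnitude of one local patch (e.g., around a landmark)."""
    kernels = kernels or gabor_bank()
    y, x = top_left
    roi = gray[y:y + patch, x:x + patch].astype(np.float32)
    return np.array([np.abs(cv2.filter2D(roi, cv2.CV_32F, k)).mean()
                     for k in kernels])

# Toy usage: features for two patches of a random "face" image
face = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
bank = gabor_bank()
feats = np.hstack([patch_gabor_features(face, (30, 40), kernels=bank),
                   patch_gabor_features(face, (70, 40), kernels=bank)])
print(feats.shape)  # (36,) = 2 patches x 18 filters (3 scales x 6 orientations)
```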
---
paper_title: Simultaneous Facial Action Tracking and Expression Recognition in the Presence of Head Motion
paper_content:
The recognition of facial gestures and expressions in image sequences is an important and challenging problem. Most of the existing methods adopt the following paradigm. First, facial actions/features are retrieved from the images, then the facial expression is recognized based on the retrieved temporal parameters. In contrast to this mainstream approach, this paper introduces a new approach allowing the simultaneous retrieval of facial actions and expression using a particle filter adopting multi-class dynamics that are conditioned on the expression. For each frame in the video sequence, our approach is split into two consecutive stages. In the first stage, the 3D head pose is retrieved using a deterministic registration technique based on Online Appearance Models. In the second stage, the facial actions as well as the facial expression are simultaneously retrieved using a stochastic framework based on second-order Markov chains. The proposed fast scheme is either as robust as, or more robust than existing ones in a number of respects. We describe extensive experiments and provide evaluations of performance to show the feasibility and robustness of the proposed approach.
---
paper_title: Robust Representation and Recognition of Facial Emotions Using Extreme Sparse Learning
paper_content:
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
---
paper_title: Affective Body Expression Perception and Recognition: A Survey
paper_content:
Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is an increasing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or if they are affected by human factors such as culture. Next, we discuss the difference between form and movement information as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate if specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collecting, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.
---
paper_title: Real-time inference of mental states from facial expressions and upper body gestures
paper_content:
We present a real-time system for detecting facial action units and inferring emotional states from head and shoulder gestures and facial expressions. The dynamic system uses three levels of inference on progressively longer time scales. Firstly, facial action units and head orientation are identified from 22 feature points and Gabor filters. Secondly, Hidden Markov Models are used to classify sequences of actions into head and shoulder gestures. Finally, a multi level Dynamic Bayesian Network is used to model the unfolding emotional state based on probabilities of different gestures. The most probable state over a given video clip is chosen as the label for that clip. The average F1 score for 12 action units (AUs 1, 2, 4, 6, 7, 10, 12, 15, 17, 18, 25, 26), labelled on a frame by frame basis, was 0.461. The average classification rate for five emotional states (anger, fear, joy, relief, sadness) was 0.440. Sadness had the greatest rate, 0.64, anger the smallest, 0.11.
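The middle inference level described above (classifying sequences of actions into gestures with Hidden Markov Models) can be sketched as one Gaussian HMM per gesture class, with classification by maximum log-likelihood. The hmmlearn package is assumed to be installed, and the state count, feature dimensionality and gesture labels are illustrative only; the paper's full pipeline additionally uses a Dynamic Bayesian Network on top.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package

def train_gesture_hmms(sequences_by_class, n_states=4):
    """Fit one Gaussian HMM per gesture class.
    sequences_by_class: {label: list of (T_i, d) arrays}."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_sequence(models, seq):
    """Pick the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))

# Toy usage: two hypothetical gesture classes with 10-D frame features
rng = np.random.default_rng(1)
data = {"head_nod": [rng.normal(0.0, 1.0, (30, 10)) for _ in range(5)],
        "shoulder_shrug": [rng.normal(1.0, 1.0, (25, 10)) for _ in range(5)]}
models = train_gesture_hmms(data)
print(classify_sequence(models, rng.normal(1.0, 1.0, (28, 10))))
```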
---
paper_title: Automatic Temporal Segment Detection and Affect Recognition From Face and Body Display
paper_content:
Psychologists have long explored mechanisms with which humans recognize other humans' affective states from modalities, such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by the human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists, over the past 15 years, have proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: The automatic determination of their temporal segments, their synchronization to the purpose of modality fusion, and their role in affect recognition are yet to be adequately explored. To address this issue, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization/alignment. The experimental results obtained show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.
---
paper_title: Recognizing expressions from face and body gesture by temporal normalized motion and appearance features
paper_content:
Recently, recognizing affect from both face and body gestures has attracted more attention. However, there is still a lack of efficient and effective features to describe the dynamics of face and gestures for real-time automatic affect recognition. In this paper, we propose a novel approach, which combines both MHI-HOG and Image-HOG through a temporal normalization method, to describe the dynamics of face and body gestures for affect recognition. The MHI-HOG stands for Histogram of Oriented Gradients (HOG) on the Motion History Image (MHI). It captures the motion direction of an interest point as an expression evolves over time. The Image-HOG captures the appearance information of the corresponding interest point. The combination of MHI-HOG and Image-HOG can effectively represent both local motion and appearance information of face and body gesture for affect recognition. The temporal normalization method explicitly solves the time resolution issue in video-based affect recognition. Experimental results demonstrate promising performance as compared with the state of the art. We also show that expression recognition with temporal dynamics outperforms frame-based recognition.
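A simplified sketch of the MHI-plus-HOG idea follows: a motion history image is accumulated from frame differences and then summarised with a HOG descriptor. This is a whole-frame variant with guessed thresholds, not the per-interest-point MHI-HOG/Image-HOG scheme of the paper; scikit-image is assumed for the HOG computation.

```python
import numpy as np
from skimage.feature import hog  # assumes scikit-image is installed

def update_mhi(mhi, prev_frame, frame, timestamp, duration=15, thresh=25):
    """Classic MHI update: moving pixels are stamped with the current time,
    and pixels older than `duration` frames are forgotten."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    mhi[motion] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi

def mhi_hog_descriptor(frames):
    """Build an MHI over a clip and describe it with a HOG vector."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for t in range(1, len(frames)):
        mhi = update_mhi(mhi, frames[t - 1], frames[t], timestamp=t)
    mhi_norm = mhi / max(mhi.max(), 1.0)  # scale to [0, 1] before HOG
    return hog(mhi_norm, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Toy usage: a random 20-frame grey-scale clip
rng = np.random.default_rng(2)
clip = [rng.integers(0, 256, (96, 96)).astype(np.uint8) for _ in range(20)]
print(mhi_hog_descriptor(clip).shape)
```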
---
paper_title: Recognising Human Emotions from Body Movement and Gesture Dynamics
paper_content:
We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use non-propositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.
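As an illustration of the kind of expressive motion cues mentioned above, the sketch below computes simple amplitude, speed and fluidity indicators from a 2-D trajectory (for example a tracked hand position). The cue definitions are plausible stand-ins, not the exact indicators used in the paper.

```python
import numpy as np

def expressivity_cues(traj, fps=25.0):
    """Simple movement-quality indicators from a 2-D trajectory (T, 2),
    e.g., a tracked hand position over time."""
    vel = np.diff(traj, axis=0) * fps              # per-frame velocity
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(speed) * fps                     # tangential acceleration
    return {
        "amplitude": float(np.ptp(traj[:, 0]) + np.ptp(traj[:, 1])),  # spatial extent
        "mean_speed": float(speed.mean()),
        "peak_speed": float(speed.max()),
        "fluidity": float(1.0 / (1.0 + np.abs(acc).mean())),  # smoother = higher
    }

# Toy usage: a smooth versus a jerky synthetic gesture
t = np.linspace(0, 2 * np.pi, 60)
smooth = np.stack([np.cos(t), np.sin(t)], axis=1)
jerky = smooth + np.random.default_rng(3).normal(0, 0.15, smooth.shape)
print(expressivity_cues(smooth)["fluidity"], expressivity_cues(jerky)["fluidity"])
```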
---
paper_title: Affect recognition from face and body: early fusion vs. late fusion
paper_content:
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information first at a feature-level, in which the data from both modalities are combined before classification, and later at a decision-level, in which we integrate the outputs of the monomodal systems by the use of suitable criteria. We then evaluate these two fusion approaches, in terms of performance over monomodal emotion recognition based on facial expression modality only. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the facial modality alone. Moreover, fusion at the feature-level yielded better recognition than fusion at the decision-level.
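The contrast between feature-level (early) and decision-level (late) fusion can be sketched in a few lines with scikit-learn. The snippet below uses synthetic face and body features and an arbitrary 0.6/0.4 posterior weighting for the late fusion; it only illustrates the two fusion schemes, not the paper's classifiers or data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, n_face, n_body = 200, 30, 20
y = rng.integers(0, 2, n)                          # two emotion classes
face = rng.normal(size=(n, n_face)) + y[:, None] * 0.8
body = rng.normal(size=(n, n_body)) + y[:, None] * 0.5
tr, te = slice(0, 150), slice(150, None)

# Early fusion: concatenate modality features before training one classifier
early = SVC(probability=True).fit(np.hstack([face, body])[tr], y[tr])
early_pred = early.predict(np.hstack([face, body])[te])

# Late fusion: train one classifier per modality, then combine posteriors
clf_face = SVC(probability=True).fit(face[tr], y[tr])
clf_body = SVC(probability=True).fit(body[tr], y[tr])
post = 0.6 * clf_face.predict_proba(face[te]) + 0.4 * clf_body.predict_proba(body[te])
late_pred = post.argmax(axis=1)

print("early fusion acc:", (early_pred == y[te]).mean())
print("late fusion acc:", (late_pred == y[te]).mean())
```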
---
paper_title: Audio-visual emotion recognition using Boltzmann Zippers
paper_content:
This paper presents a novel approach for automatic audio-visual emotion recognition. The audio and visual channels provide complementary information for human emotional states recognition, and we utilize Boltzmann Zippers as model-level fusion to learn intrinsic correlations between the different modalities. We extract effective audio and visual feature streams with different time scales and feed them to two Boltzmann chains respectively. The hidden units of two chains are interconnected. Second-order methods are applied to Boltzmann Zippers to speed up learning and pruning process. Experimental results on audio-visual emotion data collected in Wizard of Oz scenarios demonstrate our approach is promising and outperforms single modal HMM and conventional coupled HMM methods.
---
paper_title: Iterative Feature Normalization Scheme for Automatic Emotion Detection from Speech
paper_content:
The externalization of emotion is intrinsically speaker-dependent. A robust emotion recognition system should be able to compensate for these differences across speakers. A natural approach is to normalize the features before training the classifiers. However, the normalization scheme should not affect the acoustic differences between emotional classes. This study presents the iterative feature normalization (IFN) framework, which is an unsupervised front-end, especially designed for emotion detection. The IFN approach aims to reduce the acoustic differences, between the neutral speech across speakers, while preserving the inter-emotional variability in expressive speech. This goal is achieved by iteratively detecting neutral speech for each speaker, and using this subset to estimate the feature normalization parameters. Then, an affine transformation is applied to both neutral and emotional speech. This process is repeated till the results from the emotion detection system are consistent between consecutive iterations. The IFN approach is exhaustively evaluated using the IEMOCAP database and a data set obtained under free uncontrolled recording conditions with different evaluation configurations. The results show that the systems trained with the IFN approach achieve better performance than systems trained either without normalization or with global normalization.
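A rough sketch of the iterative idea follows: detect the frames currently labelled neutral, estimate per-speaker normalization statistics from them, apply the affine transform to all of the speaker's data, and repeat. The neutral detector here is a stand-in logistic-regression classifier on synthetic data, not the paper's emotion detection system, and the stopping rule is a fixed iteration count rather than a consistency check.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_feature_normalization(X, speaker_ids, detector, n_iter=5):
    """X: (N, d) acoustic features, speaker_ids: (N,), detector: fitted binary
    classifier whose predict() returns 1 = neutral, 0 = emotional.
    Returns speaker-normalized features."""
    Xn = X.copy()
    for _ in range(n_iter):
        neutral = detector.predict(Xn) == 1          # re-detect on normalized data
        for spk in np.unique(speaker_ids):
            idx = speaker_ids == spk
            ref = idx & neutral
            if ref.sum() < 5:                        # fall back to all of this speaker
                ref = idx
            mu = X[ref].mean(axis=0)
            sd = X[ref].std(axis=0) + 1e-8
            Xn[idx] = (X[idx] - mu) / sd             # affine transform per speaker
    return Xn

# Toy usage with synthetic speakers and a synthetic neutral detector
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12)) + rng.normal(size=(300, 1))  # speaker offsets
speakers = rng.integers(0, 3, 300)
y_neutral = rng.integers(0, 2, 300)
detector = LogisticRegression(max_iter=200).fit(X, y_neutral)
X_norm = iterative_feature_normalization(X, speakers, detector)
print(X_norm.mean(), X_norm.std())
```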
---
paper_title: Emotion recognition based on human gesture and speech information using RT middleware
paper_content:
A bi-modal emotion recognition approach is proposed for recognizing four emotions by integrating information from gestures and speech. The outputs from two uni-modal emotion recognition systems based on affective speech and expressive gesture are fused at the decision level using weight-criterion fusion and best-probability-plus-majority-vote fusion methods, yielding a combined classifier that performs better than each uni-modal system and helps recognize emotions suitable for the communication situation. To validate the proposal, fifty Japanese words (or phrases) and 8 types of gestures recorded from five participants are used, and the emotion recognition rate increases up to 85.39%. The proposal can be extended to additional modalities and is useful in automatic emotion recognition systems for human-robot communication.
---
paper_title: Feature Analysis and Evaluation for Automatic Emotion Identification in Speech
paper_content:
The definition of parameters is a crucial step in the development of a system for identifying emotions in speech. Although there is no agreement on which are the best features for this task, it is generally accepted that prosody carries most of the emotional information. Most works in the field use some kind of prosodic features, often in combination with spectral and voice quality parametrizations. Nevertheless, no systematic study has been done comparing these features. This paper presents the analysis of the characteristics of features derived from prosody, spectral envelope, and voice quality as well as their capability to discriminate emotions. In addition, early fusion and late fusion techniques for combining different information sources are evaluated. The results of this analysis are validated with experimental automatic emotion identification tests. Results suggest that spectral envelope features outperform the prosodic ones. Even when different parametrizations are combined, the late fusion of long-term spectral statistics with short-term spectral envelope parameters provides an accuracy comparable to that obtained when all parametrizations are combined.
---
paper_title: Speech Emotion Recognition Using Fourier Parameters
paper_content:
Recently, studies have been performed on harmony features for speech emotion recognition. It is found in our study that the first- and second-order differences of harmony features also play an important role in speech emotion recognition. Therefore, we propose a new Fourier parameter model using the perceptual content of voice quality and the first- and second-order differences for speaker-independent speech emotion recognition. Experimental results show that the proposed Fourier parameter (FP) features are effective in identifying various emotional states in speech signals. They improve the recognition rates over the methods using Mel frequency cepstral coefficient (MFCC) features by 16.2, 6.8 and 16.6 points on the German database (EMODB), Chinese language database (CASIA) and Chinese elderly emotion database (EESDB). In particular, when combining FP with MFCC, the recognition rates can be further improved on the aforementioned databases by 17.5, 10 and 10.5 points, respectively.
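For orientation, the MFCC baseline mentioned above (together with first- and second-order differences) can be computed in a few lines with librosa, which is assumed to be installed; the Fourier parameter features proposed in the paper are not reproduced here. The statistics pooling (mean and standard deviation per coefficient) is one common, illustrative choice.

```python
import numpy as np
import librosa  # assumes librosa is installed

def mfcc_with_deltas(y, sr, n_mfcc=13):
    """Utterance-level statistics of MFCCs and their 1st/2nd-order deltas,
    a common baseline feature set for speech emotion recognition."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, T)
    d1 = librosa.feature.delta(mfcc)                          # 1st-order difference
    d2 = librosa.feature.delta(mfcc, order=2)                 # 2nd-order difference
    frames = np.vstack([mfcc, d1, d2])                        # (3*n_mfcc, T)
    # summarise the frame-level trajectory with mean and std per coefficient
    return np.hstack([frames.mean(axis=1), frames.std(axis=1)])

# Toy usage with one second of synthetic audio; a real system would instead use
# something like librosa.load("utterance.wav", sr=16000) on recorded speech.
sr = 16000
y = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)
print(mfcc_with_deltas(y, sr).shape)  # (78,) = 3*13 coefficients x 2 statistics
```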
---
paper_title: Learning Salient Features for Speech Emotion Recognition Using Convolutional Neural Networks
paper_content:
As an essential way of human emotional behavior understanding, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER heavily depends on finding good affect- related , discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of CNN involves two stages. In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. In the second step, LIF is used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features.
---
paper_title: Survey on speech emotion recognition: Features, classification schemes, and databases
paper_content:
Recently, increasing attention has been directed to the study of the emotional content of speech signals, and hence, many systems have been proposed to identify the emotional content of a spoken utterance. This paper is a survey of speech emotion classification addressing three important aspects of the design of a speech emotion recognition system. The first one is the choice of suitable features for speech representation. The second issue is the design of an appropriate classification scheme and the third issue is the proper preparation of an emotional speech database for evaluating system performance. Conclusions about the performance and limitations of current speech emotion recognition systems are discussed in the last section of this survey. This section also suggests possible ways of improving speech emotion recognition systems.
---
paper_title: Using a Smartphone to Measure Heart Rate Changes during Relived Happiness and Anger
paper_content:
This study demonstrates the feasibility of measuring heart rate (HR) differences associated with emotional states such as anger and happiness with a smartphone. Novice experimenters measured higher HRs during relived anger and happiness (replicating findings in the literature) outside a laboratory environment with a smartphone app that relied on photoplethysmography.
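A minimal illustration of photoplethysmography-based heart-rate estimation is sketched below: the pulse waveform (for example the mean red-channel intensity of a fingertip video) is normalised, peaks are detected, and the mean inter-beat interval is converted to beats per minute. The thresholds are guesses for a clean signal, not the app's algorithm, and SciPy is assumed.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_ppg(ppg, fs=30.0):
    """Estimate beats per minute from a photoplethysmography trace sampled
    at fs samples (video frames) per second."""
    ppg = (ppg - ppg.mean()) / (ppg.std() + 1e-8)        # normalise
    # peaks at least 0.4 s apart (about 150 bpm upper bound), modest prominence
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.3)
    if len(peaks) < 2:
        return None
    ibi = np.diff(peaks) / fs                            # inter-beat intervals (s)
    return 60.0 / ibi.mean()

# Toy usage: a synthetic 75-bpm pulse with added noise
fs, dur = 30.0, 20.0
t = np.arange(0, dur, 1 / fs)
signal = np.sin(2 * np.pi * 1.25 * t) + 0.2 * np.random.default_rng(6).normal(size=t.size)
print(round(heart_rate_from_ppg(signal, fs), 1))  # close to 75 bpm
```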
---
paper_title: Emotion Recognition Based on Multi-Variant Correlation of Physiological Signals
paper_content:
Emotion recognition based on affective physiological changes is a pattern recognition problem, and selecting specific physiological signals is necessary and helpful to recognize the emotions. Fingertip blood oxygen saturation (OXY), galvanic skin response (GSR) and heart rate (HR) are acquired while amusement, anger, grief and fear of 101 subjects are individually elicited by films. The affective physiological changes in multi-subject GSR, the first derivative of GSR (FD_GSR) and HR are detected by the multi-variant correlation method. The correlation analysis reveals that multi-subject HR, GSR and FD_GSR fluctuations respectively have common intra-class affective patterns. In addition to the conventional features of HR and GSR, the affective HR, GSR and FD_GSR fluctuations are quantified by the local scaling dimension and applied as the affective features. The multi-subject affective database containing 477 cases is classified by a Random Forests classifier. An overall correct rate of 74 percent for quinary classification of amusement, anger, grief, fear and the baseline state are obtained.
---
paper_title: Multimodal emotion recognition by combining physiological signals and facial expressions: A preliminary study
paper_content:
Lately, multimodal approaches for automatic emotion recognition have gained significant scientific interest. In this paper, emotion recognition by combining physiological signals and facial expressions was studied. Heart rate variability parameters, respiration frequency, and facial expressions were used to classify person's emotions while watching pictures with emotional content. Three classes were used for both valence and arousal. The preliminary results show that, over the proposed channels, detecting arousal seem to be easier compared to valence. While the classification performance of 54.5% was attained with arousal, only 38.0% of the samples were classified correctly in terms of valence. In future, additional modalities as well as feature selection will be utilized to improve the results.
---
paper_title: Robust EEG emotion classification using segment level decision fusion
paper_content:
In this paper we address single-trial binary classification of emotion dimensions (arousal, valence, dominance and liking) using electroencephalogram (EEG) signals that represent responses to audio-visual stimuli. We propose an innovative three step solution to this problem: (1) in contrast to the typical feature extraction on the response-level, we represent the EEG signal as a sequence of overlapping segments and extract feature vectors on the segment level; (2) transform segment level features to the response level features using projections based on a novel non-parametric nearest neighbor model; and (3) perform classification on the obtained response-level features. We demonstrate the efficacy of our approach by performing binary classification of emotion dimensions on DEAP (Dataset for Emotion Analysis using electroencephalogram, Physiological and Video Signals) and report state-of-the-art classification accuracies for all emotional dimensions.
---
paper_title: Multimodal emotion recognition using EEG and eye tracking data
paper_content:
This paper presents a new emotion recognition method which combines electroencephalograph (EEG) signals and pupillary response collected from an eye tracker. We select 15 emotional film clips of 3 categories (positive, neutral and negative). The EEG signals and eye tracking data of five participants are recorded simultaneously while they watch these videos. We extract emotion-relevant features from EEG signals and eye tracking data of 12 experiments and build a fusion model to improve the performance of emotion recognition. The best average accuracies based on EEG signals and eye tracking data are 71.77% and 58.90%, respectively. We also achieve average accuracies of 73.59% and 72.98% for the feature level fusion strategy and decision level fusion strategy, respectively. These results show that both feature level fusion and decision level fusion combining EEG signals and eye tracking data can improve the performance of the emotion recognition model.
---
paper_title: Affective Assessment by Digital Processing of the Pupil Diameter
paper_content:
Previous research found that the pupil diameter (PD) can be an indication of affective state, but this approach to the detection of the affective state of a computer user has not been investigated fully. We propose a new affective sensing approach to evaluate the computer user's affective states as they transition from "relaxation" to "stress," through processing the PD signal. Wavelet denoising and Kalman filtering were used to preprocess the PD signal. Then, three features were extracted from it and five classification algorithms were used to evaluate the overall performance of the identification of "stress" states in the computer users, achieving an average accuracy of 83.16 percent, with the highest accuracy of 84.21 percent reached with a Multilayer Perceptron and a Naive Bayes classifier. The Galvanic Skin Response (GSR) signal was also analyzed to study the comparative efficiency of affective sensing through the PD signal. We compared the discriminating power of the three features derived from the preprocessed PD signal to three features derived from the preprocessed GSR signal in terms of their Receiver Operating Characteristic curves. The results confirm that the PD signal should be considered a powerful physiological factor to involve in future automated affective classification systems for human-computer interaction.
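The Kalman-filtering step can be illustrated with a minimal one-dimensional filter that smooths a noisy pupil-diameter trace under a random-walk state model. This is a simplified stand-in for the paper's wavelet-plus-Kalman preprocessing, with illustrative noise variances.

```python
import numpy as np

def kalman_smooth(z, process_var=1e-4, meas_var=1e-2):
    """One-dimensional Kalman filter with a random-walk state model,
    applied to a noisy pupil-diameter measurement sequence z."""
    x, p = z[0], 1.0            # initial state estimate and variance
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        # predict: state unchanged, uncertainty grows by the process noise
        p = p + process_var
        # update: blend prediction and measurement by the Kalman gain
        gain = p / (p + meas_var)
        x = x + gain * (zk - x)
        p = (1.0 - gain) * p
        out[k] = x
    return out

# Toy usage: a slowly varying diameter with measurement noise
rng = np.random.default_rng(7)
t = np.linspace(0, 10, 300)
true_pd = 4.0 + 0.3 * np.sin(0.5 * t)
measured = true_pd + rng.normal(0, 0.15, t.size)
smoothed = kalman_smooth(measured)
print(float(np.mean(np.abs(smoothed - true_pd))))  # smaller than the raw noise level
```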
---
paper_title: Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data
paper_content:
The study at hand aims at the development of a multimodal, ensemble-based system for emotion recognition. Special attention is given to a problem often neglected: missing data in one or more modalities. In offline evaluation the issue can be easily solved by excluding those parts of the corpus where one or more channels are corrupted or not suitable for evaluation. In real applications, however, we cannot neglect the challenge of missing data and have to find adequate ways to handle it. To address this, we do not expect examined data to be completely available at all times in our experiments. The presented system solves the problem at the multimodal fusion stage, so various ensemble techniques, covering established ones as well as rather novel emotion-specific approaches, will be explained and enriched with strategies on how to compensate for temporarily unavailable modalities. We will compare and discuss advantages and drawbacks of the fusion categories, and an extensive evaluation of the mentioned techniques is carried out on the CALLAS Expressivity Corpus, featuring facial, vocal, and gestural modalities.
---
paper_title: The Faces of Engagement: Automatic Recognition of Student Engagementfrom Facial Expressions
paper_content:
Student engagement is a key concept in contemporary education, where it is valued as a goal in its own right. In this paper we explore approaches for automatic recognition of engagement from students’ facial expressions. We studied whether human observers can reliably judge engagement from the face; analyzed the signals observers use to make these judgments; and automated the process using machine learning. We found that human observers reliably agree when discriminating low versus high degrees of engagement (Cohen’s $\kappa = 0.96$ ). When fine discrimination is required (four distinct levels) the reliability decreases, but is still quite high ( $\kappa = 0.56$ ). Furthermore, we found that engagement labels of 10-second video clips can be reliably predicted from the average labels of their constituent frames (Pearson $r=0.85$ ), suggesting that static expressions contain the bulk of the information used by observers. We used machine learning to develop automatic engagement detectors and found that for binary classification (e.g., high engagement versus low engagement), automated engagement detectors perform with comparable accuracy to humans. Finally, we show that both human and automatic engagement judgments correlate with task performance. In our experiment, student post-test performance was predicted with comparable accuracy from engagement labels ( $r=0.47$ ) as from pre-test scores ( $r=0.44$ ).
---
paper_title: Automatically Recognizing Facial Expression: Predicting Engagement and Frustration
paper_content:
Learning involves a rich array of cognitive and affective states. Recognizing and understanding these cognitive and affective dimensions of learning is key to designing informed interventions. Prior research has highlighted the importance of facial expressions in learning-centered affective states, but tracking facial expression poses significant challenges. This paper presents an automated analysis of fine-grained facial movements that occur during computer-mediated tutoring. We use the Computer Expression Recognition Toolbox (CERT) to track fine-grained facial movements consisting of eyebrow raising (inner and outer), brow lowering, eyelid tightening, and mouth dimpling within a naturalistic video corpus of tutorial dialogue (N=65). Within the dataset, upper face movements were found to be predictive of engagement, frustration, and learning, while mouth dimpling was a positive predictor of learning and self-reported performance. These results highlight how both intensity and frequency of facial expressions predict tutoring outcomes. Additionally, this paper presents a novel validation of an automated tracking tool on a naturalistic tutoring dataset, comparing CERT results with manual annotations across a prior video corpus. With the advent of readily available fine-grained facial expression recognition, the developments introduced here represent a next step toward automatically understanding moment-by-moment affective states during learning.
---
paper_title: Affective e-Learning: Using "Emotional" Data to Improve Learning in Pervasive Learning Environment
paper_content:
Using emotion detection technologies from biophysical signals, this study explored how emotion evolves during learning process and how emotion feedback could be used to improve learning experiences. This article also described a cutting-edge pervasive e-Learning platform used in a Shanghai online college and proposed an affective e-Learning model, which combined learners’ emotions with the Shanghai e-Learning platform. The study was guided by Russell’s circumplex model of affect and Kort’s learning spiral model. The results about emotion recognition from physiological signals achieved a best-case accuracy (86.3%) for four types of learning emotions. And results from emotion revolution study showed that engagement and confusion were the most important and frequently occurred emotions in learning, which is consistent with the findings from AutoTutor project. No evidence from this study validated Kort’s learning spiral model. An experimental prototype of the affective e-Learning model was built to help improve students’ learning experience by customizing learning material delivery based on students’ emotional state. Experiments indicated the superiority of emotion aware over non-emotion-aware with a performance increase of 91%.
---
paper_title: Physiological signals of autistic children can be useful
paper_content:
This article covers the latest research concerning the measurement of physiological signals of children with autism, particularly for the study of changing emotions in various environments. Answers to important questions regarding autistic children's physiological activity are examined, and we will see that within a non-social environment, physiological responses are the same between children with and without autism but different in environments with social contexts. Moreover, physiological signals can be used as a reliable indicator of emotions of children with autism. Also covered are the latest developments in wearable sensor technologies available for measuring on-the-go. I review additional research that identifies body signals in response to stimuli and may help explain core social deficits in children with autism.
---
paper_title: The emotional hearing aid: an assistive tool for children with Asperger syndrome
paper_content:
People diagnosed along the autistic spectrum often have difficulties interacting with others in natural social environments. The emotional hearing aid is a portable assistive computer-based technology designed to help children with Asperger syndrome read and respond to the facial expressions of people they interact with. The tool implements the two principal elements that constitute one’s ability to empathize with others: the ability to identify a person’s mental state, a process known as mind-reading or theory of mind, and the ability to react appropriately to it (known as sympathizing). An automated mind-reading system attributes a mental state to a person by observing the behaviour of that person in real-time. Then the reaction advisor suggests to the user of the emotional hearing aid an appropriate reaction to the recognized mental state. This paper describes progress in the development and validation of the emotional hearing aid on two fronts. First, the implementation of the reaction advisor is described, showing how it takes into account the persistence, intensity and degree of confidence of a mental state inference. Second, the paper presents an experimental evaluation of the automated mind-reading system on six classes of complex mental states. In light of this progress, the paper concludes with a discussion of the challenges that still need to be addressed in developing and validating the emotional hearing aid.
---
paper_title: Intelligent Facial Action and emotion recognition for humanoid robots
paper_content:
This research focuses on the development of a realtime intelligent facial emotion recognition system for a humanoid robot. In our system, Facial Action Coding System is used to guide the automatic analysis of emotional facial behaviours. The work includes both an upper and a lower facial Action Units (AU) analyser. The upper facial analyser is able to recognise six AUs including Inner and Outer Brow Raiser, Upper Lid Raiser etc, while the lower facial analyser is able to detect eleven AUs including Upper Lip Raiser, Lip Corner Puller, Chin Raiser, etc. Both of the upper and lower analysers are implemented using feedforward Neural Networks (NN). The work also further decodes six basic emotions from the recognised AUs. Two types of facial emotion recognisers are implemented, NN-based and multi-class Support Vector Machine (SVM) based. The NN-based facial emotion recogniser with the above recognised AUs as inputs performs robustly and efficiently. The Multi-class SVM with the radial basis function kernel enables the robot to outperform the NN-based emotion recogniser in real-time posed facial emotion detection tasks for diverse testing subjects.
---
paper_title: Emotion and Gesture Recognition with Soft Computing Tool for Drivers Assistance System in Human Centered Transportation
paper_content:
Expression recognition or emotional state recognition using holistic and feature information is a vital step in a Driver Assistance System. Many researchers have worked on facial gesture or emotion recognition independently. The purpose of the present paper is to deal with simultaneous facial gesture tracking and emotion recognition with a soft computing tool, namely a fuzzy rule based system (FBS). In human centered transportation, a large number of road accidents take place due to drowsiness or a bad mood of the driver. The system proposed in this paper takes into account both facial gesture tracking and emotion recognition, so that if there is any sign of reduced attentiveness or fatigue of the driver, the car is switched to automatic mode. A novel fuzzy system is created, whose rules are defined through analysis of facial gesture variations. The idea behind this paper is to detect facial gestures by detecting the motion of the eyes and lips, along with classification of different facial expressions into one of the four basic human emotions, viz. happy, anger, sad, and surprise, with a fuzzy rule based system for better system performance. The given system achieves 91.66% accuracy for facial gesture detection and 90% accuracy for emotion recognition, while simultaneous facial gesture detection and emotion recognition provides 94.58% accuracy.
---
paper_title: Smart recognition and synthesis of emotional speech for embedded systems with natural user interfaces
paper_content:
The importance of the emotion information in human speech has been growing in recent years due to increasing use of natural user interfacing in embedded systems. Speech-based human-machine communication has the advantage of a high degree of usability, but it need not be limited to speech-to-text and text-to-speech capabilities. Emotion recognition in uttered speech has been considered in this research to integrate a speech recognizer/synthesizer with the capacity to recognize and synthesize emotion. This paper describes a complete framework for recognizing and synthesizing emotional speech based on smart logic (fuzzy logic and artificial neural networks). Time-domain signal-processing algorithms has been applied to reduce computational complexity at the feature-extraction level. A fuzzy-logic engine was modeled to make inferences about the emotional content of the uttered speech. An artificial neural network was modeled to synthesize emotive speech. Both were designed to be integrated into an embedded handheld device that implements a speech-based natural user interface (NUI).
---
paper_title: Directing Physiology and Mood through Music: Validation of an Affective Music Player
paper_content:
Music is important in everyday life, as it provides entertainment and influences our moods. As music is widely available, it is becoming increasingly difficult to select songs to suit our mood. An affective music player can remove this obstacle by taking a desired mood as input and then selecting songs that direct toward that desired mood. In the present study, we validate the concept of an affective music player directing the energy dimension of mood. User models were trained for 10 participants based on skin conductance changes to songs from their own music database. Based on the resulting user models, the songs that most increased or decreased the skin conductance level of the participants were selected to induce either a relatively energized or a calm mood. Experiments were conducted in a real-world office setting. The results showed that a reliable prediction can be made of the impact of a song on skin conductance, that skin conductance and mood can be directed toward an energized or calm state and that skin conductance remains in these states for at least 30 minutes. All in all, this study shows that the concept and models of the affective music player worked in an ecologically valid setting, suggesting the feasibility of using physiological responses in real-life affective computing applications.
---
paper_title: Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty
paper_content:
This paper proposes to maintain the player's engagement by adapting game difficulty according to the player's emotions assessed from physiological signals. The validity of this approach was first tested by analyzing the questionnaire responses, electroencephalogram (EEG) signals, and peripheral signals of the players playing a Tetris game at three difficulty levels. This analysis confirms that the different difficulty levels correspond to distinguishable emotions, and that playing several times at the same difficulty level gives rise to boredom. The next step was to train several classifiers to automatically detect the three emotional classes from EEG and peripheral signals in a player-independent framework. By using either type of signal, the emotional classes were successfully recovered, with EEG having a better accuracy than peripheral signals on short periods of time. After the fusion of the two signal categories, the accuracy rose to 63%.
---
paper_title: A study on human age estimation under facial expression changes
paper_content:
In this paper, we study human age estimation in face images under significant expression changes. We will address two issues: (1) Is age estimation affected by facial expression changes and how significant is the influence? (2) How to develop a robust method to perform age estimation undergoing various facial expression changes? This systematic study will not only discover the relation between age estimation and expression changes, but also contribute a robust solution to solve the problem of cross-expression age estimation. This study is an important step towards developing a practical and robust age estimation system that allows users to present their faces naturally (with various expressions) rather than constrained to the neutral expression only. Two databases originally captured in the Psychology community are introduced to Computer Vision, to quantitatively demonstrate the influence of expression changes on age estimation, and evaluate the proposed framework and corresponding methods for cross-expression age estimation.
---
paper_title: Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition
paper_content:
Psycholinguistic studies on human communication have shown that during human interaction individuals tend to adapt their behaviors mimicking the spoken style, gestures, and expressions of their conversational partners. This synchronization pattern is referred to as entrainment. This study investigates the presence of entrainment at the emotion level in cross-modality settings and its implications on multimodal emotion recognition systems. The analysis explores the relationship between acoustic features of the speaker and facial expressions of the interlocutor during dyadic interactions. The analysis shows that 72 percent of the time the speakers displayed similar emotions, indicating strong mutual influence in their expressive behaviors. We also investigate the cross-modality, cross-speaker dependence, using mutual information framework. The study reveals a strong relation between facial and acoustic features of one subject with the emotional state of the other subject. It also shows strong dependence between heterogeneous modalities across conversational partners. These findings suggest that the expressive behaviors from one dialog partner provide complementary information to recognize the emotional state of the other dialog partner. The analysis motivates classification experiments exploiting cross-modality, cross-speaker information. The study presents emotion recognition experiments using the IEMOCAP and SEMAINE databases. The results demonstrate the benefit of exploiting this emotional entrainment effect, showing statistically significant improvements.
---
paper_title: Comparison of Gender- and Speaker-adaptive Emotion Recognition
paper_content:
Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed in order to further improve the recognition accuracy. A recent approach is to add characteristics of the speaker, e.g., the gender of the speaker. In this contribution, we argue that adding information unique for each speaker, i.e., by using speaker identification techniques, improves emotion recognition simply by adding this additional information to the feature vector of the statistical classification algorithm. Moreover, we compare this approach to emotion recognition adding only the speaker gender being a non-unique speaker attribute. We justify this by performing adaptive emotion recognition using both gender and speaker information on four different corpora of different languages containing acted and non-acted speech. The final results show that adding speaker information significantly outperforms both adding gender information and solely using a generic speaker-independent approach.
---
paper_title: Tracking changes in continuous emotion states using body language and prosodic cues
paper_content:
Human expressive interactions are characterized by an ongoing unfolding of verbal and nonverbal cues. Such cues convey the interlocutor's emotional state which is continuous and of variable intensity and clarity over time. In this paper, we examine the emotional content of body language cues describing a participant's posture, relative position and approach/withdraw behaviors during improvised affective interactions, and show that they reflect changes in the participant's activation and dominance levels. Furthermore, we describe a framework for tracking changes in emotional states during an interaction using a statistical mapping between the observed audiovisual cues and the underlying user state. Our approach shows promising results for tracking changes in activation and dominance.
---
paper_title: Facial Expression Recognition Influenced by Human Aging
paper_content:
Facial expression recognition (FER) is an active research topic in computer vision. However, there is no study yet to discover whether FER is affected by human aging, from a computational perspective. We perform a computational study of FER within and across age groups and compare the FER accuracies. Two databases from the psychology society are introduced to the computer vision community and used for our study. We found that the FER is influenced significantly by human aging, and we analyze the influence and interpret it from a computational viewpoint. Next, we propose some schemes to reduce the influence of aging on FER and evaluate the effectiveness in dealing with lifespan FER.
---
paper_title: Context-Sensitive Dynamic Ordinal Regression for Intensity Estimation of Facial Action Units
paper_content:
Modeling intensity of facial action units from spontaneously displayed facial expressions is challenging mainly because of high variability in subject-specific facial expressiveness, head-movements, illumination changes, etc. These factors make the target problem highly context-sensitive. However, existing methods usually ignore this context-sensitivity of the target problem. We propose a novel Conditional Ordinal Random Field (CORF) model for context-sensitive modeling of the facial action unit intensity, where the W5+ (who, when, what, where, why and how) definition of the context is used. While the proposed model is general enough to handle all six context questions, in this paper we focus on the context questions: who (the observed subject), how (the changes in facial expressions), and when (the timing of facial expressions and their intensity). The context questions who and how are modeled by means of the newly introduced context-dependent covariate effects, and the context question when is modeled in terms of temporal correlation between the ordinal outputs, i.e., intensity levels of action units. We also introduce a weighted softmax-margin learning of CRFs from data with skewed distribution of the intensity levels, which is commonly encountered in spontaneous facial data. The proposed model is evaluated on intensity estimation of pain and facial action units using two recently published datasets (UNBC Shoulder Pain and DISFA) of spontaneously displayed facial expressions. Our experiments show that the proposed model performs significantly better on the target tasks compared to the state-of-the-art approaches. Furthermore, compared to traditional learning of CRFs, we show that the proposed weighted learning results in more robust parameter estimation from the imbalanced intensity data.
---
paper_title: Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification
paper_content:
Human emotional expression tends to evolve in a structured manner in the sense that certain emotional evolution patterns, i.e., anger to anger, are more probable than others, e.g., anger to happiness. Furthermore, the perception of an emotional display can be affected by recent emotional displays. Therefore, the emotional content of past and future observations could offer relevant temporal context when classifying the emotional content of an observation. In this work, we focus on audio-visual recognition of the emotional content of improvised emotional interactions at the utterance level. We examine context-sensitive schemes for emotion recognition within a multimodal, hierarchical approach: bidirectional Long Short-Term Memory (BLSTM) neural networks, hierarchical Hidden Markov Model classifiers (HMMs), and hybrid HMM/BLSTM classifiers are considered for modeling emotion evolution within an utterance and between utterances over the course of a dialog. Overall, our experimental results indicate that incorporating long-term temporal context is beneficial for emotion recognition systems that encounter a variety of emotional manifestations. Context-sensitive approaches outperform those without context for classification tasks such as discrimination between valence levels or between clusters in the valence-activation space. The analysis of emotional transitions in our database sheds light into the flow of affective expressions, revealing potentially useful patterns.
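To give a concrete feel for the BLSTM building block used for within-utterance context modeling, the sketch below defines a small bidirectional LSTM in PyTorch that maps an utterance's frame-level feature sequence to an emotion class by mean-pooling the recurrent outputs. The dimensions, class count and single training step are purely illustrative; the paper's hierarchical and hybrid HMM/BLSTM schemes are not reproduced here.

```python
import torch
import torch.nn as nn

class UtteranceBLSTM(nn.Module):
    """Bidirectional LSTM over frame features, mean-pooled into one
    utterance-level emotion prediction."""
    def __init__(self, n_feat=40, hidden=64, n_classes=4):
        super().__init__()
        self.blstm = nn.LSTM(n_feat, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, T, n_feat)
        h, _ = self.blstm(x)         # (batch, T, 2*hidden)
        return self.head(h.mean(dim=1))

# Toy training step on random data
torch.manual_seed(0)
model = UtteranceBLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 120, 40)          # 8 utterances, 120 frames, 40-D features
y = torch.randint(0, 4, (8,))        # 4 hypothetical emotion classes
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```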
---
| Title: A Survey on Human Emotion Recognition Approaches, Databases and Applications
Section 1: Introduction
Description 1: This section introduces the field of emotion recognition, its significance in AI and Human-Computer Interaction, and the survey's structure.
Section 2: Emotions and affective computing
Description 2: This section explains the various theories of emotion and the interdisciplinary field of Affective Computing (AC).
Section 3: Emotion recognition modalities and approaches
Description 3: This section discusses the different modalities (facial expressions, bodily expressions, speech signals, physiological signals) and approaches used in emotion recognition systems.
Section 4: Facial Expressions
Description 4: This section focuses on the use of facial expressions in emotion recognition, including methodologies and challenges.
Section 5: Bodily expressions
Description 5: This section explains the role and methodologies involving body gestures for emotion recognition.
Section 6: Speech Signals
Description 6: This section outlines how verbal communication and audio features are used to recognize emotions.
Section 7: Physiological signals
Description 7: This section reviews approaches that use physiological signals like heart rate and galvanic skin response for emotion recognition.
Section 8: Other Modalities
Description 8: This section discusses less common or emerging modalities and their application in emotion recognition systems.
Section 9: Emotion recognition databases
Description 9: This section provides an overview of various databases used in emotion recognition research, categorized by types such as image, video, speech, and multimodal databases.
Section 10: Emotion recognition Applications
Description 10: This section outlines the diverse application areas of emotion recognition systems, including education, aid for the disabled, HCI/robotics, safety aids, and entertainment.
Section 11: Conclusion and Future Directions
Description 11: This section summarizes the survey and provides insights into future research directions and challenges in emotion recognition. |
Human-Computer Interaction: Overview on State of the Art | 16
paper_title: Human-Computer Interaction: Developing Effective Organizational Information Systems
paper_content:
This book is about developing interactive information systems that support people at work or when conducting business. Specifically, it emphasizes the need to study and practice the development of HCI for real-world organizations in given contexts. Developing an effective information system means achieving a good fit among the users. In order to do this, designers need to have a good understanding of important factors that come into play. Designers need to understand why and how people interact with computers in order to accomplish their work and personal goals, what the physical, cognitive, affective and behavioral constraints on the users’ side are, what pleases or annoys them, and what makes human-computer interaction a satisfying experience or an effective one. This knowledge is the foundation of human-computer interaction (HCI) development.
---
paper_title: Designing the User Interface. Strategies for Effective Human-Computer Interaction
paper_content:
Ben Shneiderman again provides a complete, current, and authoritative introduction to user-interface design. Students will learn practical techniques and guidelines needed to develop good systems designs - systems with interfaces the typical user can understand, predict, and control. This third edition features new chapters on the World Wide Web, information visualization, and computer-supported cooperative work. It contains expanded and earlier coverage of development methodologies, evaluation techniques, and user-interface building tools. The author provides provocative discussion of speech input/output, natural-language interaction, anthropomorphic design, virtual environments, and intelligent (software) agents.
---
paper_title: Human-Computer Interaction: Developing Effective Organizational Information Systems
paper_content:
This book is about developing interactive information systems that support people at work or when conducting business. Specifically, it emphasizes the need to study and practice the development of HCI for real-world organizations in given contexts. Developing an effective information system means achieving a good fit among the users. In order to do this, designers need to have a good understanding of important factors that come into play. Designers need to understand why and how people interact with computers in order to accomplish their work and personal goals, what the physical, cognitive, affective and behavioral constraints on the users’ side are, what pleases or annoys them, and what makes human-computer interaction a satisfying experience or an effective one. This knowledge is the foundation of human-computer interaction (HCI) development.
---
paper_title: Introduction to Virtual Reality
paper_content:
1 Virtual Reality.- 1.1 Introduction.- 1.2 What Is VR?.- 1.3 Who Should Read This Book?.- 1.4 The Aims and Objectives of This Book.- 1.5 Assumptions Made in This Book.- 1.6 How to Use This Book.- 1.7 Some VR Concepts and Terms.- 1.8 Navigation and Interaction.- 1.9 Immersion and Presence.- 1.10 What Is Not VR?.- 1.11 The Internet.- 1.12 Summary.- 2 The Benefits of VR.- 2.1 Introduction.- 2.2 3D Visualization.- 2.3 Navigation.- 2.4 Interaction.- 2.5 Physical Simulation.- 2.6 VEs.- 2.7 Applications.- 2.8 Summary.- 3 3D Computer Graphics.- 3.1 Introduction.- 3.2 From Computer Graphics to VR.- 3.3 Modelling Objects.- 3.4 Dynamic Objects.- 3.5 Constraints.- 3.6 Collision Detection.- 3.7 Perspective Views.- 3.8 3D Clipping.- 3.9 Stereoscopic Vision.- 3.10 Rendering the Image.- 3.11 Rendering Algorithms.- 3.12 Texture Mapping.- 3.13 Bump Mapping.- 3.14 Environment Mapping.- 3.15 Shadows.- 3.16 Radiosity.- 3.17 Other Computer Graphics Techniques.- 3.18 Summary.- 4 Human Factors.- 4.1 Introduction.- 4.2 Vision.- 4.3 Vision and Display Technology.- 4.4 Hearing.- 4.5 Tactile.- 4.6 Equilibrium.- 4.7 Summary.- 5 VR Hardware.- 5.1 Introduction.- 5.2 Computers.- 5.3 Tracking.- 5.4 Input Devices.- 5.5 Output Devices.- 5.6 Glasses.- 5.7 Displays.- 5.8 Audio.- 5.9 Summary.- 6 VR Software.- 6.1 Introduction.- 6.2 VR Software Features.- 6.3 Web-Based VR.- 6.4 Division's dVISE.- 6.5 Blueberry3D.- 6.6 Boston Dynamics.- 6.7 MultiGen.- 6.8 Summary.- 7 VR Applications.- 7.1 Introduction.- 7.2 Industrial.- 7.3 Training Simulators.- 7.4 Entertainment.- 7.5 VR Centres.- 7.6 Summary.- 8 Conclusion.- 8.1 The Past.- 8.2 Today.- 8.3 Conclusion.- Appendices.- Appendix A VRML Web Sites.- Appendix B HMDs.- Appendix C Trackers.- Appendix D VRML Program.- Appendix E Web Sites for VR Products.- References.
---
paper_title: Haptic interfaces and devices
paper_content:
Haptic interfaces enable person‐machine communication through touch, and most commonly, in response to user movements. We comment on a distinct property of haptic interfaces, that of providing for simultaneous information exchange between a user and a machine. We also comment on the fact that, like other kinds of displays, they can take advantage of both the strengths and the limitations of human perception. The paper then proceeds with a description of the components and the modus operandi of haptic interfaces, followed by a list of current and prospective applications and a discussion of a cross‐section of current device designs.
---
paper_title: Human-Computer Interaction: Developing Effective Organizational Information Systems
paper_content:
This book is about developing interactive information systems that support people at work or when conducting business. Specifically, it emphasizes the need to study and practice the development of HCI for real-world organizations in given contexts. Developing an effective information system means achieving a good fit among the users. In order to do this, designers need to have a good understanding of important factors that come into play. Designers need to understand why and how people interact with computers in order to accomplish their work and personal goals, what the physical, cognitive, affective and behavioral constraints on the users’ side are, what pleases or annoys them, and what makes human-computer interaction a satisfying experience or an effective one. This knowledge is the foundation of human-computer interaction (HCI) development.
---
paper_title: Wireless Technology: Protocols, Standards, and Techniques
paper_content:
PART 1: INTRODUCTION WIRELESS NETWORK Introduction Intelligent Network Network Architecture Protocol Architecture Channel Structure Narrowband and Wideband Systems Multiple Access Summary CELLULAR PRINCIPLES Introduction Cellular Hierarchy System Management System Performance Cellular Reuse Pattern Macrocellular Reuse Pattern Microcellular Reuse Pattern Interference in Narrowband and Wideband Systems Interference in Narrowband Macrocellular Systems Interference in Narrowband Microcellular Systems Interference in Wideband Systems Network Capacity Summary MULTIPLE ACCESS Introduction Signal Domains Duplexing Multiple Access Categories Scheduled Multiple Access Random Multiple Access Controlled Multiple Access Hybrid Multiple Access Summary Part II: 2G SYSTEMS GSM Introduction Features and Services Architecture Multiple Access The Logical Channels Messages Call Management Frequency Hopping Discontinuous Transmission Power Control Spectral Efficiency Summary cdmaONE Introduction Features and Services Architecture Multiple Access Structure The Logical Channels Signaling Format Messages, Orders, and Parameters Messages and Orders and Logical Channels Mobile Station Call Processing Base Station Call Processing Authentication, Message Encryption, and Voice Privacy Authentication Message Encryption Voice Privacy Roaming Handoff Power Control Call Procedures EIA/TIA/IS-95B Summary PART III: WIRELESS DATA WIRELESS DATA TECHNOLOGY Introduction General Packet Radio Service (GPRS) EIA/TIA/IS-95B High Data Rate (HDR) Summary PART IV: 3G SYSTEMS IMT-2000 Introduction Some Definitions Frequency Allocation Features and Services Traffic Classes IMT-2000 System and IMT-2000 Family Specific Functions Network Architecture Physical Entities and Functional Entities Functional Entities and their Interrelations Application of IMT-2000 Family Member Concept Toward 3G Summary UTRA Introduction Network Architecture Protocol Architecture Radio Interface Protocol Architecture Logical Channels Transport channels and Indicators Physical Channels and Physical Signals Mapping of Channels Physical Layer Transmission Chain Channel and Frame Structures Spreading and Modulation Spreading Codes UTRA Procedures Interference Issues Summary cdma2000 Introduction Network Architecture Radio Interface Protocol Architecture Logical Channels Physical Channels Mapping of Channels Achievable Rates Forward Link Reverse Link Forward Physical Channels Reverse Physical Channels High Rate Packet Data Access Summary PART V: APPENDICES OPEN SYSTEMS INTERCONNECTION SIGNALING SYSTEM NUMBER 7 SPREAD SPECTRUM Correlation Pseudonoise Sequence Walsh Codes Orthogonal Variable Spreading Factor Codes Rake Receiver Processing Gain Direct Sequence Spread Spectrum Frequency Hopping Spread Spectrum POSITIONING OF THE INTERFERERS IN A MICROCELLULAR GRID Collinear Type Even Noncollinear Type Odd Nonprime Noncollinear Prime Noncollinear
---
paper_title: The importance of the sense of touch in virtual and real environments
paper_content:
What would be worse, losing your sight or your sense of touch? Although touch (more generally, somesthesis) is commonly underrated, major somesthetic loss can't be adequately compensated for by sight. It results in catastrophic impairments of hand dexterity, haptic capabilities, walking, perception of limb position, and so on. Providing users with inadequate somesthetic feedback in virtual environments might impair their performance, just as major somesthetic loss does
---
paper_title: A probabilistic approach to reference resolution in multimodal user interfaces
paper_content:
Multimodal user interfaces allow users to interact with computers through multiple modalities, such as speech, gesture, and gaze. To be effective, multimodal user interfaces must correctly identify all objects which users refer to in their inputs. To systematically resolve different types of references, we have developed a probabilistic approach that uses a graph-matching algorithm. Our approach identifies the most probable referents by optimizing the satisfaction of semantic, temporal, and contextual constraints simultaneously. Our preliminary user study results indicate that our approach can successfully resolve a wide variety of referring expressions, ranging from simple to complex and from precise to ambiguous ones.
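As a rough illustration of the idea above (not the authors' graph-matching algorithm), the sketch below scores each candidate referent by a weighted combination of semantic, temporal, and contextual compatibility with the referring expression and picks the best one; the feature definitions and weights are invented for the example.

```python
# Minimal sketch of probabilistic reference resolution (hypothetical weights and
# features, not the graph-matching algorithm from the paper).

def resolve_reference(expression, candidates, weights=(0.5, 0.3, 0.2)):
    """Return the candidate object that best satisfies semantic, temporal,
    and contextual constraints for a referring expression."""
    w_sem, w_tmp, w_ctx = weights
    best, best_score = None, float("-inf")
    for obj in candidates:
        # Semantic: does the object's type match the head noun ("this chair" -> chair)?
        sem = 1.0 if obj["type"] == expression["head_noun"] else 0.0
        # Temporal: prefer objects pointed at close to the time the phrase was spoken.
        tmp = max(0.0, 1.0 - abs(obj["gesture_time"] - expression["speech_time"]))
        # Contextual: prefer recently mentioned (salient) objects.
        ctx = 1.0 / (1 + expression["turn"] - obj["last_mention_turn"])
        score = w_sem * sem + w_tmp * tmp + w_ctx * ctx
        if score > best_score:
            best, best_score = obj, score
    return best, best_score

if __name__ == "__main__":
    expr = {"head_noun": "chair", "speech_time": 2.0, "turn": 5}
    objs = [
        {"id": "chair-1", "type": "chair", "gesture_time": 2.1, "last_mention_turn": 4},
        {"id": "table-1", "type": "table", "gesture_time": 2.0, "last_mention_turn": 5},
    ]
    print(resolve_reference(expr, objs))  # chair-1 wins on the combined score
```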
---
paper_title: Fundamentals Of Speech Recognition
paper_content:
1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. Task-Oriented Applications of Automatic Speech Recognition.
---
paper_title: The Visual Analysis of Human Movement: A Survey
paper_content:
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions.
---
paper_title: Readings in Intelligent User Interfaces
paper_content:
Introduction I: Multimedia Input Analysis II: Multimedia Presentation Design III: Automated Graphic Design IV: Automated Layout V: User and Discourse Modeling VI: Model Based Interfaces VII: Agent Interfaces VIII: Evaluation
---
paper_title: Human-Computer Interaction: Developing Effective Organizational Information Systems
paper_content:
This book is about developing interactive information systems that support people at work or when conducting business. Specifically, it emphasizes the need to study and practice the development of HCI for real-world organizations in given contexts. Developing an effective information system means achieving a good fit among the users. In order to do this, designers need to have a good understanding of important factors that come into play. Designers need to understand why and how people interact with computers in order to accomplish their work and personal goals, what the physical, cognitive, affective and behavioral constraints on the users’ side are, what pleases or annoys them, and what makes human-computer interaction a satisfying experience or an effective one. This knowledge is the foundation of human-computer interaction (HCI) development.
---
paper_title: Evaluation of eye gaze interaction
paper_content:
Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
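For illustration only, the following sketch shows a dwell-time gaze-selection rule of the kind such interfaces often use; it is an assumed simplification, not the authors' algorithm (which additionally models how the eyes behave).

```python
# Hypothetical sketch of dwell-based gaze selection: an on-screen target is "clicked"
# once the gaze stays within its bounds for a dwell threshold. Threshold and data
# formats are assumptions made for this example.

DWELL_SECONDS = 0.15  # assumed dwell threshold

def gaze_select(samples, targets, dwell=DWELL_SECONDS):
    """samples: list of (t, x, y) gaze points; targets: dict name -> (x0, y0, x1, y1)."""
    current, since = None, None
    for t, x, y in samples:
        hit = next((name for name, (x0, y0, x1, y1) in targets.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:
            current, since = hit, t        # gaze moved to a new target (or off-target)
        elif hit is not None and t - since >= dwell:
            return hit                     # dwell exceeded: selection event
    return None

if __name__ == "__main__":
    buttons = {"OK": (0, 0, 50, 20), "Cancel": (60, 0, 110, 20)}
    gaze = [(0.00, 70, 10), (0.05, 71, 11), (0.10, 72, 9), (0.20, 73, 10)]
    print(gaze_select(gaze, buttons))  # -> "Cancel"
```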
---
paper_title: Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions
paper_content:
The growing interest in multimodal interface design is inspired in large part by the goals of supporting more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the average population, and function more reliably under realistic and challenging usage conditions. In this article, we summarize the emerging architectural approaches for interpreting speech and pen-based gestural input in a robust manner-including early and late fusion approaches, and the new hybrid symbolic-statistical approach. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual reality systems for engaging in simulations and training, to field medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among these challenges are the development of cognitive theories to guide multimodal system design, and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that can function more robustly and adaptively, and with support for collaborative multiperson use. Before this new class of systems can proliferate, toolkits also will be needed to promote software development for both simulated and functioning systems.
---
paper_title: Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie
paper_content:
Rapid quality assessment of alcoholic beverages, including brand identification and detection of products of unacceptable quality or counterfeits, is an important practical task. In the present work the multisensor electronic tongue system (ET), based on an array of potentiometric chemical sensors, was applied to recognition and classification of spirits such as vodka and ethanol used for vodka production and also for eau-de-vie in cognac production. The ET system was capable of detecting the presence of contaminant substances in vodka in concentrations exceeding allowed levels as well as of distinguishing vodka complying and not complying with state quality standards. Ten brands of vodka produced at the same distillery using water and ethanol of different purity and various taste additives were discriminated using the instrument. The ET could distinguish synthetic and alimentary grain ethanol as well as alimentary ethanol of different grades (i.e. different degrees of purification). A feasibility study was run on several eau-de-vie samples, which included fresh and aged eau-de-vie as well as samples produced using different distillation technology and samples kept in contact with different kinds of oak. The electronic tongue showed promise as an analytical instrument for rapid quality assessment of spirits.
---
paper_title: Multimodal human-computer interaction: A survey
paper_content:
In this paper we review the major approaches to multimodal human computer interaction from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition, and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for Multimodal Human Computer Interaction (MMHCI) research.
---
paper_title: A Breadth-First Survey of Eye Tracking Applications
paper_content:
Eye-tracking applications are surveyed in a breadth-first manner, reporting on work from the following domains: neuroscience, psychology, industrial engineering and human factors, marketing/advertising, and computer science. Following a review of traditionally diagnostic uses, emphasis is placed on interactive applications, differentiating between selective and gaze-contingent approaches.
---
paper_title: Evaluation of eye gaze interaction
paper_content:
Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
---
paper_title: Perceptual user interfaces using vision-based eye tracking
paper_content:
We present a multi-camera vision-based eye tracking method to robustly locate and track user's eyes as they interact with an application. We propose enhancements to various vision-based eye-tracking approaches, which include (a) the use of multiple cameras to estimate head pose and increase coverage of the sensors and (b) the use of probabilistic measures incorporating Fisher's linear discriminant to robustly track the eyes under varying lighting conditions in real-time. We present experiments and quantitative results to demonstrate the robustness of our eye tracking in two application prototypes.
---
paper_title: Designing, Playing, and Performing with a Vision-based Mouth Interface
paper_content:
The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
---
paper_title: Joint processing of audio-visual information for the recognition of emotional expressions in human-computer interaction
paper_content:
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities to control the computer such as voice, gesture, and force-feedback are emerging. Among these, voice and vision are two natural modalities in human-to-human communication. Automatic speech recognition (ASR) technology has matured enough to allow users to dictate to a word processor or operate the computer using voice commands. Computer vision techniques have enabled the computer to see. Interacting with computers in these modalities is much more natural for people, and the progression is towards the kind of interaction between humans. Despite these advances, one necessary ingredient for natural interaction is still missing—emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in some applications such as computer-aided learning or user-friendly online help. ::: This thesis addresses the problem of detecting human emotional expressions by computer from the voice and facial motions of the user. The computer is equipped with a microphone to listen to the user's voice, and a video camera to look at the user. Prosodic features in the audio and facial motions exhibited on the face can help the computer make some inferences about the user's emotional state, assuming the users are willing to show their emotions. Another problem it addresses is the coupling between voice and the facial expression. Sometimes the user moves the lips to produce the speech, and sometimes the user only exhibits facial expression without speaking any words. Therefore, it is important to handle these two modalities accordingly. In particular, a pure “facial expression detector” will not function properly when the person is speaking, and a pure “vocal emotion recognizer” is useless when the user is not speaking. In this thesis, a complementary relationship between audio and video is proposed. Although these two modalities do not couple strongly in time, they seem to complement each other. In some cases, similar facial expressions may have different vocal characteristics, and vocal emotions having similar properties may have distinct facial behaviors.
---
paper_title: Perception of non-verbal emotional listener feedback
paper_content:
This paper reports on a listening test assessing the perception of short non-verbal emotional vocalisations emitted by a listener as feedback to the speaker. We clarify the concepts backchannel and feedback, and investigate the use of affect bursts as a means of giving emotional feedback via the backchannel. Experiments with German and Dutch subjects confirm that the recognition of emotion from affect bursts in a dialogical context is similar to their perception in isolation. We also investigate the acceptability of affect bursts when used as listener feedback. Acceptability appears to be linked to display rules for emotion expression. While many ratings were similar between Dutch and German listeners, a number of clear differences were found, suggesting language-specific affect bursts.
---
paper_title: Fundamentals Of Speech Recognition
paper_content:
1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. Task-Oriented Applications of Automatic Speech Recognition.
---
paper_title: Speaker recognition: A tutorial
paper_content:
A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person's claimed identity. Speech processing and the basic components of automatic speaker-recognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.
---
paper_title: The importance of the sense of touch in virtual and real environments
paper_content:
What would be worse, losing your sight or your sense of touch? Although touch (more generally, somesthesis) is commonly underrated, major somesthetic loss can't be adequately compensated for by sight. It results in catastrophic impairments of hand dexterity, haptic capabilities, walking, perception of limb position, and so on. Providing users with inadequate somesthetic feedback in virtual environments might impair their performance, just as major somesthetic loss does
---
paper_title: Multimodal human-computer interaction: A survey
paper_content:
In this paper we review the major approaches to multimodal human computer interaction from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition, and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for Multimodal Human Computer Interaction (MMHCI) research.
---
paper_title: Determining driver visual attention with one camera
paper_content:
This paper presents a system for analyzing human driver visual attention. The system relies on estimation of global motion and color statistics to robustly track a person's head and facial features. The system is fully automatic; it can initialize automatically and reinitialize when necessary. The system classifies rotation in all viewing directions, detects eye/mouth occlusion, detects eye blinking and eye closure, and recovers the three-dimensional gaze of the eyes. In addition, the system is able to track through occlusion due to eye blinking and eye closure, through large mouth movement, and through occlusion due to rotation. Even when the face is fully occluded due to rotation, the system does not break down. Further, the system is able to track through yawning, which is a large local mouth motion. Finally, results are presented, and future work on how this system can be used for more advanced driver visual attention monitoring is discussed.
---
paper_title: A Survey of Research on Context-Aware Homes
paper_content:
The seamless integration of people, devices and computation will soon become part of our daily life. Sensors, actuators, wireless networks and ubiquitous devices powered by intelligent computation will blend into future environments in which people will live. Despite showing great promise, research into future computing technologies is often far removed from the needs of users. The nature of such future systems is often too obtrusive, seemingly denying their purpose. Furthermore, most research on context-aware environments and ubiquitous computing conducted so far has concentrated on supporting people while at work. This paper presents research issues that need to be addressed to enhance the quality of life for people living in context-aware homes. We survey current research and present strategies that facilitate the diffusion of information technology into homes in order to inspire positive emotions, encourage effortless exploration of content and help occupants to achieve tasks at hand.
---
paper_title: Automatic analysis of multimodal group actions in meetings
paper_content:
This paper investigates the recognition of group actions in meetings. A framework is employed in which group actions result from the interactions of the individual participants. The group actions are modeled using different HMM-based approaches, where the observations are provided by a set of audiovisual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modeling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis.
---
paper_title: Building Multi-modal Personal Sales Agents as Interfaces to E-commerce Applications
paper_content:
The research presented explores a new paradigm for human-computer interaction with electronic retailing applications: a paradigm that deploys face-to-face interaction with intelligent, visual, lifelike, multimodal conversational agents, which take on the role of electronic sales assistants. This paper discusses the motivations for enriching current e-commerce application interfaces with multi-modal interface agents and discusses the technical development issues they raise, as realised in the MAPPA (EU project EP28831) system architecture design and development. The paper addresses three distinct components of an overall framework for developing lifelike, multi-modal agents for real-time and dynamic applications: Knowledge Representation and Manipulation, Grounded Affect Models, and the convergence of both into support for multimedia visualisation of lifelike, social behaviour. The research presents a novel specification for such a medium and a functional agent-based system scenario (e-commerce) that is implemented with it, setting forth a framework for building multi-modal interface agents and yielding a conversational form of human-machine interaction which may have potential for shaping tomorrow's interface to the world of e-commerce.
---
paper_title: “Put-that-there”: Voice and gesture at the graphics interface
paper_content:
Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality. The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference.
---
paper_title: A probabilistic approach to reference resolution in multimodal user interfaces
paper_content:
Multimodal user interfaces allow users to interact with computers through multiple modalities, such as speech, gesture, and gaze. To be effective, multimodal user interfaces must correctly identify all objects which users refer to in their inputs. To systematically resolve different types of references, we have developed a probabilistic approach that uses a graph-matching algorithm. Our approach identifies the most probable referents by optimizing the satisfaction of semantic, temporal, and contextual constraints simultaneously. Our preliminary user study results indicate that our approach can successfully resolve a wide variety of referring expressions, ranging from simple to complex and from precise to ambiguous ones.
---
paper_title: Rules of Play: Game Design Fundamentals
paper_content:
This text offers an introduction to game design and a unified model for looking at all kinds of games, from board games and sports to computer and video games. Also included are concepts, strategies, and methodologies for creating and understanding games.
---
paper_title: MATCHkiosk: A Multimodal Interactive City Guide
paper_content:
Multimodal interfaces provide more flexible and compelling interaction and can enable public information kiosks to support more complex tasks for a broader community of users. MATCHKiosk is a multimodal interactive city guide which provides users with the freedom to interact using speech, pen, touch or multimodal inputs. The system responds by generating multimodal presentations that synchronize synthetic speech with a life-like virtual agent and dynamically generated graphics.
---
paper_title: Assistive multimodal system based on speech recognition and head tracking
paper_content:
In this paper an assistive multimodal system is presented, which is aimed at people with disabilities who need other kinds of interfaces than ordinary people. The target user group of this system is persons with hand disabilities. The interaction between the user and the machine is performed by voice and head movements, which gives people with disabilities the opportunity to work with a PC. The multimodal system was presented during the EUSIPCO-2005 Conference in the framework of the Similar Demonstration Session “Multimedia Tools for Disabled”.
---
paper_title: A probabilistic approach to reference resolution in multimodal user interfaces
paper_content:
Multimodal user interfaces allow users to interact with computers through multiple modalities, such as speech, gesture, and gaze. To be effective, multimodal user interfaces must correctly identify all objects which users refer to in their inputs. To systematically resolve different types of references, we have developed a probabilistic approach that uses a graph-matching algorithm. Our approach identifies the most probable referents by optimizing the satisfaction of semantic, temporal, and contextual constraints simultaneously. Our preliminary user study results indicate that our approach can successfully resolve a wide variety of referring expressions, ranging from simple to complex and from precise to ambiguous ones.
---
paper_title: MATCHkiosk: A Multimodal Interactive City Guide
paper_content:
Multimodal interfaces provide more flexible and compelling interaction and can enable public information kiosks to support more complex tasks for a broader community of users. MATCHKiosk is a multimodal interactive city guide which provides users with the freedom to interact using speech, pen, touch or multimodal inputs. The system responds by generating multimodal presentations that synchronize synthetic speech with a life-like virtual agent and dynamically generated graphics.
---
paper_title: Assistive multimodal system based on speech recognition and head tracking
paper_content:
In this paper an assistive multimodal system is presented, which is aimed at people with disabilities who need other kinds of interfaces than ordinary people. The target user group of this system is persons with hand disabilities. The interaction between the user and the machine is performed by voice and head movements, which gives people with disabilities the opportunity to work with a PC. The multimodal system was presented during the EUSIPCO-2005 Conference in the framework of the Similar Demonstration Session “Multimedia Tools for Disabled”.
---
paper_title: Human computing and machine understanding of human behavior: a survey
paper_content:
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, humanlike interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses how far we are from enabling computers to understand human behavior.
---
paper_title: Automatic prediction of frustration
paper_content:
Predicting when a person might be frustrated can provide an intelligent system with important information about when to initiate interaction. For example, an automated Learning Companion or Intelligent Tutoring System might use this information to intervene, providing support to the learner who is likely to otherwise quit, while leaving engaged learners free to discover things without interruption. This paper presents the first automated method that assesses, using multiple channels of affect-related information, whether a learner is about to click on a button saying ''I'm frustrated.'' The new method was tested on data gathered from 24 participants using an automated Learning Companion. Their indication of frustration was automatically predicted from the collected data with 79% accuracy (chance=58%). The new assessment method is based on Gaussian process classification and Bayesian inference. Its performance suggests that non-verbal channels carrying affective cues can help provide important information to a system for formulating a more intelligent response.
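The following sketch illustrates, under assumptions, how a Gaussian process classifier can map multi-channel affect-related features to a frustration probability; the feature channels and synthetic data below are placeholders, and scikit-learn's GaussianProcessClassifier stands in for the paper's Gaussian process and Bayesian inference machinery.

```python
# Illustrative sketch: predicting "about to report frustration" from multi-channel
# affect-related features with a Gaussian process classifier. Feature names and data
# are synthetic assumptions; this is not the authors' model or dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Assumed feature vector per time window: [skin_conductance, seat_pressure_var, mouse_jitter]
X_calm = rng.normal(loc=[0.2, 0.1, 0.1], scale=0.05, size=(40, 3))
X_frus = rng.normal(loc=[0.6, 0.4, 0.5], scale=0.05, size=(40, 3))
X = np.vstack([X_calm, X_frus])
y = np.array([0] * 40 + [1] * 40)  # 1 = frustrated

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
probe = np.array([[0.55, 0.35, 0.45]])  # a new time window to assess
print("P(frustrated) =", clf.predict_proba(probe)[0, 1])
```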
---
paper_title: Analysis of emotion recognition using facial expressions, speech and multimodal information
paper_content:
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision level and feature level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expression gave better performance than the system based on just acoustic information for the emotions considered. Results also show the complementarity of the two modalities and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably.
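The two integration strategies mentioned above can be sketched schematically as follows; the per-modality "classifiers" are softmax stand-ins and the feature values are invented, so this only illustrates where the fusion happens, not the paper's actual models.

```python
# Schematic sketch of decision-level vs feature-level fusion with placeholder
# per-modality scores. Not the authors' classifiers, features, or data.
import numpy as np

EMOTIONS = ["anger", "sadness", "happiness", "neutral"]

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def decision_level_fusion(face_feat, audio_feat, w_face=0.6, w_audio=0.4):
    """Late fusion: classify each modality separately, then combine the posteriors."""
    p = w_face * softmax(face_feat) + w_audio * softmax(audio_feat)
    return EMOTIONS[int(np.argmax(p))]

def feature_level_fusion(face_feat, audio_feat):
    """Early fusion: concatenate the feature vectors and classify them jointly.
    Here the toy joint scorer just sums the per-emotion evidence from both vectors."""
    joint = np.concatenate([face_feat, audio_feat])
    scores = joint.reshape(2, -1).sum(axis=0)
    return EMOTIONS[int(np.argmax(scores))]

if __name__ == "__main__":
    face = np.array([2.0, 0.1, 0.3, 0.5])   # strong "anger" evidence from the face
    audio = np.array([0.8, 0.2, 0.1, 1.0])  # weaker, more ambiguous acoustic evidence
    print(decision_level_fusion(face, audio), feature_level_fusion(face, audio))
```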
---
paper_title: Bi-modal emotion recognition from expressive face and body gestures
paper_content:
Psychological research findings suggest that humans rely on the combined visual channels of face and body more than any other channel when they make judgments about human communicative behavior. However, most of the existing systems attempting to analyze human nonverbal behavior are mono-modal and focus only on the face. Research that aims to integrate gestures as a means of expression has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both face and body are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the individual facial or bodily modality alone.
---
paper_title: Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures
paper_content:
This paper presents an architecture for fusion of multimodal input streams for natural interaction with a humanoid robot, as well as results from a user study with our system. The presented fusion architecture consists of an application-independent parser of input events and application-specific rules. In the presented user study, people could interact with a robot in a kitchen scenario using speech and gesture input. In the study, we observed that our fusion approach is very tolerant against falsely detected pointing gestures. This is because we use speech as the main modality and pointing gestures mainly for disambiguation of objects. In the paper we also report on the temporal correlation of speech and gesture events as observed in the user study.
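A toy version of this fusion rule is sketched below: speech carries the command, and a pointing gesture is consulted only when the utterance is deictic, within an assumed pairing window. The event format and window size are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of speech + 3D-pointing fusion: speech drives the command, and a pointing
# gesture observed within a short window around a deictic word ("this", "that") is used
# only to disambiguate the object. Window size and event format are assumptions.

WINDOW_S = 1.5  # assumed max |speech_time - gesture_time| for pairing

def fuse(speech_events, gesture_events, window=WINDOW_S):
    commands = []
    for s in speech_events:  # {"time", "action", "object" or None if deictic}
        obj = s.get("object")
        if obj is None:  # deictic reference: look for a nearby pointing gesture
            near = [g for g in gesture_events if abs(g["time"] - s["time"]) <= window]
            if near:
                obj = min(near, key=lambda g: abs(g["time"] - s["time"]))["target"]
        if obj is not None:
            commands.append((s["action"], obj))
        # else: a real dialogue system would ask the user for clarification
    return commands

if __name__ == "__main__":
    speech = [{"time": 3.0, "action": "bring", "object": None}]          # "bring me that"
    gestures = [{"time": 2.4, "target": "cup"}, {"time": 9.0, "target": "plate"}]
    print(fuse(speech, gestures))  # [('bring', 'cup')]
```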
---
paper_title: Building a Multimodal Human-Robot Interface
paper_content:
When we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities, then people may well interact with their human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them in the interface and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task), and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all our system's components, users can choose any combination of our interface's modalities. The onus is on our interface to integrate the input, process it, and produce the desired results.
---
Title: Human-Computer Interaction: Overview on State of the Art
Section 1: Introduction
Description 1: Provide an introduction to the evolution and the importance of Human-Computer Interaction (HCI), including the main focus of the paper.
Section 2: Overview on HCI
Description 2: Present a general overview of HCI, including an examination of existing technologies and the future direction of HCI research.
Section 3: Existing HCI Technologies
Description 3: Discuss the different existing HCI technologies, categorized by the human senses they engage (vision, audition, touch) and example devices.
Section 4: Recent Advances in HCI
Description 4: Explore recent advancements in the field, with a focus on intelligent and adaptive interfaces, and ubiquitous computing.
Section 5: Intelligent and Adaptive HCI
Description 5: Detail the design and functionality of intelligent and adaptive interfaces, including examples and applications.
Section 6: Ubiquitous Computing and Ambient Intelligence
Description 6: Describe the concept of ubiquitous computing, its history, and its significance as the third wave of computing.
Section 7: HCI Systems Architecture
Description 7: Explain the different configurations and designs of HCI systems, focusing on unimodal and multimodal systems.
Section 8: Unimodal HCI Systems
Description 8: Discuss unimodal HCI systems, including visual-based, audio-based, and sensor-based interactions.
Section 9: Multimodal HCI Systems
Description 9: Describe multimodal HCI systems which combine multiple modalities to enhance user interaction.
Section 10: Applications
Description 10: Highlight different real-world applications of multimodal HCI systems and their advantages over traditional interfaces.
Section 11: Multimodal Systems for Disabled People
Description 11: Explain how multimodal systems can be used to assist disabled individuals using alternative interfaces.
Section 12: Emotion Recognition Multimodal Systems
Description 12: Discuss systems that recognize human emotions through multimodal signals for better HCI.
Section 13: Map-Based Multimodal Applications
Description 13: Detail map-based multimodal applications and the integration of multiple input methods like speech and gestures.
Section 14: Multimodal Human-Robot Interface Applications
Description 14: Explain human-robot interfaces using multimodal inputs and their practical applications.
Section 15: Multi-Modal HCI in Medicine
Description 15: Highlight the applications of multimodal HCI in medical fields, specifically in neurosurgery.
Section 16: Conclusion
Description 16: Summarize the key points discussed and emphasize the importance of evolving HCI towards natural, adaptive methods. |
Security and privacy issues in P2P streaming systems: A Survey | 15 | ---
paper_title: A modeling framework of content pollution in Peer-to-Peer video streaming systems
paper_content:
Peer-to-Peer (P2P) live video streaming systems are known to suffer from intermediate attacks due to their inherent vulnerabilities. Content pollution is one of the common attacks that have received little attention in P2P live streaming systems. In this paper, we propose a modeling framework of content pollution in P2P live streaming systems. This model considers both unstructured and structured overlays, and captures the key factors including churn, user interactions, multiple attackers and defensive techniques. The models are verified with simulations and implemented in a real working system, Anysee. We analyze content pollution and its effect in live streaming systems. We show that: (1) the impact from content pollution can increase exponentially, similar to random scanning worms, leading to playback interruption and unnecessary bandwidth consumption; (2) content pollution is influenced by peer cooperation, peer degree and bandwidth in unstructured overlays, and by topology breadth in structured ones; (3) the structured overlay is more resilient to content pollution; (4) a hybrid overlay results in better reliability and pollution resistance; (5) a hash-based chunk signature scheme is the most promising defense against content pollution.
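To illustrate the exponential growth phase described above, the following toy discrete-time model tracks how many peers hold a polluted chunk under assumed parameters (degree, forwarding probability, population); it is a didactic simplification, not the paper's modeling framework.

```python
# Toy discrete-time model of content pollution spreading through an unstructured
# overlay, illustrating the exponential growth phase and eventual saturation.
# All parameters are illustrative assumptions only.

def polluted_peers(n_peers=10000, degree=4, p_forward=0.3, initial=1, rounds=10):
    """Each round, every polluted peer forwards the polluted chunk to `degree`
    neighbours, each of which becomes polluted with probability p_forward;
    the (1 - infected/n_peers) factor models saturation of the population."""
    infected = float(initial)
    history = [infected]
    for _ in range(rounds):
        new = infected * degree * p_forward * (1 - infected / n_peers)
        infected = min(n_peers, infected + new)
        history.append(infected)
    return history

if __name__ == "__main__":
    for r, x in enumerate(polluted_peers()):
        print(f"round {r}: ~{x:.0f} polluted peers")
```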
---
paper_title: The pollution attack in P2P live video streaming: measurement results and defenses
paper_content:
P2P mesh-pull live video streaming applications --- such as CoolStreaming, PPLive, and PPStream --- have become popular in recent years. In this paper, we examine the stream pollution attack, in which the attacker mixes polluted chunks into the P2P distribution, degrading the quality of the rendered media at the receivers. Polluted chunks received by an unsuspecting peer not only affect that single peer; since the peer also forwards chunks to other peers, and those peers in turn forward chunks to more peers, the polluted content can potentially spread through much of the P2P network. The contribution of this paper is twofold. First, by experimenting with and measuring a popular P2P live video streaming system, we show that the pollution attack can be devastating. Second, we evaluate the applicability of four possible defenses to the pollution attack: blacklisting, traffic encryption, hash verification, and chunk signing. Among these, we conclude that the chunk signing solutions are most suitable.
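A minimal sketch of the hash-verification defense is shown below: per-chunk SHA-256 digests are assumed to be distributed through a trusted channel, and a peer drops any chunk that fails verification before rendering or forwarding it. A deployed chunk-signing scheme would instead have the source sign the digests; that step is omitted here to keep the example self-contained.

```python
# Minimal sketch of hash-based chunk verification against pollution. The trusted
# distribution of the digests (and, in chunk signing, the source's signature over
# them) is assumed and not shown.
import hashlib

def publish_digests(chunks):
    """Source side: compute one SHA-256 digest per chunk."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify_chunk(index, data, digests):
    """Peer side: accept a received chunk only if its hash matches the published digest."""
    return hashlib.sha256(data).hexdigest() == digests[index]

if __name__ == "__main__":
    stream = [b"chunk-0-video-bytes", b"chunk-1-video-bytes"]
    digests = publish_digests(stream)

    good = stream[1]
    polluted = b"garbage injected by an attacker"
    print(verify_chunk(1, good, digests))      # True  -> render and forward to neighbours
    print(verify_chunk(1, polluted, digests))  # False -> discard, possibly blacklist sender
```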
---
paper_title: Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems
paper_content:
This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.
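The delivery rule at the heart of Pastry (route to the live node whose nodeId is numerically closest to the key) can be illustrated with the toy sketch below; real Pastry reaches that node in O(log N) hops via prefix-based routing tables and leaf sets, which this sketch does not model.

```python
# Toy illustration of Pastry's delivery criterion over a known node set: the message
# keyed by `key` belongs to the node whose nodeId is numerically closest to the key.
# Real Pastry uses 128-bit ids and distributed prefix-based routing; this sketch uses
# small ids and a local node list, and ignores wrap-around at the ends of the id space.

def numerically_closest(key, node_ids):
    return min(node_ids, key=lambda n: abs(n - key))

if __name__ == "__main__":
    nodes = [0x1A2B, 0x3C4D, 0x8F01, 0xC0DE]
    key = 0x3C00
    print(hex(numerically_closest(key, nodes)))  # 0x3c4d handles this key
```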
---
paper_title: Building peer-to-peer systems with chord, a distributed lookup service
paper_content:
We argue that the core problem facing peer-to-peer systems is locating documents in a decentralized network and propose Chord, a distributed lookup primitive. Chord provides an efficient method of locating documents while placing few constraints on the applications that use it. As proof that Chord's functionality is useful in the development of peer-to-peer applications, we outline the implementation of a peer-to-peer file sharing system based on Chord.
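As a rough sketch of the lookup primitive, the snippet below hashes keys and nodes onto an identifier circle and assigns each key to its successor node; real Chord resolves the successor in O(log N) messages with finger tables, whereas here the node list is assumed to be known locally.

```python
# Toy sketch of Chord-style lookup: keys and nodes are hashed onto an identifier
# circle, and a key is owned by its successor (first node clockwise from the key).
# The locally known node list is an illustrative simplification.
import hashlib

M = 2 ** 16  # toy identifier space (Chord typically uses 160-bit SHA-1 ids)

def chord_id(name):
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

def successor(key_id, node_ids):
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]  # wrap around the circle

if __name__ == "__main__":
    nodes = [chord_id(f"node-{i}") for i in range(8)]
    doc = "movie.mpg"
    print(f"{doc} (id {chord_id(doc)}) is stored on node id {successor(chord_id(doc), nodes)}")
```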
---
paper_title: Peer-to-peer internet telephony using SIP
paper_content:
P2P systems inherently have high scalability, robustness and fault tolerance because there is no centralized server and the network self-organizes itself. This is achieved at the cost of higher latency for locating the resources of interest in the P2P overlay network. Internet telephony can be viewed as an application of P2P architecture where the participants form a self-organizing P2P overlay network to locate and communicate with other participants. We propose a pure P2P architecture for the Session Initiation Protocol (SIP)-based IP telephony systems. Our P2P-SIP architecture supports basic user registration and call setup as well as advanced services such as offline message delivery, voice/video mails and multi-party conferencing. Additionally, we give an overview of our implementation.
---
paper_title: A Survey of Peer-to-Peer Security Issues
paper_content:
Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications. We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems.
---
paper_title: The SPARTA pseudonym and authorization system
paper_content:
This paper deals with privacy-preserving (pseudonymized) access to a service resource. In such a scenario, two opposite needs seem to emerge. On one side, the service provider may want to control, in the first place, the user accessing its resources, i.e., without being forced to delegate the issuing of access permissions to third parties to meet privacy requirements. On the other side, it should be technically possible to trace back the real identity of a user upon dishonest behavior, and of course, this must necessarily be accomplished by an external authority distinct from the provider itself. The framework described in this paper aims at coping with these two opposite needs. This is accomplished through (i) a distributed third-party-based infrastructure devised to assign and manage pseudonym certificates, decoupled from (ii) a two-party procedure, devised to bind an authorization permission to a pseudonym certificate with no third-party involvement. The latter procedure is based on a novel blind signature approach which allows the provider to blindly verify, at service subscription time, that the user possesses the private key of the still undisclosed pseudonym certificate, thus avoiding transferability of the authorization permission.
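The general idea of blindly binding an authorization to a still-undisclosed value can be illustrated with a textbook RSA blind signature, shown below with deliberately tiny toy parameters; this is only a stand-in for the novel blind-signature construction proposed in the paper.

```python
# Textbook RSA blind-signature sketch: the provider signs a blinded value without
# learning it, and the user unblinds the result into a valid signature. Toy key sizes
# are for demonstration only; this is NOT the paper's specific construction.

# Provider's toy RSA key: n = p*q, public exponent e, private d with e*d = 1 mod (p-1)(q-1)
p, q, e = 61, 53, 17
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # 2753 (requires Python 3.8+)

# User side: blind the message (e.g., a hash of the pseudonym certificate)
m = 1234                       # must be < n in this toy setting
r = 71                         # blinding factor with gcd(r, n) = 1
blinded = (m * pow(r, e, n)) % n

# Provider side: signs blindly (sees only `blinded`, never m)
blinded_sig = pow(blinded, d, n)

# User side: unblind and verify the resulting signature on m
sig = (blinded_sig * pow(r, -1, n)) % n
print("blind signature verified:", pow(sig, e, n) == m)  # True
```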
---
paper_title: An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation
paper_content:
A credential system is a system in which users can obtain credentials from organizations and demonstrate possession of these credentials. Such a system is anonymous when transactions carried out by the same user cannot be linked. An anonymous credential system is of significant practical relevance because it is the best means of providing privacy for users. In this paper we propose a practical anonymous credential system that is based on the strong RSA assumption and the decisional Diffie-Hellman assumption modulo a safe prime product and is considerably superior to existing ones: (1) We give the first practical solution that allows a user to unlinkably demonstrate possession of a credential as many times as necessary without involving the issuing organization. (2) To prevent misuse of anonymity, our scheme is the first to offer optional anonymity revocation for particular transactions. (3) Our scheme offers separability: all organizations can choose their cryptographic keys independently of each other. Moreover, we suggest more effective means of preventing users from sharing their credentials, by introducing all-or-nothing sharing: a user who allows a friend to use one of her credentials once, gives him the ability to use all of her credentials, i.e., taking over her identity. This is implemented by a new primitive, called circular encryption, which is of independent interest, and can be realized from any semantically secure cryptosystem in the random oracle model.
---
paper_title: Application-Layer Traffic Optimization (ALTO) Problem Statement
paper_content:
Peer-to-peer applications, such as file sharing, real-time communication, and live media streaming, use a significant amount of Internet resources. Such applications often transfer large amounts of data in direct peer-to-peer connections. However, they usually have little knowledge of the underlying network topology. As a result, they may choose their peers based on measurements and statistics that, in many situations, may lead to suboptimal choices. This document describes problems related to optimizing traffic generated by peer-to-peer applications and associated issues such optimizations raise in the use of network-layer information.
---
paper_title: SecureStream: An Intrusion-Tolerant Protocol for Live-Streaming Dissemination
paper_content:
Peer-to-peer (P2P) dissemination systems are vulnerable to attacks that may impede nodes from receiving data in which they are interested. The same properties that lead P2P systems to be scalable and efficient also lead to security problems and lack of guarantees. Within this context, live-streaming protocols deserve special attention since their time sensitive nature makes them more susceptible to the packet loss rates induced by malicious behavior. While protocols based on dissemination trees often present obvious points of attack, more recent protocols based on pulling packets from a number of different neighbors present a better chance of standing attacks. We explore this in SecureStream, a P2P live-streaming system built to tolerate malicious behavior at the end level. SecureStream is built upon Fireflies, an intrusion-tolerant membership protocol, and employs a pull-based approach for streaming data. We present the main components of SecureStream and present simulation and experimental results on the Emulab testbed that demonstrate the good resilience properties of pull-based streaming in the face of attacks. This and other techniques allow our system to be tolerant to a variety of intrusions, gracefully degrading even in the presence of a large percentage of malicious peers.
---
paper_title: Understanding P2P-TV Systems Through Real Measurements
paper_content:
In this paper, we consider two popular peer-to-peer TV (P2P-TV) systems: PPLive, one of the today most widely used P2P-TV systems, and Joost, a promising new generation application of which no previous measurement study has been considered. Besides the traditional measurements like the amount of generated traffic for signaling and data transmission, the novel contribution of the paper consists in investigating the content distribution mechanisms. In particular, we evaluate the characteristics of both data distribution and signaling process for the overlay network discovery and maintenance. By considering two or more clients in the same sub-network, we observe the capability of the system to exploit the locality of peers. We also explore how the system adapts to different network conditions. The methodology we develop allows also to identify periodic behavior of the application, highlighting bursts of both data and signaling traffic.
---
paper_title: A Survey of Peer-to-Peer Security Issues
paper_content:
Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications. We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems.
---
paper_title: Securing peer-to-peer media streaming systems from selfish and malicious behavior
paper_content:
We present a flexible framework for throttling attackers in peer-to-peer media streaming systems. In such systems, selfish nodes (e.g., free riders) and malicious nodes (e.g., DoS attackers) can overwhelm the system by issuing too many requests in a short interval of time. Since peer-to-peer systems are decentralized, it is difficult for individual peers to limit the aggregate download bandwidth consumed by other remote peers. This could potentially allow selfish and malicious peers to exhaust the system's available upload bandwidth. In this paper, we propose a framework to provide a solution to this problem by utilizing a subset of trusted peers (called kantoku nodes) that collectively monitor the bandwidth usage of untrusted peers in the system and throttle attackers. This framework has been evaluated through simulation thus far. Experiments with a full implementation on a network testbed are part of our future work.
---
paper_title: The pollution attack in P2P live video streaming: measurement results and defenses
paper_content:
P2P mesh-pull live video streaming applications ---such as Cool-Streaming, PPLive, and PPStream --- have become popular in the recent years. In this paper, we examine the stream pollution attack, for which the attacker mixes polluted chunks into the P2P distribution, degrading the quality of the rendered media at the receivers. Polluted chunks received by an unsuspecting peer not only effect that single peer, but since the peer also forwards chunks to other peers, and those peers in turn forward chunks to more peers, the polluted content can potentially spread through much of the P2P network. The contribution of this paper is twofold. First, by way of experimenting and measuring a popular P2P live video streaming system, we show that the pollution attack can be devastating. Second, we evaluate the applicability of four possible defenses to the pollution attack: blacklisting, traffic encryption, hash verification, and chunk signing. Among these, we conclude that the chunk signing solutions are most suitable.
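As an illustration of the chunk-verification defenses the study favours, a minimal sketch is given below. The function and variable names are hypothetical, and a deployed system would verify a public-key signature on the manifest rather than trust a bare digest list.

import hashlib

def verify_chunk(chunk_data, expected_digest):
    # Accept a chunk only if its SHA-256 digest matches the digest published
    # for that chunk position, e.g. in a manifest signed by the stream source.
    return hashlib.sha256(chunk_data).hexdigest() == expected_digest

# Hypothetical usage: a peer checks every received chunk before playing it or
# forwarding it, so polluted chunks are dropped at the first honest hop.
good_chunk = b"example media payload"
manifest = {42: hashlib.sha256(good_chunk).hexdigest()}  # chunk index -> digest
accepted = verify_chunk(b"example media payload", manifest[42])  # True here; a tampered payload would fail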
---
paper_title: Defending against eclipse attacks on overlay networks
paper_content:
Overlay networks are widely used to deploy functionality at edge nodes without changing network routers. Each node in an overlay network maintains pointers to a set of neighbor nodes. These pointers are used both to maintain the overlay and to implement application functionality, for example, to locate content stored by overlay nodes. If an attacker controls a large fraction of the neighbors of correct nodes, it can "eclipse" correct nodes and prevent correct overlay operation. This Eclipse attack is more general than the Sybil attack. Attackers can use a Sybil attack to launch an Eclipse attack by inventing a large number of seemingly distinct overlay nodes. However, defenses against Sybil attacks do not prevent Eclipse attacks because attackers may manipulate the overlay maintenance algorithm to mount an Eclipse attack. This paper discusses the impact of the Eclipse attack on several types of overlay and it proposes a novel defense that prevents the attack by bounding the degree of overlay nodes. Our defense can be applied to any overlay and it enables secure implementations of overlay optimizations that choose neighbors according to metrics like proximity. We present preliminary results that demonstrate the importance of defending against the Eclipse attack and show that our defense is effective.
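A minimal sketch of the degree-bounding idea follows. The names and the bound value are assumptions, and the paper's actual defence relies on anonymous auditing rather than direct queries.

MAX_DEGREE = 16  # assumed bound on the number of overlay neighbours

def audit_neighbor(auditor_id, advertised_neighbors):
    # A neighbour is suspect if it advertises more links than the bound allows,
    # or if it does not list the auditor even though the auditor links to it.
    if len(advertised_neighbors) > MAX_DEGREE:
        return False
    if auditor_id not in advertised_neighbors:
        return False
    return True

# Peers failing the audit are removed from the neighbour set, limiting how many
# correct nodes an attacker with bounded identities can "eclipse".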
---
paper_title: "Won't You Be My Neighbor?" Neighbor Selection Attacks in Mesh-based Peer-to-Peer Streaming
paper_content:
P2P streaming has grown in popularity, allowing people in many places to benefit from live audio and television services. Mesh-based P2P streaming has emerged as the predominant architecture in realworld use because of its resilience to churn and node failures, scalability, and ease of maintenance. The proliferation of these applications on the public Internet raises questions about how they can be deployed in a secure and robust manner. Failing to address security vulnerabilities could facilitate attacks with significant consequences such as content censorship, unfair business competition, or external impact on the Internet itself. In this paper, we identify and evaluate neighbor selection attacks against mesh-based P2P streaming which allow insider attackers to control the mesh overlay formation and maintenance. We demonstrate the effect of the attacks against a mesh-based P2P streaming system and propose a solution to mitigate the attacks. Our solution is scalable, has low overhead, and works in realistic heterogeneous networks. We evaluate our solution using a mesh-based P2P streaming system with real-world experiments on the PlanetLab Internet testbed and simulations using the OverSim P2P simulator.
---
paper_title: SecureStream: An Intrusion-Tolerant Protocol for Live-Streaming Dissemination
paper_content:
Peer-to-peer (P2P) dissemination systems are vulnerable to attacks that may impede nodes from receiving data in which they are interested. The same properties that lead P2P systems to be scalable and efficient also lead to security problems and lack of guarantees. Within this context, live-streaming protocols deserve special attention since their time sensitive nature makes them more susceptible to the packet loss rates induced by malicious behavior. While protocols based on dissemination trees often present obvious points of attack, more recent protocols based on pulling packets from a number of different neighbors present a better chance of standing attacks. We explore this in SecureStream, a P2P live-streaming system built to tolerate malicious behavior at the end level. SecureStream is built upon Fireflies, an intrusion-tolerant membership protocol, and employs a pull-based approach for streaming data. We present the main components of SecureStream and present simulation and experimental results on the Emulab testbed that demonstrate the good resilience properties of pull-based streaming in the face of attacks. This and other techniques allow our system to be tolerant to a variety of intrusions, gracefully degrading even in the presence of a large percentage of malicious peers.
---
paper_title: Preventing DoS attacks in peer-to-peer media streaming systems
paper_content:
This paper presents a framework for preventing both selfishness and denial-of-service attacks in peer-to-peer media streaming systems. Our framework, called Oversight, achieves prevention of these undesirable activities by running a separate peer-to-peer download rate enforcement protocol along with the underlying peer-to-peer media streaming protocol. This separate Oversight protocol enforces download rate limitations on each participating peer. These limitations prevent selfish or malicious nodes from downloading an overwhelming amount of media stream data that could potentially exhaust the entire system. Since Oversight is based on a peer-to-peer architecture, it can accomplish this enforcement functionality in a scalable, efficient, and decentralized way that fits better with peer-to-peer media streaming systems compared to other solutions based on central server architectures. As peer-to-peer media streaming systems continue to grow in popularity, the threat of selfish and malicious peers participating in such large peer-to-peer networks will continue to grow as well. For example, since peer-to-peer media streaming systems allow users to send small request messages that result in the streaming of large media objects, these systems provide an opportunity for malicious users to exhaust resources in the system with little effort expended on their part. However, Oversight addresses these threats associated with selfish or malicious peers who cause such disruptions with excessive download requests. We evaluated our Oversight solution through simulations and our results show that applying Oversight to peer-to-peer media streaming systems can prevent both selfishness and denial-of-service attacks by effectively limiting the download rates of all nodes in the system.
---
paper_title: Securing peer-to-peer media streaming systems from selfish and malicious behavior
paper_content:
We present a flexible framework for throttling attackers in peer-to-peer media streaming systems. In such systems, selfish nodes (e.g., free riders) and malicious nodes (e.g., DoS attackers) can overwhelm the system by issuing too many requests in a short interval of time. Since peer-to-peer systems are decentralized, it is difficult for individual peers to limit the aggregate download bandwidth consumed by other remote peers. This could potentially allow selfish and malicious peers to exhaust the system's available upload bandwidth. In this paper, we propose a framework to provide a solution to this problem by utilizing a subset of trusted peers (called kantoku nodes) that collectively monitor the bandwidth usage of untrusted peers in the system and throttle attackers. This framework has been evaluated through simulation thus far. Experiments with a full implementation on a network testbed are part of our future work.
---
paper_title: Preventing DDoS Attacks Based on Credit Model for P2P Streaming System
paper_content:
Distributed Denial of Service (DDoS) attack is a serious threat to the Internet communications especially to P2P streaming system. P2P streaming system is vulnerable to DDoS attacks due to its high bandwidth demand and strict time requirement. In this paper, we propose a distributed framework to defense DDoS attack based on Credit Model(CM) which takes the responsibility to identify malicious nodes and categorize nodes into different credit level. We also introduce a Message Rate Controlling Model (MRCM)to control the message rate of a node according to its credit level. Combining CM and MRCS together, our framework can improve the resistibility against DDoS for P2P streaming system.
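A rough sketch of how a credit level could drive message-rate control is shown below. The credit levels, per-level rates, and token-bucket mechanics are assumptions for illustration, not the paper's exact CM/MRCM rules.

import time

RATE_BY_CREDIT = {"high": 50.0, "medium": 10.0, "low": 1.0}  # assumed messages per second

class MessageRateController:
    # Token-bucket limiter keyed by peer: the refill rate (and bucket size)
    # follows the peer's current credit level, so low-credit peers are throttled.
    def __init__(self):
        self.state = {}  # peer_id -> (tokens, last_timestamp)

    def allow(self, peer_id, credit_level):
        rate = RATE_BY_CREDIT.get(credit_level, 1.0)
        now = time.monotonic()
        tokens, last = self.state.get(peer_id, (rate, now))
        tokens = min(rate, tokens + (now - last) * rate)
        if tokens >= 1.0:
            self.state[peer_id] = (tokens - 1.0, now)
            return True
        self.state[peer_id] = (tokens, now)
        return False

controller = MessageRateController()
accepted = controller.allow("peer-7", "low")  # True; an immediate burst of further calls is rejected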
---
paper_title: Stochastic Graph Processes for Performance Evaluation of Content Delivery Applications in Overlay Networks
paper_content:
This paper proposes a new methodology to model the distribution of finite-size content to a group of users connected through an overlay network. Our methodology describes the distribution process as a constrained stochastic graph process (CSGP), where the constraints dictated by the content distribution protocol and the characteristics of the overlay network define the interaction among nodes. A CSGP is a semi-Markov process whose state is described by the graph itself. CSGPs offer a powerful description technique that can be exploited by Monte Carlo integration methods to compute in a very efficient way not only the mean but also the full distribution of metrics such as the file download times or the number of hops from the source to the receiving nodes. We model several distribution architectures based on trees and meshes as CSGPs and solve them numerically. We are able to study scenarios with a very large number of nodes, and we can precisely quantify the performance differences between the tree-based and mesh-based distribution architectures.
---
paper_title: Graph Based Analysis of Mesh Overlay Streaming Systems
paper_content:
This paper studies fundamental properties of stream-based content distribution services. We assume the presence of an overlay network (such as those built by P2P systems) with limited degree of connectivity, and we develop a mathematical model that captures the essential features of overlay-based streaming protocols and systems. The methodology is based on stochastic graph theory, and models the streaming system as a stochastic process, whose characteristics are related to the streaming protocol. The model captures the elementary properties of the streaming system such as the number of active connections, the different play-out delay of nodes, and the probability of not receiving the stream due to node failures/misbehavior. Besides the static properties, the model is able to capture the transient behavior of the distribution graphs, i.e., the evolution of the structure over time, for instance in the initial phase of the distribution process. Contributions of this paper include a detailed definition of the methodology, its comparison with other analytical approaches and with simulative results, and a discussion of the additional insights enabled by this methodology. Results show that mesh based architectures are able to provide bounds on the receiving delay and maintain rate fluctuations due to system dynamics very low. Additionally, given the tight relationship between the stochastic process and the properties of the distribution protocol, this methodology gives basic guidelines for the design of such protocols and systems.
---
paper_title: Gossiping in distributed systems
paper_content:
Gossip-based algorithms were first introduced for reliably disseminating data in large-scale distributed systems. However, their simplicity, robustness, and flexibility make them attractive for more than just pure data dissemination alone. In particular, gossiping has been applied to data aggregation, overlay maintenance, and resource allocation. Gossiping applications more or less fit the same framework, with often subtle differences in algorithmic details determining divergent emergent behavior. This divergence is often difficult to understand, as formal models have yet to be developed that can capture the full design space of gossiping solutions. In this paper, we present a brief introduction to the field of gossiping in distributed systems, by providing a simple framework and using that framework to describe solutions for various application domains.
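The basic push-gossip round underlying such algorithms can be sketched as the toy synchronous simulation below; the fanout, network size, and update label are arbitrary.

import random

def gossip_round(state, fanout=3):
    # `state` maps node_id -> set of updates the node currently knows.
    # Each node pushes everything it knows to `fanout` randomly chosen peers.
    nodes = list(state)
    new_state = {n: set(s) for n, s in state.items()}
    for node, updates in state.items():
        for target in random.sample([n for n in nodes if n != node], fanout):
            new_state[target] |= updates
    return new_state

state = {i: set() for i in range(100)}
state[0].add("update-1")
for _ in range(6):
    state = gossip_round(state)
print(sum("update-1" in s for s in state.values()))  # close to 100 after a few rounds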
---
paper_title: Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches
paper_content:
Existing approaches to P2P streaming can be divided into two general classes: (i) tree-based approaches use push-based content delivery over multiple tree-shaped overlays, and (ii) mesh-based approaches use swarming content delivery over a randomly connected mesh. Previous studies have often focused on a particular P2P streaming mechanism and no comparison between these two classes has been conducted. In this paper, we compare and contrast the performance of representative protocols from each class using simulations. We identify the similarities and differences between these two approaches. Furthermore, we separately examine the behavior of content delivery and overlay construction mechanisms for both approaches in static and dynamic scenarios. Our results indicate that the mesh-based approach consistently exhibits a superior performance over the tree-based approach. We also show that the main factors attributing in the inferior performance of the tree-based approach are (i) the static mapping of content to a particular tree, and (ii) the placement of each peer as an internal node in one tree and as a leaf in all other trees.
---
paper_title: Source vs Data-driven Approach for Live P2P Streaming
paper_content:
Live streaming applications are increasing on the Internet. These applications are delay sensitive and need group communication. Presently, protocols designed for this kind of communication do not rely on the classical client/server model used in the Internet but organize the receivers into an overlay network, where they are supposed to collaborate with each other following the peer-to-peer model. Live p2p streaming protocols can be classified in three different categories: source-driven, receiver-driven and datadriven protocols. Each of them manages the overlay differently. In this paper we compare them by simulation to specify what is the most appropriate approach for these protocols. We implement a new simulator of p2p network and we choose two well-known protocols for simulations: a sourcedriven and a data-driven protocol. To our knowledge, our works are the first to compare with the same simulator and scenarii different approaches for live p2p streaming. Our simulations show that nodes organization on the overlay influences drastically network global performances, and data-driven approach seems to be the most appropriate approach for these protocols because it is less sensitive to dynamicity of nodes which is the main problem to resolve for these applications.
---
paper_title: Experimental comparison of peer-to-peer streaming overlays: An application perspective
paper_content:
We compare two representative streaming systems using mesh-based and multiple tree-based overlay routing through deployments on the PlanetLab wide-area experimentation platform. To the best of our knowledge, this is the first study to compare streaming overlay architectures in real Internet settings, considering not only intuitive aspects such as scalability and performance under churn, but also less studied factors such as bandwidth and latency heterogeneity of overlay participants. Overall, our study indicates that mesh-based systems are superior for nodes with high bandwidth capabilities and low round trip times, while multi-tree based systems currently cope better with stringent real time deadlines under heterogeneous conditions.
---
paper_title: Detecting Malicious Peers in Overlay Multicast Streaming
paper_content:
Overlay multicast streaming is built out of loosely coupled end-hosts (peers) that contribute resources to stream media to other peers. Peers, however, can be malicious. They may intentionally wish to disrupt the multicast service or cause confusions to other peers. We propose two new schemes to detect malicious peers in overlay multicast streaming. These schemes compute a level of trust for each peer in the network. Peers with a trust value below a threshold are considered to be malicious. Results from our simulations indicate that the proposed schemes can detect malicious peers with medium to high accuracy, depending on cheating patterns and malicious peer percentages.
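A generic version of such trust bookkeeping is sketched below; the update rule, threshold, and names are illustrative, since the paper's two schemes define their own trust computations.

def update_trust(trust, peer_id, delivered_ok, alpha=0.1):
    # Exponentially weighted trust score in [0, 1]: timely, correct chunk
    # deliveries raise the score; missing or corrupt deliveries lower it.
    observation = 1.0 if delivered_ok else 0.0
    trust[peer_id] = (1 - alpha) * trust.get(peer_id, 0.5) + alpha * observation
    return trust[peer_id]

def suspected_malicious(trust, threshold=0.3):
    # Peers whose trust falls below the threshold are flagged as malicious.
    return [peer for peer, score in trust.items() if score < threshold]

trust = {}
for outcome in [False] * 20:
    update_trust(trust, "peer-3", outcome)
print(suspected_malicious(trust))  # ['peer-3']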
---
paper_title: "Won't You Be My Neighbor?" Neighbor Selection Attacks in Mesh-based Peer-to-Peer Streaming
paper_content:
P2P streaming has grown in popularity, allowing people in many places to benefit from live audio and television services. Mesh-based P2P streaming has emerged as the predominant architecture in realworld use because of its resilience to churn and node failures, scalability, and ease of maintenance. The proliferation of these applications on the public Internet raises questions about how they can be deployed in a secure and robust manner. Failing to address security vulnerabilities could facilitate attacks with significant consequences such as content censorship, unfair business competition, or external impact on the Internet itself. In this paper, we identify and evaluate neighbor selection attacks against mesh-based P2P streaming which allow insider attackers to control the mesh overlay formation and maintenance. We demonstrate the effect of the attacks against a mesh-based P2P streaming system and propose a solution to mitigate the attacks. Our solution is scalable, has low overhead, and works in realistic heterogeneous networks. We evaluate our solution using a mesh-based P2P streaming system with real-world experiments on the PlanetLab Internet testbed and simulations using the OverSim P2P simulator.
---
paper_title: SecureStream: An Intrusion-Tolerant Protocol for Live-Streaming Dissemination
paper_content:
Peer-to-peer (P2P) dissemination systems are vulnerable to attacks that may impede nodes from receiving data in which they are interested. The same properties that lead P2P systems to be scalable and efficient also lead to security problems and lack of guarantees. Within this context, live-streaming protocols deserve special attention since their time sensitive nature makes them more susceptible to the packet loss rates induced by malicious behavior. While protocols based on dissemination trees often present obvious points of attack, more recent protocols based on pulling packets from a number of different neighbors present a better chance of standing attacks. We explore this in SecureStream, a P2P live-streaming system built to tolerate malicious behavior at the end level. SecureStream is built upon Fireflies, an intrusion-tolerant membership protocol, and employs a pull-based approach for streaming data. We present the main components of SecureStream and present simulation and experimental results on the Emulab testbed that demonstrate the good resilience properties of pull-based streaming in the face of attacks. This and other techniques allow our system to be tolerant to a variety of intrusions, gracefully degrading even in the presence of a large percentage of malicious peers.
---
paper_title: Experimental comparison of peer-to-peer streaming overlays: An application perspective
paper_content:
We compare two representative streaming systems using mesh-based and multiple tree-based overlay routing through deployments on the PlanetLab wide-area experimentation platform. To the best of our knowledge, this is the first study to compare streaming overlay architectures in real Internet settings, considering not only intuitive aspects such as scalability and performance under churn, but also less studied factors such as bandwidth and latency heterogeneity of overlay participants. Overall, our study indicates that mesh-based systems are superior for nodes with high bandwidth capabilities and low round trip times, while multi-tree based systems currently cope better with stringent real time deadlines under heterogeneous conditions.
---
paper_title: Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches
paper_content:
Existing approaches to P2P streaming can be divided into two general classes: (i) tree-based approaches use push-based content delivery over multiple tree-shaped overlays, and (ii) mesh-based approaches use swarming content delivery over a randomly connected mesh. Previous studies have often focused on a particular P2P streaming mechanism and no comparison between these two classes has been conducted. In this paper, we compare and contrast the performance of representative protocols from each class using simulations. We identify the similarities and differences between these two approaches. Furthermore, we separately examine the behavior of content delivery and overlay construction mechanisms for both approaches in static and dynamic scenarios. Our results indicate that the mesh-based approach consistently exhibits a superior performance over the tree-based approach. We also show that the main factors attributing in the inferior performance of the tree-based approach are (i) the static mapping of content to a particular tree, and (ii) the placement of each peer as an internal node in one tree and as a leaf in all other trees.
---
paper_title: CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming
paper_content:
This paper presents DONet, a data-driven overlay network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available data to partners. We emphasize three salient features of this data-driven design: 1) easy to implement, as it does not have to construct and maintain a complex global structure; 2) efficient, as data forwarding is dynamically determined according to data availability while not restricted by specific directions; and 3) robust and resilient, as the partnerships enable adaptive and quick switching among multi-suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges for realizing DONet, and propose an efficient member and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents. We have extensively evaluated the performance of DONet over the PlanetLab. Our experiments, involving almost all the active PlanetLab nodes, demonstrate that DONet achieves quite good streaming quality even under formidable network conditions. Moreover, its control overhead and transmission delay are both kept at low levels. An Internet-based DONet implementation, called CoolStreaming v.0.9, was released on May 30, 2004, which has attracted over 30000 distinct users with more than 4000 simultaneously being online at some peak times. We discuss the key issues toward designing CoolStreaming in this paper, and present several interesting observations from these large-scale tests; in particular, the larger the overlay size, the better the streaming quality it can deliver.
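The partnership-based, data-driven exchange can be illustrated with a simplified scheduler. This is only a sketch: DONet's real algorithm also weighs partner bandwidth and playback deadlines, and the names here are hypothetical.

def schedule_requests(missing_chunks, partner_buffer_maps):
    # For each chunk this node still needs, pick a partner that advertises it.
    # Chunks with fewer potential suppliers are scheduled first, and requests
    # are spread over partners to balance load.
    suppliers = {
        chunk: [p for p, held in partner_buffer_maps.items() if chunk in held]
        for chunk in missing_chunks
    }
    load = {p: 0 for p in partner_buffer_maps}
    plan = {}
    for chunk in sorted(missing_chunks, key=lambda c: len(suppliers[c])):
        candidates = suppliers[chunk]
        if not candidates:
            continue  # no partner can supply it yet
        best = min(candidates, key=lambda p: load[p])
        plan[chunk] = best
        load[best] += 1
    return plan

# Hypothetical buffer maps: partner -> set of chunk ids it holds.
plan = schedule_requests({10, 11, 12}, {"A": {10, 11}, "B": {11, 12}})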
---
paper_title: Experimental comparison of peer-to-peer streaming overlays: An application perspective
paper_content:
We compare two representative streaming systems using mesh-based and multiple tree-based overlay routing through deployments on the PlanetLab wide-area experimentation platform. To the best of our knowledge, this is the first study to compare streaming overlay architectures in real Internet settings, considering not only intuitive aspects such as scalability and performance under churn, but also less studied factors such as bandwidth and latency heterogeneity of overlay participants. Overall, our study indicates that mesh-based systems are superior for nodes with high bandwidth capabilities and low round trip times, while multi-tree based systems currently cope better with stringent real time deadlines under heterogeneous conditions.
---
paper_title: Gossiping in distributed systems
paper_content:
Gossip-based algorithms were first introduced for reliably disseminating data in large-scale distributed systems. However, their simplicity, robustness, and flexibility make them attractive for more than just pure data dissemination alone. In particular, gossiping has been applied to data aggregation, overlay maintenance, and resource allocation. Gossiping applications more or less fit the same framework, with often subtle differences in algorithmic details determining divergent emergent behavior. This divergence is often difficult to understand, as formal models have yet to be developed that can capture the full design space of gossiping solutions. In this paper, we present a brief introduction to the field of gossiping in distributed systems, by providing a simple framework and using that framework to describe solutions for various application domains.
---
paper_title: Self-stabilizing and Byzantine-Tolerant Overlay Network ⋆
paper_content:
Network overlays have been the subject of intensive research in recent years. The paper presents an overlay structure, S-Fireflies, that is self-stabilizing and is robust against permanent Byzantine faults. The overlay structure has a logarithmic diameter with high probability, which matches the diameter of less robust overlays. The overlay can withstand high churn without affecting the ability of active and correct members to disseminate their messages. The construction uses a randomized technique to choose the neighbors of each member, while limiting the ability of Byzantine members to affect the randomization or to disturb the construction. The basic ideas generalize the original Fireflies construction that withstands Byzantine failures but was not self-stabilizing.
---
paper_title: How robust are gossip-based communication protocols?
paper_content:
Gossip-based communication protocols are often touted as being robust. Not surprisingly, such a claim relies on assumptions under which gossip protocols are supposed to operate. In this paper, we discuss and in some cases expose some of these assumptions and discuss how sensitive the robustness of gossip is to these assumptions. This analysis gives rise to a collection of new research challenges.
---
paper_title: Real-Time, Byzantine-Tolerant Information Dissemination in Unreliable and Untrustworthy Distributed Systems
paper_content:
In unreliable and untrustworthy systems, information dissemination may suffer network failures and attacks from Byzantine nodes which are controlled by traitors or adversaries, and can perform destructive behaviors. Typically, Byzantine nodes together or individually "swallow" messages, or fake disseminated information. In this paper, we present an authentication-free, gossip-based real-time information dissemination mechanism called RT-LASIRC, in which "healthy" nodes utilize Byzantine features to defend against Byzantine attacks. We show that RT-LASIRC is robust against blackhole and message-faking attacks. Our experimental studies verify RT-LASIRC's effectiveness.
---
paper_title: BAR gossip
paper_content:
We present the first peer-to-peer data streaming application that guarantees predictable throughput and low latency in the BAR (Byzantine/Altruistic/Rational) model, in which non-altruistic nodes can behave in ways that are self-serving (rational) or arbitrarily malicious (Byzantine). At the core of our solution is a BAR-tolerant version of gossip, a well-known technique for scalable and reliable data dissemination. BAR Gossip relies on verifiable pseudo-random partner selection to eliminate non-determinism that can be used to game the system while maintaining the robustness and rapid convergence of traditional gossip. A novel fair enough exchange primitive entices cooperation among selfish nodes on short timescales, avoiding the need for long-term node reputations. Our initial experience provides evidence for BAR Gossip's robustness. Our BAR-tolerant streaming application provides over 99% convergence for broadcast updates when all clients are selfish but not colluding, and over 95% convergence when up to 40% of clients collude while the rest follow the protocol. BAR Gossip also performs well when the client population consists of both selfish and Byzantine nodes, achieving over 93% convergence even when 20% of the nodes are Byzantine.
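The verifiable pseudo-random partner selection can be sketched as follows. Deriving the choice from a plain hash of public values is an assumption made for illustration; BAR Gossip itself derives the randomness from unforgeable signatures.

import hashlib

def select_partner(node_id, round_number, membership):
    # The exchange partner for a round is derived deterministically from public
    # values, so a peer cannot bias whom it gossips with and any other node can
    # recompute and audit the choice.
    others = sorted(m for m in membership if m != node_id)
    digest = hashlib.sha256(f"{node_id}:{round_number}".encode()).digest()
    return others[int.from_bytes(digest[:8], "big") % len(others)]

partner = select_partner("node-17", 42, [f"node-{i}" for i in range(100)])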
---
paper_title: Understanding P2P-TV Systems Through Real Measurements
paper_content:
In this paper, we consider two popular peer-to-peer TV (P2P-TV) systems: PPLive, one of the today most widely used P2P-TV systems, and Joost, a promising new generation application of which no previous measurement study has been considered. Besides the traditional measurements like the amount of generated traffic for signaling and data transmission, the novel contribution of the paper consists in investigating the content distribution mechanisms. In particular, we evaluate the characteristics of both data distribution and signaling process for the overlay network discovery and maintenance. By considering two or more clients in the same sub-network, we observe the capability of the system to exploit the locality of peers. We also explore how the system adapts to different network conditions. The methodology we develop allows also to identify periodic behavior of the application, highlighting bursts of both data and signaling traffic.
---
paper_title: Detecting Malicious Peers in Overlay Multicast Streaming
paper_content:
Overlay multicast streaming is built out of loosely coupled end-hosts (peers) that contribute resources to stream media to other peers. Peers, however, can be malicious. They may intentionally wish to disrupt the multicast service or cause confusions to other peers. We propose two new schemes to detect malicious peers in overlay multicast streaming. These schemes compute a level of trust for each peer in the network. Peers with a trust value below a threshold are considered to be malicious. Results from our simulations indicate that the proposed schemes can detect malicious peers with medium to high accuracy, depending on cheating patterns and malicious peer percentages.
---
| Title: Security and Privacy Issues in P2P Streaming Systems: A Survey
Section 1: Overview
Description 1: Provide a broad introduction to P2P streaming systems, their advantages, and associated security challenges.
Section 2: Examples
Description 2: Illustrate real-world examples of security attacks on P2P streaming systems to highlight the importance of addressing security issues.
Section 3: Security Considerations for P2P Streaming
Description 3: Discuss the specific security concerns that arise in P2P streaming systems, detailing the important aspects of securing such systems.
Section 4: Threat Model
Description 4: Define possible sources and targets of attacks in P2P streaming systems, outlining the main aspects of the threat landscape.
Section 5: System-level Security Goals in P2P Streaming
Description 5: Identify and describe the goals that a secure P2P streaming system must achieve to mitigate security threats.
Section 6: Data and Content Security Goals in P2P Streaming
Description 6: Discuss the security properties required to protect the integrity and confidentiality of data and content in P2P streaming.
Section 7: Protection Mechanisms
Description 7: Present mechanisms and strategies that can be employed to protect against security threats in P2P streaming systems.
Section 8: Common Attacks in P2P Streaming Systems
Description 8: Provide an overview of typical attacks targeting P2P streaming systems and the conditions under which these vulnerabilities are exploited.
Section 9: Security Practices
Description 9: Discuss existing security solutions in P2P streaming, their vulnerabilities, and how they can be improved to provide better protection.
Section 10: Tree-based Approaches
Description 10: Examine tree-based overlay topologies in P2P streaming and their associated security challenges and solutions.
Section 11: Mesh-based Approaches
Description 11: Investigate mesh-based overlay topologies, comparing their security strengths and weaknesses to other approaches.
Section 12: Gossiping and Byzantine Faults
Description 12: Analyze gossip protocols and their applicability in protecting against Byzantine faults in P2P streaming systems.
Section 13: Discussion
Description 13: Reflect on the trade-offs between security and performance in P2P streaming systems, and the balance required to maintain both effectively.
Section 14: Further Work
Description 14: Identify open issues and future research directions in the field of P2P streaming security.
Section 15: Conclusions
Description 15: Summarize the findings of the survey and provide final thoughts on the state and future of security and privacy in P2P streaming systems. |
An Overview of Free Space Optics with Quantum Cascade Lasers | 5 | ---
paper_title: Free-space Optical Data Link Using Quantum Cascade Laser
paper_content:
The paper presents construction of a broadband optical system devoted to a free space optical communication link. The main elements of the system are a quantum cascade laser and an HgCdTe heterostructural photodetector operating at the wavelength of 10 μm. The described analyses showed that the system is characterized by lower sensitivity to adverse meteorological conditions when compared with the systems operating in the near infrared waveband. 1. INTRODUCTION Free-Space Optics (FSO) products are deployed in a line-of-sight point-to-point configuration. Free space optical systems offer a flexible networking solution that delivers on the promise of broadband communications. Only FSO provides the essential combination of qualities required for modern networking. Since FSO transceivers can transmit and receive through windows, it is possible to mount FSO systems inside buildings, reducing the need to compete for roof space, simplifying wiring and cabling, and permitting the equipment to operate in a very favourable environment. The only essential for FSO is a line of sight between the two ends of the link. Free Space Optics is far more secure than RF technologies for several reasons: it requires no RF spectrum licensing, FSO laser beams cannot be detected with RF meters or spectrum analyzers, the laser beams generated by FSO systems are narrow and invisible, making them harder to find and even harder to intercept and crack, FSO laser transmissions are optical and travel along a line-of-sight path that cannot be intercepted easily, and data can be transmitted over an encrypted connection, adding to the degree of security available in Free Space Optics network transmissions. Another advantage of FSO, when compared to RF, is a significant reduction in end-to-end delay. Most FSO products are plug-and-play units independent of the transmitted protocol and data rate. Quantum cascade lasers are very suitable for such applications because their emission wavelength can be chosen in the so-called atmospheric window regions, i.e., around 3-5 μm and 8-14 μm. In addition, the fast internal lifetimes of the devices should allow for reasonable modulation frequencies of up to 5-10 GHz (1). In this paper, we analyze an FSO 10 μm system compared to shorter wavelengths, i.e., 0.8 μm and 1.5 μm. 2. ATMOSPHERIC INFLUENCES The atmosphere is a mixture of dry air and water vapour. Carrier-class Free Space Optics systems must be designed to accommodate heavy atmospheric attenuation, particularly by fog. Longer wavelengths are favoured in haze and light fog, but under the conditions of very low visibility this long-wavelength advantage does not apply. Additionally, the fact that 10 μm based systems are allowed to transmit hundreds of times more eye-safe power makes them very attractive for modern FSO links. The atmospheric transmission of optical signals is given by the Beer's law equation
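The abstract breaks off at the Beer's law equation; the form usually quoted in FSO link budgets (stated here as general background, not quoted from the paper) is

\tau(R) = \frac{P(R)}{P(0)} = e^{-\sigma R}

where \tau(R) is the atmospheric transmittance over a path of length R, P(0) and P(R) are the transmitted and received optical powers, and \sigma is the total extinction coefficient, which is dominated by scattering from fog and haze and is therefore strongly wavelength dependent.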
---
paper_title: Multimode instabilities in mid-infrared quantum cascade lasers
paper_content:
We present experimental evidence for mid-infrared frequency comb based on AlGaAs/GaAs quantum cascade laser. The comb-like emission spectra span over ~50 cm^-1 at a center wavenumber ~1080 cm^-1. The measured width of the comb lines is < 20 MHz. The instability sets in when the intracavity power is sufficiently large. In addition, intersubband transitions feature strong third-order optical nonlinearities chi^(3), due to the large matrix element between the upper and lower laser states, allowing parametric process due to four-wave mixing. The observed spectral behavior is similar in many ways to the Risken-Nummedal-Graham-Haken (RNGH) instability.
---
paper_title: Optically induced fast wavelength modulation in a quantum cascade laser
paper_content:
An optically induced fast wavelength shift is demonstrated in a standard middle infrared (MIR) quantum cascade laser (QCL) by illuminating the front facet with a femtosecond (fs) near infrared (NIR) laser, allowing fast optical frequency modulation (FM) for free space optical communication (FSOC) and FM spectroscopy. Using an etalon as a narrow band-pass wavelength filter, the wavelength modulation (WM) was clearly observed at frequencies up to 1.67 GHz. This approach can also be used for wavelength conversion and might be extended to QCLs operating in different wavelength regions.
---
paper_title: Electrical and optical characterisation of mid-IR GaAs/AlGaAs quantum cascade lasers
paper_content:
We report on the study of the temperature influence on optical and electrical performance of the mid-IR GaAs/AlGaAs QCLs. The temperature dependence of the threshold current, output power, slope efficiency, wall-plug efficiency, characteristic temperatures T0 and T1, and waveguide losses is investigated. In addition, the influence of different mesa dimensions on the QCL parameters is analyzed. Experimental results clearly indicate that among the examined geometries the 25 μm wide mesa devices exhibit the best operational parameters, i.e., the highest Tmax and T0, the highest wall-plug and slope efficiency, as well as a small temperature increase and the smallest thermal resistivity in the active area. The knowledge of the above parameters is crucial for designing GaAs/AlGaAs-based devices for high temperature operation.
---
paper_title: Type-II superlattices and quantum cascade lasers for MWIR and LWIR free-space communications
paper_content:
Free-space optical communications has recently been touted as a solution to the "last mile" bottleneck of high-speed data networks, providing highly secure, short to long range, and high-bandwidth connections. However, commercial near infrared systems experience atmospheric scattering losses and scintillation effects which can adversely affect a link's operating budget. By moving the operating wavelength into the mid- or long-wavelength infrared, enhanced link uptimes and increased operating range can be achieved due to less susceptibility to atmospheric effects. The combination of room-temperature, continuous-wave, high-power quantum cascade lasers and high operating temperature type-II superlattice photodetectors offers the benefits of mid- and long-wavelength infrared systems as well as practical operating conditions for next generation free-space communications systems.
---
| Title: An Overview of Free Space Optics with Quantum Cascade Lasers
Section 1: Introduction
Description 1: Introduce the significance of laser beam transmission in various weather conditions and the role of QC lasers in advancing FSO systems.
Section 2: Quantum Cascade Lasers
Description 2: Discuss the fundamental principles, characteristics, and configurations of quantum cascade lasers, including their spectral ranges, power levels, and modulation capabilities.
Section 3: Overview of FSO Systems
Description 3: Provide a detailed account of the development and implementation of FSO systems using QC lasers, focusing on laboratory setups and technological advancements.
Section 4: Applications and Experimental Results
Description 4: Outline specific examples of QC laser applications in optical communication systems, including experimental results from various tests and setups.
Section 5: Technological Advancements and Future Directions
Description 5: Summarize the progress made in the field of FSO with QC lasers and discuss ongoing research and potential future developments in the technology.
Section 6: Summary
Description 6: Conclude with an analysis of the advantages of QC lasers in FSO systems, the challenges faced, and the implications for future research. |
Application of new information technology on concrete: an overview | 12 | ---
paper_title: Applications of Computers and Information Technology
paper_content:
This chapter focuses on the recent developments in computers and information technology that influence the direction of research on the material science and technology of concrete. A major advance in concrete science and technology has already resulted from the development, application, and integration of computer-based simulation models, databases, and artificial intelligence decision-support systems. These advances in computers and communications have enormous implications in concrete science and technology in areas such as (1) prediction of concrete performance through fundamental computational models, resulting in more reliable service life designs of concrete; (2) collaboration in research and in problem-solving efforts in the concrete field through sharing of data, information, and knowledge among geographically dispersed colleagues, research organizations, manufacturing companies, and contractors; (3) development of computer-integrated knowledge systems that have the potential for representing virtually all scientific and engineering knowledge of concrete and making the knowledge readily available to those who need it. A database system is defined as a computerized recordkeeping system, while a database is a collection of computerized data files with persistent data.
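As a purely illustrative example of the computerized record keeping described above (the table name, columns, and values are invented, not taken from the chapter), a persistent store of mixture and performance data could look like this:

import sqlite3

con = sqlite3.connect(":memory:")  # a file path would make the records persistent
con.execute("""CREATE TABLE concrete_mix (
    mix_id INTEGER PRIMARY KEY,
    w_c_ratio REAL,                 -- water/cement ratio
    cement_kg_m3 REAL,              -- cement content
    curing_days INTEGER,
    compressive_strength_mpa REAL)""")
con.execute("INSERT INTO concrete_mix VALUES (1, 0.45, 350, 28, 42.5)")
strength = con.execute(
    "SELECT compressive_strength_mpa FROM concrete_mix WHERE mix_id = 1"
).fetchone()[0]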
---
paper_title: Computing and Information Technology (IT) Research in Civil Engineering—Self-Fulfilling or Industry Transforming?
paper_content:
Numerous exciting research projects on various aspects of computing and information technology (IT) in civil engineering have been undertaken over the last twenty years. Note that Computing and IT are used broadly to cover both software and hardware-related projects. The outcomes of many of these projects have been widely published in the Journal of Computing in Civil Engineering. Many of these projects have resulted in proof-of-concept prototype IT systems that tend to have very little or no use beyond the end of the project. In light of this, there are some critical questions that must be asked with regard to both the quality of the research undertaken and the publications associated with them: • How many of these computing and IT research projects have constituted stepping stones for other researchers in the domain? • How many of the resulting prototype systems have evolved into successful commercial systems over time? • How many of the outcomes of these projects have been adopted by the industry, especially at the field/jobsite level? • How many of these have moved computing and IT in civil engineering forward in a meaningful way? These questions challenge the long-term utility of proof-of-concept prototype systems, the underlying research and the associated publications. It may be rightly argued that research prototypes do not need to address all the above issues; but it could also be argued that those that do not address any of these questions are of dubious value. This may be a harsh conclusion, but it is one that funding agencies are increasingly being required to explore. The first question relates to the scientific merit of the research undertaken. Surely, if the computing/IT model, prototype or system developed is sound and underpinned by excellent and rigorous research, then other researchers will build on it without having to reinvent the wheel. We recognize that there are legal and other considerations that limit the extent to which research results are exploited by other researchers. However, there are very few instances of researchers being denied access to results and systems on those bases. The lack of use of previous research may be because the prototypes and other outputs are not as robust as we claim in our publications, and often keel over under close examination. It could also be that we are not as confident about
---
paper_title: European Research on Intelligent Computing in Civil Engineering
paper_content:
Over the past decade there have been strengthening links between the ASCE research community in civil engineering information technology (IT) and the European Group for Intelligent Computing in Engineering (EG-ICE). The links have resulted in both communities inviting personnel to their workshops and, as one would expect, there is now a growing trend toward collaborative research. This has to be welcomed and can only be beneficial. Also, as a research community working in IT with its inherent usage of networking, we should be at the forefront of international collaboration. As a part of this ongoing relationship, ASCE invited EG-ICE to submit a series of papers for a special issue. A total of 10
---
paper_title: Modelling creep and shrinkage of concrete by means of effective stresses
paper_content:
A novel model of mechanical performance of concrete at early ages and beyond, and in particular, evolution of its strength properties (aging) and deformations (shrinkage and creep strains), described in terms of effective stress is briefly presented. This model reproduces such phenomena known from experiments like drying creep or some additional strains, as compared to pure shrinkage, which appear during autogenous deformations of a maturing, sealed concrete sample. Creep is described by means of the modified microprestress-solidification theory with some modifications to take into account the effects of temperature and relative humidity on concrete aging. Shrinkage strains are modelled by using effective stresses giving a good agreement with experimental data also for low values of relative humidity. Results of four numerical examples based on the real experimental tests are solved to validate the model. They demonstrate its possibilities to analyze both autogenous deformations in maturing concrete, and creep and shrinkage phenomena, including drying creep, in concrete elements of different age, sealed or drying, exposed to external load or without any load.
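For orientation, one Bishop-type effective stress frequently used in models of partially saturated, maturing concrete is given below; this is a common form quoted as an illustration, not necessarily the exact relation adopted in the paper, and the symbols are standard rather than taken from the abstract.

\boldsymbol{\sigma}_{\mathrm{eff}} = \boldsymbol{\sigma} + \alpha\, S_w\, p_c\, \mathbf{I}

Here \alpha is the Biot coefficient, S_w the liquid water saturation, p_c the capillary pressure and \mathbf{I} the identity tensor; in such models the shrinkage strain then follows from the change of this effective stress as the pore relative humidity drops.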
---
paper_title: Principles underlying the steam curing of concrete at atmospheric pressure
paper_content:
Summary: This paper summarizes the conclusions drawn from experimental work carried out at the Cement and Concrete Association Research Station and elsewhere concerning the principles underlying steam curing at atmospheric pressure. It is shown that if the temperature gradient of the concrete after the time of mixing does not exceed a certain value, the concrete gains strength during and after treatment in relation to its “maturity” (reckoned in temperature-time) approximately in accordance with the same law as holds for normally cured concrete. Concrete which is raised in temperature more rapidly is shown not to obey this law, and to be adversely affected in strength at a later age. The use of the too rapid early temperature rises often employed in practice introduces various opposing variables which suggest optimum temperatures, delayed treatments and other arrangements of the curing cycle; such expediencies are unnecessary, however, if a slow initial temperature gradient is used. The paper contains tab...
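The temperature-time "maturity" invoked here is the classical Nurse-Saul measure, usually written as

M(t) = \sum (T - T_0)\,\Delta t

where T is the mean concrete temperature over each interval \Delta t and T_0 is a datum temperature below which no strength gain is assumed (commonly taken around -10 °C); concretes cured under different temperature histories but reaching equal M are then expected to attain roughly equal strength.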
---
paper_title: Employing Inductive Databases in Concrete Applications
paper_content:
In this paper we present the application of the inductive database approach to two practical analytical case studies: Web usage mining in Web logs and financial data. As far as concerns the Web domain, we have considered the enriched XML Web logs, that we call conceptual logs, produced by specific Web applications. These ones have been built by using a conceptual model, namely WebML, and its accompanying CASE tool, WebRatio. The Web conceptual logs integrate the usual information about user requests with meta-data concerning the Web site structure. As far as concerns the analysis of financial data, we have considered the trade stock exchange index Dow Jones and studied its component stocks from 1997 to 2002 using the so-called technical analysis. Technical analysis consists in the identification of the relevant (graphical) patterns that occur in the plot of evolution of a stock quote as time proceeds, often adopting different time granularities. On the plots the correlations between distinctive variables of the stocks quote are pointed out, such as the quote trend, the percentage variation and the volume of the stocks exchanged. In particular we adopted candle-sticks, a figurative pattern representing in a condensed diagram the evolution of the stock quotes in a daily stock exchange. In technical analysis, candle-sticks have been frequently used by practitioners to predict the trend of the stocks quotes in the market. We then apply a data mining language, namely MINE RULE, to these data in order to identify different types of patterns. As far as Web data is concerned, recurrent navigation paths, page contents most frequently visited, and anomalies such as intrusion attempts or a harmful usage of the resources are among the most important patterns. As far as concerns the financial domain, we searched for the sets of stocks which frequently exhibited a positive daily exchange in the same days, so as to constitute a collection of quotes for the constitution of the customers' portfolio, or the candle-sticks frequently associated to certain stocks, or finally the most similar stocks, in the sense that they mostly presented in the same dates the same typology of candle-stick, that is the same behaviour in time. The purpose of this paper is to show that the exploitation of the nuggets of information embedded in the data and of the specialised mining constructs provided by the query languages, enables the rapid customization of the mining procedures following to the users' need. Given our experience, we also claim that the use of queries in advanced languages, as opposed to ad-hoc heuristics, eases the specification and the discovery of a large spectrum of patterns.
---
paper_title: Principles of Distributed Database Systems
paper_content:
This third edition of a classic textbook can be used to teach at the senior undergraduate and graduate levels. The material concentrates on fundamental theories as well as techniques and algorithms. The advent of the Internet and the World Wide Web, and, more recently, the emergence of cloud computing and streaming data applications, has forced a renewal of interest in distributed and parallel data management, while, at the same time, requiring a rethinking of some of the traditional techniques. This book covers the breadth and depth of this re-emerging field. The coverage consists of two parts. The first part discusses the fundamental principles of distributed data management and includes distribution design, data integration, distributed query processing and optimization, distributed transaction management, and replication. The second part focuses on more advanced topics and includes discussion of parallel database systems, distributed object management, peer-to-peer data management, web data management, data stream systems, and cloud computing. New in this edition: new chapters covering database replication, database integration, multidatabase query processing, peer-to-peer data management, and web data management; coverage of emerging topics such as data streams and cloud computing; extensive revisions and updates based on years of class testing and feedback. Ancillary teaching materials are available.
---
paper_title: Prediction of Concrete Strength Using Neural-Expert System
paper_content:
Over the years, many methods have been developed to predict the concrete strength. In recent years, artificial neural networks (ANNs) have been applied to many civil engineering problems with some degree of success. In the present paper, ANN is used as an attempt to obtain more accurate concrete strength prediction based on parameters like concrete mix design, size and shape of specimen, curing technique and period, environmental conditions, etc. A total of 864 concrete specimens were cast for compressive strength measurement and verification through the ANN model. The back propagation-learning algorithm is employed to train the network for extracting knowledge from training examples. The predicted strengths found by employing ANN are compared with the actual values. The results indicate that ANN is a useful technique for predicting the concrete strength. Further, an effort to build an expert system for the problem is described in this paper. To overcome the bottleneck of intricate knowledge acquisition, an expert system is used as a mechanism to transfer engineering experience into usable knowledge through rule-based knowledge representation techniques.
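To make the back-propagation workflow described above more tangible, the following is a minimal sketch of a one-hidden-layer feed-forward network trained by gradient descent to map mix and curing parameters to compressive strength. It uses synthetic data and an assumed strength relation purely for illustration; the input features, constants and network size are not taken from the paper.

```python
# Minimal sketch of a feed-forward ANN trained by back-propagation to map
# mix/curing parameters to compressive strength. Data are synthetic; the
# input features and the target relation are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "specimens": [water/cement ratio, cement (kg/m3), age (days)]
n = 500
X = np.column_stack([
    rng.uniform(0.3, 0.6, n),      # w/c ratio
    rng.uniform(300, 450, n),      # cement content
    rng.uniform(3, 90, n),         # curing age
])
# Assumed nonlinear strength relation plus noise (MPa), purely for demonstration
y = 120 * np.exp(-2.5 * X[:, 0]) * (X[:, 1] / 400) * np.log1p(X[:, 2]) / np.log1p(28)
y = y + rng.normal(0, 1.0, n)

# Standardise inputs and target
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

# One hidden layer with tanh units, trained by plain gradient descent
h = 8
W1 = rng.normal(0, 0.5, (X.shape[1], h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);               b2 = 0.0
lr = 0.05
for epoch in range(2000):
    A = np.tanh(Xs @ W1 + b1)          # hidden activations
    pred = A @ W2 + b2                 # network output
    err = pred - ys
    # Back-propagate the squared-error gradient
    gW2 = A.T @ err / n;  gb2 = err.mean()
    dA = np.outer(err, W2) * (1 - A**2)
    gW1 = Xs.T @ dA / n;  gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred_mpa = (np.tanh(Xs @ W1 + b1) @ W2 + b2) * y.std() + y.mean()
print("RMSE on training data (MPa):", np.sqrt(np.mean((pred_mpa - y) ** 2)))
```

In practice the network size, learning rate and stopping criterion would be tuned on a held-out validation set rather than fixed as here.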
---
paper_title: An expert system for mix design of high performance concrete
paper_content:
This paper describes a prototype expert system called HPCMIX that provides proportion of trial mix of High Performance Concrete (HPC) and recommendations on mix adjustment. The knowledge was acquired from various textual sources and human experts. The system was developed using hybrid knowledge representation technique. It is capable of selecting proportions of mixing water, cement, supplementary cementitious materials, aggregates and superplasticizer, considering the effects of air content as well as water contributed by superplasticizer and moisture conditions of aggregates. Similar to most expert systems, this system has explanation facilities, can be incrementally expanded, and has an easy to understand knowledge base. The system was tested using a sample project. The system's selection of mix proportions and recommendations regarding mix adjustment were compared favourably with those of experts. The system is user-friendly and can be used as an educational tool.
---
paper_title: Quality Control Expert System of Pump Concrete Based on Technology of Data Mining
paper_content:
This paper introduces the whole process of applying data mining technology to the design of a pump concrete quality control expert system. Owing to problems in current construction technologies and environments, the application of data mining has become imperative. First, data mining techniques are analyzed; then statistical analysis and decision trees are applied successfully in the system according to the actual requirements. A data mining model and an overall framework founded on quality control are also provided. The system was successfully applied in actual production.
---
paper_title: Appraisal of long-term effects of fly ash and silica fume on compressive strength of concrete by neural networks
paper_content:
Abstract This study focuses on studying the effects of fly ash and silica fume replacement content on the strength of concrete cured for a long-term period of time by neural networks (NNs). Applicability of NNs to evaluate the effects of FA and SF for a long period of time is investigated. The investigations covered concrete mixes at different water cementitious materials ratio, which contained low and high volumes of FA, and with or without the additional small amount of SF. 24 different mixes with 144 different samples were gathered from the literature for this purpose. These samples comprise concretes that were cured for 3, 7, 28, 56 and 180 days. A NN model is constructed, trained and tested using these data. The data used in the NN model are arranged in a format of eight input parameters that cover the fly ash replacement ratio (FA), silica fume replacement ratio (SF), total cementitious material (TCM), fine aggregate (ssa), coarse aggregate (ca), water content (W), high rate water reducing agent (HRWRA) and age of samples (AS) and an output parameter which is compressive strength of concrete (fc). A NN program was devised in MATLAB and the NN model was constructed in this program. The results showed that NNs have strong potential as a feasible tool for evaluation of the effect of cementitious material on the compressive strength of concrete. It was found that FA content contributed little at early ages but much at later ages to the strength of concrete. It can also be concluded that the enhancement effect of low content of SF on compressive strength was not significant.
---
paper_title: Neural networks surrogate models for simulating payment risk in pavement construction
paper_content:
A common provision in quality control/quality assurance (QC/QA) highway pavement construction contracts is the adjustment of the pay that a contractor receives on the basis of the quality of the construction. It is important to both the contractor and the contracting agency to examine the amount of pay that the contractor can expect to receive for a given level of construction quality. Previous studies have shown that computer simulations can provide a better, more detailed examination of the pay schedule than is possible by simply determining the expected pay. In particular, the simulation process can provide an indication of the variability of pay at various quality levels and can identify the factors most responsible for pay adjustments. Stochastic simulation models are very useful in estimating and analyzing payment risk in highway pavement construction. However, such models are constrained by their computational requirements, and it is often necessary to couple them with simpler models to speed up the process of decision-making. This paper investigates the use of Neural Networks (NN) to build surrogate models for a pavement construction payment-risk prediction model. The results show that although the average error associated with the NN predictions is acceptable, in some particular cases the errors may be unacceptably high.
---
paper_title: Neural network prediction of concrete degradation by sulphuric acid attack
paper_content:
Microbiologically induced corrosion is a leading cause of the deterioration of wastewater collection, transmission and treatment infrastructure around the world. This paper examines the feasibility of using artificial neural networks (ANNs) to predict the compressive strength of concrete and its degradation under exposure to sulphuric acid of various concentrations. A database incorporating 78 concrete mixtures performed by the authors was developed to train and test the ANN models. Data were arranged in a patterned format in such a manner that each pattern contains input variables (concrete mixture parameters) and the corresponding output vector (weight loss of concrete by H2SO4 attack and compressive strength at different ages). Results show that the ANN model I successfully predicted the weight loss of concrete specimens subjected to sulphuric acid attack, not only for mixtures used in the training process, but also for new mixtures unfamiliar to the ANN model designed within the practical range of the...
---
paper_title: Using neural networks to predict workability of concrete incorporating metakaolin and fly ash
paper_content:
This paper details the development of neural network models that provide effective predictive capability in respect of the workability of concrete incorporating metakaolin (MK) and fly ash (FA). The predictions produced reflect the effect of graduated variations in pozzolanic replacement in Portland cement (PC) of up to 15% MK and 40% FA. The results show that the models are reliable and accurate and illustrate how neural networks can be used to beneficially predict the workability parameters of slump, compacting factor and Vebe time across a wide range of PC-FA-MK compositions.
---
paper_title: Prediction of elastic modulus of normal and high strength concrete by artificial neural networks
paper_content:
Abstract In the present paper, application of artificial neural networks (ANNs) to predict elastic modulus of both normal and high strength concrete is investigated. The paper aims to show a possible applicability of ANN to predict the elastic modulus of both high and normal strength concrete. An ANN model is built, trained and tested using the available test data gathered from the literature. The ANN model is found to predict elastic modulus of concrete well within the ranges of the input parameters considered. The average value of the experimental elastic modulus to the predicted elastic modulus ratio is found to be 1.00. The elastic modulus results predicted by ANN are also compared to those obtained using empirical results of the buildings codes and various models. These comparisons show that ANNs have strong potential as a feasible tool for predicting elastic modulus of both normal and high strength within the range of input parameters considered.
---
paper_title: Predicting the compressive strength of ground granulated blast furnace slag concrete using artificial neural network
paper_content:
In this study, an artificial neural networks study was carried out to predict the compressive strength of ground granulated blast furnace slag concrete. A data set from laboratory work, in which a total of 45 concretes were produced, was utilized in the ANNs study. The concrete mixture parameters were three different water-cement ratios (0.3, 0.4, and 0.5), three different cement dosages (350, 400, and 450 kg/m3) and four partial slag replacement ratios (20%, 40%, 60%, and 80%). Compressive strengths of moist cured specimens (22 ± 2 °C) were measured at 3, 7, 28, 90, and 360 days. An ANN model is constructed, trained and tested using these data. The data used in the ANN model are arranged in a format of six input parameters that cover the cement, ground granulated blast furnace slag, water, hyperplasticizer, aggregate and age of samples, and an output parameter which is compressive strength of concrete. The results showed that ANN can be an alternative approach for predicting the compressive strength of ground granulated blast furnace slag concrete using concrete ingredients as input parameters.
---
paper_title: Analysis of durability of high performance concrete using artificial neural networks
paper_content:
Abstract This study aims to determine the influence of the content of water and cement, water–binder ratio, and the replacement of fly ash and silica fume on the durability of high performance concrete (HPC) by using artificial neural networks (ANNs). To achieve this, an ANNs model is developed to predict the durability of high performance concrete which is expressed in terms of chloride ions permeability in accordance with ASTM C1202-97 or AASHTO T277. The model is developed, trained and tested by using 86 data sets from experiments as well as previous researches. To verify the model, regression equations are carried out and compared with the trained neural network. The results indicate that the developed model is reliable and accurate. Based on the simulating durability model built using trained neural networks, the optimum cement content for designing HPC in terms of durability is in the range of 450–500 kg/m3. The results also revealed that the durability of concrete expressed in terms of total charge passed over a 6-h period can be significantly improved by using at least 20% fly ash to replace cement. Furthermore, it can be concluded that increasing silica fume results in reducing the chloride ions penetrability to a higher degree than fly ash. This study also illustrates how ANNs can be used to beneficially predict durability in terms of chloride ions permeability across a wide range of mix proportion parameters of HPC.
---
paper_title: Modeling of hydration reactions using neural networks to predict the average properties of cement paste
paper_content:
This paper presents a hydration model that describes the evolution of cement paste microstructure as a function of the changing composition of the hydration products. The hydration model extends an earlier version by considering the reduction in the hydration rate that occurs due to the reduction of free water and the reduction of the interfacial area of contact between the free water and the hydration products. The BP Neural Network method is used to determine the coefficients of the model. Using the proposed model, this paper predicts the following properties of hardening cement paste: the degree of hydration, the rate of heat evolution, the relative humidity and the total porosity. The agreement between simulation and experimental results proves that the new model is quite effective and potentially useful as a component within larger-scale models designed to predict the performance of concrete structures.
---
paper_title: A new way of prediction elastic modulus of normal and high strength concrete : fuzzy logic
paper_content:
In this paper, the theory of fuzzy sets, especially fuzzy modeling, is discussed to determine the elastic modulus of both normal and high-strength concrete. A fuzzy logic algorithm has been devised for estimating elastic modulus from compressive strength of concrete. The main advantage of fuzzy models is their ability to describe knowledge in a descriptive human-like manner in the form of simple rules using linguistic variables only. On the other hand, many parameters will be affected and elastic modulus can be taken into account easily by using the proposed fuzzy model.
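As a rough illustration of the fuzzy modelling idea, the sketch below implements a Mamdani-style inference step that maps compressive strength to an elastic-modulus estimate through linguistic classes. The membership functions, rule base and numeric ranges are assumptions made for demonstration, not the rules devised in the paper.

```python
# Minimal Mamdani-style fuzzy inference sketch: estimate elastic modulus (GPa)
# from compressive strength (MPa). Membership functions and rules are
# illustrative assumptions, not the rule base used in the paper.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def estimate_modulus(fc):
    # Fuzzify the input compressive strength into low / medium / high
    mu_in = {
        "low":    tri(fc, 0, 20, 40),
        "medium": tri(fc, 20, 50, 80),
        "high":   tri(fc, 50, 90, 130),
    }
    # Output fuzzy sets for elastic modulus (GPa)
    e = np.linspace(10, 60, 501)
    out_sets = {
        "low":    tri(e, 10, 22, 34),
        "medium": tri(e, 25, 35, 45),
        "high":   tri(e, 38, 50, 60),
    }
    # One rule per class: IF strength is X THEN modulus is X (min implication)
    agg = np.zeros_like(e)
    for label, w in mu_in.items():
        agg = np.maximum(agg, np.minimum(w, out_sets[label]))
    # Centroid defuzzification
    return np.sum(e * agg) / np.sum(agg) if agg.sum() > 0 else np.nan

for fc in (25.0, 45.0, 80.0):
    print(f"fc = {fc:5.1f} MPa -> E ~ {estimate_modulus(fc):.1f} GPa")
```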
---
paper_title: Neural Networks in Civil Engineering. I: Principles and Understanding
paper_content:
This is the first of two papers providing a discourse on the understanding, usage, and potential for application of artificial neural networks within civil engineering. The present paper develops an understanding of how these devices operate and explains the main issues concerning their use. A simple structural‐analysis problem is solved using the most popular form of neural‐networking system—a feedforward network trained using a supervised scheme. A graphical interpretation of the way in which neural networks operate is first presented. This is followed by discussions of the primary concepts and issues concerning their use, including factors affecting their ability to learn and generalize, the selection of an appropriate set of training patterns, theoretical limitations of alternative network configurations, and network validation. The second paper demonstrates the ways in which different types of civil engineering problems can be tackled using neural networks. The objective of the two papers is to ensur...
---
paper_title: Prediction of compressive strength of SCC and HPC with high volume fly ash using ANN
paper_content:
An artificial neural network (ANN) is presented to predict a 28-day compressive strength of a normal and high strength self compacting concrete (SCC) and high performance concrete (HPC) with high volume fly ash. The ANN is trained by the data available in literature on normal volume fly ash because data on SCC with high volume fly ash is not available in sufficient quantity. Further, while predicting the strength of HPC the same data meant for SCC has been used to train in order to economise on computational effort. The compressive strengths of SCC and HPC as well as slump flow of SCC estimated by the proposed neural network are validated by experimental results.
---
paper_title: A Concrete Mix Proportion Design Algorithm Based on Artificial Neural Networks
paper_content:
Abstract The concepts of five parameters of nominal water–cement ratio, equivalent water–cement ratio, average paste thickness, fly ash–binder ratio, grain volume fraction of fine aggregates and Modified Tourfar's Model were introduced. It was verified that the five parameters and the mix proportion of concrete can be transformed each other when Modified Tourfar's Model is applied. The behaviors (strength, slump, et al.) of concrete primarily determined by the mix proportion of concrete now depend on the five parameters. The prediction models of strength and slump of concrete were built based on artificial neural networks (ANNs). The calculation models of average paste thickness and equivalent water–cement ratio can be obtained by the reversal deduction of the two prediction models, respectively. A concrete mix proportion design algorithm based on a way from aggregates to paste, a least paste content, Modified Tourfar's Model and ANNs was proposed. The proposed concrete mix proportion design algorithm is expected to reduce the number of trial and error, save cost, laborers and time. The concrete designed by the proposed algorithm is expected to have lower cement and water contents, higher durability, better economical and ecological effects.
---
paper_title: Prediction of compressive strength of concretes containing metakaolin and silica fume by artificial neural networks
paper_content:
Neural networks have recently been widely used to model some of the human activities in many areas of civil engineering applications. In the present paper, artificial neural network (ANN) models for predicting the compressive strength of concretes containing metakaolin and silica fume have been developed at the age of 1, 3, 7, 28, 56, 90 and 180 days. For the purpose of building these models, training and testing used the available experimental results for 195 specimens produced with 33 different mixture proportions gathered from the technical literature. The data used in the multilayer feed-forward neural network models are arranged in a format of eight input parameters that cover the age of specimen, cement, metakaolin (MK), silica fume (SF), water, sand, aggregate and superplasticizer. Based on these input parameters, the multilayer feed-forward neural network models predict the compressive strength values of concretes containing metakaolin and silica fume. The training and testing results of the neural network models have shown that neural networks have strong potential for predicting 1, 3, 7, 28, 56, 90 and 180 days compressive strength values of concretes containing metakaolin and silica fume.
---
paper_title: Prediction of Cement Degree of Hydration Using Artificial Neural Networks
paper_content:
This paper presents the development of a computer model for the prediction of cement degree of hydration. The model is established by incorporating large experimental data sets using the neural networks (NNs) technology. NNs are computational paradigms, primarily based on the structural formation and the knowledge processing faculties of the human brain. Initially, the degree of hydration was estimated in the laboratory by preparing portland cement paste with the water-cement ratio ranging from 0.2 to 0.6, curing times from 0.25 days to 90 days, and subjected to curing temperatures from 3 deg C (37 deg F) to 43 deg C (109 deg F). A total of 390 specimens were tested, producing 195 data points divided into five sets. The networks were trained using data in Sets 1, 2, and 3. Once the NNs were fully trained, verification of the performance was carried out using Sets 4 and 5 of the experimental data. Results indicate that the NNs are very efficient in predicting concrete degree of hydration with great accuracy using minimal processing of data.
---
paper_title: Exploring Concrete Slump Model Using Artificial Neural Networks
paper_content:
Fly ash and slag concrete (FSC) is a highly complex material whose behavior is difficult to model. This paper describes a method of modeling slump of FSC using artificial neural networks. The slump is a function of the content of all concrete ingredients, including cement, fly ash, blast furnace slag, water, superplasticizer, and coarse and fine aggregate. The model built was examined with response trace plots to explore the slump behavior of FSC. This study led to the conclusion that response trace plots can be used to explore the complex nonlinear relationship between concrete components and concrete slump.
---
paper_title: A new approach to determination of compressive strength of fly ash concrete using fuzzy logic
paper_content:
In this study, the effect of fly ash (FA) content on the compressive strength of concrete, depending on water/cement ratio and concrete age, was investigated using a fuzzy logic (FL) approach. In the FL modelling approach, compressive strength values of various concrete samples produced by replacing cement with class F FA at ratios of 0 (control), 10%, 20% and 30% were used. The water/binder ratio of these concrete samples was varied between 0.27 and 0.60 over six different values. Experimental compressive strength values of the concrete specimens at 3, 7, 28, 90, 180 and 365 days were compared with FL values obtained using the fuzzy sets. The optimum FA content and water/binder ratio for the best compressive strength of early-age and hardened concrete can be obtained with FL.
---
paper_title: Prediction of compressive strength of concrete containing fly ash using artificial neural networks and fuzzy logic
paper_content:
Abstract In this study, artificial neural networks and fuzzy logic models for predicting the 7, 28 and 90 days compressive strength of concretes containing high-lime and low-lime fly ashes have been developed. For the purpose of constructing these models, 52 different mixes with 180 specimens were gathered from the literature. The data used in the artificial neural networks and fuzzy logic models are arranged in a format of nine input parameters that cover the day, Portland cement, water, sand, crushed stone I (4–8 mm), crushed stone II (8–16 mm), high range water reducing agent replacement ratio, fly ash replacement ratio and CaO, and an output parameter which is compressive strength of concrete. The training and testing results of the models have shown that artificial neural networks and fuzzy logic systems have strong potential for predicting 7, 28 and 90 days compressive strength of concretes containing fly ash.
---
paper_title: Prediction of mechanical properties of recycled aggregate concretes containing silica fume using artificial neural networks and fuzzy logic
paper_content:
Artificial neural networks and fuzzy logic have been widely used in many areas of civil engineering applications. In this study, artificial neural network and fuzzy logic models for predicting compressive and splitting tensile strengths of recycled aggregate concretes containing silica fume have been developed at the age of 3, 7, 14, 28, 56 and 90 days. For the purpose of constructing these models, experimental results for 210 specimens produced with 35 different mixture proportions were gathered from the literature. The data used in the artificial neural networks and fuzzy logic models are arranged in a format of eight input parameters that cover the age of specimen, cement, water, sand, aggregate, recycled aggregate, superplasticizer and silica fume. Based on these inputs, the artificial neural network and fuzzy logic models predict the compressive and splitting tensile strength values of recycled aggregate concretes containing silica fume. The training and testing results of the models have shown that artificial neural networks and fuzzy logic systems have strong potential for predicting 3, 7, 14, 28, 56 and 90 days compressive and splitting tensile strength values of recycled aggregate concretes containing silica fume.
---
paper_title: Contractor prequalification model using fuzzy sets
paper_content:
Abstract Contractor prequalification makes it possible to admit for tendering only competent contractors. The decisions involved demand taking into consideration many criteria, including, among others, the experience and financial standing of the candidates. These are often difficult to quantify. The objectives of the construction owner in a given project are also meaningful. All these factors cause difficulties in working out a mathematical prequalification model. In the paper a model based on fuzzy sets theory is proposed. It takes into consideration different criteria, objectives and evaluations of numerous decision-makers. To illustrate the model's operation, a simple numerical example is presented.
---
paper_title: Fuzzy probability analysis of the fatigue resistance of steel structural members under bending
paper_content:
Abstract The paper is aimed at the fuzzy probabilistic analysis of fatigue resistance due to uncertainty of input parameters. The fatigue resistance of the steel member is evaluated by linear fracture mechanics as the number of cycles leading to the propagation of initial cracks into a critical crack resulting in brittle fracture. When the histogram of stress range is known, the fatigue resistance is a random variable. In the event that the histogram is unknown or was acquired from a small number of experiments, another source of uncertainty is of an epistemic origin. Two basic approaches, which make provision for uncertainty of input histograms of stress range, are illustrated in the paper. Uncertainty of histograms of stress range is taken into account by the variability of equivalent stress range in the first stochastic approach. Input histograms are considered as members of a fuzzy set in the second approach.
---
paper_title: GENETIC ALGORITHM IN MIX PROPORTIONING OF HIGH-PERFORMANCE CONCRETE
paper_content:
High-performance concrete is defined as concrete that meets special combinations of performance and uniformity requirements that cannot always be achieved routinely using conventional constituents and normal mixing, placing, and curing practices. Ever since the term high-performance concrete was introduced into the industry, it has been widely used in large-scale concrete construction that demands high strength, high flowability, and high durability. To obtain such performance, which cannot be obtained from conventional concrete by current methods, a large number of trial mixes are required to select the desired combination of materials that meets the special performance requirements. Therefore, in this paper, a genetic algorithm, a global optimization technique modeled on the biological evolutionary processes of natural selection and natural genetics that can find a near-optimal solution to a problem that may have many solutions, is used to propose a new design method for high-performance concrete mixtures that reduces the number of trial mixtures needed to obtain the desired properties in field tests. Experimental and analytic investigations were carried out to develop the design method for high-performance concrete mixtures and to verify the proposed mix design.
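The following toy sketch illustrates the genetic-algorithm search loop (selection, crossover, mutation) applied to mix proportioning. The surrogate strength formula, variable bounds and penalty weights are illustrative assumptions; in the actual method the fitness would come from calibrated mix models and the specified performance requirements.

```python
# Toy genetic-algorithm sketch for mix proportioning: search water/binder ratio
# and cement content that reach a target strength with the least cement.
# The surrogate strength formula and all bounds/penalties are assumptions
# made for illustration, not the models used in the paper.
import numpy as np

rng = np.random.default_rng(1)
TARGET = 60.0                         # required strength, MPa
LO = np.array([0.25, 350.0])          # [w/b ratio, cement kg/m3] lower bounds
HI = np.array([0.45, 550.0])          # upper bounds

def strength(ind):
    wb, cement = ind                  # assumed Abrams-like surrogate relation
    return 140.0 * np.exp(-3.0 * wb) * (cement / 500.0) ** 0.3

def fitness(ind):
    # Penalise missing the target strength, then prefer lower cement content
    return -abs(strength(ind) - TARGET) * 10.0 - ind[1] / 100.0

pop = LO + rng.random((60, 2)) * (HI - LO)          # initial population
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    # Tournament selection
    idx = rng.integers(0, len(pop), (len(pop), 2))
    parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Arithmetic crossover between consecutive parents
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the feasible box
    children += rng.normal(0, 0.01, children.shape) * (HI - LO)
    pop = np.clip(children, LO, HI)

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"best mix: w/b = {best[0]:.3f}, cement = {best[1]:.0f} kg/m3, "
      f"predicted strength = {strength(best):.1f} MPa")
```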
---
paper_title: Computer modeling of the replacement of "coarse" cement particles by inert fillers in low w/c ratio concretes: Hydration and strength
paper_content:
Abstract In concretes with water-to-cement (w/c) ratios below about 0.38, a portion of the cement particles will always remain unhydrated due to space limitations within the material. Thus, in many of the high-performance concretes currently being produced, cement clinker is in effect being wasted. This communication examines the possibility of replacing the coarser fraction of a cement powder by an inert filler, to conserve cement without sacrificing material performance. Using the NIST CEMHYD3D cement hydration model, it is demonstrated that for “initial” w/c ratios of 0.25 and 0.30, a portion of the coarser cement particles can be replaced by inert fillers with little projected loss in compressive strength development. Of course, the optimal replacement fraction depends on the initial w/c ratio, suggesting that blended portland/inert filler cements need to be produced with the end concrete mixture proportions in mind. This further implies that a cement/inert mixture of specific proportions will only perform optimally in a limited range of concrete mixture proportions.
---
paper_title: Influence of silica fume on diffusivity in cement-based materials. II. Multi-scale modeling of concrete diffusivity
paper_content:
Based on a set of multi-scale computer models, an equation is developed for predicting the chloride ion diffusivity of concrete as a function of water-to-cement (w/c) ratio, silica fume addition, degree of hydration and aggregate volume fraction. Silica fume influences concrete diffusivity in several ways: (1) densifying the microstructure of the interfacial transition zone (ITZ) regions, (2) reducing the overall (bulk and ITZ) capillary porosity for a fixed degree of cement hydration, and (3) producing a pozzolanic C-S-H gel with a relative diffusivity about 25 times less than that of the C-S-H gel produced from conventional cement hydration. According to the equation and in agreement with results from the literature, silica fume is most efficient for reducing diffusivity in lower w/c ratio concretes (w/c <0.4). In these systems, for moderate additions of silica fume (e.g., 10%), the reduction in concrete diffusivity may be a factor of 15 or more, which may substantially increase the service life of steel-reinforced concrete exposed in a severe corrosion environment.
---
paper_title: An argument for using coarse cements in high-performance concretes
paper_content:
Abstract The NIST cement hydration microstructural model and experimental studies are used to investigate the effects of particle size distribution and water-to-cement ratio on hydration kinetics. Cement hydration is limited by the availability of water at the cement particle surfaces. Because a reaction layer forms around each cement particle during hydration, this availability is controlled by two factors: the overall water-to-cement ratio and the particle size distribution of the cement. As the water-to-cement ratio is decreased, the effects of particle size distribution on ultimate degree of hydration become less significant. Thus, in the new generation, high-performance concretes, more coarsely ground cements may provide good performance, resulting in substantial energy savings due to a reduction in grinding time. This implies that cement fineness should be optimized based on the mixture proportions of the concrete in which it will be used. Published by Elsevier Science Ltd. All rights reserved.
---
| Title: Application of new information technology on concrete: an overview
Section 1: Introduction
Description 1: This section should introduce the rapid development of computers and information technology in concrete technology, highlighting key techniques and their impact on the field.
Section 2: Brief history of computer applications in concrete technology
Description 2: This section provides a historical overview of computer applications in concrete technology, including early simulation models and collaborative research efforts.
Section 3: Simulation models for cement and concrete based materials
Description 3: Discuss various simulation models developed for cement and concrete characterization, hydration, compressive strength, shrinkage, and durability.
Section 4: Databases
Description 4: Explain the use and development of databases in concrete technology, including distributed databases and online resources.
Section 5: Artificial Intelligence
Description 5: Describe the application of various AI techniques such as Expert Systems, Artificial Neural Networks, Fuzzy Logic, and Genetic Algorithms in concrete technology.
Section 6: Applications of Expert systems (ESs)
Description 6: Provide details on the development and use of expert systems in civil engineering and concrete technology for tasks such as diagnosis, design, and control.
Section 7: Application of Artificial Neural Networks (ANNs)
Description 7: Outline the use of ANNs for modeling and predicting various properties of concrete, discussing the training and validation phases.
Section 8: Application of Fuzzy Logic (FL)
Description 8: Explain how FL is applied to handle imprecision in concrete technology through numerical computation and linguistic fuzzy rules.
Section 9: Application of Genetic Algorithms (GAs)
Description 9: Discuss the use of GAs for optimization in concrete mix proportioning and their integration with other AI techniques.
Section 10: Technology of Integrated System
Description 10: Describe Computer Integrated Knowledge Systems (CIKS) and their applications in concrete technology, including prototypes and virtual systems.
Section 11: Virtual Cement and Concrete Test Laboratory (VCCTL)
Description 11: Detail the development and objectives of VCCTL, including its software modules and predictive capabilities for cement and concrete properties.
Section 12: Conclusions
Description 12: Summarize the impact of information technology on concrete technology, stressing the need for universally valid systems and further validation through laboratory tests. |
A survey on diagnostics methods for automotive engines | 6 | ---
paper_title: A review of condition monitoring and fault diagnosis for diesel engines
paper_content:
Technical advances and environmental legislation in recent years have stimulated the development of a number of techniques for condition monitoring and fault diagnosis (CMFD) in diesel engines. This paper firstly summarises common faults, fault mechanisms and their effect on diesel engine performance. Corresponding measurands are presented. Standard CMFD methods for parameters and CMFD systems for diesel engines are reviewed. Finally, some advanced CMFD techniques, including neural networks and fuzzy logic, which may be more powerful, are discussed.
---
paper_title: Diagnosis and Prognosis of Automotive Systems: motivations, history and some results
paper_content:
Abstract The field of automotive engineering has seen an explosion in the presence of on-board electronic components and systems vehicles since the 1970s. This growth was initially motivated by the introduction of emissions regulations that led to the widespread application of electronic engine controls. A secondary but important consequence of these developments was the adoption of on-board diagnostics regulations aimed at insuring that emission control systems would operate as intended for a prescribed period of time (or vehicle mileage). In addition, the presence of micro-controllers on-board the vehicle has led to a proliferation of other functions related to safety and customer convenience, and implemented through electronic systems and related software, thus creating the need for more sophisticated on-board diagnostics. Today, a significant percentage of the software code in an automobile is devoted to diagnostic functions. This paper presents an overview of diagnostic needs and requirements in the automotive industry, illustrates some of the challenges that are associated with satisfying these requirements, and proposes some future directions.
---
paper_title: Robust Model-Based Fault Diagnosis for Dynamic Systems
paper_content:
There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where the system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students world-wide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
---
paper_title: Fault Detection and Diagnosis in Industrial Systems
paper_content:
The appearance of this book is quite timely as it provides a much needed state-of-the-art exposition on fault detection and diagnosis, a topic of much interest to industrialists. The material included is well organized with logical and clearly identified parts; the list of references is quite comprehensive and will be of interest to readers who wish to explore a particular subject in depth. The presentation of the subject material is clear and concise, and the contents are appropriate to postgraduate engineering students, researchers and industrialists alike. The end-of-chapter homework problems are a welcome feature as they provide opportunities for learners to reinforce what they learn by applying theory to problems, many of which are taken from realistic situations. However, it is felt that the book would be more useful, especially to practitioners of fault detection and diagnosis, if a short chapter on background statistical techniques were provided. Joe Au
---
paper_title: Data-driven methods for fault detection and diagnosis in chemical processes
paper_content:
I. Introduction.- 1. Introduction.- II. Background.- 2. Multivariate Statistics.- 3. Pattern Classification.- III. Methods.- 4. Principal Component Analysis.- 5. Fisher Discriminant Analysis.- 6. Partial Least Squares.- 7. Canonical Variate Analysis.- IV. Application.- 8. Tennessee Eastman Process.- 9. Application Description.- 10. Results and Discussion.- V. Other Approaches.- 11. Overview of Analytical and Knowledge-based Approaches.- References.
---
paper_title: Statistical process control of multivariate processes
paper_content:
Abstract With process computers routinely collecting measurements on large numbers of process variables, multivariate statistical methods for the analysis, monitoring and diagnosis of process operating performance have received increasing attention. Extensions of traditional univariate Shewhart, CUSUM and EWMA control charts to multivariate quality control situations are based on Hotelling's T 2 statistic. Recent approaches to multivariate statistical process control which utilize not only product quality data (Y), but also all of the available process variable data (X) are based on multivariate statistical projection methods (Principal Component Analysis (PCA) and Partial Least Squares (PLS)). This paper gives an overview of these methods, and their use for the statistical process control of both continuous and batch multivariate processes. Examples are provided of their use for analysing the operations of a mineral processing plant, for on-line monitoring and fault diagnosis of a continuous polymerization process and for the on-line monitoring of an industrial batch polymerization reactor.
---
paper_title: Nonlinear PCA With the Local Approach for Diesel Engine Fault Detection and Diagnosis
paper_content:
This brief examines the application of nonlinear statistical process control to the detection and diagnosis of faults in automotive engines. In this statistical framework, the computed score variables may have a complicated nonparametric distribution function, which hampers statistical inference, notably for fault detection and diagnosis. This brief shows that introducing the statistical local approach into nonlinear statistical process control produces statistics that follow a normal distribution, thereby enabling a simple statistical inference for fault detection. Further, for fault diagnosis, this brief introduces a compensation scheme that approximates the fault condition signature. Experimental results from a Volkswagen 1.9-L turbo-charged diesel engine are included.
---
paper_title: Fault diagnosis in internal combustion engines using non-linear multivariate statistics
paper_content:
This paper presents a statistical-based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce non-linear relationships between the recorded engine variables, the paper proposes the use of non-linear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the model accuracy of the NLPCA model with that of a linear PCA model. A new non-linear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady state operating conditions. More precisely, non-linear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause o...
---
paper_title: Principal Component Analysis
paper_content:
When large multivariate datasets are analyzed, it is often desirable to reduce their dimensionality. Principal component analysis is one technique for doing this. It replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables. Often, it is possible to retain most of the variability in the original variables with q very much smaller than p. Despite its apparent simplicity, principal component analysis has a number of subtleties, and it has many uses and extensions. A number of choices associated with the technique are briefly discussed, namely, covariance or correlation, how many components, and different normalization constraints, as well as confusion with factor analysis. Various uses and extensions are outlined. ::: ::: ::: Keywords: ::: ::: dimension reduction; ::: factor analysis; ::: multivariate analysis; ::: variance maximization
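A minimal numerical sketch of the technique described above: mean-centre (and here also standardise) the data matrix, take its SVD, and keep the first q right singular vectors as loadings. The synthetic data below simply stand in for any set of correlated measurements.

```python
# Minimal PCA sketch: reduce p correlated variables to q principal components
# via the SVD of the mean-centred (here also standardised) data matrix.
# The synthetic data stand in for any multivariate measurement set.
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 300, 6, 2
# Two latent drivers generate six correlated observed variables plus noise
latent = rng.normal(size=(n, 2))
mixing = rng.normal(size=(2, p))
X = latent @ mixing + 0.3 * rng.normal(size=(n, p))

Xc = (X - X.mean(0)) / X.std(0)                 # correlation-based PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance explained per PC
loadings = Vt[:q].T                             # p x q loading matrix
scores = Xc @ loadings                          # n x q score matrix

print("variance explained by first", q, "PCs:", np.round(explained[:q], 3))
print("loadings of PC1:", np.round(loadings[:, 0], 2))
```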
---
paper_title: Application of a data-driven monitoring technique to diagnose air leaks in an automotive diesel engine: A case study
paper_content:
This paper presents a case study of the application of a data-driven monitoring technique to diagnose air leaks in an automotive diesel engine. Using measurement signals taken from the sensors/actuators which are present in a modern automotive vehicle, a data-driven diagnostic model is built for condition monitoring purposes. Detailed investigations have shown that measured signals taken from the experimental test-bed often contain redundant information and noise due to the nature of the process. In order to deliver a clear interpretation of these measured signals, they therefore need to undergo a 'compression' and an 'extraction' stage in the modelling process. It is at this stage that the proposed data-driven monitoring technique plays a significant role by taking only the important information of the original measured signals for fault diagnosis purposes. The status of the engine's performance is then monitored using this diagnostic model. This condition monitoring process involves two separate stages of fault detection and root-cause diagnosis. The effectiveness of this diagnostic model was validated using an experimental automotive 1.9 L four-cylinder diesel engine on a chassis dynamometer in an engine test-bed. Two joint diagnostics plots were used to provide an accurate and sensitive fault detection process. Using the proposed model, small air leaks in the inlet manifold plenum chamber with a diameter of 2-6 mm were accurately detected. Further analyses using contributions to the T2 and Q statistics show the effect of these air leaks on fuel consumption. It was later discovered that these air leaks may contribute to an emissions fault. In comparison to existing model-based approaches, the proposed method has several benefits: (i) it makes no simplifying assumptions, as the model is built entirely from the measured signals; (ii) it is simple and straightforward; (iii) no additional hardware is required for modelling; (iv) it is a time- and cost-efficient way to deliver condition monitoring (i.e. fault diagnosis); (v) it is capable of pin-pointing the root cause and the effect of the problem; and (vi) it is feasible to implement in practice.
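The sketch below shows, in simplified form, how such a data-driven model can be used for detection and root-cause hints: a PCA model is fitted to fault-free data, new samples are scored with Hotelling's T2 and the Q (SPE) residual statistic, and per-variable Q contributions point to the most affected signal. The data, the simulated leak signature and the percentile-based thresholds are illustrative assumptions rather than the limits used in the paper.

```python
# Sketch of data-driven monitoring with a PCA model: T^2 and Q (SPE) statistics
# computed on new samples, plus per-variable Q contributions to hint at the
# root cause. Data, fault pattern and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 500, 5, 2
# "Healthy" training data: 5 correlated sensor channels (e.g. MAF, MAP, ...)
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.2 * rng.normal(size=(n, p))

mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:q].T                                   # retained loadings (p x q)
lam = (s[:q] ** 2) / (n - 1)                   # retained eigenvalues

def t2_and_q(x):
    xs = (x - mu) / sd
    t = xs @ P                                 # scores
    t2 = np.sum(t**2 / lam)                    # Hotelling's T^2
    resid = xs - t @ P.T                       # part not explained by the model
    return t2, np.sum(resid**2), resid**2      # Q (SPE) and its contributions

# Crude empirical control limits from the training data (99th percentile)
train_stats = np.array([t2_and_q(x)[:2] for x in X])
t2_lim, q_lim = np.percentile(train_stats, 99, axis=0)

# New sample with a simulated bias on sensor 3 (e.g. an air-leak signature)
x_new = X[0].copy()
x_new[3] += 4 * sd[3]
t2, q_stat, contrib = t2_and_q(x_new)
print(f"T2 = {t2:.1f} (limit {t2_lim:.1f}), Q = {q_stat:.1f} (limit {q_lim:.1f})")
print("variable most responsible (Q contribution):", int(np.argmax(contrib)))
```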
---
paper_title: Fault diagnosis in chemical processes using Fisher discriminant analysis, discriminant partial least squares, and principal component analysis
paper_content:
Abstract Principal component analysis (PCA) is the most commonly used dimensionality reduction technique for detecting and diagnosing faults in chemical processes. Although PCA contains certain optimality properties in terms of fault detection, and has been widely applied for fault diagnosis, it is not best suited for fault diagnosis. Discriminant partial least squares (DPLS) has been shown to improve fault diagnosis for small-scale classification problems as compared with PCA. Fisher's discriminant analysis (FDA) has advantages from a theoretical point of view. In this paper, we develop an information criterion that automatically determines the order of the dimensionality reduction for FDA and DPLS, and show that FDA and DPLS are more proficient than PCA for diagnosing faults, both theoretically and by applying these techniques to simulated data collected from the Tennessee Eastman chemical plant simulator.
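For reference, a minimal two-class Fisher discriminant sketch: the discriminant direction maximises between-class scatter relative to within-class scatter, and new samples are assigned by projecting onto it. The two synthetic fault clusters are illustrative only.

```python
# Minimal Fisher discriminant analysis (FDA) sketch for two fault classes:
# the discriminant direction maximises between-class over within-class scatter.
# The two synthetic "fault" clusters are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 4
X0 = rng.normal(0.0, 1.0, (n, p))              # class 0 (e.g. fault A)
X1 = rng.normal(0.0, 1.0, (n, p)) + np.array([1.5, 0.5, 0.0, -1.0])  # fault B

m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0, rowvar=False) * (n - 1) + np.cov(X1, rowvar=False) * (n - 1)
w = np.linalg.solve(Sw, m1 - m0)                # discriminant direction
threshold = 0.5 * (m0 + m1) @ w                 # midpoint between projected means

def classify(x):
    return int(x @ w > threshold)               # 0 -> fault A, 1 -> fault B

test = np.vstack([X0[:5], X1[:5]])
print("predicted classes:", [classify(x) for x in test])
```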
---
paper_title: Overview and Recent Advances in Partial Least Squares
paper_content:
Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48,49,52].
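A compact sketch of the underlying computation for the single-response case (PLS1): latent components are extracted iteratively with NIPALS-style deflation and then combined into regression coefficients. The synthetic data and number of components are assumptions for illustration.

```python
# Compact PLS1 sketch (NIPALS-style deflation) relating a block of predictors X
# to a single response y through a few latent components. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n, p, ncomp = 200, 8, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

Xc, yc = X - X.mean(0), y - y.mean()
W, P, Q = [], [], []                          # weights, X-loadings, y-loadings
Xr, yr = Xc.copy(), yc.copy()
for _ in range(ncomp):
    w = Xr.T @ yr
    w /= np.linalg.norm(w)                    # weight vector
    t = Xr @ w                                # scores
    p_load = Xr.T @ t / (t @ t)               # X loadings
    q_load = yr @ t / (t @ t)                 # y loading
    Xr -= np.outer(t, p_load)                 # deflate X
    yr -= q_load * t                          # deflate y
    W.append(w); P.append(p_load); Q.append(q_load)

W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
# Regression coefficients in the original X space
B = W @ np.linalg.solve(P.T @ W, Q)
y_hat = Xc @ B + y.mean()
print("recovered coefficients:", np.round(B, 2))
print("R^2:", round(1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2), 3))
```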
---
paper_title: Application of Partial Least-squares Regression to oil atomic emitting spectrum data of a type diesel engine
paper_content:
Aiming at the relation between the concentrations of wear elements of a diesel engine and its load (X1), cylinder clearances (X2, X3 and X4) and runtime after renewing the oil (X5), Partial Least-Squares Regression (PLSR) has been used to analyze the oil atomic emission spectrum data of a 6-cylinder diesel engine. The results show that the variance in Cu concentration explained by the five components derived from X1, X2, X3, X4 and X5 is the largest. The PLSR function for Cu forecasts Cu concentrations well; it proved accurate in forecasting all the Cu concentrations of the 69 samples under the seven kinds of operating conditions. The effect of X1, X2, X3, X4 and X5 on Cu concentration has been evaluated effectively by the Variable Importance in Projection (VIP). Compared with the obvious effect of the cylinder clearances (X2, X3 and X4) and of the runtime (X5), the effect of the load (X1) is small.
---
paper_title: Identification and monitoring of automotive engines
paper_content:
The objective of this paper is to extend and refine the nonlinear canonical variate analysis (NLCVA) methods developed in the previous work for system identification and monitoring of automotive engines. The use of additional refinements in the nonlinear modeling are developed including the use of more general bases of nonlinear functions. One such refinement in the NLCVA system identification is the selection of basis functions using the method of Leaps and Bounds with the Akaike information criterion AIC. Delay estimation procedures are used to considerably reduce the state order of the identified engine models. This also considerably reduces the number of estimated parameters that directly affects the identified model accuracy. This increased accuracy also affects the ability to monitor changes or faults in dynamic engine characteristics. A further objective of this paper is the development and use of nonlinear monitoring methods as extensions of several previously used linear CVA monitoring procedures. For the case of linear Gaussian systems, these monitoring methods have optimal properties in detecting faults or system changes in terms of the general maximum likelihood method. In the nonlinear case, departures from optimality are investigated, but the procedure is shown to still work quite effectively for detecting and identifying system faults and changes.
---
paper_title: Fault Detection Using Canonical Variate Analysis
paper_content:
The system identification method canonical variate analysis (CVA) has attracted much attention from researchers for its ability to identify multivariable state-space models using experimental data. A model identified using CVA can use several methods for fault detection. Two standard methods are investigated in this paper: the first is based on Kalman filter residuals for the CVA model, the second on canonical variable residuals. In addition, a third method is proposed that uses the local approach for detecting changes in the canonical variable coefficients. The detection methods are evaluated using three simulation examples; the examples consider the effects of feedback control; process nonlinearities; and multivariable, serially correlated data. The simulations consider several types of common process faults, including sensor faults, load disturbances, and process changes. The simulation results indicate that the local approach provides a very sensitive method for detecting process changes that are dif...
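A rough impression of CVA-based monitoring can be given in a few lines: canonical correlations between stacked past and future windows of a signal yield state-like canonical variates whose T²-type statistic can be tracked over time. The sketch below uses a synthetic AR(1) signal; the lag, horizon and retained state order are arbitrary choices, and nothing here reproduces the paper's simulation examples.

```python
# CVA monitoring sketch: canonical variates between "past" and "future" windows of a
# process signal give state estimates; their T^2 statistic can be monitored for change.
import numpy as np

rng = np.random.default_rng(2)
N = 2000
y = np.zeros(N)
for k in range(1, N):                             # simple AR(1) stand-in for the plant
    y[k] = 0.9 * y[k - 1] + rng.normal(scale=0.1)

l, h = 5, 5                                       # past lag and future horizon (assumed)
rows = range(l, N - h + 1)
P = np.array([y[k - l:k][::-1] for k in rows])    # past window, most recent sample first
F = np.array([y[k:k + h] for k in rows])          # future window
P = P - P.mean(axis=0)
F = F - F.mean(axis=0)

Spp = P.T @ P / len(P)
Sff = F.T @ F / len(F)
Spf = P.T @ F / len(P)

# Canonical variates via SVD of the whitened past/future cross-covariance
Lp = np.linalg.cholesky(Spp)
Lf = np.linalg.cholesky(Sff)
U, s, Vt = np.linalg.svd(np.linalg.solve(Lp, Spf) @ np.linalg.inv(Lf).T)
J = U[:, :2].T @ np.linalg.inv(Lp)                # keep 2 canonical states

Z = P @ J.T                                       # canonical-variable "state" estimates
T2 = np.sum(Z**2 / Z.var(axis=0), axis=1)         # Hotelling-style monitoring statistic
print("largest canonical correlations:", np.round(s[:3], 3))
print("mean T^2 over the record:", round(float(T2.mean()), 2))
```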
---
paper_title: NEURAL NETWORK APPLICATIONS FOR MODEL BASED FAULT DETECTION WITH PARITY EQUATIONS
paper_content:
Abstract The rising complexity of modern automotive engines with an increasing number of actuators and sensors to minimise emissions and fuel consumption and to maximise engine driveability requires detailed supervision for fault detection and on-board diagnosis. The European Community Directive 98/69/EC requires on-board diagnosis for spark ignition engines and will require it for diesel engines as of January 2003, mainly to prevent excessive emissions. Besides this regulation it is also in the interest of the automobile manufacturers to establish capable diagnosis systems for maintenance, repair and the benefit of their customers. This paper will describe applications of neural networks for modelling complex fluid- and thermodynamics with unknown physical model structure. Reference models, which describe the fault free process, are set up and identified with the special neural network LOLIMOT (Local-Linear-Model-Tree). Fault detection algorithms, which employ the method of parity equations, were successfully implemented and tested in real time with a 2 litre diesel engine and a Rapid Control Prototyping System. Measurements of online fault detection are shown for several built-in faults in the intake system of this diesel engine.
---
paper_title: Model-based diagnosis of large diesel engines based on angular speed variations of the crankshaft
paper_content:
This work aims at monitoring large diesel engines by analyzing the crankshaft angular speed variations. It focuses on a powerful 20-cylinder diesel engine with crankshaft natural frequencies within the operating speed range. First, the angular speed variations are modeled at the crankshaft free end. This includes modeling both the crankshaft dynamical behavior and the excitation torques. As the engine is very large, the first crankshaft torsional modes are in the low frequency range. A model with the assumption of a flexible crankshaft is required. The excitation torques depend on the in-cylinder pressure curve. The latter is modeled with a phenomenological model. Mechanical and combustion parameters of the model are optimized with the help of actual data. Then, an automated diagnosis based on an artificially intelligent system is proposed. Neural networks are used for pattern recognition of the angular speed waveforms in normal and faulty conditions. Reference patterns required in the training phase are computed with the model, calibrated using a small number of actual measurements. Promising results are obtained. An experimental fuel leakage fault is successfully diagnosed, including detection and localization of the faulty cylinder, as well as the approximation of the fault severity.
---
paper_title: On-line sensor fault detection, isolation, and accommodation in automotive engines
paper_content:
This paper describes the hybrid solution, based on artificial neural networks (ANNs), and the production rule adopted in the realization of an instrument fault detection, isolation, and accommodation scheme for automotive applications. Details on ANN architectures and training are given together with diagnostic and dynamic performance of the scheme.
---
paper_title: Real-Time Implementation of IFDIA Scheme in Automotive Systems
paper_content:
The paper describes the implementation of an instrument fault detection, isolation, and accommodation (IFDIA) system developed for real-time automotive applications. The IFDIA architecture, which is based on artificial neural networks, was used to locate and accommodate faults that could occur in the main sensors involved in managing engine operation. Numerous online tests carried out in a variety of engine operating and vehicle drive conditions have confirmed the validity of the diagnostic procedure and its onboard applicability
---
paper_title: Semi-physical Neural Network Model in Detecting Engine Transient Faults using the Local Approach
paper_content:
Abstract This paper investigates detection of an air leak fault in the intake manifold subsystem of an automotive engine during transient operation. Previously, it was shown that integrating the local approach with an auto-associative neural network model of the engine, significantly increased the sensitivity of fault detection. However, the drawback then is that the computational load is naturally dependent on the network complexity. This paper proposes the use of the available physical models to pre-process the original signals prior to model building for fault detection. This not only extracts existing relationships among the variables, but also helps in reducing the number of variables to be modelled and the related model complexity. The benefits of this improvement are demonstrated by practical application to a modern spark ignition 1.8 litre Nissan petrol engine.
---
paper_title: Neural network fault classification of transient data in an automotive engine air path
paper_content:
Classification of automotive engine air path faults from transient data is investigated using Neural Networks (NNs). A generic Spark Ignition (SI) Mean Value Engine Model (MVEM) is used for experimentation. Several faults are considered, including sensor faults, Exhaust Gas Recycle (EGR) valve and leakage in intake manifold. Consideration of different fault intensities for all the sensor and component faults is a unique feature of this research. The identification of a fault and its intensity has been considered both equally important. Radial Basis Function (RBF) NNs are trained to detect and diagnose the faults, and also to indicate fault size, by recognising the different fault patterns occurring in the transient data. Three dynamic cases of fault occurrence are considered with increasing generality of engine operation: (1) engine accelerates or retards from mean speed, (2) engine runs at different steady speeds and (3) engine accelerates or retards from any initial speed. The approach successfully classifies the faults in each case.
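A minimal RBF-network-style classifier can be assembled from Gaussian basis features and a linear readout, as sketched below. The fault classes, feature dimensions, number of centres and kernel width are all hypothetical; the cited work trains its RBF networks on engine transient data rather than on synthetic clusters.

```python
# Minimal RBF-network sketch for fault classification: Gaussian features centred by
# k-means, followed by a linear (logistic) readout layer.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical features extracted from transient engine data, three fault classes
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)          # 0 = no fault, 1 = sensor fault, 2 = leak (assumed)

centres = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
width = 1.0                            # assumed common RBF width

def rbf_features(data):
    d2 = ((data[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

clf = LogisticRegression(max_iter=1000).fit(rbf_features(X), y)
print("training accuracy:", round(clf.score(rbf_features(X), y), 3))
```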
---
paper_title: Issues of Fault Diagnosis for Dynamic Systems
paper_content:
There is an increasing demand for dynamic systems to become safer, more reliable and more economical in operation. This requirement extends beyond the normally accepted safety-critical systems e.g., nuclear reactors, aircraft and many chemical processes, to systems such as autonomous vehicles and some process control systems where the system availability is vital. The field of fault diagnosis for dynamic systems (including fault detection and isolation) has become an important topic of research. Many applications of qualitative and quantitative modelling, statistical processing and neural networks are now being planned and developed in complex engineering systems. Issues of Fault Diagnosis for Dynamic Systems has been prepared by experts in fault detection and isolation (FDI) and fault diagnosis with wide ranging experience.Subjects featured include: - Real plant application studies; - Non-linear observer methods; - Robust approaches to FDI; - The use of parity equations; - Statistical process monitoring; - Qualitative modelling for diagnosis; - Parameter estimation approaches to FDI; - Fault diagnosis for descriptor systems; - FDI in inertial navigation; - Stuctured approaches to FDI; - Change detection methods; - Bio-medical studies. Researchers and industrial experts will appreciate the combination of practical issues and mathematical theory with many examples. Control engineers will profit from the application studies.
---
paper_title: Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
paper_content:
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex involving feedback and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
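The evaluate-many-learners-by-cross-validation workflow described above can be sketched with scikit-learn as below. The three stand-in classifiers (an MLP, a decision tree and LDA) only approximate some of the paradigms listed in the abstract, and the synthetic data set is purely illustrative.

```python
# Sketch of the "compare several classifiers with cross-validation" workflow on a
# synthetic stand-in for diagnostic data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)       # stand-in fault data
models = {
    "MLP":           MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "LDA":           LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)                # 5-fold cross-validation
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```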
---
paper_title: Neural network application to comprehensive engine diagnostics
paper_content:
The authors examine the application of trainable classification systems to the problem of diagnosing faults in engines at the manufacturing plant. It is demonstrated how conventional statistical processing methods and neural networks can be combined to create a classifier system for engine diagnostics. The most significant computational effort is required for the principal component analysis and to properly develop the hard-shell classifiers using data sets augmented with Monte Carlo methods. Once these procedures are carried out, the application of neural networks to the data set to obtain the trainable classifier is quite straightforward.
---
paper_title: An incremental neural learning framework and its application to vehicle diagnostics
paper_content:
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from only new data without forgetting the existing knowledge. Upon subsequent encounters of new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the proposed INL framework, we use backpropagation (BP) as a base learner and a multi-layer neural network as a base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond the BP trained neural networks; it retains the existing neural network structures and weights even during incremental learning; the neural network committees generated by INL do not interact with one another and each sees the same inputs and error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line test in auto assembly plants and onboard vehicle misfire detection. These experimental results demonstrate that the INL framework has the capability to successfully perform incremental learning from unbalanced and noisy data. In order to show the general capabilities of INL, we also applied INL to three general machine learning benchmark data sets. The INL systems showed good generalization capabilities in comparison with other well known machine learning algorithms.
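Only the core "absorb new data without retraining from scratch" idea is sketched here, using scikit-learn's partial_fit on a linear classifier; the committee-based INL framework of the paper is not reproduced, and the batch structure and labels below are assumptions.

```python
# Incremental-learning sketch: a linear classifier is updated batch by batch with
# partial_fit, so newly arriving data can be absorbed without full retraining.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)
classes = np.array([0, 1])                       # 0 = healthy, 1 = misfire (assumed labels)
clf = SGDClassifier(random_state=0)

for batch in range(5):                           # data arriving in batches over time
    X_new = rng.normal(size=(200, 8))
    y_new = (X_new[:, 0] + 0.3 * X_new[:, 1] > 0).astype(int)
    clf.partial_fit(X_new, y_new, classes=classes)

X_test = rng.normal(size=(500, 8))
y_test = (X_test[:, 0] + 0.3 * X_test[:, 1] > 0).astype(int)
print("accuracy after incremental updates:", round(clf.score(X_test, y_test), 3))
```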
---
paper_title: Neuro-fuzzy-based fault detection of the air flow sensor of an idling gasoline engine
paper_content:
Abstract This paper presents a neuro-fuzzy-based diagnostic system for detecting the faults of an air flow sensor of an idling gasoline engine. Based on the Takagi-Sugeno fuzzy system model, the diagnostic system is formulated by the input-output relationships between symptoms and faults. The system parameters are regulated with the learning data from experiments, using the steepest descent method and back-propagation algorithm. The proposed diagnostic system consists of two parts: one is to judge the fault of the sensor and the other is to identify the bias degree of the sensor. The experimental results show that the fault source and fault (bias) degree can be identified with the proposed diagnostic system, and indicate that the neuro-fuzzy strategy is an efficient and available approach for fault diagnosis problems of the gasoline engine system.
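The flavour of a Takagi-Sugeno style mapping from a symptom to a fault (bias) degree can be shown with a three-rule toy example. The membership centres, widths and consequents below are invented for illustration and are not the rules identified in the paper.

```python
# Tiny Takagi-Sugeno style sketch: fuzzy rules map an air-flow residual to a
# fault-severity score via Gaussian memberships and a weighted average of consequents.
import numpy as np

def gauss(x, centre, sigma):
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Illustrative rules: residual "small" -> severity 0, "medium" -> 0.5, "large" -> 1.0
rules = [            # (membership centre, membership width, consequent)
    (0.00, 0.05, 0.0),
    (0.15, 0.05, 0.5),
    (0.30, 0.05, 1.0),
]

def fault_severity(residual):
    weights = np.array([gauss(residual, c, s) for c, s, _ in rules])
    consequents = np.array([q for _, _, q in rules])
    return float(weights @ consequents / weights.sum())

for r in (0.02, 0.14, 0.31):
    print(f"residual = {r:.2f} -> estimated bias degree = {fault_severity(r):.2f}")
```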
---
paper_title: Fault detection in internal combustion engines using fuzzy logic
paper_content:
Abstract In this study, a complementary fuzzy-logic-based fault diagnosis system was developed to diagnose the faults of an internal combustion engine (ICE), and the system was incorporated with an engine test stand. The input variables of the fuzzy logic classifier were acquired via a data acquisition card and RS-232 port. The rule base of this system was developed by considering the theoretical knowledge, the expert knowledge, and the experimental results. The accuracy of the fuzzy logic classifier was tested by experimental studies which were performed under different fault conditions. Using the developed fault diagnosis system, ten general faults which were observed in the internal combustion engine were successfully diagnosed in real time. With these characteristics, the system could easily be used for fault diagnosis in test laboratories and in service workshops.
---
paper_title: Fault detection and isolation for an experimental internal combustion engine via fuzzy identification
paper_content:
Certain engine faults can be detected and isolated by examining the pattern of deviations of engine signals from their nominal unfailed values. In this brief paper, we show how to construct a fuzzy identifier to estimate the engine signals necessary to calculate the deviation from nominal engine behavior, so that we may determine if the engine has certain actuator and sensor "calibration faults". We compare the fuzzy identifier to a nonlinear ARMAX technique and provide experimental results showing the effectiveness of our fuzzy identification based failure detection and identification strategy.
---
paper_title: Fault diagnosis in internal combustion engines using non-linear multivariate statistics
paper_content:
Abstract This paper presents a statistical-based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce non-linear relationships between the recorded engine variables, the paper proposes the use of non-linear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the model accuracy of the NLPCA model with that of a linear PCA model. A new non-linear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady state operating conditions. More precisely, non-linear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause o...
---
paper_title: Fault Diagnosis: Models, Artificial Intelligence, Applications
paper_content:
1. Introduction.- 2. Models in the diagnostics of processes.- 3. Process diagnostics methodology.- 4. Methods of signal analysis.- 5. Control theory methods in designing diagnostic systems.- 6. Optimal detection observers based on eigenstructure assignment.- 7. Robust H∞-optimal synthesis of FDI systems.- 8. Evolutionary methods in designing diagnostic systems.- 9. Artificial neural networks in fault diagnosis.- 10. Parametric and neural network Wiener and Hammerstein models in fault detection and isolation.- 11. Application of fuzzy logic to diagnostics.- 12. Observers and genetic programming in the identification and fault diagnosis of non-linear dynamic systems.- 13. Genetic algorithms in the multi-objective optimisation of fault detection observers.- 14. Pattern recognition approach to fault diagnostics.- 15. Expert systems in technical diagnostics.- 16. Selected methods of knowledge engineering in systems diagnosis.- 17. Methods of acquisition of diagnostic knowledge.- 18. State monitoring algorithms for complex dynamic systems.- 19. Diagnostics of industrial processes in decentralised structures.- 20. Detection and isolation of manoeuvres in adaptive tracking filtering based on multiple model switching.- 21. Detecting and locating leaks in transmission pipelines.- 22. Models in the diagnostics of processes.- 23. Diagnostic systems.
---
paper_title: Application of Dempster-Shafer theory in condition monitoring applications: a case study
paper_content:
Abstract This paper is concerned with the use of Dempster–Shafer theory in 'fusion' classifiers. We argue that the use of predictive accuracy for basic probability assignments can improve the overall system performance when compared to 'traditional' mass assignment techniques. We demonstrate the effectiveness of this approach in a case study involving the detection of static thermostatic valve faults in a diesel engine cooling system.
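Dempster's rule of combination, which underlies such fusion classifiers, is compact enough to sketch directly. The frame of discernment and the mass values below are illustrative; the accuracy-weighted assignment strategy of the paper is only hinted at by the comments.

```python
# Dempster's rule of combination for two evidence sources (sketch): intersect focal
# elements, accumulate mass products, and renormalise by the non-conflicting mass.
from itertools import product

FRAME = frozenset({"valve_fault", "healthy"})

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors/classifiers; masses weighted by (assumed) predictive accuracies
m_temp  = {frozenset({"valve_fault"}): 0.6, FRAME: 0.4}
m_press = {frozenset({"valve_fault"}): 0.7, frozenset({"healthy"}): 0.1, FRAME: 0.2}

for focal, mass in combine(m_temp, m_press).items():
    print(set(focal), round(mass, 3))
```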
---
paper_title: A Multi-Net System for the Fault Diagnosis of a Diesel Engine
paper_content:
A multi-net fault diagnosis system designed to provide an early warning of combustion-related faults in a diesel engine is presented. Two faults (a leaking exhaust valve and a leaking fuel injector nozzle) were physically induced (at separate times) in the engine. A pressure transducer was used to sense the in-cylinder pressure changes during engine cycles under both of these conditions, and during normal operation. Data corresponding to these measurements were used to train artificial neural nets to recognise the faults, and to discriminate between them and normal operation. Individually trained nets, some of which were trained on subtasks, were combined to form a multi-net system. The multi-net system is shown to be effective when compared with the performance of the component nets from which it was assembled. The system is also shown to outperform a decision-tree algorithm (C5.0), and a human expert; comparisons which show the complexity of the required discrimination. The results illustrate the improvements in performance that can come about from the effective use of both problem decomposition and redundancy in the construction of multi-net systems.
---
paper_title: FAULT DETECTION AND IDENTIFICATION OF AUTOMOTIVE ENGINES USING NEURAL NETWORKS
paper_content:
Abstract Fault detection and isolation (FDI) in dynamic data from an automotive engine air path using artificial neural networks is investigated. A generic SI mean value engine model is used for experimentation. Several faults are considered, including leakage, EGR valve and sensor faults, with different fault intensities. RBF neural networks are trained to detect and diagnose the faults, and also to indicate fault size, by recognising the different fault patterns occurring in the dynamic data. Three dynamic cases of fault occurrence are considered with increasing generality of engine operation. The approach is shown to be successful in each case.
---
paper_title: Radial basis function neural network in fault detection of automotive engines
paper_content:
Fault detection and isolation have become one of the most important aspects of automobile design. A fault detection (FD) scheme is developed for automotive engines in this paper. The method uses an independent Radial Basis Function (RBF) Neural Network model to model engine dynamics, and the modelling errors are used to form the basis for residual generation. A dependent RBFNN model uses the measured plant output as the training target and relies on it during operation, whereas the independent RBFNN model, which feeds back its own output instead, provides higher accuracy than the dependent model and allows the errors to be detected: because the independent model does not depend on the plant output, faults in the plant do not propagate into the model, so they can be detected easily and clearly. The method is developed and the performance assessed using the engine benchmark, the Mean Value Engine Model (MVEM), in Matlab/Simulink. Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults considered are a 10-20% change superimposed on the outputs of the manifold pressure, temperature and crankshaft speed sensors; the component fault considered is air leakage in the intake manifold and Exhaust Gas Recycle (EGR); the actuator fault considered is a malfunction of the fuel injector. The simulation results showed that all the simulated faults can be clearly detected in dynamic conditions throughout the operating range. Keywords: automotive engine, independent RBFNN model, RBF neural network, fault detection. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 1-8.
---
paper_title: Robust Model-Based Fault Diagnosis for Dynamic Systems
paper_content:
There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where the system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students world-wide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
---
paper_title: Trends in the application of model-based fault detection and diagnosis of technical processes
paper_content:
Abstract After a short overview of the historical development of model-based fault detection, some proposals for the terminology in the field of supervision, fault detection and diagnosis are stated, based on the work within the IFAC SAFEPROCESS Technical Committee. Some basic fault-detection and diagnosis methods are briefly considered. Then, an evaluation of publications during the last 5 years shows some trends in the application of model-based fault-detection and diagnosis methods.
---
paper_title: Model-based fault detection and diagnosis: status and applications
paper_content:
Abstract For the improvement of reliability, safety and efficiency advanced methods of supervision, fault-detection and fault diagnosis become increasingly important for many technical processes. This holds especially for safety related processes like aircraft, trains, automobiles, power plants and chemical plants. The classical approaches are limit or trend checking of some measurable output variables. Because they do not give a deeper insight and usually do not allow a fault diagnosis, model-based methods of fault-detection were developed by using input and output signals and applying dynamic process models. These methods are based, e.g., on parameter estimation, parity equations or state observers. Also signal model approaches were developed. The goal is to generate several symptoms indicating the difference between nominal and faulty status. Based on different symptoms fault diagnosis procedures follow, determining the fault by applying classification or inference methods. This contribution gives a short introduction into the field and shows some applications for an actuator, a passenger car and a combustion engine.
---
paper_title: Application of model-based fault detection and diagnosis to the quality assurance of an automotive actuator
paper_content:
Abstract The increased degree of automation of technical processes requires an increased number of mechanical-electronic components. In order to maintain the dependability and availability despite of the increased complexity, new methods for the quality assurance of mechanical-electronic systems are necessary. This paper describes an automatic diagnostic system for an automotive actuator. Analytic models of the process under investigation are used in order to extract detailed information about the process, using only the usually measured signals and evaluating the signals e.g. by parameter estimation and state estimation. However, some relations, especially the cause — effect relations between the underlying faults and the observable symptoms, are quite difficult to represent by analytic models. A rule-based approach is more suitable to acquire, represent and process the diagnostic knowledge base. In order to cope with uncertainty, a fuzzy structure is applied to the classification of faults.
---
paper_title: Intelligent model-based diagnostics for vehicle health management
paper_content:
The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed. In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.
---
paper_title: An iterative learning observer for fault detection and accommodation in nonlinear time-delay systems
paper_content:
This article addresses fault detection, estimation, and compensation problem in a class of disturbance driven time delay nonlinear systems. The proposed approach relies on an iterative learning observer (ILO) for fault detection, estimation, and compensation. When there are no faults in the system, the ILO supplies accurate disturbance estimation to the control system where the effect of disturbances on estimation error dynamics is attenuated. At the same time, the proposed ILO can detect sudden changes in the nonlinear system due to faults. As a result upon the detection of a fault, the same ILO is used to excite an adaptive control law in order to offset the effect of faults on the system. Further, the proposed ILO-based adaptive fault compensation strategy can handle multiple faults. The overall fault detection and compensation strategy proposed in the paper is finally demonstrated in simulation on an automotive engine example to illustrate the effectiveness of this approach. Copyright © 2005 John Wiley & Sons, Ltd.
---
paper_title: Diagnosis of multiple sensor and actuator failures in automotive engines
paper_content:
Unlike cases where only a single failure occurs, fault detection and isolation of multiple sensor and actuator failures for engines are difficult to achieve because of the interactive effects of the failed components. If faults all appear either in sensors only or in actuators only, many existing residual generators which provide decoupled residual signals can be employed directly to obtain proper fault detection and isolation. However, when both sensor and actuator failures occur at the same time, their mutual effects on residuals make fault isolation particularly difficult. Under such circumstances, further decision logic is required. In the paper, the authors propose a hexadecimal decision table to relate all possible failure patterns to the residual code. The residual code is obtained through simple threshold testing of the residuals, which are the output of a general scheme of residual generators. The proposed diagnostic system incorporating the hexadecimal decision table has been successfully applied to automotive engine sensors and actuators in both simulation and experimental analyses. Enhancement of the present diagnostic performance by implementing an additional sensor is also described.
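The residual-code idea can be sketched as a threshold test per residual followed by a table lookup, as below. The thresholds, the bit ordering and the fault patterns in the table are invented for illustration and do not reproduce the paper's hexadecimal table.

```python
# Fault isolation via a residual code and a decision table: each residual is
# thresholded to a bit, the bits form a code, and the code indexes a fault pattern.
import numpy as np

THRESHOLDS = np.array([0.5, 0.5, 0.5, 0.5])       # one threshold per residual (assumed)

# Illustrative decision table: residual code -> diagnosed fault(s)
DECISION_TABLE = {
    0b0000: "no fault",
    0b0001: "MAP sensor fault",
    0b0010: "speed sensor fault",
    0b0011: "MAP + speed sensor faults",
    0b0100: "throttle actuator fault",
    0b1100: "throttle actuator + injector faults",
}

def isolate(residuals):
    bits = (np.abs(residuals) > THRESHOLDS).astype(int)
    code = int("".join(map(str, bits[::-1])), 2)   # residual 0 -> least significant bit
    return DECISION_TABLE.get(code, f"unlisted pattern 0x{code:X}")

print(isolate(np.array([0.8, 0.1, 0.1, 0.1])))     # -> MAP sensor fault
print(isolate(np.array([0.1, 0.1, 0.9, 0.9])))     # -> throttle actuator + injector faults
```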
---
paper_title: Generating directional residuals with dynamic parity relations
paper_content:
Abstract It is shown how diagnostic residuals, which exhibit directional properties at all times in response to an arbitrary mix of input and output faults, can be generated using dynamic parity relations. The parity relations are applied directly to the observables of the monitored plant; the design relies on the plant's dynamic input-output model. This residual generator can be made computationally polynomial (moving average) by including the invariant zero polynomial of the fault system in the specified fault responses. The polynomial design yields moving average noise transfers as well. White noise transfer and disturbance decoupling are achieved by extending the response specification. The parity relation approach is compared with the traditional detection filter design, and is shown to be more straightforward and have milder existence conditions; if subjected to the same specification, the two approaches yield identical residual generators.
---
paper_title: Diagnosis of automotive electronic throttle control systems
paper_content:
During the past two decades, the automotive industry has been required to develop on-board health monitoring capabilities to meet legislated diagnostic requirements for engine management systems. In this paper, real-time diagnostics are presented which monitor the performance of an electronic throttle control system to detect and identify a suite of anomalies. The ETC system shall be modeled and a parity diagnostic strategy applied to detect the presence of faults. The specific nature of the fault is isolated using a parametric estimation methodology. Representative numerical results are presented and discussed to demonstrate the operational performance of the ETC system and health monitoring algorithms.
---
paper_title: Fault detection for modern Diesel engines using signal- and process model-based methods
paper_content:
Abstract Modern Diesel engines with direct fuel injection and turbo charging have shown significant progress in fuel consumption, emissions and driveability. Together with exhaust gas recirculation and variable geometry turbochargers they became complicated and complex processes. Therefore, fault detection and diagnosis is not easily done and needs to be improved. This contribution shows a systematic development of fault detection and diagnosis methods for two system components of Diesel engines, the intake system and the injection system together with the combustion process. By applying semiphysical dynamic process models, identification with special neural networks, signal models and parity equations, residuals are generated. Detectable deflections of these residuals lead to symptoms which are the basis for the detection of several faults. Experiments with a 2.0 l Diesel engine on a dynamic test bench as well as in the vehicle have demonstrated the detection and diagnosis of several implemented faults in real time with reasonable calculation effort.
---
paper_title: Nonlinear parity equation based residual generation for diagnosis of automotive engine faults
paper_content:
Abstract The parity equation residual generation method is a model-based fault detection and isolation scheme that has been applied with some success to the problem of monitoring the health of engineering systems. However, this scheme fails when applied to significantly nonlinear systems. This paper presents the application of a nonlinear parity equation residual generation scheme that uses forward and inverse dynamic models of nonlinear systems, to the problem of diagnosing sensor and actuator faults in an internal combustion engine, during execution of the United States Environmental Protection Agency Inspection and Maintenance 240 driving cycle. The Nonlinear AutoRegressive Moving Average Model with eXogenous inputs technique is used to identify the engine models required for residual generation. The proposed diagnostic scheme is validated experimentally and is shown to be sensitive to a number of input and sensor faults while remaining robust to the unmeasured load torque disturbance.
---
paper_title: Fault detection and isolation using parity relations
paper_content:
Abstract The design of dynamic parity (consistency) relations, for the detection and isolation of faults, is described. Both additive and multiplicative (parametric) faults are considered. Various isolation schemes are discussed and decoupling from disturbances and certain model errors is included. Links to diagnostic observers and parameter estimation are pointed out.
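A one-line parity relation for a first-order input-output model illustrates the principle: the identified model is rewritten as a consistency check on measured signals, and the residual departs from zero when a fault breaks that consistency. The plant parameters, noise levels and injected sensor bias below are assumptions, not taken from the cited work.

```python
# Parity-relation sketch: apply the identified input-output model directly to the
# measurements; the residual stays near zero while plant, sensor and actuator are healthy.
import numpy as np

rng = np.random.default_rng(5)
a1, b1 = 0.8, 0.5                               # assumed first-order plant model
N = 300
u = rng.uniform(-1, 1, size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a1 * y[k - 1] + b1 * u[k - 1] + 0.01 * rng.normal()

y_meas = y.copy()
y_meas[200:] += 0.5                             # additive sensor fault from sample 200

# Parity equation evaluated on the measured signals
r = y_meas[1:] - a1 * y_meas[:-1] - b1 * u[:-1]
print("mean |r| before fault:", round(float(np.abs(r[:198]).mean()), 3))
print("mean |r| after  fault:", round(float(np.abs(r[200:]).mean()), 3))
```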
---
paper_title: Model Based Diagnosis for the Air Intake System of the SI-Engine
paper_content:
Because of legislative regulations like OBDII, on-board diagnosis has gained much interest lately. A model based approach is suggested for the diagnosis of the air intake system of an SI-engine. Im ...
---
paper_title: Diagnosis and Prognosis of Automotive Systems: motivations, history and some results
paper_content:
Abstract The field of automotive engineering has seen an explosion in the presence of on-board electronic components and systems in vehicles since the 1970s. This growth was initially motivated by the introduction of emissions regulations that led to the widespread application of electronic engine controls. A secondary but important consequence of these developments was the adoption of on-board diagnostics regulations aimed at ensuring that emission control systems would operate as intended for a prescribed period of time (or vehicle mileage). In addition, the presence of micro-controllers on-board the vehicle has led to a proliferation of other functions related to safety and customer convenience, and implemented through electronic systems and related software, thus creating the need for more sophisticated on-board diagnostics. Today, a significant percentage of the software code in an automobile is devoted to diagnostic functions. This paper presents an overview of diagnostic needs and requirements in the automotive industry, illustrates some of the challenges that are associated with satisfying these requirements, and proposes some future directions.
---
paper_title: IMPROVING DIAGNOSIS PERFORMANCES ON A TRUCK ENGINE MAKING USE OF STATISTICAL CHARTS
paper_content:
Abstract A method to improve model based diagnosis for the air path of a truck engine is presented. Originally, inaccuracies in both the volumetric efficiency static model and the sensors limited the diagnosis performance. Statistical charts built from truck operational data were tested in order to reduce the overall residual dispersion. Data taken from two different trucks working in various operating conditions were used to evaluate the proposed approach. The use of charts improves the diagnosis performance by approximately 30%.
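One common form of statistical chart is the EWMA chart, sketched below on a synthetic residual: smoothing shrinks the residual dispersion before thresholding, which is the general effect the paper exploits (its charts are built from truck operational data, which is not reproduced here). The smoothing weight, control limit and fault size are illustrative.

```python
# EWMA chart sketch: exponentially weighted smoothing of a noisy residual reduces its
# dispersion, so a modest step fault becomes separable from the noise floor.
import numpy as np

rng = np.random.default_rng(6)
residual = 0.3 * rng.normal(size=500)            # noisy residual, nominally zero-mean
residual[300:] += 0.4                            # step fault from sample 300 (assumed size)

lam, limit = 0.1, 0.25                           # smoothing weight and control limit (illustrative)
ewma = np.zeros_like(residual)
for k in range(1, len(residual)):
    ewma[k] = lam * residual[k] + (1 - lam) * ewma[k - 1]

alarms = np.flatnonzero(np.abs(ewma) > limit)
print("first sample where the chart signals:", int(alarms[0]) if alarms.size else "none")
```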
---
paper_title: Model-based diagnosis of an automotive engine using several types of fault models
paper_content:
Automotive engines is an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.
---
paper_title: Decentralized Diagnosis in Heavy Duty Vehicles
paper_content:
Fault diagnosis is important for automotive vehicles, due to economic reasons, such as efficient repair and fault prevention, and legislative reasons, mainly safety and pollution. Embedded systems in vehicles include a large number of electronic control units that are connected to each other via an electronic network. Many of the current diagnostic systems use pre-compiled diagnostic tests, and fault logs that store the results from the tests. To improve the diagnostic system, fault localization in addition to the existing fault detection is wanted. Since there are limitations in processing power, memory, and network capacity, an algorithm is sought that uses stated diagnoses in the control units to find the diagnoses for the complete system. Such an algorithm is presented and exemplified in the article. The embedded system used in a Scania heavy duty vehicle has been used as a case study to find limitations in the embedded system, and realistic requirements on the algorithm.
---
paper_title: Determining the fault status of a component and its readiness, with a distributed automotive application
paper_content:
In systems using only single-component tests, the fault status of a component is ready if a test only supervising the component has been evaluated. However, if plausibility tests that supervise multiple components are used, then a component can be ready before all tests supervising the component have been evaluated. Based on test results, this paper contributes with conditions on when a component is ready. The conditions on readiness are given for both centralized and distributed systems and are here applied to the distributed diagnostic system in an automotive vehicle.
---
paper_title: Injection diagnosis through common-rail pressure measurement
paper_content:
Abstract Modern diesel common-rail injection systems supply fuel from a high-pressure vessel. The injection event causes an instantaneous drop in the rail pressure, as the stored mass is diminished. Pressure variations are also affected by the dynamics of the high-pressure pump that supplies fuel to the rail to compensate for the emptying process due to the injection. This paper proposes the possibility of diagnosing the injection process from measurement of the rail pressure. Different data treatment techniques are explored and evaluated in this paper to propose an effective method for the diagnosis of common-rail injection systems.
---
paper_title: Model-based detection and isolation of faults due to ageing in the air and fuel paths of common-rail direct injection diesel engines equipped with a λ and a nitrogen oxides sensor
paper_content:
The air and fuel management of modern diesel engines is based on a feedforward control approach. As a consequence, faults of the air or the fuel path have a direct impact on the emissions that cannot be observed nor compensated. Faults due to the ageing of engine parts are very common; thus engines are designed with tolerances large enough to fulfil the emission standards for their entire lifetime. This paper presents a method to locate and quantify engine faults due to the ageing of parts of modern diesel engines in order to help to reduce design tolerances. A relative air-to-fuel ratio and a nitrogen oxides emission sensor have been applied to a test-bench engine, and control-oriented emission models have been developed. Based thereon, a model-based fault detection and isolation system has been implemented. The sensor signals are compared with the models in order to detect a possible fault, and fault location and size are estimated by means of a bank of extended Kalman filters, one for each fault considered. Results from the New European Driving Cycle show that the developed control-oriented models are accurate, and the faults can be correctly detected and isolated.
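The bank-of-filters idea can be sketched with linear Kalman filters standing in for the extended Kalman filters of the paper: each filter assumes one fault hypothesis, and the hypothesis whose filter keeps the smallest normalized innovation cost is selected. The scalar plant, noise levels and bias hypotheses below are invented for illustration.

```python
# Bank-of-filters sketch for fault isolation: one Kalman filter per fault hypothesis
# (here, a sensor bias of a given size); the best-fitting hypothesis is selected.
import numpy as np

rng = np.random.default_rng(9)
a, b, q, r = 0.95, 0.1, 1e-4, 1e-2               # assumed scalar plant and noise levels
N = 400
u = np.sin(0.05 * np.arange(N))
x = np.zeros(N)
for k in range(1, N):
    x[k] = a * x[k - 1] + b * u[k - 1] + rng.normal(scale=np.sqrt(q))
true_bias = 0.3                                   # actual sensor fault injected in the data
y = x + true_bias + rng.normal(scale=np.sqrt(r), size=N)

hypotheses = {"no fault": 0.0, "bias +0.3": 0.3, "bias -0.3": -0.3}
scores = {}
for name, bias in hypotheses.items():
    xh, P, J = 0.0, 1.0, 0.0
    for k in range(1, N):
        xh = a * xh + b * u[k - 1]                # time update
        P = a * P * a + q
        S = P + r                                 # innovation variance
        innov = y[k] - (xh + bias)                # hypothesis-specific measurement model
        K = P / S
        xh, P = xh + K * innov, (1 - K) * P       # measurement update
        J += innov**2 / S                         # accumulated normalized residual cost
    scores[name] = J

best = min(scores, key=scores.get)
print("selected hypothesis:", best, {name: round(v, 1) for name, v in scores.items()})
```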
---
paper_title: Model-based fault diagnosis of spark-ignition direct-injection engine using nonlinear estimations
paper_content:
In this paper, the detection and isolation of actuator faults (both measured and commanded) occurring in the engine breathing and the fueling systems of a spark-ignition direct-injection (SIDI) engine are described. The breathing system in an SIDI engine usually consists of a fresh air induction path via an electronically controlled throttle (ECT) and an exhaust gas recirculation (EGR) path via an EGR valve. They are dynamically coupled through the intake manifold to form a gas mixture, which eventually enters the engine cylinders for a subsequent combustion process. Meanwhile, the fueling system is equipped with a high-pressure common-rail injection for a precise control of the fuel quantity directly injected into the engine cylinders. Since the coupled system is highly nonlinear in nature, the fault diagnosis will be performed by generating residuals based on multiple nonlinear observers. Performing the fault detection and isolation properly on these key actuators in an SIDI engine could in principle ensure a precise control of air/fuel ratio and EGR dilution for improving fuel economy and reducing engine-out exhaust emissions.
---
paper_title: Diesel fuel injection system faults diagnosis based on fuzzy injection pressure pattern recognition
paper_content:
On-line fuel injection system fault detection and diagnosis is a desirable function for diesel engines to ensure reliable operation. This paper introduces a fuzzy pattern recognition approach for assessing diesel fuel injection pressure patterns and detecting common fuel injection system faults based on the processed injection pressure data. A validation study indicated that this method was capable of screening out the characteristic parameters of the fuel injection system health condition. On-line diagnostic tests resulted in over 80% correct diagnoses of fuel injection system faults in 100 validation cases.
---
paper_title: Detection of engine misfire by wavelet analysis of cylinder-head vibration signals.
paper_content:
The misfiring fault of an internal combustion engine was detected by using the cylinder-head vibration signals. Based on the data acquisition system built with LabVIEW, the cylinder-head vibration signals were detected with an accelerometer while the engine was rapidly accelerating from idle speed to high speed, at which time the engine was running under four working conditions: normal, single cylinder misfiring, double cylinders continuously misfiring and double cylinders alternately misfiring. After decomposing the vibration signals with the db3 wavelet, whether the engine was misfiring or not, and what type of misfiring, were judged by comparing the decomposition results. The results showed that the low-frequency vibration of the engine cylinder head was related to the rotation of the principal shaft, and the high-frequency vibration was related to the combustion in the cylinder. There were certain corresponding relationships between wave crests of the high-frequency vibration and wave crests of the low-frequency vibration under the four normal and fault conditions when the engine runs in the idle, accelerating, and high-speed segments. Thus, the misfiring fault and its type can be detected by analyzing these corresponding relations. Detection of the misfiring fault by wavelet analysis was effective and feasible. Keywords: internal combustion engine, acceleration, multiple misfiring, wavelet analysis, LabVIEW. DOI: 10.3965/j.issn.1934-6344.2008.02.001-007. Citation: Jiang Aihua, Li Xiaoyu, Huang Xiuchang, Zhang Zhenhua, Hua Hongxing. Detection of engine misfire by wavelet analysis of cylinder-head vibration signals. Int J Agric & Biol Eng. 2008; 1(2): 1
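The db3 decomposition step can be reproduced in a few lines with the PyWavelets package: decompose the signal, then compare the energy in the detail bands between segments. The synthetic "vibration" signal and the injected high-frequency burst below are stand-ins for the measured cylinder-head signals.

```python
# Wavelet-decomposition sketch (requires the PyWavelets package, `pywt`): split a
# vibration-like signal into db3 approximation/detail bands and compare the energy
# of the detail bands between a normal segment and one with a high-frequency burst.
import numpy as np
import pywt

rng = np.random.default_rng(7)
t = np.arange(0, 1.0, 1.0 / 10_000)                      # 1 s at 10 kHz (assumed)
normal = np.sin(2 * np.pi * 30 * t) + 0.05 * rng.normal(size=t.size)
misfire = normal.copy()
misfire[4000:4200] += 0.8 * rng.normal(size=200)         # burst standing in for a combustion anomaly

def detail_energies(signal, level=4):
    coeffs = pywt.wavedec(signal, "db3", level=level)    # [cA4, cD4, cD3, cD2, cD1]
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]   # energy of each detail band

print("detail-band energies, normal :", np.round(detail_energies(normal), 2))
print("detail-band energies, misfire:", np.round(detail_energies(misfire), 2))
```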
---
paper_title: Non-parametric models in the monitoring of engine performance and condition: Part 2: Non-intrusive estimation of diesel engine cylinder pressure and its use in fault detection
paper_content:
Abstract An application of the radial basis function model, described in Part 1, is demonstrated on a four-cylinder DI diesel engine with data from a wide range of speed and load settings. The prediction capabilities of the trained model are validated against measured data and an example is given of the application of this model to the detection of a slight fault in one of the cylinders.
---
paper_title: Misfire-misfuel classification using support vector machines
paper_content:
Abstract This paper proposes the use of support vector machines to perform classification between different types of missed combustion event in a six-cylinder engine. On-board diagnostics regulations require the detection of missed combustion events, which is possible through interpretation of crankshaft speed information. However, current approaches provide no information on the actual cause of the event, in particular whether it was caused by a misfuel (absence of fuel) or a misfire (absence of spark) event. Whilst the impact on the environment and emission treatment systems due to misfuel is minimal, misfire events are detrimental to both. Consequently information regarding the causes of missing combustion events potentially allows the development of unique recovery strategies particular to the source of the problem. In this paper, an approach is proposed that will provide the potential for, firstly, detection of a missing combustion event and, secondly, real-time classification of the event into either...
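A generic SVM classification sketch for the misfire-versus-misfuel decision is given below. The three features of the crankshaft-speed dip and the class-conditional distributions are invented; they only illustrate the train/score workflow, not the features or data of the paper.

```python
# SVM classification sketch: separate two synthetic classes of missed-combustion
# events using simple (assumed) features of the crankshaft-speed dip.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
# Hypothetical features: [depth of speed dip, recovery time, post-event oscillation energy]
misfire = rng.normal(loc=[1.0, 0.8, 1.2], scale=0.2, size=(150, 3))
misfuel = rng.normal(loc=[0.9, 0.5, 0.4], scale=0.2, size=(150, 3))
X = np.vstack([misfire, misfuel])
y = np.array([1] * 150 + [0] * 150)              # 1 = misfire, 0 = misfuel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```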
---
paper_title: Misfire Detection Using a Neural Network Based Pattern Recognition Technique
paper_content:
This contribution investigates the practical application of artificial neural networks to misfire detection in gasoline engines. The problem of misfire detection is formulated as a pattern recognition problem. A feed-forward multiple-layer neural network is used for the classification of firing and misfiring events. Emphasis is given to the trade-off between performance, computational cost and implementability of the technique on a production electronic control unit (ECU). The developed technique is applied to a six-cylinder gasoline engine to detect misfire events over the whole range of operation defined by official on-board diagnosis (OBD) regulations. Experimental results on a passenger car are presented.
---
paper_title: Application of Nonlinear Sliding Mode Observers for Cylinder Pressure Reconstruction
paper_content:
Abstract The theoretical development of combining nonlinear sliding observers with internal combustion engine cycle simulations in order to estimate individual cylinder pressures was first presented by Kao & Moskwa in Ref. [8]. This new nonlinear approach to observe engine cylinder pressure, combustion heat release, and torque estimation has been shown to have good performance with fast convergence and stability. In this paper, the nonlinear sliding observer gains are modified to be a function of crank angle. It is applied to a six cylinder engine. The experimental results show the possibilities of application of this nonlinear observer to the cylinder pressure reconstruction.
---
paper_title: Multiple Misfire Identification by a Wavelet-Based Analysis of Crankshaft Speed Fluctuation
paper_content:
The analysis of the crankshaft speed fluctuation is one of the most investigated and used techniques for the detection and isolation of the misfire events in an internal combustion engine. During the past, lots of methods for single misfire detection were presented in literature but the multiple misfire analysis still represents an open challenge. This paper describes a new analysis technique based on the wavelet approach that allows for both the extraction of the frequency components related to a misfire event, and its localization in the time domain. The detection process is performed by analyzing the crankshaft angular velocity measurement, in order to underline decelerations due to misfire events, and investigate the post-misfire pattern in a suitable domain (wavelet domain). Moreover, the proposed algorithm allows for an easy isolation of the cylinder responsible for the misfire. Experimental results will support the validity of the described approach.
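The continuous wavelet transform step described above can be sketched as follows; the crankshaft-speed signal, sampling rate, wavelet, and scale range are illustrative assumptions only.

```python
# Sketch: continuous wavelet transform of a crankshaft-speed trace (illustrative only).
import numpy as np
import pywt

fs = 1000                                    # assumed samples per second of the speed signal
t = np.arange(0, 2.0, 1.0 / fs)
speed = 800 + 5 * np.sin(2 * np.pi * 20 * t)           # nominal speed fluctuation [rpm]
speed[1000:1040] -= 15 * np.hanning(40)                 # synthetic dip mimicking a misfire

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(speed - speed.mean(), scales, 'morl', sampling_period=1.0 / fs)

# Localize the event in time by looking for the scale/time cell with maximum energy.
energy = np.abs(coeffs) ** 2
scale_idx, time_idx = np.unravel_index(np.argmax(energy), energy.shape)
print("suspected misfire near t = %.3f s (scale %d)" % (t[time_idx], scales[scale_idx]))
```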
---
paper_title: Detection of misfire and knock in spark ignition engines by wavelet transform of engine block vibration signals
paper_content:
The high-frequency components of engine block vibration signals during misfire and normal combustion show clear differences of duration and scale by wavelet analysis - continuous wavelet transform (CWT). A new factor, combustion noise intensity ('CNICWT'), has been defined, which has been derived from the CWT scalogram. It has been shown that CNICWT can detect misfire more accurately than fast Fourier transform power spectral density. Similarly, it has been found that CNICWT can also be applied to knock detection. The durations of both knock and misfire detection overlapped each other at the same wavelet scale 8; misfire, normal and knock cycles can be indicated on the same histogram by CNICWT. Moreover, knock and misfire detection can be combined into one CNICWT computation.
---
paper_title: Model-based engine fault detection using cylinder pressure estimates from nonlinear observers
paper_content:
Two indicated torque input observers are presented in this paper. The input estimation problem is transformed into a control tracking problem. Sliding mode control with an integrator and PI control with feedforward are used for the tracking control blocks. The indicated torque estimates are applied to detect engine firing faults. Cylinder pressure centroid and cylinder pressure difference approaches for engine diagnostics are also discussed in this paper. Simulation results for a six-cylinder engine with abnormal combustion are plotted to show engine combustion problems.
---
paper_title: Estimate of indicated torque from crankshaft speed fluctuations: a model for the dynamics of the IC engine
paper_content:
A contribution is made to the task of constructing a global model for the IC (internal combustion) engine. A robust submodel is formulated for the dynamics of the IC engine, wherein the engine is viewed as a system with input given by cylinder pressure and output corresponding to crankshaft angular acceleration and crankshaft torque. The formulation is well suited to closed-loop engine control and transmission control applications. In the model, cylinder pressure is deterministically related to net engine torque through the geometry and dynamics of the reciprocating assembly. The relationship between net engine torque and crankshaft angular acceleration is explained in terms of a passive second-order electrical circuit model with constant parameters. Experimental results confirm the validity of the model over a wide range of engine operating conditions, including transient conditions. It is concluded that the model provides a powerful tool for estimating average and instantaneous net engine torque based on an inexpensive noncontacting measurement of crankshaft acceleration, thus providing access to one of the primary engine performance variables.
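A heavily simplified, lumped-inertia version of the torque-from-acceleration idea above; the inertia, load torque, and speed trace are illustrative assumptions, and the paper's full reciprocating-assembly geometry and circuit analogy are not reproduced.

```python
# Sketch: estimating net torque from crankshaft speed with a rigid-body approximation (illustrative).
import numpy as np

fs = 3600                      # assumed angular-speed samples per second
J = 0.25                       # assumed lumped crankshaft inertia [kg m^2]
T_load = 40.0                  # assumed constant load torque [N m]

t = np.arange(0, 0.5, 1.0 / fs)
# Synthetic angular speed with a firing-frequency fluctuation [rad/s].
omega = 150 + 2.0 * np.sin(2 * np.pi * 60 * t)

# Rigid-body model: J * domega/dt = T_net - T_load  =>  T_net = J * domega/dt + T_load
domega_dt = np.gradient(omega, 1.0 / fs)
T_net = J * domega_dt + T_load

print("estimated net torque: mean %.1f N m, peak-to-peak %.1f N m"
      % (T_net.mean(), T_net.max() - T_net.min()))
```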
---
paper_title: The on-line detection of engine misfire at low speed using multiple feature fusion with fuzzy pattern recognition
paper_content:
This paper proposes a technique for the online detection of incipient engine misfire based on multiple feature fusion and fuzzy pattern recognition. The technique requires the measurement of instantaneous angular velocity signals. By processing the engine dynamics model equation in the angular frequency domain, four dimensionless features for misfire detection are defined, along with fast feature-extracting algorithms. By directly analysing the waveforms of the angular velocity and the angular acceleration, six other dimensionless features are extracted. Via fuzzy pattern recognition, all the features are associated together as a fuzzy vector. This vector identifies whether the engine is healthy or faulty and then locates the position of a misfiring cylinder or cylinders if necessary. The experimental work conducted on a production engine operating at low speeds confirms that such a technique is able to work with the redundant and complementary information of all the features and that it leads to improved diagnostic reliability. It is fully expected that this technique will be simple to implement and will provide a useful practical tool for the online monitoring and realtime diagnosis of engine misfire in individual cylinders.
---
paper_title: Misfire and compression fault detection through the energy model
paper_content:
This article proposes a simple algorithm for misfire detection in reciprocating engines. The algorithm, based on an energy model of the engine, requires the measurement of the instantaneous angular speed. By processing the engine dynamics in the angular domain, variations in the working parameters of the engine, such as external load and mean angular speed, are compensated. A dimensionless feature has been abstracted for evaluation of the combustion as well as compression process of each cylinder. The proposed technique is expected to be easy to implement and to provide useful information for on-line monitoring of the in-cylinder processes in an internal combustion engine.
---
paper_title: Residual generation and statistical pattern recognition for engine misfire diagnostics
paper_content:
Methods for diagnosing misfire in internal combustion engines are presented in this paper. Crank-angle domain digital filters are used to extract features from the measured engine speed signal that are characteristic of misfire. Features for intermittent and continuous misfires are developed separately, since the engine speed responses for intermittent and continuous misfires are distinctly different. Also, the influence of crankshaft torsional vibration and repeatable measurement errors must be addressed differently in each case. The outputs from the digital filters serve as inputs to a pattern recognition network based on linear parametric classifiers. Experimental results from implementation on a Ford 4.6L V-8 engine are provided.
---
paper_title: Real-time misfire detection via sliding mode observer
paper_content:
A new method to detect misfire in internal combustion engines is presented. It is based on the estimation of the cylinder deviation torque using a sliding mode observer. The input estimation problem is transformed into a control tracking problem. The sliding controller is utilised to continuously track the measured varying crank speed by changing the estimated deviation torque. During the process of tracking, the speed estimation errors decrease and the gradual stability of the dynamics is assured. The mean deviation torque during the power stroke, derived from the estimated deviation torque, can be employed to easily detect engine misfires. Experimental results for a four-cylinder engine indicate that the method is a suitable tool for real-time misfire detection on board the vehicle under various working conditions.
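A toy sliding-mode-style observer in the spirit of the description above: the measured crank speed is tracked by adjusting an estimated deviation torque through a smoothed switching term. The plant model, gains, and injected misfire are illustrative assumptions, not the authors' design.

```python
# Sketch: sliding-mode-style observer estimating a deviation torque from crank speed (illustrative).
import numpy as np

dt, J = 1e-3, 0.2              # assumed sample time [s] and crankshaft inertia [kg m^2]
T_nom = 50.0                   # assumed nominal (modelled) engine torque [N m]
k_lin, k_switch = 50.0, 100.0  # observer gains (tuning assumptions)
damping = 0.3                  # assumed viscous damping coefficient

n = 2000
T_dev_true = np.zeros(n)
T_dev_true[800:1000] = -30.0   # torque deficit during a simulated misfire
omega_meas = np.zeros(n)
omega_meas[0] = 100.0
for k in range(n - 1):
    omega_meas[k + 1] = omega_meas[k] + dt / J * (T_nom + T_dev_true[k] - damping * omega_meas[k])

# Observer: force omega_hat to track omega_meas; the correction torque needed to keep
# tracking is the estimate of the unknown deviation torque.
omega_hat = np.zeros(n)
T_dev_hat = np.zeros(n)
omega_hat[0] = omega_meas[0]
for k in range(n - 1):
    e = omega_meas[k] - omega_hat[k]
    sat_e = np.clip(e, -1.0, 1.0)                      # boundary-layer (smoothed sign) term
    T_dev_hat[k] = k_lin * e + k_switch * sat_e
    omega_hat[k + 1] = omega_hat[k] + dt / J * (T_nom + T_dev_hat[k] - damping * omega_hat[k])

print("mean estimated deviation torque in misfire window: %.1f N m" % T_dev_hat[820:1000].mean())
```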
---
paper_title: Real Time Estimation of Engine Torque for the Detection of Engine Misfires
paper_content:
The need for improvements in the on-line estimation of engine performance variables is greater nowadays as a result of more stringent emission control legislation. There is also a concurrent requirement for improved on-board diagnostics to detect different types of malfunctions. For example, recent California Air Resources Board (CARB) regulations mandate continuous monitoring of misfires, a problem which, short of an expensive measurement of combustion pressure in each cylinder, is most directly approached by estimating individual cylinder torque. This paper describes the theory and experimental results of a method for the estimation of individual cylinder torque in automotive engines, with the intent of satisfying the CARB misfire detection requirements. Estimation, control, and diagnostic functions associated with automotive engines involve near-periodic processes, due to the nature of multi-cylinder engines. The model of the engine dynamics used in this study fully exploits the inherent periodicity of the combustion process in the crank angle domain in order to obtain a simple deconvolution method for the estimation of the mean torque produced by each cylinder during each stroke from a measurement of crankshaft angular velocity. The deconvolution is actually performed in the spatial frequency domain, recognizing that the combustion energy is concentrated at discrete spatial frequencies, which are harmonics of the frequency of rotation of the crankshaft. Thus, the resulting deconvolution algorithm is independent of engine speed, and reduces to an algebraic operation in the frequency domain. It is necessary to perform a Discrete Fourier Transform (DFT) on the measured angular velocity signal, sampled at fixed uniform crank angle intervals. The paper discusses the model used in the study, and the experimental validation of the algorithm, which has been implemented in real time using a portable computer and has been tested extensively on different production vehicles on a chassis dynamometer and on the road.
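The frequency-domain step described above can be illustrated with a short order-analysis sketch: the speed signal is assumed to be sampled at uniform crank-angle increments and its DFT is read out at the engine-order harmonics; the signal and the number of samples per revolution are assumptions made for the example.

```python
# Sketch: DFT of a crank-angle-sampled speed signal, read out at engine-order harmonics (illustrative).
import numpy as np

samples_per_rev = 60                 # assumed pulses per crankshaft revolution (e.g. a 60-tooth wheel)
revs = 20
theta = np.arange(revs * samples_per_rev) * 2 * np.pi / samples_per_rev

# Synthetic speed fluctuation with energy at engine orders 2 and 4
# (a four-stroke V-8 fires four times per crankshaft revolution).
omega = 200 + 1.5 * np.sin(4 * theta) + 0.5 * np.sin(2 * theta + 0.3)

spectrum = np.fft.rfft(omega - omega.mean())
orders = np.fft.rfftfreq(omega.size, d=1.0 / samples_per_rev)   # units: cycles per revolution

for order in (2, 4):
    idx = np.argmin(np.abs(orders - order))
    print("order %d amplitude: %.3f" % (order, 2 * np.abs(spectrum[idx]) / omega.size))
```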
---
paper_title: On-line estimation of indicated torque in IC engines via sliding mode observers
paper_content:
An approach to fault diagnosis for internal combustion engines is considered. It is based on the estimation of cylinder indicated torques by means of sliding mode observers. Instead of measuring indicated pressure in cylinders directly, crankshaft angular position is measured as the input of observers, which estimate the indicated torques. Several engine models are considered with different levels of complexity.
---
paper_title: Cylinder pressure and combustion heat release estimation for SI engine diagnostics using nonlinear sliding observers
paper_content:
Cylinder pressure is an important parameter in engine combustion analysis or engine diagnosis. An approach is introduced to estimate cylinder pressure and combustion heat release in multicylinder SI engines based solely on engine speed measurements. Because of the nonlinear nature of engines, this estimation employs a nonlinear observer: the sliding observer. In many applications, cylinder pressure is critical for control or engine monitoring systems. Researchers have pursued various approaches to obtain the desired cylinder pressure directly or indirectly. However, these approaches vary in cost, reliability, robustness, accuracy and convenience. The use of nonlinear sliding observers in pressure and combustion heat release estimation based on measurements of engine speed provides an accurate, low-cost, and reliable way to acquire these desired states. In this paper the estimation of cylinder pressures and combustion heat releases of a multicylinder SI engine is presented. Since a problem of system observability arises in pressure estimation when the cylinder piston moves to its TDC, means of reducing estimation errors in this condition are described. Finally, the applications of this approach in engine diagnostics are discussed.
---
paper_title: An investigation of crankshaft oscillations for cylinder health diagnostics
paper_content:
The vibrational characteristics of an internal combustion engine crankshaft are investigated from a cylinder health diagnostics point of view. Experimental results from a six-cylinder industrial diesel engine are presented to demonstrate the effects of cylinder imbalance on the individual harmonic components of the engine speed signal. A crank-angle domain numerical model of the crankshaft dynamics for a six-cylinder industrial diesel engine is also adopted to establish the effects of continuous low-power production in individual cylinders of a multi-cylinder engine. An outline of a diagnostics algorithm that makes use of the properties of crankshaft vibration behaviour is provided. In particular, crank-angle domain notch filters are employed to extract the harmonic components of engine speed. The outlined method can be implemented for individual cylinder health diagnostics across a family of multi-cylinder engines and can be formulated to handle changes in crankshaft characteristics due to replacement of mechanical components and/or wear.
---
paper_title: Engine misfire detection
paper_content:
Abstract In this paper a simplified engine model is introduced in order to estimate the combustion torque from angular velocity measurement, using a Kalman filter approach. It can be implemented on-board the vehicle. The proposed discrete state-space model works with angular rather than time-equidistant sampling. Signals from conventional angular speed sensors of available engine management systems are used, without additional hardware components. Errors introduced by the digital computation of the angular speed, as well as the mechanical imperfections of the target wheel, are also modelled and taken into account.
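A bare-bones linear Kalman filter in the spirit of the approach above, treating the combustion torque as a random-walk state and the crank speed as the only measurement; the inertia, noise covariances, and signals are illustrative assumptions rather than the paper's angular-domain model.

```python
# Sketch: Kalman filter estimating combustion torque from noisy angular-velocity samples (illustrative).
import numpy as np

dt, J, damping = 1e-3, 0.2, 0.3            # assumed sample time, inertia, viscous damping
F = np.array([[1 - damping * dt / J, dt / J],   # state: [omega, T_comb]
              [0.0,                  1.0]])     # torque modelled as a random walk
H = np.array([[1.0, 0.0]])                      # only omega is measured
Q = np.diag([1e-4, 5.0])                        # process noise (torque allowed to move quickly)
R = np.array([[0.05]])                          # speed measurement noise

rng = np.random.default_rng(1)
n = 1500
T_true = 40.0 + np.where((np.arange(n) > 700) & (np.arange(n) < 900), -25.0, 0.0)
omega_true = np.zeros(n); omega_true[0] = 120.0
for k in range(n - 1):
    omega_true[k + 1] = omega_true[k] + dt / J * (T_true[k] - damping * omega_true[k])
z = omega_true + rng.normal(scale=0.2, size=n)  # noisy speed measurement

x = np.array([z[0], 0.0]); P = np.eye(2)
T_est = np.zeros(n)
for k in range(n):
    # Predict, then correct with the speed measurement.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z[k]]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    T_est[k] = x[1]

print("estimated torque in misfire window: %.1f N m" % T_est[750:900].mean())
```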
---
paper_title: Knocking detection using wavelet instantaneous correlation method
paper_content:
Abstract In this study, we propose a new method for knocking detection that utilizes, as a real mother wavelet (RMW), the vibration signal measured by a knock sensor under knocking conditions, and carries out an instantaneous correlation based on the wavelet transform. We call this method the wavelet instantaneous correlation (WIC) method. The degree of similarity between the RMW and the vibration of the engine block was judged, and only the knocking signal was extracted from the vibration of the engine block. The results obtained here show that the method proposed in this study is useful for knocking detection even when the engine speed is as high as 6000 rpm.
---
paper_title: Engine Knock Detection from Vibration Signals Using Pattern Recognition
paper_content:
The paper deals with a diagnostic method that allows engine knock to be detected. The developed algorithm differentiates three kinds of engine cycles: absence of knock, increasing knock and heavy knock. The decision is taken from a block vibration signal. The diagnostic method is based on pattern recognition. Three models of the different data shapes provided by the accelerometer are elaborated. This is done using a time-scale analysis tool called a wavelet network, which allows relevant features to be extracted from the signal. The aim of the method is then to partition the feature space into classes representing the knock states. Experimental results are reported.
---
paper_title: Knock detection in spark ignition engines by vibration analysis of cylinder block: A parametric modeling approach
paper_content:
A simple and novel method for the detection of low-intensity knock in spark ignition engines is developed in this paper. The proposed method is based on modeling the cylinder block vibration signal with an auto-regressive moving average (ARMA) parametric model. It is observed that one of the estimated moving average parameters is highly sensitive to knock, so by monitoring this parameter it is possible to detect knock in SI engines even at a very early stage. The results also demonstrate that the proposed method is capable of detecting knock with simple hardware at a low sampling frequency, leading to a reduction in computation time as well as hardware complexity and cost. Moreover, a new method of utilizing the tachometer signal in parallel with the accelerometer signal to estimate the knock-sensitive window (KSW) is introduced.
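As an illustration of the parametric-modelling idea above, the sketch below fits a low-order ARMA model to a synthetic vibration segment with statsmodels and reads back the moving-average coefficient that would be monitored cycle by cycle; the model order, signal, and alarm band are illustrative assumptions.

```python
# Sketch: ARMA fit of a cylinder-block vibration segment; monitor an MA coefficient (illustrative).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# Synthetic stand-in for one knock-window vibration segment with known ARMA structure.
e = rng.normal(size=600)
vib = np.zeros(600)
for k in range(2, 600):
    vib[k] = 0.5 * vib[k - 1] - 0.3 * vib[k - 2] + e[k] + 0.4 * e[k - 1]

# ARMA(2,1) fitted as ARIMA(2,0,1); in practice the order would be selected from the data.
res = ARIMA(pd.Series(vib), order=(2, 0, 1)).fit()
ma1 = res.params['ma.L1']
print("estimated ma.L1 = %.3f" % ma1)

# A knock monitor would then compare the tracked coefficient against a calibrated nominal band.
nominal_ma1, band = 0.4, 0.25      # purely illustrative values
if abs(ma1 - nominal_ma1) > band:
    print("MA parameter outside nominal band -> possible knock in this cycle")
```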
---
paper_title: Wavelet-based knock detection with fuzzy logic
paper_content:
This paper presents a novel approach to the determination of the knocking condition of a spark-ignition engine using the discrete wavelet transform as a means of analyzing the engine-block vibration signal and a fuzzy inference scheme to generate an estimate of the knock intensity. The block vibration sensor responds to knock-induced pressure waves generated when a portion of the air-fuel mixture in the cylinder combusts spontaneously. However, the block vibration signal is masked by other excitation signals, which render the determination of the knocking condition of the combustion a difficult task. The wavelet transform lends itself well to the time-frequency analysis of the vibration signal. Various characteristic features can be determined from the wavelet decomposition of the signal. These are in turn fused together to generate an estimate of the knocking intensity using fuzzy logic. The proposed scheme was developed and tested using engine combustion data processed off-line. Off-line data was also used to optimize the parameters of the fuzzy scheme. The system was subsequently implemented on the real-time engine controller for on-line testing.
---
paper_title: Mechanical signature analysis using time-frequency signal processing: application to internal combustion engine knock detection
paper_content:
Signature analysis consists of the extraction of information from measured signal patterns. The work presented in this paper illustrates the use of time-frequency (TF) analysis methods for the purpose of mechanical signature analysis. Mechanical signature analysis is a mature and developed field; however, TF analysis methods are relatively new to the field of mechanical signal processing, having mostly been developed in the present decade, and have not yet been applied to their full potential in this field of engineering applications. Some of the ongoing efforts are briefly reviewed in this paper. One important application of TF mechanical signature analysis is the diagnosis of faults in mechanical systems. In this paper we illustrate how the use of joint TF signal representations can result in tangible benefits when analyzing signatures generated by transient phenomena in mechanical systems, such as might be caused by faults or otherwise abnormal operation. This paper also explores signal detection concepts in the joint TF domain and presents their application to the detection of internal combustion engine knock.
---
paper_title: Model-based fault diagnosis of spark-ignition direct-injection engine using nonlinear estimations
paper_content:
In this paper, the detection and isolation of actuator faults (both measured and commanded) occurring in the engine breathing and the fueling systems of a spark-ignition direct-injection (SIDI) engine are described. The breathing system in an SIDI engine usually consists of a fresh air induction path via an electronically controlled throttle (ECT) and an exhaust gas recirculation (EGR) path via an EGR valve. They are dynamically coupled through the intake manifold to form a gas mixture, which eventually enters the engine cylinders for a subsequent combustion process. Meanwhile, the fueling system is equipped with a high-pressure common-rail injection for a precise control of the fuel quantity directly injected into the engine cylinders. Since the coupled system is highly nonlinear in nature, the fault diagnosis will be performed by generating residuals based on multiple nonlinear observers. Performing the fault detection and isolation properly on these key actuators in an SIDI engine could in principle ensure a precise control of air/fuel ratio and EGR dilution for improving fuel economy and reducing engine-out exhaust emissions.
---
paper_title: Real-Time Diagnosis of the Exhaust Recirculation in Diesel Engines Using Least-Squares Parameter Estimation
paper_content:
In this paper, we present a real-time parameter identification approach for diagnosing faults in the exhaust gas recirculation (EGR) system of Diesel engines. The proposed diagnostics method has the ability to detect and estimate the magnitude of a leak or a restriction in the EGR valve, which are common faults in the air handling system of a Diesel engine. Real-time diagnostics is achieved using a recursive-least-squares (RLS) method, as well as a recursive formulation of a more robust version of the RLS method referred to as the recursive total-least-squares method. The method is used to identify the coefficients in a static orifice flow model of the EGR valve. The proposed approach of fault detection is successfully applied to diagnose low-flow or high-flow faults in an engine and is validated using experimental data obtained from a Diesel engine test cell and a truck.
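A compact recursive-least-squares sketch in the spirit of the method above: a linear-in-parameters orifice-flow model of the EGR valve is updated sample by sample, and a drift in the estimated coefficient would indicate a leak or restriction; the regressor, data, and forgetting factor are illustrative assumptions.

```python
# Sketch: recursive least squares estimating an EGR-valve flow coefficient (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Hypothetical regressor: valve-position-dependent effective area times a pressure-ratio term.
phi = 0.5 + 0.4 * rng.random(n)
c_true = np.where(np.arange(n) < 300, 1.0, 0.6)          # coefficient drops -> restricted valve
flow = c_true * phi + 0.01 * rng.normal(size=n)           # measured EGR flow

lam = 0.98                      # forgetting factor
theta, P = 0.0, 100.0           # scalar parameter estimate and its covariance
history = np.zeros(n)
for k in range(n):
    K = P * phi[k] / (lam + phi[k] * P * phi[k])
    theta = theta + K * (flow[k] - phi[k] * theta)
    P = (P - K * phi[k] * P) / lam
    history[k] = theta

print("estimated coefficient before/after fault: %.2f / %.2f" % (history[250], history[-1]))
```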
---
paper_title: ROBUSTNESS ASSESSMENT OF ADAPTIVE FDI SYSTEM FOR ENGINE AIR PATH
paper_content:
Robustness assessment is important for every newly developed method. This paper presents a robustness assessment of a new adaptive on-board fault diagnosis algorithm for the air path of spark ignition (SI) engines. The method uses a radial basis function (RBF) neural network to classify pre-defined possible faults from engine measurements, reporting fault occurrence as well as the type and size of a fault. After diagnosing faults in each sample interval, the weights and widths of the RBF fault classifier are updated with the measurements and appropriately selected target outputs. Consequently, the network can adapt to the time-varying dynamics of the engine and changes in the environment, so that the false alarm rate is greatly reduced and the required network size is also reduced. The developed scheme is assessed with various faults simulated on a mean value engine benchmark model and compared with a fixed-parameter RBF classifier. The robustness assessment is done over a wide range of operational modes representative of an automotive engine in real life. Special attention has been given to minimising the neural network size for its practical implementation in the electronic control unit (ECU) of a vehicle. Simulation results demonstrate the effectiveness of the proposed algorithm and its robustness.
---
paper_title: Model-based diagnosis of an automotive engine using several types of fault models
paper_content:
Automotive engines are an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.
---
paper_title: Loading and Regeneration Analysis of a Diesel Particulate Filter with a Radio Frequency-Based Sensor
paper_content:
Accurate knowledge of diesel particulate filter (DPF) loading is critical for robust and efficient operation of the combined engine-exhaust aftertreatment system. Furthermore, upcoming on-board diagnostics regulations require on-board technologies to evaluate the status of the DPF. This work describes the application of radio frequency (RF)-based sensing techniques to accurately measure DPF soot levels and the spatial distribution of the accumulated material. A 1.9L GM turbo diesel engine and a DPF with an RF sensor were studied. Direct comparisons between the RF measurement and conventional pressure-based methods were made. Further analysis of the particulate matter loading rates was obtained with a mass-based soot emission measurement instrument (TEOM). Comparison with pressure drop measurements shows the RF technique is unaffected by exhaust flow variations and exhibits a high degree of sensitivity to DPF soot loading and good dynamic response. Additional computational and experimental work further illustrates the spatial resolution of the RF measurements. Based on the experimental results, the RF technique shows significant promise for improving DPF control, enabling optimization of the combined engine-aftertreatment system for improved fuel economy and extended DPF service life.
---
paper_title: Diesel Particulate Filter Diagnostics Using Correlation and Spectral Analysis
paper_content:
Diesel Particulate Filters (DPF) are used to trap the harmful particulate matter (PM) present in the exhaust of diesel engines. The particulate matter is trapped in and on a porous ceramic substrate to keep PM emissions low. The onboard diagnostics requirements enforced by Environmental Protection Agency (EPA) require that the DPF perform well to keep emissions below certain specified levels. Further, should the DPF fail in any way, resulting in higher emission levels, this event must be detected by the engine control module. The objective of this work is to “detect failed DPF condition”. The temperature and pressure signals from transducers inserted into the inlet and outlet of the DPF are analyzed. The approach is to correlate the pre-DPF and post-DPF temperature and pressure signals and define the transfer function characteristics for nominal DPF behavior. Determining how these characteristics change as a result of filter failure forms the basis of a DPF fault detection algorithm. It is observed from the test data that for the pressure signal, other than the mean value signal (i.e., at zero frequency), most of the energy content is concentrated at the firing frequency of the engine. The dynamic pressure signals are used to determine the magnitude squared of the transfer function characteristics of DPF by energy spectral analysis. This approach can achieve a failure detection of lightly failed DPF which is not possible by current algorithms based on mean value pressure drop. The most significant contribution of this research is the extension of dynamic pressure signal analysis from steady-state engine operation to transient operating conditions.
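A short sketch of the spectral step described above: the magnitude-squared transfer characteristic between pre- and post-DPF pressure signals is estimated from cross- and auto-spectra and read off near the engine firing frequency; the signals, sampling rate, and firing frequency are illustrative assumptions.

```python
# Sketch: spectral transfer-characteristic estimate between pre- and post-DPF pressure (illustrative).
import numpy as np
from scipy.signal import csd, welch

fs = 2000                     # assumed sampling rate [Hz]
f_fire = 100.0                # assumed engine firing frequency [Hz]
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

p_in = 5.0 * np.sin(2 * np.pi * f_fire * t) + 0.5 * rng.normal(size=t.size)
p_out = 1.5 * np.sin(2 * np.pi * f_fire * t - 0.8) + 0.5 * rng.normal(size=t.size)  # attenuated by the filter

f, Pxy = csd(p_in, p_out, fs=fs, nperseg=4096)
_, Pxx = welch(p_in, fs=fs, nperseg=4096)
H2 = np.abs(Pxy) ** 2 / Pxx ** 2          # |H(f)|^2 estimate (H1-type estimator, squared)

idx = np.argmin(np.abs(f - f_fire))
print("|H|^2 at the firing frequency: %.3f" % H2[idx])
# A failed (e.g. cracked) DPF would attenuate the firing-frequency pulsations less,
# shifting this value toward 1.
```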
---
paper_title: Lean NOx trap storage model for diesel engine aftertreatment control and diagnosis
paper_content:
NOx emissions are a great concern to the environment. One major NOx emission source is from off-road and highway vehicles powered with diesel engines. Diesel engines offer significant benefits in fuel economy and low-speed torque performance. However, these benefits come at the expense of increased aftertreatment complexity. A lean NOx trap (LNT) is one effective way to reduce emissions to meet strict EPA regulations. Based on the chemical kinetics analysis of the LNT adsorption mechanism and mass transfer process, a storage model was proposed to estimate the mass of NOx adsorbed inside an LNT during lean periods based on other available signals. The model includes exhaust gas mass flow rate, in-bed temperature, inflow NOx concentrations, and NOx adsorption capacity of the LNT. Model validation shows close agreement between predictions and experimental data. This storage model can be used for LNT regeneration control and fault detection.
---
paper_title: Model-based fault detection and isolation for a diesel lean NOx trap aftertreatment system
paper_content:
Abstract The lean NOx trap (LNT) is an aftertreatment device used to reduce nitrogen oxides emissions on Diesel engines. To operate the LNT with high conversion efficiency, an optimized regeneration schedule is required, together with closed-loop control of the air/fuel ratio during regeneration. Furthermore, to comply with emissions regulations, diagnostic schemes are needed to detect and isolate faults, typically related to aging, sulfur poisoning and thermal deactivation. The paper describes a step-by-step methodology for the design and validation of model-based fault diagnosis for a LNT aftertreatment system. The approach is based on a control-oriented model of the LNT validated with experimental data. The proposed diagnostic approach is based on the generation of residuals using system models, through the comparison of the predicted and measured values of selected output variables. The paper focuses on the detection and isolation of sensor faults and LNT parametric faults. Different diagnostic methodologies are presented in relation to the detection of specific faults. Starting from sulfur poisoning detection in a laboratory environment which represents a preliminary validation of the approach, the diagnostic scheme is extended to detect various faults under different plant configurations and operating conditions, with a final application to on-board fault detection and isolation.
---
paper_title: A Methodology for Fault Diagnosis of Diesel NOx Aftertreatment Systems
paper_content:
Abstract Diesel engines are today considered leading candidates for the new generations of passenger vehicles due to their fuel efficiency and drivability. One of the key elements for the future acceptability is the compliance with emission standards (particularly on nitrogen oxides), which will require precise control of the aftertreatment system. Furthermore, in light of OBD-II regulations, considerable research must be devoted to the design of fault diagnosis algorithms. The definition of fault diagnosis strategies is a complex process that involves thorough studies of the system behavior in healthy and faulty conditions. Such studies can be done in multiple ways, including experimentation and mathematical modeling. In both cases, a thorough knowledge of the system components, sensors and actuators is required. The proposed paper presents an approach to model-based fault diagnosis of Diesel NOx aftertreatment systems. The proposed methodology is based on a functional and structural analysis of the system, at the level of individual components and assemblies. This facilitates the mapping and characterization of system faults through FTA and FMEA methods, allowing for the design of control-oriented models to be used for fault detection and isolation. In this paper, the outlined approach is applied to a Lean NOx Trap system.
---
paper_title: NOX storage and reduction on a Pt/BaO/alumina monolithic storage catalyst
paper_content:
Abstract The performance of a Pt/BaO/Al2O3 washcoated monolith reactor is investigated using propylene as a reductant. The dependence of the time-averaged NOX conversion is reported for several operating parameters, including feed composition, temperature, flow rate, propylene pulse duration and overall cycle time. NOX storage data, which reveal both kinetic and thermodynamic limitations, provide guidance on selecting the feed protocol giving high NOX conversion. Complex nonisothermal transient features spanning the NOX storage and reduction regimes are revealed. Time-averaged NOX conversions exceeding 80% are achieved over a wide range of feed temperatures when short pulses are fed that have a sufficiently high propylene concentration to create fuel-rich conditions. The periodic feed of propylene gives time-averaged NOX conversion significantly exceeding the steady-state conversion for an equivalent propylene feed rate. The results indicate that the rich–lean feed protocol must be tuned in order to achieve a maximum NOX conversion with minimal breakthrough of incompletely oxidized components. The time-averaged conversion achieves a maximum at an intermediate cycle time and reductant pulse duty, with values dependent on the feed temperature and flow rate. The results are compared to previous literature data and interpreted with a phenomenological storage and reduction cycle.
---
paper_title: An extended Kalman filter for ammonia coverage ratio and capacity estimations in the application of Diesel engine SCR control and onboard diagnosis
paper_content:
Ammonia, as the reductant used to convert NOx to nitrogen molecules in a selective catalytic reduction (SCR) catalyst, plays an important role in Diesel engine SCR control. To achieve sophisticated SCR control, the presence of ammonia in the SCR should be precisely monitored. Ammonia sensors for gaseous concentration measurement have recently become available. However, ammonia can also be adsorbed on the SCR substrate, which is unmeasurable but dominates the NOx reduction dynamics. In addition, the catalyst ammonia storage capacity serves as an important indicator of SCR catalyst aging. In this paper, an observer for SCR catalyst ammonia coverage ratio and storage capacity estimation is designed based on an extended Kalman filter (EKF). Simulation results verify that the EKF-based observer can estimate both states well.
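A heavily simplified, single-state extended Kalman filter sketch in the spirit of the estimator above: the ammonia coverage ratio is propagated through a toy adsorption/desorption/reduction balance and corrected with a synthetic tailpipe NH3 measurement; all rate constants, the storage capacity, and the signals are illustrative assumptions, not the authors' SCR model.

```python
# Sketch: single-state EKF for SCR ammonia coverage ratio (toy model, illustrative only).
import numpy as np

dt = 0.1
Omega = 20.0                 # assumed ammonia storage capacity (arbitrary units)
k_ads, k_des, k_red = 0.5, 0.05, 0.3   # assumed adsorption/desorption/reduction rates
Q, R = 1e-5, 1e-4            # process / measurement noise variances

def f(theta, c_nh3_in, c_nox_in):
    # Coverage-ratio balance: adsorption of inlet NH3, desorption, consumption by NOx reduction.
    dtheta = (k_ads * c_nh3_in * (1 - theta) - k_des * theta - k_red * c_nox_in * theta) / Omega
    return np.clip(theta + dt * dtheta, 0.0, 1.0)

def h(theta, c_nh3_in):
    # Measured tailpipe NH3 approximated as inlet NH3 not adsorbed plus desorbed NH3.
    return c_nh3_in * (1 - k_ads * (1 - theta)) + k_des * theta

rng = np.random.default_rng(5)
n = 600
c_nh3 = 0.3 + 0.1 * np.sin(0.02 * np.arange(n))    # inlet NH3 (dosing), arbitrary units
c_nox = 0.5 * np.ones(n)                           # inlet NOx, arbitrary units

theta_true, theta_hat, P = 0.2, 0.5, 0.1
for k in range(n):
    theta_true = f(theta_true, c_nh3[k], c_nox[k])
    z = h(theta_true, c_nh3[k]) + rng.normal(scale=np.sqrt(R))
    # EKF predict with the linearized state sensitivity F = d f / d theta.
    theta_hat = f(theta_hat, c_nh3[k], c_nox[k])
    F = 1 + dt * (-k_ads * c_nh3[k] - k_des - k_red * c_nox[k]) / Omega
    P = F * P * F + Q
    # EKF update with the linearized measurement sensitivity H = d h / d theta.
    H = c_nh3[k] * k_ads + k_des
    K = P * H / (H * P * H + R)
    theta_hat = float(np.clip(theta_hat + K * (z - h(theta_hat, c_nh3[k])), 0.0, 1.0))
    P = (1 - K * H) * P

print("true vs estimated coverage ratio: %.3f vs %.3f" % (theta_true, theta_hat))
```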
---
paper_title: Observer-based estimation of selective catalytic reduction catalyst ammonia storage
paper_content:
Abstract This paper presents two non-linear observer designs with different robustness to estimate engine selective catalytic reduction (SCR) ammonia coverage ratio. The observers estimate the SCR catalyst ammonia storage (or coverage ratio) based on the available measurements of NOx, NH3, and temperature. An extended Kalman filter is proposed to eliminate the NOx sensor cross-sensitivity to ammonia. A sliding mode observer technique was employed for the design. The robustness of the observers with respect to sensor measurement uncertainties was theoretically analysed. FTP75 test cycle simulation results show the robustness and estimation accuracies of the different observers in the presence of different measurement errors/uncertainties.
---
paper_title: On‐board diagnosis for three‐way catalytic converters
paper_content:
On-board fault diagnosis is critical for the automotive industry. Recently, new on-board diagnostic system requirements (OBD II) have been enforced on California vehicles and new legislation will become stricter and stricter; moreover such requirements have also been extended in Europe (EOBD). Government regulations will require monitoring vehicle emissions and alerting the driver if the exhaust after-treatment system is not working properly. To meet these requirements, sophisticated diagnostic algorithms have to be developed. This paper presents a model-based stochastic approach for fault detection with application to automotive exhaust-gas after-treatment systems. The algorithm, based on relatively simple control-oriented models of the three-way catalytic converter and the oxygen sensor, is suitable for real-time, on-board applications. The overall strategy has been tuned and validated on the basis of experimental data.
---
paper_title: Data Driven Simplified Three-Way Catalyst Health Diagnostic Models: Experimental Results
paper_content:
Presented is the identification of simplified three-way catalyst (TWC) models from vehicle data. The simplified models are developed for multiple TWCs from two different classes: full useful life and threshold. The full useful life (FUL) TWCs represent catalysts from vehicles with 100,000 miles whereas threshold TWCs represent catalysts from vehicles with over 150,000 miles. The results showed that these simplified models have consistent parameter estimates when identified in a passive monitoring mode as the vehicle experiences a driving cycle from the Federal Test Procedure (FTP). Moreover the parameters of the simplified TWC models contain TWC health/age information. The model input is the air mass flow rate (AM) into the engine and the model output is a proposed TWC health measure developed in this paper. The impact of TWC temperature will also be detailed.
---
paper_title: A model-based approach to automotive three-way catalyst on-board monitoring☆
paper_content:
Abstract A model-based three-way automotive catalyst monitoring and fault detection strategy is presented in this work. The performance of the catalyst is inferred from the error between the post-catalyst exhaust gas oxygen sensor air fuel ratio measurement and the model predicted value. A simplified catalyst oxygen storage and reversible deactivation model is employed to predict the post-catalyst air fuel ratio. The model-based strategy is based on the use of a test statistic that is computed from a window of post-catalyst air fuel ratio prediction error data updated in real-time. A fault is assumed to be present when the value of this statistic exceeds a threshold determined from some specified confidence level. We illustrate several test statistics using engine operating data and conclude with an evaluation of this strategy.
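A minimal sketch of the windowed test-statistic idea above: the prediction error between measured and model-predicted post-catalyst air/fuel ratio is squared, summed over a moving window, and compared against a chi-square threshold set by a chosen confidence level; the residual statistics, window length, and confidence level are illustrative assumptions.

```python
# Sketch: moving-window test statistic on a post-catalyst A/F prediction error (illustrative).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
n, sigma = 2000, 0.02                    # samples and assumed nominal residual standard deviation
residual = sigma * rng.normal(size=n)
residual[1200:] += 0.015                 # bias appearing when the catalyst degrades

window = 100
alpha = 0.01                             # false-alarm probability per window
threshold = chi2.ppf(1 - alpha, df=window)

for k in range(window, n, window):
    # Normalized sum of squared residuals over the window ~ chi-square(window) when healthy.
    stat = np.sum((residual[k - window:k] / sigma) ** 2)
    if stat > threshold:
        print("fault declared at sample %d (stat %.1f > threshold %.1f)" % (k, stat, threshold))
        break
```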
---
paper_title: Model-Based Control and Diagnostic Monitoring for Automotive Catalyst Systems
paper_content:
An integrated model-based automotive catalyst control and diagnostic monitoring system is presented. This system incorporates a simplified dynamic catalyst model that describes oxygen storage and release in the catalyst and predicts the post-catalyst exhaust gas oxygen sensor signal. The model-based controller maintains the catalyst stored oxygen at a desired operating condition to prevent post-catalyst emissions breakthrough by ultimately adjusting the engine fueling. Diagnostic monitoring is performed by detecting changes in a test statistic derived from the post-catalyst sensor response. These changes, which are sensitive to both long-term catalyst deactivation effects and short-term emission control device failures, are detected based on the result of nonparametric statistical tests on a moving horizon of past test statistic values.
---
paper_title: Three-way catalyst diagnostics for advanced emissions control systems
paper_content:
Automotive emissions are stringently regulated. Since 1980, a three-way catalyst (TWC) has been used to convert harmful emissions of hydrocarbons, carbon monoxide, and oxides of nitrogen into less harmful gases in order to meet these regulations. The TWC's efficiency of conversion of these gases is primarily dependent on the mass ratio of air to fuel (A./F) in the mixture leaving the exhaust manifold and entering the catalyst. This paper develops a method by which a dynamic TWC model can be used for diagnostic purposes. This diagnostic method is analyzed in the context of a hypothesis test that is based on the oxygen storage capacity of the TWC. The Neyman-Pearson criterion is used as the basis for this hypothesis test. It is initially applied in the case of a single sample where the variance of the data is assumed to be known. This is then expanded to a multiple-sample case through the use of Student's t test. The improved fidelity of the t test is demonstrated, and it is shown that larger sample sizes provide further improvement in the quality of the hypothesis test.
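An illustrative version of the multiple-sample hypothesis test described above, using Student's t test to decide whether a batch of estimated oxygen-storage-capacity values has fallen below a healthy reference mean; the sample values, reference mean, and significance level are assumptions made for the sketch.

```python
# Sketch: one-sided t test on estimated oxygen storage capacity (OSC) samples (illustrative).
import numpy as np
from scipy.stats import ttest_1samp

osc_healthy_mean = 1.0            # assumed nominal OSC of a healthy catalyst [normalized]
rng = np.random.default_rng(7)
osc_samples = rng.normal(loc=0.85, scale=0.08, size=12)   # estimates from recent driving

# Test H0: mean OSC equals the healthy value against the one-sided alternative "less".
t_stat, p_value = ttest_1samp(osc_samples, popmean=osc_healthy_mean, alternative='less')
print("t = %.2f, p = %.4f" % (t_stat, p_value))
if p_value < 0.05:
    print("OSC significantly below nominal -> flag catalyst as degraded")
```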
---
paper_title: Continuous wavelet transform technique for fault signal diagnosis of internal combustion engines
paper_content:
A fault signal diagnosis technique for internal combustion engines that uses a continuous wavelet transform algorithm is presented in this paper. The use of mechanical vibration and acoustic emission signals for fault diagnosis in rotating machinery has grown significantly due to advances in the progress of digital signal processing algorithms and implementation techniques. The conventional diagnosis technology using acoustic and vibration signals already exists in the form of techniques applying the time and frequency domain of signals, and analyzing the difference of signals in the spectrum. Unfortunately, in some applications the performance is limited, such as when a smearing problem arises at various rates of engine revolution, or when the signals caused by a damaged element are buried in broadband background noise. In the present study, a continuous wavelet transform technique for the fault signal diagnosis is proposed. In the experimental work, the proposed continuous wavelet algorithm was used for fault signal diagnosis in an internal combustion engine and its cooling system. The experimental results indicated that the proposed continuous wavelet transform technique is effective in fault signal diagnosis for both experimental cases. Furthermore, a characteristic analysis and experimental comparison of the vibration signal and acoustic emission signal analysis with the proposed algorithm are also presented in this report.
---
paper_title: Fault correction of an airflow signal in a gasoline engine system using a neural fuzzy scheme and genetic algorithm
paper_content:
Abstract This study presents an airflow correction system for correcting the fault of an airflow signal of a gasoline engine. The correction system consists of a fuzzy-model-based airflow estimation system and an airflow-monitoring system. The airflow estimation system estimates the normal airflow based on fuzzy neural networks and is trained using the steepest-descent method and back-propagation algorithm. The airflow-monitoring system corrects the faulty airflow signal according to the correction law. In order to raise the training performance, a genetic algorithm is used to search the learning rates and initial consequent gains of the fuzzy neural networks. The estimated results indicate that using a genetic algorithm certainly improves the performance of normal airflow estimation. The corrected results indicate that the airflow correction system can provide a feasible means of carrying out airflow fault correction in gasoline engine systems.
---
paper_title: MODEL-BASED ON-BOARD FAULT DETECTION AND DIAGNOSIS FOR AUTOMOTIVE ENGINES
paper_content:
This paper describes an algorithm developed for the online detection and diagnosis of faults in automobile engine sensors and actuators, using the on-board microcomputer. The algorithm is based on the structured parity equation methodology. The parity equations are derived from an engine model having linear dynamics and static nonlinearities, obtained by identification from simulation experiments.
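A small sketch of the parity-equation idea above: simple static models relating redundant engine measurements are turned into residuals whose structured pattern points at the faulty sensor; the model coefficients, signals, and injected fault are illustrative assumptions, not the identified models of the paper.

```python
# Sketch: structured parity-equation residuals for two engine sensors (illustrative).
import numpy as np

rng = np.random.default_rng(8)
n = 1000
throttle = 20 + 10 * np.sin(0.01 * np.arange(n))                  # throttle position [%]
maf_true = 2.0 + 0.15 * throttle                                   # assumed static air-flow model [g/s]
map_true = 30 + 0.9 * maf_true                                     # assumed manifold-pressure model [kPa]

maf_meas = maf_true + 0.05 * rng.normal(size=n)
map_meas = map_true + 0.3 * rng.normal(size=n)
maf_meas[600:] += 0.8                                              # injected MAF sensor bias

# Parity relations: each residual should be near zero when the sensors it uses are healthy.
r1 = maf_meas - (2.0 + 0.15 * throttle)        # involves the MAF sensor only
r2 = map_meas - (30 + 0.9 * maf_meas)          # involves the MAP and MAF sensors
print("mean |r1|, |r2| before fault: %.2f, %.2f" % (np.abs(r1[:600]).mean(), np.abs(r2[:600]).mean()))
print("mean |r1|, |r2| after  fault: %.2f, %.2f" % (np.abs(r1[600:]).mean(), np.abs(r2[600:]).mean()))
# Structured pattern: r1 and r2 both deviating is consistent with a MAF fault;
# r2 alone deviating would point at the MAP sensor.
```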
---
paper_title: Neuro-fuzzy-based fault detection of the air flow sensor of an idling gasoline engine
paper_content:
Abstract This paper presents a neuro-fuzzy-based diagnostic system for detecting faults in the air flow sensor of an idling gasoline engine. Based on the Takagi-Sugeno fuzzy system model, the diagnostic system is formulated from the input-output relationships between symptoms and faults. The system parameters are tuned with learning data from experiments, using the steepest descent method and back-propagation algorithm. The proposed diagnostic system consists of two parts: one judges whether the sensor is faulty and the other identifies the bias degree of the sensor. The experimental results show that the fault source and fault (bias) degree can be identified with the proposed diagnostic system, and indicate that the neuro-fuzzy strategy is an efficient and effective approach for fault diagnosis problems of the gasoline engine system.
---
paper_title: Developing a fault tolerant power‐train control system by integrating design of control and diagnostics
paper_content:
Fault detection, isolation and fault tolerant control are investigated for a spark ignition engine. Fault tolerant control refers to a strategy in which the desired stability and robustness of the control system are guaranteed in the presence of faults. In an attempt to realize fault tolerant control, a methodology for the integrated design of control and fault diagnostics is proposed. Specifically, the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing. Information obtained from integral sliding mode control and from observers with hypothesis testing is utilized so that a fault can be detected, isolated and compensated. As an application example, the air and fuel dynamics of an IC engine are considered. A mean value engine model is developed and implemented in Simulink®. The air and fuel dynamics of the engine are identified using experimental data. The proposed algorithm for the integration of control and diagnostics is then validated using the identified engine model.
---
paper_title: Improved SI engine modelling techniques with application to fault detection
paper_content:
This paper applies a fault detection strategy to a Jaguar car engine. Real data, over several engine speeds, are used to obtain an improved model of a subsystem comprising the air intake, the manifold dynamics and the engine pumping. Modelling of an air leak is also considered. Three practical fault scenarios are assessed, deemed important by the manufacturer. A nonlinear observer method is used to detect practical sensor faults and an air leak in the manifold. Isolation logic is designed with respect to a restricted set of sensor measurements. The effectiveness of both the modelling and the fault detection is assessed. A practical aim is to limit engine pollutants caused by system faults.
---
paper_title: Diagnosis of sensor bias faults
paper_content:
This paper describes a robust sensor fault diagnosis algorithm for a class of nonlinear dynamic systems. Specifically, the paper uses adaptive techniques to estimate the unknown constant sensor bias in the presence of system modeling uncertainties and sensor noise. The robustness, sensitivity and stability of the adaptive fault diagnosis architecture are rigorously established. A simulation example is presented to illustrate the use of the proposed fault diagnosis architecture to diagnose bias in an automotive Universal Exhaust Gas Oxygen sensor.
---
paper_title: Automotive engine diagnosis and control via nonlinear estimation
paper_content:
The aim of this article is to explore a possible approach to the problem of designing control and diagnostic strategies for future generations of automotive engines. The work described focuses on the use of physical models to estimate unmeasured or unmeasurable variables and parameters, to be used for control and diagnostic purposes. We introduce the mean value engine model used, present a conceptual strategy for combined control and diagnosis focusing mainly on the problem of air fuel ratio control, review basic concepts related to estimation in nonlinear systems, and propose various forms of the estimators that serve different objectives. We illustrate estimation problems in the context of a simplified engine model, assuming both linear and nonlinear measurement of oxygen concentration in the exhaust. Some simulation and experimental results related to estimation-based control and diagnosis are shown.
---
paper_title: Diagnosis of an automotive emission control system using fuzzy inference
paper_content:
Abstract Fault diagnosis of a physical plant is crucial for its healthy performance, as it could ultimately prevent catastrophic failure, help comply with environmental regulations, and enhance customer satisfaction. There exist several methods to detect and isolate incipient faults that might cause a plant’s performance to deviate from the nominal, which can be either subjective or objective. A scheme and methodology for integrating subjective (heuristic) and objective (analytical) knowledge for fault diagnosis and decision-making using fuzzy logic is demonstrated in this paper. Furthermore, the structure, challenges, and benefits of such integration are explored. Also, experimental results of the work carried out are presented.
---
paper_title: Nonlinear parity equation based residual generation for diagnosis of automotive engine faults
paper_content:
Abstract The parity equation residual generation method is a model-based fault detection and isolation scheme that has been applied with some success to the problem of monitoring the health of engineering systems. However, this scheme fails when applied to significantly nonlinear systems. This paper presents the application of a nonlinear parity equation residual generation scheme that uses forward and inverse dynamic models of nonlinear systems, to the problem of diagnosing sensor and actuator faults in an internal combustion engine, during execution of the United States Environmental Protection Agency Inspection and Maintenance 240 driving cycle. The Nonlinear AutoRegressive Moving Average Model with eXogenous inputs technique is used to identify the engine models required for residual generation. The proposed diagnostic scheme is validated experimentally and is shown to be sensitive to a number of input and sensor faults while remaining robust to the unmeasured load torque disturbance.
---
paper_title: Intake Air Path Diagnostics for Internal Combustion Engines
paper_content:
Presented is the detection, isolation, and estimation of faults that occur in the intake air path of internal combustion engines during steady state operation. The proposed diagnostic approach is based on a static air path model, which is adapted online such that the model output matches the measured output during steady state conditions. The resulting changes in the model coefficients create a vector whose magnitude and direction are used for fault detection and isolation. Fault estimation is realized by analyzing the residual between the actual sensor measurement and the output of the original (i.e., healthy) model. To identify the structure of the steady state air path model a process called system probing is developed. The proposed diagnostics algorithm is experimentally validated on the intake air path of a Ford 4.6 L V-8 engine. The specific faults to be identified include two of the most problematic faults that degrade the performance of transient fueling controllers: bias in the mass air flow sensor and a leak in the intake manifold. The selected model inputs include throttle position and engine speed, and the output is the mass air flow sensor measurement.
---
paper_title: Detection of sensor failures in automotive engines
paper_content:
The real-time application of detection filters to the diagnosis of sensor failures in automotive engine control systems is presented. The detection filter is the embodiment of a model-based failure detection and isolation (FDI) methodology, which utilizes analytical redundancy within a dynamical system to isolate the cause and location of abnormal behavior. The philosophy and essential features of FDI theory are presented, and the practical application of the method to the diagnosis of faults in some key sensors in an electronically controlled internal combustion engine is described. The experimental results presented here have been obtained on a production vehicle, and demonstrate that the real-time implementation of such detection filters is feasible, opening the way to a new generation of diagnostic strategies.
---
paper_title: Fault detection in internal combustion engines using fuzzy logic
paper_content:
Abstract In this study, a complementary fuzzy-logic-based fault diagnosis system was developed to diagnose the faults of an internal combustion engine (ICE), and the system was integrated with an engine test stand. The input variables of the fuzzy logic classifier were acquired via a data acquisition card and an RS-232 port. The rule base of this system was developed by considering theoretical knowledge, expert knowledge, and experimental results. The accuracy of the fuzzy logic classifier was tested by experimental studies which were performed under different fault conditions. Using the developed fault diagnosis system, ten general faults observed in the internal combustion engine were successfully diagnosed in real time. With these characteristics, the system could easily be used for fault diagnosis in test laboratories and in service workshops.
---
paper_title: Exhaust pressure estimation and its application to variable geometry turbine and wastegate diagnostics
paper_content:
Exhaust pressure is a critical engine parameter used to calculate engine volumetric efficiency and EGR flow rate. In this paper, exhaust pressure is estimated for an internal combustion engine equipped with a variable geometry turbocharger. A coordinate transformation is applied to generate a turbine map for estimation of the exhaust pressure. This estimation can be used to replace an expensive pressure sensor for cost saving. On the other hand, for internal combustion engines that have already installed exhaust pressure sensors, this estimation can be used to generate residual signals for model-based diagnostics. Based on the residual signals, two diagnostic methods are proposed: one based on cumulative sum algorithms and the other based on pattern recognition and neural networks. The algorithms are able to detect and isolate different failure modes for a turbocharger system.
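A short cumulative-sum (CUSUM) sketch of the residual-based detection step mentioned above: the residual between measured and estimated exhaust pressure is accumulated with a drift term and compared against a threshold; the residual statistics, drift, and threshold are illustrative assumptions.

```python
# Sketch: one-sided CUSUM on an exhaust-pressure residual (illustrative).
import numpy as np

rng = np.random.default_rng(9)
n = 1500
residual = 0.5 * rng.normal(size=n)          # measured minus estimated exhaust pressure [kPa]
residual[900:] += 1.0                        # offset appearing with, e.g., a stuck wastegate

nu, h = 0.5, 10.0                            # drift (roughly half the expected fault size) and alarm threshold
g = 0.0
for k in range(n):
    g = max(0.0, g + residual[k] - nu)       # accumulate only persistent positive deviations
    if g > h:
        print("CUSUM alarm at sample %d" % k)
        break
```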
---
paper_title: Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy: A survey and some new results
paper_content:
Abstract The paper reviews the state of the art of fault detection and isolation in automatic processes using analytical redundancy, and presents some new results. It outlines the principles and most important techniques of model-based residual generation using parameter identification and state estimation methods with emphasis upon the latest attempts to achieve robustness with respect to modelling errors. A solution to the fundamental problem of robust fault detection, providing the maximum achievable robustness by decoupling the effects of faults from each other and from the effects of modelling errors, is given. This approach not only completes the theory but is also of great importance for practical applications. For the case where the prerequisites for complete decoupling are not given, two approximate solutions—one in the time domain and one in the frequency domain—are presented, and the crossconnections to earlier approaches are evidenced. The resulting observer schemes for robust instrument fault detection, component fault detection, and actuator fault detection are briefly discussed. Finally, the basic scheme of fault diagnosis using a combination of analytical and knowledge-based redundancy is outlined.
---
paper_title: Application of an Effective Data-Driven Approach to Real-Time Fault Diagnosis in Automotive Engines
paper_content:
A dominant thrust in the modern automotive industry is the development of "smart service systems" for the comfort of customers. The current on-board diagnosis systems embedded in automobiles follow conventional rule-based diagnosis procedures, and may benefit from the introduction of sophisticated artificial intelligence and pattern recognition-based procedures in terms of diagnostic accuracy. Here, we present a mode-invariant fault diagnosis procedure that is based on a data-driven approach, and show its applicability to automotive engines. The proposed approach achieves high diagnostic accuracy by detecting faults as soon as they occur. It uses statistical hypothesis tests to detect faults, wavelet-based preprocessing of the data, and pattern recognition techniques for classifying various faults in engines. We simulate the Toyota Camry 544N Engine SIMULINK model in a real-time simulator, controlled by a prototype ECU (Electronic Control Unit). The engine model is simulated under several operating conditions (pedal angle, engine speed, etc.), pre- and post-fault data are collected for eight engine faults with different severity levels, and a database of cases is created for applying the presented approach. The results demonstrate that appealing diagnostic accuracy and fault severity estimation are possible with pattern recognition-based techniques, and, in particular, with support vector machines.
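The processing chain sketched in this abstract (wavelet preprocessing followed by a support vector machine classifier) can be illustrated with off-the-shelf libraries. In the sketch below the training data are synthetic stand-ins for the pre-/post-fault ECU signals, and the sub-band energy features are a simplification of the paper's preprocessing.

import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet sub-band as a simple feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(1)
X, y = [], []
for label, bias in [(0, 0.0), (1, 1.5)]:           # 0 = nominal, 1 = faulty (synthetic)
    for _ in range(50):
        X.append(wavelet_features(rng.normal(bias, 1.0, 256)))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))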
---
paper_title: Fault detection and isolation for an experimental internal combustion engine via fuzzy identification
paper_content:
Certain engine faults can be detected and isolated by examining the pattern of deviations of engine signals from their nominal unfailed values. In this brief paper, we show how to construct a fuzzy identifier to estimate the engine signals necessary to calculate the deviation from nominal engine behavior, so that we may determine if the engine has certain actuator and sensor "calibration faults". We compare the fuzzy identifier to a nonlinear ARMAX technique and provide experimental results showing the effectiveness of our fuzzy identification based failure detection and identification strategy.
---
paper_title: Use of Neural Networks for Modelling and Fault Detection for the Intake Manifold of a SI Engine
paper_content:
A Jaguar Car engine is used to provide data for modelling the throttle body, engine pumping and manifold body. Based on the gas law of the intake dynamics, input/output variables are identified and used to train a neural network. Various structures are compared and assessed. The best structure is then used for fault detection. A neural network observer is developed and error stability is assessed. Two fault scenarios are considered.
---
paper_title: Model-based diagnosis of an automotive engine using several types of fault models
paper_content:
Automotive engines are an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.
---
paper_title: Embedded model-based fault diagnosis for on-board diagnosis of engine control systems
paper_content:
In this paper, a model-based fault diagnosis scheme for on-board diagnosis in spark ignition (SI) engine control systems is presented. The developed fault diagnosis system fully makes use of the available control structure and is embedded into the control loops. As a result, the implementation of the diagnosis system is realized with low demands on engineering costs, computational power and memory. The developed diagnosis scheme has been successfully applied to the air intake system of an SI-engine
---
paper_title: Model-based fault detection and diagnosis with special input excitation applied to a modern diesel engine
paper_content:
Abstract Due to the rising complexity of many technical processes modern diagnosis systems have to supervise a multitude of hydraulic, mechanical, electromechanical and mechatronic components. Therefore model-based methods of fault-detection and diagnosis have been developed. These methods use mathematical process models to relate data of several measurable variables. Thus the diagnosis quality depends on the available sensor data. In order to obtain additional information with the given sensor configuration special input excitation signals can be used. This paper will describe a method to locate faults in multivariable systems using such input excitation and its application to the intake air system of a modern common rail Diesel engine. The presented method uses the knowledge of fault effects on the measured output, when the inputs are successively excited quasi-stationary, to determine the location of the fault. It has been applied successfully to differentiate air mass sensor faults from other process faults.
---
paper_title: Robust strategy for intake leakage detection in diesel engines
paper_content:
Fault detection is motivated by the need to guarantee high-performance engine behavior and to comply with environmentally-based legislative regulations. An adaptive model-based observer strategy is applied for fault detection and estimation. The leakage (hole) estimation relies on model accuracy and sensor precision. This paper provides a model-based upper bound on the leakage estimation error for threshold design, derived from a sensitivity study of the observer. The proposed approach generates a threshold based only on the available measurements, even if they are faulty. Simulation results are provided using an advanced diesel engine model developed in AMESim.
---
paper_title: Improved SI engine modelling techniques with application to fault detection
paper_content:
This paper applies a fault detection strategy to a Jaguar car engine. Real data, over several engine speeds, is used to obtain an improved model of a subsystem comprising the air intake, the manifold dynamics and the engine pumping. Modelling of an air leak is also considered. Three practical fault scenarios are assessed, deemed important by the manufacturer. A nonlinear observer method is used to detect practical sensor faults and an air leak in the manifold. Isolation logic is designed with respect to a restricted set of sensor measurements. The effectiveness of both modelling and fault detection are assessed. A practical aim is to limit engine pollutants caused by system faults.
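The isolation logic mentioned in this abstract is commonly realised by comparing the pattern of triggered residuals against a fault signature matrix. The Python sketch below shows that generic mechanism; the residuals, thresholds and signatures are illustrative assumptions and do not reproduce the Jaguar engine design.

import numpy as np

FAULTS = ["MAF sensor", "MAP sensor", "manifold leak"]          # hypothetical fault set
SIGNATURES = np.array([[1, 0, 1],                               # row i: which faults residual i reacts to
                       [0, 1, 1],
                       [1, 1, 0]])
THRESHOLDS = np.array([0.8, 0.8, 0.8])

def isolate(residuals):
    fired = (np.abs(residuals) > THRESHOLDS).astype(int)
    for j, name in enumerate(FAULTS):
        if np.array_equal(fired, SIGNATURES[:, j]):
            return name
    return "no fault" if not fired.any() else "unknown pattern"

print(isolate(np.array([1.2, 0.1, 1.5])))   # matches the first (MAF sensor) signature column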
---
paper_title: Model Based Diagnosis for the Air Intake System of the SI-Engine
paper_content:
Because of legislative regulations like OBDII, on-board diagnosis has gained much interest lately. A model based approach is suggested for the diagnosis of the air intake system of an SI-engine. Im ...
---
paper_title: Model Based Diagnosis of Leaks in the Air-Intake System of an SI-Engine
paper_content:
One important area of SI-engine diagnosis is the diagnosis of leakage in the air-intake system. This is because a leakage can cause increased emissions and drivability problems. A method for accura ...
---
paper_title: Application of a data-driven monitoring technique to diagnose air leaks in an automotive diesel engine: A case study
paper_content:
This paper presents a case study of the application of a data-driven monitoring technique to diagnose air leaks in an automotive diesel engine. Using measurement signals taken from the sensors/actuators which are present in a modern automotive vehicle, a data-driven diagnostic model is built for condition monitoring purposes. Detailed investigations have shown that measured signals taken from the experimental test-bed often contain redundant information and noise due to the nature of the process. In order to deliver a clear interpretation of these measured signals, they therefore need to undergo a 'compression' and an 'extraction' stage in the modelling process. It is at this stage that the proposed data-driven monitoring technique plays a significant role by taking only the important information of the original measured signals for fault diagnosis purposes. The status of the engine's performance is then monitored using this diagnostic model. This condition monitoring process involves two separate stages of fault detection and root-cause diagnosis. The effectiveness of this diagnostic model was validated using an experimental automotive 1.9L four-cylinder diesel engine embedded in a chassis dynamometer in an engine test-bed. Two joint diagnostic plots were used to provide an accurate and sensitive fault detection process. Using the proposed model, small air leaks in the inlet manifold plenum chamber with a diameter of 2-6 mm were accurately detected. Further analyses using contributions to the T² and Q statistics show the effect of these air leaks on fuel consumption. It was later discovered that these air leaks may contribute to an emissions fault. In comparison to existing model-based approaches, the proposed method has several benefits: (i) it makes no simplifying assumptions, as the model is built entirely from the measured signals; (ii) it is simple and straightforward; (iii) no additional hardware is required for modelling; (iv) it is a time- and cost-efficient way to deliver condition monitoring (i.e. fault diagnosis applications); (v) it is capable of pinpointing the root cause and the effect of the problem; and (vi) it is feasible to implement in practice.
---
paper_title: Using hypothesis testing theory to evaluate principles for leakage diagnosis of automotive engines
paper_content:
Two different methods for diagnosing leakages in the air path of an automotive engine are investigated. The first is based on a comparison between measured and estimated air flows. The second is based on an estimation of the leakage area. The two methods are compared by using a framework of hypothesis testing and especially the power function. The investigation is made first in theory and then also on a real engine. The conclusion is that the principle based on the estimated leakage area gives a better power function and is therefore the best choice if only leakage detection is considered. However, if also other faults need to be diagnosed, it is shown that the sensitivity to these other faults may be better with the principle based on comparison of estimated and measured air flow.
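The power function used for the comparison above can be written in closed form for the textbook case of a one-sided test on a Gaussian residual mean shift. The sketch below is that textbook illustration, with an assumed noise level, sample size and false-alarm rate rather than the engine-specific quantities from the paper.

from scipy.stats import norm

def power(shift, sigma=1.0, n=50, alpha=0.01):
    """Power of a one-sided test for a mean shift in the residual, at false-alarm level alpha."""
    threshold = norm.isf(alpha) * sigma / n ** 0.5
    return norm.sf((threshold - shift) / (sigma / n ** 0.5))

for shift in [0.0, 0.1, 0.3, 0.5]:          # leak effect expressed as the induced residual shift
    print(f"shift {shift:.1f}: power {power(shift):.3f}")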
---
paper_title: Model-based adaptive observers for intake leakage detection in diesel engines
paper_content:
This paper studies the problem of diesel engine diagnosis by means of model-based adaptive observers. The problem is motivated by the need to guarantee high-performance engine behavior and, in particular, to respect environmentally-based legislative regulations. The complexity of the intake systems of this type of engine makes this task particularly arduous and requires the engine operation to be constantly monitored and diagnosed. The development and application of two different nonlinear adaptive observers for intake leakage estimation is the goal of this work. The proposed model-based adaptive observer approach allows the estimation of a variable that is directly related to the presence of leakage, e.g., the hole radius. Monitoring and diagnostic tasks, with this kind of approach, are straightforward. Two different approaches, whose main difference lies in the structure of the observer adaptation law, are studied. One approach is based on fixed gains while the other has a variable gain. The paper also includes experimental results of the two studied methods on a four-cylinder diesel engine testbed.
---
paper_title: Model-based diagnosis of an automotive engine using several types of fault models
paper_content:
Automotive engines are an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.
---
paper_title: Diagnosis and Prognosis of Automotive Systems: motivations, history and some results
paper_content:
The field of automotive engineering has seen an explosion in the presence of on-board electronic components and systems in vehicles since the 1970s. This growth was initially motivated by the introduction of emissions regulations that led to the widespread application of electronic engine controls. A secondary but important consequence of these developments was the adoption of on-board diagnostics regulations aimed at ensuring that emission control systems would operate as intended for a prescribed period of time (or vehicle mileage). In addition, the presence of microcontrollers on board the vehicle has led to a proliferation of other functions related to safety and customer convenience, implemented through electronic systems and related software, thus creating the need for more sophisticated on-board diagnostics. Today, a significant percentage of the software code in an automobile is devoted to diagnostic functions. This paper presents an overview of diagnostic needs and requirements in the automotive industry, illustrates some of the challenges associated with satisfying these requirements, and proposes some future directions.
---
paper_title: Optical Diagnostics of Late-Injection Low-Temperature Combustion in a Heavy-Duty Diesel Engine
paper_content:
A late injection, high exhaust-gas recirculation (EGR)-rate, low-temperature combustion strategy was investigated in a heavy-duty diesel engine using a suite of optical diagnostics: chemiluminescence for visualization of ignition and combustion, laser Mie scattering for liquid fuel imaging, planar laser-induced fluorescence (PLIF) for both OH and vapor-fuel imaging, and laser-induced incandescence (LII) for soot imaging. Fuel is injected at top dead center when the in-cylinder gases are hot and dense. Consequently, the maximum liquid fuel penetration is 27 mm, which is short enough to avoid wall impingement. The cool flame starts 4.5 crank angle degrees (CAD) after the start of injection (ASI), midway between the injector and bowl-rim, and likely helps fuel to vaporize. Within a few CAD, the cool-flame combustion reaches the bowl-rim. A large premixed combustion occurs near 9 CAD ASI, close to the bowl rim. Soot is visible shortly afterwards along the walls, typically between two adjacent jets at the head vortex location. OH PLIF indicates that premixed combustion first occurs within the jet and then spreads along the bowl rim in a thin layer, surrounding soot pockets at the start of the mixing-controlled combustion phase near 17 CAD ASI. During the mixing-controlled phase, soot is not fully oxidized and is still present near the bowl-rim late in the cycle. At the end of combustion near 27 CAD ASI, averaged PLIF images indicate two separate zones. OH PLIF appears near the bowl rim, while broadband PLIF persists late in the cycle near the injector. The most likely source of broadband PLIF is unburned fuel, which indicates that the near-injector region is a potential source of unburned hydrocarbons.
---
paper_title: On Board Vehicle Diagnostics
paper_content:
Vehicle on-board diagnostics has been evolving rapidly over the last 20 years. What began as a small set of manufacturer-specific tests and communication protocols has evolved into a complex, comprehensive diagnostic system able to detect literally hundreds of failures that could cause drivability concerns or emission increases. This rapid evolution has been driven, in part, by California's OBD-II regulations as well as the need for manufacturers to provide comprehensive diagnostics to allow technicians to service the complex engine and transmission controls on today's vehicles. As the technology improves, states are relying on on-board diagnostics for Inspection/Maintenance programs in place of more costly tailpipe emission tests. The recent introduction of a standardized CAN communication link provides fast and easy access to internal control module data and opens the way for new wireless telematics technologies like prognostics and remote diagnostics. This paper explores the history and future opportunities brought about by this OBD technology.
---
paper_title: An Integrated Diagnostic Development Process for Automotive Engine Control Systems
paper_content:
Theory and applications of model-based fault diagnosis have progressed significantly in the last four decades. In addition, there has been increased use of model-based design and testing in the automotive industry to reduce design errors, perform rapid prototyping, and hardware-in-the-loop simulation (HILS). This paper presents a new model-based diagnostic development process for automotive engine control systems. This process seamlessly employs a graph-based dependency model and mathematical models for online/offline diagnosis. The hybrid method improves the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on empirical models, enables remote diagnosis, and responds to the challenges of increased system complexity. The development platform consists of an engine electronic control unit (ECU) rapid prototyping system and HILS equipment - the air intake subsystem (AIS). The diagnostic strategy is tested and validated using the HILS platform.
---
paper_title: Application of Signal Analysis and Data-driven Approaches to Fault Detection and Diagnosis in Automotive Engines
paper_content:
The modern era of sophisticated automobiles is necessitating the development of generic and automated embedded fault diagnosis tools. Future vehicles are expected to contain more than one hundred complex electronic control units (ECUs) and data acquisition systems to control and monitor a large number of system variables in real time. There exists an abundant amount of literature on fault detection and diagnosis (FDD). However, these techniques have largely been developed in isolation. In order to solve the problem of FDD in complex systems, such as modern vehicles, a hybrid methodology combining different techniques is needed. Here, we apply an approach based on signal analysis that combines various signal processing and statistical learning techniques for real-time FDD in automotive engines. The data under several scenarios is collected from an engine model running in a real-time simulator and controlled by an ECU.
---
paper_title: Application of an Effective Data-Driven Approach to Real-Time Fault Diagnosis in Automotive Engines
paper_content:
A dominant thrust in the modern automotive industry is the development of "smart service systems" for the comfort of customers. The current on-board diagnosis systems embedded in automobiles follow conventional rule-based diagnosis procedures, and may benefit from the introduction of sophisticated artificial intelligence and pattern recognition-based procedures in terms of diagnostic accuracy. Here, we present a mode-invariant fault diagnosis procedure that is based on a data-driven approach, and show its applicability to automotive engines. The proposed approach achieves high diagnostic accuracy by detecting faults as soon as they occur. It uses statistical hypothesis tests to detect faults, wavelet-based preprocessing of the data, and pattern recognition techniques for classifying various faults in engines. We simulate the Toyota Camry 544N Engine SIMULINK model in a real-time simulator, controlled by a prototype ECU (Electronic Control Unit). The engine model is simulated under several operating conditions (pedal angle, engine speed, etc.), pre- and post-fault data are collected for eight engine faults with different severity levels, and a database of cases is created for applying the presented approach. The results demonstrate that appealing diagnostic accuracy and fault severity estimation are possible with pattern recognition-based techniques, and, in particular, with support vector machines.
---
paper_title: Integration of fdi functional units into embedded tracking control loops and its application to FDI in engine control systems
paper_content:
A scheme is presented for the integration of FDI functional units into control loops embedded in mechatronic systems. The core of this FDI scheme is to design the FDI functional units by making use of the available tracking control structure. The developed FDI scheme is applied to the air-intake control system of an SI-engine
---
paper_title: Developing a fault tolerant power‐train control system by integrating design of control and diagnostics
paper_content:
Fault detection, isolation and fault tolerant control are investigated for a spark ignition engine. Fault tolerant control refers to a strategy in which the desired stability and robustness of the control system are guaranteed in the presence of faults. In an attempt to realize fault tolerant control, a methodology for integrated design of control and fault diagnostics is proposed. Specifically, the integrated design of control and diagnostics is achieved by combining the integral sliding mode control methodology and observers with hypothesis testing. Information obtained from integral sliding mode control and from observers with hypothesis testing is utilized so that a fault can be detected, isolated and compensated. As an application example, the air and fuel dynamics of an IC engine are considered. A mean value engine model is developed and implemented in Simulink®. The air and fuel dynamics of the engine are identified using experimental data. The proposed algorithm for integration of control and diagnostics is then validated using the identified engine model.
---
paper_title: Automotive engine diagnosis and control via nonlinear estimation
paper_content:
The aim of this article is to explore a possible approach to the problem of designing control and diagnostic strategies for future generations of automotive engines. The work described focuses on the use of physical models to estimate unmeasured or unmeasurable variables and parameters, to be used for control and diagnostic purposes. We introduce the mean value engine model used, present a conceptual strategy for combined control and diagnosis focusing mainly on the problem of air fuel ratio control, review basic concepts related to estimation in nonlinear systems, and propose various forms of the estimators that serve different objectives. We illustrate estimation problems in the context of a simplified engine model, assuming both linear and nonlinear measurement of oxygen concentration in the exhaust. Some simulation and experimental results related to estimation-based control and diagnosis are shown.
---
paper_title: Nonlinear parity equation based residual generation for diagnosis of automotive engine faults
paper_content:
Abstract The parity equation residual generation method is a model-based fault detection and isolation scheme that has been applied with some success to the problem of monitoring the health of engineering systems. However, this scheme fails when applied to significantly nonlinear systems. This paper presents the application of a nonlinear parity equation residual generation scheme that uses forward and inverse dynamic models of nonlinear systems, to the problem of diagnosing sensor and actuator faults in an internal combustion engine, during execution of the United States Environmental Protection Agency Inspection and Maintenance 240 driving cycle. The Nonlinear AutoRegressive Moving Average Model with eXogenous inputs technique is used to identify the engine models required for residual generation. The proposed diagnostic scheme is validated experimentally and is shown to be sensitive to a number of input and sensor faults while remaining robust to the unmeasured load torque disturbance.
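A parity-equation residual is the discrepancy between the measured output and the output reconstructed from an identified input-output model. The minimal linear (ARX-style) NumPy illustration below conveys the idea; the paper itself uses nonlinear NARMAX models, and the model coefficients and fault scenario here are assumptions.

import numpy as np

a1, b1 = 0.8, 0.5          # assumed identified model: y[k] = a1*y[k-1] + b1*u[k-1]

def residual(u, y):
    """Parity-style residual: measured output minus one-step model prediction."""
    return y[1:] - (a1 * y[:-1] + b1 * u[:-1])

rng = np.random.default_rng(2)
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = a1 * y[k - 1] + b1 * u[k - 1] + rng.normal(scale=0.05)
y[200:] += 0.8             # simulated additive sensor fault from sample 200 onwards
r = residual(u, y)
print("mean |r| before fault:", np.abs(r[:199]).mean(), "after:", np.abs(r[200:]).mean())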
---
paper_title: Embedded model-based fault diagnosis for on-board diagnosis of engine control systems
paper_content:
In this paper, a model-based fault diagnosis scheme for on-board diagnosis in spark ignition (SI) engine control systems is presented. The developed fault diagnosis system fully makes use of the available control structure and is embedded into the control loops. As a result, the implementation of the diagnosis system is realized with low demands on engineering costs, computational power and memory. The developed diagnosis scheme has been successfully applied to the air intake system of an SI-engine
---
| Title: A Survey on Diagnostic Methods for Automotive Engines
Section 1: INTRODUCTION
Description 1: An introductory section that covers the background, significance, and regulatory requirements of automotive onboard diagnostic systems.
Section 2: APPROACHES TO FAULT DETECTION AND DIAGNOSIS OF ENGINEERED SYSTEMS
Description 2: A comprehensive review of various methods available for diagnostics of engineered systems, categorized under data-driven and model-based methods.
Section 3: MONITORING REQUIREMENTS FOR AUTOMOTIVE ENGINES AND RELEVANT WORK
Description 3: Detailed description of major monitoring requirements for modern automotive engines, including their purpose, malfunction criteria, and recent work.
Section 4: DIAGNOSIS OF SENSOR FAULTS AND LEAKS IN AUTOMOTIVE ENGINES
Description 4: Discussion of sensor faults and leaks in automotive engines, including detection and diagnosis methods using both data-driven and model-based approaches.
Section 5: EMERGING AREAS OF RESEARCH IN FAULT DIAGNOSIS OF POWER TRAIN SYSTEMS
Description 5: Exploration of recent research efforts and emerging topics in diagnostic methods for advanced combustion, remote diagnostics, and integration of fault detection with engine control design.
Section 6: FUNDING
Description 6: Statement regarding the funding of the research, if any. |
A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web | 8 | ---
paper_title: The computational geowiki: what, why, and how
paper_content:
Google Maps and its spin-offs are highly successful, but they have a major limitation: users see only pictures of geographic data. These data are inaccessible except by limited vendor-defined APIs, and associated user data are weakly linked to them. But some applications require access, specifically geowikis and computational geowikis. We present the design and implementation of a computational geowiki. We also show empirically that both geowiki and computational geowiki features are necessary for a representative domain, bicycling, because (a) cyclists have useful knowledge unavailable except from cyclists and (b) cyclist-oriented automatic route-finding is enhanced by user input. Finally, we derive design implications: for example, user contributions presented within a route description are useful, and wikis should support contribution of opinion as well as fact.
---
paper_title: The wikification of GIS and its consequences: Or Angelina Jolie's new tattoo and the future of GIS
paper_content:
A method for encapsulating magnetic particles by enclosure within oil drops, mixing in an aqueous solution and dispersing the oil drops with the enclosed particles by application of an alternating magnetic field. The dispersed and oil covered particles are microencapsulated with at least one type of polymer.
---
paper_title: Using Ontologies for Integrated Geographic Information Systems
paper_content:
Today, there is a huge amount of data gathered about the Earth, not only from new spatial information systems, but also from new and more sophisticated data collection technologies. This scenario leads to a number of interesting research challenges, such as how to integrate geographic information of different kinds. The basic motivation of this paper is to introduce a GIS architecture that can enable geographic information integration in a seamless and flexible way based on its semantic value and regardless of its representation. The proposed solution is an ontology-driven geographic information system that acts as a system integrator. In this system, an ontology is a component, such as the database, cooperating to fulfill the system’s objectives. By browsing through ontologies the users can be provided with information about the embedded knowledge of the system. Special emphasis is given to the case of remote sensing systems and geographic information systems. The levels of ontologies can be used to guide processes for the extraction of more general or more detailed information. The use of multiple ontologies allows the extraction of information in different stages of classification. The semantic integration of aerial images and GIS is a crucial step towards better geospatial modeling.
---
paper_title: DBpedia: A Nucleus for a Web of Open Data
paper_content:
DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
---
paper_title: An intelligent bay geo-information retrieval approach based on Geo-ontology
paper_content:
In the era of information explosion, information retrieval has become a bottleneck in information sharing and integration. Currently, however, existing information retrieval methods are mainly based on keyword matching, which cannot fully exploit information context and potential knowledge. These methods are particularly inefficient for geospatial information, which is more complex and unstructured. Geospatial ontology (Geo-ontology) has been used to enrich geospatial objects with semantic information, which can be very helpful in geospatial information retrieval and integration, and there is a wealth of relations between objects in the bay domain (such as the relation between a bay and its inter-tidal zone). This paper therefore proposes an intelligent bay geo-information retrieval approach based on a bay geo-information ontology. First, the procedure for establishing the bay geo-information ontology database is introduced. Second, the intelligent retrieval mechanism and approach for bay geo-information are described: the user's retrieval request is converted into a semantic request, which is then mapped to concrete queries against the actual data sources by exploiting knowledge explicitly expressed in a formal ontology modelling language (OWL). Finally, the method was applied in a bay information management system, and the feasibility and efficacy of the geo-information retrieval approach are verified.
---
paper_title: Toward the semantic geospatial web
paper_content:
With the growth of the World Wide Web has come the insight that currently available methods for finding and using information on the web are often insufficient. In order to move the Web from a data repository to an information resource, a totally new way of organizing information is needed. The advent of the Semantic Web promises better retrieval methods by incorporating the data's semantics and exploiting the semantics during the search process. Such a development needs special attention from the geospatial perspective so that the particularities of geospatial meaning are captured appropriately. The creation of the Semantic Geospatial Web needs the development of multiple spatial and terminological ontologies, each with a formal semantics; the representation of those semantics such that they are available both to machines for processing and to people for understanding; and the processing of geospatial queries against these ontologies and the evaluation of the retrieval results based on the match between the semantics of the expressed information need and the available semantics of the information resources and search systems. This will lead to a new framework for geospatial information retrieval based on the semantics of spatial and terminological ontologies. By explicitly representing the role of semantics in different components of the information retrieval process (people, interfaces, search systems, and information resources), the Semantic Geospatial Web will enable users to retrieve more precisely the data they need, based on the semantics associated with these data.
---
paper_title: LinkedGeoData -- Adding a Spatial Dimension to the Web of Data
paper_content:
In order to employ the Web as a medium for data and information integration, comprehensive datasets and vocabularies are required as they enable the disambiguation and alignment of other data and information. Many real-life information integration and aggregation tasks are impossible without comprehensive background knowledge related to spatial features of the ways, structures and landscapes surrounding us. In this paper we contribute to the generation of a spatial dimension for the Data Web by elaborating on how the collaboratively collected OpenStreetMap data can be transformed and represented adhering to the RDF data model. We describe how this data can be interlinked with other spatial data sets, how it can be made accessible for machines according to the linked data paradigm and for humans by means of a faceted geo-data browser.
---
paper_title: GeoWordNet: a resource for geo-spatial applications
paper_content:
Geo-spatial ontologies provide knowledge about places in the world and spatial relations between them. They are fundamental in order to build semantic information retrieval systems and to achieve semantic interoperability in geo-spatial applications. In this paper we present GeoWordNet, a semantic resource we created from the full integration of GeoNames, other high quality resources and WordNet. The methodology we followed was largely automatic, with manual checks when needed. This allowed us to achieve, at the same time, a previously unattained level of accuracy and a very satisfactory quantitative result, both in terms of concepts and geographical entities.
---
paper_title: Geographical classification of documents using evidence from Wikipedia
paper_content:
Obtaining or approximating a geographic location for search results often motivates users to include place names and other geography-related terms in their queries. Previous work shows that queries that include geography-related terms correspond to a significant share of the users' demand. Therefore, it is important to recognize the association of documents to places in order to adequately respond to such queries. This paper describes strategies for text classification into geography-related categories, using evidence extracted from Wikipedia. We use terms that correspond to entry titles and the connections between entries in Wikipedia's graph to establish a semantic network from which classification features are generated. Results of experiments using a news data-set, classified over Brazilian states, show that such terms constitute valid evidence for the geographical classification of documents, and demonstrate the potential of this technique for text classification.
---
paper_title: Geographical Information Retrieval with Ontologies of Place
paper_content:
Geographical context is required of many information retrieval tasks in which the target of the search may be documents, images or records which are referenced to geographical space only by means of place names. Often there may be an imprecise match between the query name and the names associated with candidate sources of information. There is a need therefore for geographical information retrieval facilities that can rank the relevance of candidate information with respect to geographical closeness as well as semantic closeness with respect to the topic of interest. Here we present an ontology of place that combines limited coordinate data with qualitative spatial relationships between places. This parsimonious model of place is intended to support information retrieval tasks that may be global in scope. The ontology has been implemented with a semantic modelling system linking non-spatial conceptual hierarchies with the place ontology. A hierarchical distance measure is combined with Euclidean distance between place centroids to create a hybrid spatial distance measure. This can be combined with thematic distance, based on classification semantics, to create an integrated semantic closeness measure that can be used for a relevance ranking of retrieved objects.
---
paper_title: OpenStreetMap: User-Generated Street Maps
paper_content:
The OpenStreetMap project is a knowledge collective that provides user-generated street maps. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes. A considerable number of contributors edit the world map collaboratively using the OSM technical infrastructure, and a core group, estimated at approximately 40 volunteers, dedicate their time to creating and improving OSM's infrastructure, including maintaining the server, writing the core software that handles the transactions with the server, and creating cartographical outputs. There's also a growing community of software developers who develop software tools to make OSM data available for further use across different application domains, software platforms, and hardware devices. The OSM project's hub is the main OSM Web site.
---
paper_title: Semantically enriching VGI in support of implicit feedback analysis
paper_content:
In recent years, the proliferation of Volunteered Geographic Information (VGI) has enabled many Internet users to contribute to the construction of rich and increasingly complex spatial datasets. This growth of geo-referenced information and the often loose semantic structure of such data have resulted in spatial information overload. For this reason, a semantic gap has emerged between unstructured geo-spatial datasets and high-level ontological concepts. Filling this semantic gap can help reduce spatial information overload, therefore facilitating both user interactions and the analysis of such interaction. Implicit Feedback analysis is the focus of our work. In this paper we address this problem by proposing a system that executes spatial discovery queries. Our system combines a semantically-rich and spatially-poor ontology (DBpedia) with a spatially-rich and semantically-poor VGI dataset (OpenStreetMap). This technique differs from existing ones, such as the aggregated dataset LinkedGeoData, as it is focused on user interest analysis and takes map scale into account. System architecture, functionality and preliminary results gathered about the system performance are discussed.
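A typical spatial discovery query of the kind described in this abstract can be expressed against the public DBpedia SPARQL endpoint, selecting resources of a given ontology class and filtering them by WGS84 coordinates. The Python sketch below uses the SPARQLWrapper library; the chosen class (dbo:Museum) and the bounding box are arbitrary examples and do not reproduce the scale-aware ranking of the system described in the paper.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?place ?lat ?long WHERE {
  ?place a dbo:Museum ;
         geo:lat ?lat ;
         geo:long ?long .
  FILTER (?lat > 53.3 && ?lat < 53.4 && ?long > -6.3 && ?long < -6.2)   # rough box around Dublin
} LIMIT 20
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["place"]["value"], row["lat"]["value"], row["long"]["value"])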
---
paper_title: Core Elements of Digital Gazetteers: Placenames, Categories, and Footprints
paper_content:
The core elements of a digital gazetteer are the placename itself, the type of place it labels, and a geographic footprint representing its location and possibly its extent. Such gazetteer data is an important component of indirect geographic referencing through placenames. Based on the gazetteer development work of the Alexandria Digital Library, this paper presents the nature of placenames, and the process of assigning categories to places based on the words in the placenames and other information, and discusses the nature of georeferencing places with geographic footprints.
---
paper_title: Development and application of a metric on semantic nets
paper_content:
Motivated by the properties of spreading activation and conceptual distance, the authors propose a metric, called distance, on the power set of nodes in a semantic net. Distance is the average minimum path length over all pairwise combinations of nodes between two subsets of nodes. Distance can be successfully used to assess the conceptual distance between sets of concepts when used on a semantic net of hierarchical relations. When other kinds of relationships, like 'cause', are used, distance must be amended but then can again be effective. The judgements of distance significantly correlate with the distance judgements that people make and help to determine whether one semantic net is better or worse than another. The authors focus on the mathematical characteristics of distance that presents novel cases and interpretations. Experiments in which distance is applied to pairs of concepts and to sets of concepts in a hierarchical knowledge base show the power of hierarchical relations in representing information about the conceptual distance between concepts.
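The distance defined in this abstract — the average minimum path length over all pairwise combinations of nodes between two subsets — can be sketched directly on top of a graph library. The toy is-a hierarchy below is purely illustrative.

from itertools import product
import networkx as nx

def set_distance(graph, nodes_a, nodes_b):
    """Average shortest-path length over all pairs (a, b) with a in A and b in B."""
    lengths = [nx.shortest_path_length(graph, a, b) for a, b in product(nodes_a, nodes_b)]
    return sum(lengths) / len(lengths)

g = nx.Graph([("entity", "vehicle"), ("vehicle", "car"), ("vehicle", "bicycle"),
              ("entity", "animal"), ("animal", "dog")])
print(set_distance(g, {"car", "bicycle"}, {"dog"}))   # (4 + 4) / 2 = 4.0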
---
paper_title: DBpedia: A Nucleus for a Web of Open Data
paper_content:
DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
---
paper_title: Open Mind Common Sense: Knowledge Acquisition from the General Public
paper_content:
Open Mind Common Sense is a knowledge acquisition system designed to acquire commonsense knowledge from the general public over the web. We describe and evaluate our first fielded system, which enabled the construction of a 450,000 assertion commonsense knowledge base. We then discuss how our second-generation system addresses weaknesses discovered in the first. The new system acquires facts, descriptions, and stories by allowing participants to construct and fill in natural language templates. It employs word-sense disambiguation and methods of clarifying entered knowledge, analogical inference to provide feedback, and allows participants to validate knowledge and in turn each other.
---
paper_title: Linked Data -- The story so far
paper_content:
The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
---
paper_title: Freebase: a collaboratively created graph database for structuring human knowledge
paper_content:
Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.
---
paper_title: DBpedia Mobile: A Location-Enabled Linked Data Browser
paper_content:
In this demonstration, we present DBpedia Mobile, a location-centric DBpedia client application for mobile devices consisting of a map view and a Fresnel-based Linked Data browser. The DBpedia project extracts structured information from Wikipedia and publishes this information as Linked Data on the Web. The DBpedia dataset contains information about 2.18 million things, including almost 300,000 geographic locations. DBpedia is interlinked with various other location-related datasets. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, users can explore background information about locations and can navigate into interlinked datasets. DBpedia Mobile demonstrates that the DBpedia dataset can serve as a useful starting point to explore the Geospatial Semantic Web using a mobile device.
---
paper_title: YAGO : A Core of Semantic Knowledge Unifying WordNet and Wikipedia
paper_content:
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASWONPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
---
paper_title: Geographical Linked Data: The Administrative Geography of Great Britain on the Semantic Web
paper_content:
Ordnance Survey, the national mapping agency of Great Britain, is investigating how semantic web technologies assist its role as a geographical information provider. A major part of this work involves the development of prototype products and datasets in RDF. This article discusses the production of an example dataset for the administrative geography of Great Britain, demonstrating the advantages of explicitly encoding topological relations between geographic entities over traditional spatial queries. We also outline how these data can be linked to other datasets on the web of linked data and some of the challenges that this raises.
---
paper_title: Ontologies and knowledge bases: Towards a terminological clarification
paper_content:
The word "ontology" has recently gained good popularity within the knowledge engineering community. However, its meaning tends to remain a bit vague, as the term is used in very different ways. Limiting our attention to the various proposals made in the current debate in AI, we isolate a number of interpretations, which in our opinion deserve a suitable clarification. We elucidate the implications of such various interpretations, arguing for the need of clear terminological choices regarding the technical use of terms like "ontology", "conceptualization" and "ontological commitment". After some comments on the use of "Ontology" (with the capital "O") as a term which denotes a philosophical discipline, we analyse the possible confusion between an ontology intended as a particular conceptual framework at the knowledge level and an ontology intended as a concrete artifact at the symbol level, to be used for a given purpose. A crucial point in this clarification effort is the careful analysis of Gruber's definition of an ontology as a specification of a conceptualization.
---
paper_title: LinkedGeoData -- Adding a Spatial Dimension to the Web of Data
paper_content:
In order to employ the Web as a medium for data and information integration, comprehensive datasets and vocabularies are required as they enable the disambiguation and alignment of other data and information. Many real-life information integration and aggregation tasks are impossible without comprehensive background knowledge related to spatial features of the ways, structures and landscapes surrounding us. In this paper we contribute to the generation of a spatial dimension for the Data Web by elaborating on how the collaboratively collected OpenStreetMap data can be transformed and represented adhering to the RDF data model. We describe how this data can be interlinked with other spatial data sets, how it can be made accessible for machines according to the linked data paradigm and for humans by means of a faceted geo-data browser.
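The core of the transformation described here maps each OpenStreetMap element and its tags to RDF triples. A minimal rdflib sketch is shown below; the placeholder namespace, the class derived from the amenity tag and the example node are simplifying assumptions and do not reproduce the exact LinkedGeoData vocabulary.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

LGD = Namespace("http://example.org/vocabulary/")              # placeholder vocabulary
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")    # W3C WGS84 vocabulary

def osm_node_to_rdf(node_id, lat, lon, tags):
    g = Graph()
    node = URIRef(f"http://example.org/node/{node_id}")
    g.add((node, RDF.type, LGD[tags.get("amenity", "node").capitalize()]))
    g.add((node, GEO.lat, Literal(lat, datatype=XSD.double)))
    g.add((node, GEO.long, Literal(lon, datatype=XSD.double)))
    for key, value in tags.items():
        g.add((node, LGD[key], Literal(value)))
    return g

example = osm_node_to_rdf(12345, 51.03, 13.73, {"amenity": "pub", "name": "Example Pub"})
print(example.serialize(format="turtle"))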
---
paper_title: GeoWordNet: a resource for geo-spatial applications
paper_content:
Geo-spatial ontologies provide knowledge about places in the world and spatial relations between them. They are fundamental in order to build semantic information retrieval systems and to achieve semantic interoperability in geo-spatial applications. In this paper we present GeoWordNet, a semantic resource we created from the full integration of GeoNames, other high quality resources and WordNet. The methodology we followed was largely automatic, with manual checks when needed. This allowed us to achieve, at the same time, a previously unattained level of accuracy and a very satisfactory quantitative result, both in terms of concepts and geographical entities.
---
paper_title: Geospatial Information Bottom-Up: A Matter of Trust and Semantics
paper_content:
Geographic Information Science and business are facing a new challenge: understanding and exploiting data and services emerging from online communities. In the emerging technologies of the social web, GI user roles have switched from data consumers to data producers; the challenge, we argue, is in making this generated GI usable. As a use case we point to the increasing demand for up-to-date geographic information, combined with the high cost of maintenance, which presents serious challenges to data providers. In this paper we argue that the social web, combined with social network science, presents a unique opportunity to achieve the goal of reducing the cost of maintaining and updating geospatial data and of providing a platform for bottom-up approaches to GI. We propose to focus on web-based trust as a proxy measure for quality and to study its spatio-temporal dimensions. We also point to work on combining folksonomies with ontologies, allowing for alternative models of metadata and semantics as components of our proposed vision.
---
paper_title: Geographical classification of documents using evidence from Wikipedia
paper_content:
Obtaining or approximating a geographic location for search results often motivates users to include place names and other geography-related terms in their queries. Previous work shows that queries that include geography-related terms correspond to a significant share of the users' demand. Therefore, it is important to recognize the association of documents to places in order to adequately respond to such queries. This paper describes strategies for text classification into geography-related categories, using evidence extracted from Wikipedia. We use terms that correspond to entry titles and the connections between entries in Wikipedia's graph to establish a semantic network from which classification features are generated. Results of experiments using a news data-set, classified over Brazilian states, show that such terms constitute valid evidence for the geographical classification of documents, and demonstrate the potential of this technique for text classification.
---
paper_title: WordNet: An Electronic Lexical Database
paper_content:
A teaching device to acquaint dental students and also patients with endodontic root canal techniques performed by dentists and utilizing an electronic oscillator having a scale reading in electric current measurement and a pair of electrical circuit conductors being connected at one end to the terminals of the oscillator and the opposite ends thereof respectively being connectable to one or more small diameter metal wires which simulate dental reamers and files which are movable in root canal-simulating passages of uniform diameter complementary to that of said wires and formed in a transparent model of a human tooth including a root and cusp thereon and mounted in a transparent enclosure in which the root portion of the tooth extends with the cusp of the model extending above the upper end of the enclosure.
---
paper_title: OpenStreetMap: User-Generated Street Maps
paper_content:
The OpenStreetMap project is a knowledge collective that provides user-generated street maps. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes. A considerable number of contributors edit the world map collaboratively using the OSM technical infrastructure, and a core group, estimated at approximately 40 volunteers, dedicate their time to creating and improving OSM's infrastructure, including maintaining the server, writing the core software that handles the transactions with the server, and creating cartographical outputs. There's also a growing community of software developers who develop software tools to make OSM data available for further use across different application domains, software platforms, and hardware devices. The OSM project's hub is the main OSM Web site.
---
paper_title: WikiRelate! Computing Semantic Relatedness Using Wikipedia
paper_content:
Wikipedia provides a knowledge base for computing word relatedness in a more structured fashion than a search engine and with more coverage than WordNet. In this work we present experiments on using Wikipedia for computing semantic relatedness and compare it to WordNet on various benchmarking datasets. Existing relatedness measures perform better using Wikipedia than a baseline given by Google counts, and we show that Wikipedia outperforms WordNet when applied to the largest available dataset designed for that purpose. The best results on this dataset are obtained by integrating Google, WordNet and Wikipedia based measures. We also show that including Wikipedia improves the performance of an NLP application processing naturally occurring texts.
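The WordNet baseline that WikiRelate! is compared against typically uses path-based measures over the is-a hierarchy. That baseline (not the Wikipedia-based measure itself, which additionally needs the article and category graph) can be sketched with NLTK as below; the word pairs are arbitrary examples and the wordnet corpus must have been downloaded beforehand.

from nltk.corpus import wordnet as wn      # requires a prior nltk.download("wordnet")

def max_path_relatedness(word1, word2):
    """Best path similarity over all synset pairs of the two words (higher = more related)."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(word1)
              for s2 in wn.synsets(word2)
              if s1.path_similarity(s2) is not None]
    return max(scores) if scores else 0.0

for a, b in [("car", "automobile"), ("car", "banana")]:
    print(a, b, round(max_path_relatedness(a, b), 3))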
---
paper_title: Towards an RDF encoding of ConceptNet
paper_content:
The possibility of relying on rich background knowledge constitutes a key element for the development of more effective sentiment analysis and Semantic Web applications. In this paper, we propose to encode the wide knowledge base collected by the Open Mind Common Sense initiative into ConceptNet, in a semantics-aware format, to make it directly available for Semantic Web applications. We also discuss how the encoding of ConceptNet into RDF can be beneficial in promoting its connection with other resources, such as WordNet and the HEO ontology, to further extend its knowledge base.
---
paper_title: Semantic Wikipedia
paper_content:
Wikipedia is the world's largest collaboratively edited source of encyclopaedic knowledge. But in spite of its utility, its contents are barely machine-interpretable. Structural knowledge, e.g., about how concepts are interrelated, can neither be formally stated nor automatically processed. Also, the wealth of numerical data is only available as plain text and thus cannot be processed according to its actual meaning. We provide an extension to be integrated into Wikipedia that allows the typing of links between articles and the specification of typed data inside the articles in an easy-to-use manner. Enabling even casual users to participate in the creation of an open semantic knowledge base, Wikipedia has the chance to become a resource of semantic statements, hitherto unknown regarding size, scope, openness, and internationalisation. These semantic enhancements bring to Wikipedia the benefits of today's semantic technologies: more specific ways of searching and browsing. Also, the RDF export, which gives direct access to the formalised knowledge, opens Wikipedia up to a wide range of external applications that will be able to use it as a background knowledge base. In this paper, we present the design, implementation, and possible uses of this extension.
---
paper_title: Semantically enriching VGI in support of implicit feedback analysis
paper_content:
In recent years, the proliferation of Volunteered Geographic Information (VGI) has enabled many Internet users to contribute to the construction of rich and increasingly complex spatial datasets. This growth of geo-referenced information and the often loose semantic structure of such data have resulted in spatial information overload. For this reason, a semantic gap has emerged between unstructured geo-spatial datasets and high-level ontological concepts. Filling this semantic gap can help reduce spatial information overload, therefore facilitating both user interactions and the analysis of such interaction. Implicit Feedback analysis is the focus of our work. In this paper we address this problem by proposing a system that executes spatial discovery queries. Our system combines a semantically-rich and spatially-poor ontology (DBpedia) with a spatially-rich and semantically-poor VGI dataset (OpenStreetMap). This technique differs from existing ones, such as the aggregated dataset LinkedGeoData, as it is focused on user interest analysis and takes map scale into account. System architecture, functionality and preliminary results gathered about the system performance are discussed.
---
paper_title: Volunteered Geographic Information: the nature and motivation of produsers
paper_content:
Advances in positioning, Web mapping, cellular communications and wiki technologies have surpassed the original visions of GSDI programs around the world. By tapping the distributed knowledge, personal time and energy of volunteer contributors, GI voluntarism is beginning to relocate and redistribute selected GI productive activities from mapping agencies to networks of non-state volunteer actors. Participants in the production process are both users and producers, or ‘produsers’ to use a recent neologism. Indeed, GI voluntarism ultimately has the potential to redistribute the rights to define and judge the value of the produced geographic information and of the new production system in general. The concept and its implementation present a rich collection of both opportunities and risks now being considered by leaders of public and private mapping organizations world-wide. In this paper, the authors describe and classify both the types of people who volunteer geospatial information and the nature of their contributions. Combining empirical research dealing with the Open Source software and Wikipedia communities with input from selected national mapping agencies and private companies, the authors offer different taxonomies that can help researchers clarify what is at stake with respect to geospatial information contributors. They identify early lessons which may be drawn from this research, and suggest questions which may be posed by large mapping organizations when considering the potential opportunities and risks associated with encouraging and employing Volunteered Geographic Information in their programs.
---
paper_title: Description Logics as Ontology Languages for the Semantic Web
paper_content:
The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality.
---
paper_title: Multi-source Toponym Data Integration and Mediation for a Meta-Gazetteer Service
paper_content:
A variety of gazetteers exist based on administrative or user-contributed data. Each of these data sources has benefits for particular geographical analysis and information retrieval tasks, but none is a one-size-fits-all solution. We present a mediation framework to access and integrate distributed gazetteer resources to build a meta-gazetteer that generates augmented versions of place name information. The approach combines different aspects of place name data from multiple gazetteer sources that refer to the same geographic place and employs several similarity metrics to identify equivalent toponyms.
---
paper_title: A survey on ontology mapping
paper_content:
Ontology is increasingly seen as a key factor for enabling interoperability across heterogeneous systems and semantic web applications. Ontology mapping is required for combining distributed and heterogeneous ontologies. Developing such ontology mapping has been a core issue of recent ontology research. This paper presents ontology mapping categories, describes the characteristics of each category, compares these characteristics, and surveys tools, systems, and related work based on each category of ontology mapping. We believe this paper provides readers with a comprehensive understanding of ontology mapping and points to various research topics about the specific roles of ontology mapping.
---
paper_title: A geo-service semantic integration in Spatial Data Infrastructures
paper_content:
In this paper we focus on the semantic heterogeneity problem as one of the main challenges in current Spatial Data Infrastructures (SDIs). We first report on the state of the art in reducing such a heterogeneity in SDIs. We then consider a particular geo-service integration scenario. We discuss an approach of how to semantically coordinate geographic services, which is based on a view of the semantics of web service coordination, implemented by using the Lightweight Coordination Calculus (LCC) language. In this approach, service providers share explicit knowledge of the interactions in which their services are engaged and these models of interaction are used operationally as the anchor for describing the semantics of the interaction. We achieve web service discovery and integration by using semantic matching between particular interactions and web service descriptions. For this purpose we introduce a specific solution, called structure preserving semantic matching. We present a real world application scenario to illustrate how semantic integration of geo web services can be performed by using this approach. Finally, we provide a preliminary evaluation of the solution discussed.
---
paper_title: Using Ontologies for Integrated Geographic Information Systems
paper_content:
Today, there is a huge amount of data gathered about the Earth, not only from new spatial information systems, but also from new and more sophisticated data collection technologies. This scenario leads to a number of interesting research challenges, such as how to integrate geographic information of different kinds. The basic motivation of this paper is to introduce a GIS architecture that can enable geographic information integration in a seamless and flexible way based on its semantic value and regardless of its representation. The proposed solution is an ontology-driven geographic information system that acts as a system integrator. In this system, an ontology is a component, such as the database, cooperating to fulfill the system’s objectives. By browsing through ontologies the users can be provided with information about the embedded knowledge of the system. Special emphasis is given to the case of remote sensing systems and geographic information systems. The levels of ontologies can be used to guide processes for the extraction of more general or more detailed information. The use of multiple ontologies allows the extraction of information in different stages of classification. The semantic integration of aerial images and GIS is a crucial step towards better geospatial modeling.
---
paper_title: YAGO : A Core of Semantic Knowledge Unifying WordNet and Wikipedia
paper_content:
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASWONPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
---
paper_title: The semantics of similarity in geographic information retrieval
paper_content:
Similarity measures have a long tradition in fields such as information retrieval, artificial intelligence, and cognitive science. Within the last years, these measures have been extended and reused to measure semantic similarity; i.e., for comparing meanings rather than syntactic differences. Various measures for spatial applications have been developed, but a solid foundation for answering what they measure; how they are best applied in information retrieval; which role contextual information plays; and how similarity values or rankings should be interpreted is still missing. It is therefore difficult to decide which measure should be used for a particular application or to compare results from different similarity theories. Based on a review of existing similarity measures, we introduce a framework to specify the semantics of similarity. We discuss similarity-based information retrieval paradigms as well as their implementation in web-based user interfaces for geographic information retrieval to demonstrate the applicability of the framework. Finally, we formulate open challenges for similarity research.
---
paper_title: LinkedGeoData -- Adding a Spatial Dimension to the Web of Data
paper_content:
In order to employ the Web as a medium for data and information integration, comprehensive datasets and vocabularies are required as they enable the disambiguation and alignment of other data and information. Many real-life information integration and aggregation tasks are impossible without comprehensive background knowledge related to spatial features of the ways, structures and landscapes surrounding us. In this paper we contribute to the generation of a spatial dimension for the Data Web by elaborating on how the collaboratively collected OpenStreetMap data can be transformed and represented adhering to the RDF data model. We describe how this data can be interlinked with other spatial data sets, how it can be made accessible for machines according to the linked data paradigm and for humans by means of a faceted geo-data browser.
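As a rough illustration of the OSM-to-RDF transformation described above, the sketch below maps one OSM node and its tags to N-Triples-style statements; the URI patterns and the tag-to-class rule are assumptions for illustration, not LinkedGeoData's actual mapping scheme.

# A minimal sketch of mapping one OSM node and its tags to RDF-style triples,
# loosely following the LinkedGeoData idea described above. All URIs and the
# tag-to-class mapping are illustrative assumptions, not the project's actual scheme.
def osm_node_to_triples(node_id, lat, lon, tags):
    subj = f"<http://example.org/lgd/node{node_id}>"        # assumed URI pattern
    wgs = "http://www.w3.org/2003/01/geo/wgs84_pos#"
    rdf_type = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
    rdfs_label = "http://www.w3.org/2000/01/rdf-schema#label"
    triples = [
        f'{subj} <{wgs}lat> "{lat}" .',
        f'{subj} <{wgs}long> "{lon}" .',
    ]
    if tags.get("amenity") == "pub":                        # illustrative tag-to-class rule
        triples.append(f"{subj} <{rdf_type}> <http://example.org/lgd/ontology/Pub> .")
    if "name" in tags:
        name = tags["name"]
        triples.append(f'{subj} <{rdfs_label}> "{name}" .')
    return triples

for t in osm_node_to_triples(42, 53.349, -6.260, {"amenity": "pub", "name": "The Long Hall"}):
    print(t)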
---
paper_title: GeoWordNet: a resource for geo-spatial applications
paper_content:
Geo-spatial ontologies provide knowledge about places in the world and spatial relations between them. They are fundamental in order to build semantic information retrieval systems and to achieve semantic interoperability in geo-spatial applications. In this paper we present GeoWordNet, a semantic resource we created from the full integration of GeoNames, other high quality resources and WordNet. The methodology we followed was largely automatic, with manual checks when needed. This allowed us to achieve, at the same time, a previously unattained level of accuracy and a very satisfactory quantitative result, both in terms of concepts and geographical entities.
---
paper_title: Inferring Geographical Ontologies from Multiple Resources for Geographical Information Retrieval
paper_content:
Most of the information available in electronic format, such as in the World Wide Web or in digital libraries, involves some kind of spatial awareness. For instance, news usually describe an event and the place where this event occurred: “Earthquake in Turkey”, “Visit of the Pope in Valencia”. Currently, the Information Retrieval (IR) research community is increasing its efforts dedicated to the retrieval of geographical information, as testified by the creation of the GeoCLEF [5] evaluation exercise at CLEF 2005, recently repeated in 2006, and the advances of the SPIRIT project [6]. These efforts are aimed at the solution of typical issues of the geographical IR task. In many cases, explicit geographical information is missing from the documents; for instance, the indication of a broader geographical entity is omitted when it is supposed to be well-known to the readers (e.g. usually France is not named in news related to Paris). Another common problem is synonymy, when there are many ways to indicate a geographical entity. This is particularly true for foreign names, where spelling variations are frequent. The solution to these problems has generally been identified in the use of geographically-oriented ontologies [4, 6]. The manual construction of this kind of resources is usually a long, laborious process, and in many cases they are not freely available, such as the Getty Thesaurus of Geographical Names (TGN). In order to overcome this issue, we made some attempts [2, 3] to use the geographical information included in WordNet, the well-known general domain ontology developed at the University of Princeton [7]. Unfortunately, the quantity of geographical information included in WordNet is quite small. Although it is quite difficult to calculate the number of geographical…
---
paper_title: GeoMergeP: Geographic Information Integration through Enriched Ontology Matching
paper_content:
The combination of the use of advanced Information and Communication Technology, especially the Internet, to enable new ways of working, with the enhanced provision of information and interactive services accessible over different channels, is the foundation of a new family of information systems. In particular, this information explosion on the Web, which threatens our ability to manage information, has affected geographic information systems. Interoperability is a key word here, since it means an increasing level of cooperation between information sources on national, regional and local levels, and requires new methods to develop interoperable geographic systems. In this paper, an ontology-driven system (GeoMergeP) is described for the semantic integration of geographic information sources. In particular, we focus on how ontology matching can be enriched through the use of standards for implementing a semi-automatic matching approach. Then, the requirements and steps of the system are illustrated on the ISPRA (Italian Institute for Environmental Protection and Research) case study. Our preliminary results show that ontology matching can be improved, helping interoperating systems increase the reliability of exchanged and shared information.
---
paper_title: Geo linked data
paper_content:
Semantic Web applications that include map visualization clients are becoming common. When the description of an entity contains coordinate pairs, semantic applications often lay them as pins on maps provided by Web mapping service applications, such as Google Maps. Nowadays, semantic applications cannot guarantee that those maps provide spatial information related to the entities pinned to them. To address this issue, this paper proposes a refinement of Linked Data practices, named Geo Linked Data, which defines a lightweight semantic infrastructure to relate URIs that identify real world entities with geospatial Web resources, such as maps.
---
paper_title: Approaches to Semantic Similarity Measurement for Geo‐Spatial Data: A Survey
paper_content:
Semantic similarity is central for the functioning of semantically enabled processing of geospatial data. It is used to measure the degree of potential semantic interoperability between data or different geographic information systems (GIS). Similarity is essential for dealing with vague data queries, vague concepts or natural language and is the basis for semantic information retrieval and integration. The choice of similarity measurement influences strongly the conceptual design and the functionality of a GIS. The goal of this article is to provide a survey presentation on theories of semantic similarity measurement and review how these approaches – originally developed as psychological models to explain human similarity judgment – can be used in geographic information science. According to their knowledge representation and notion of similarity we classify existing similarity measures in geometric, feature, network, alignment and transformational models. The article reviews each of these models and outlines its notion of similarity and metric properties. Afterwards, we evaluate the semantic similarity models with respect to the requirements for semantic similarity measurement between geospatial data. The article concludes by comparing the similarity measures and giving general advice how to choose an appropriate semantic similarity measure. Advantages and disadvantages point to their suitability for different tasks.
---
paper_title: Multi-source Toponym Data Integration and Mediation for a Meta-Gazetteer Service
paper_content:
A variety of gazetteers exist based on administrative or user-contributed data. Each of these data sources has benefits for particular geographical analysis and information retrieval tasks, but none is a one-size-fits-all solution. We present a mediation framework to access and integrate distributed gazetteer resources to build a meta-gazetteer that generates augmented versions of place name information. The approach combines different aspects of place name data from multiple gazetteer sources that refer to the same geographic place and employs several similarity metrics to identify equivalent toponyms.
---
paper_title: DBpedia Mobile: A Location-Enabled Linked Data Browser
paper_content:
In this demonstration, we present DBpedia Mobile, a location-centric DBpedia client application for mobile devices consisting of a map view and a Fresnel-based Linked Data browser. The DBpedia project extracts structured information from Wikipedia and publishes this information as Linked Data on the Web. The DBpedia dataset contains information about 2.18 million things, including almost 300,000 geographic locations. DBpedia is interlinked with various other location-related datasets. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, users can explore background information about locations and can navigate into interlinked datasets. DBpedia Mobile demonstrates that the DBpedia dataset can serve as a useful starting point to explore the Geospatial Semantic Web using a mobile device.
---
paper_title: Ontology‐based retrieval of geographic information
paper_content:
Discovering and accessing suitable geographic information (GI) in the open and distributed environments of current Spatial Data Infrastructures (SDIs) is a crucial task. Catalogues provide searchable repositories of information descriptions, but the mechanisms to support GI retrieval are still insufficient. Problems of semantic heterogeneity caused by the ambiguity of natural language can arise during keyword‐based search in catalogues and when formulating a query to access the discovered data. In this paper, we present an approach to ontology‐based GI retrieval that contributes to solving existing problems of semantic heterogeneity and hides most of the complexity of the required procedure from the requester. A query language and graphical user interface allow a requester to intuitively formulate a query using a well‐known domain vocabulary. From this query, an ontology concept is derived, which is then used to search a catalogue for a data source that provides all the information required to answer the ...
---
paper_title: YAGO : A Core of Semantic Knowledge Unifying WordNet and Wikipedia
paper_content:
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASWONPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
---
paper_title: A Survey of Named Entity Recognition and Classification
paper_content:
This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.
---
paper_title: Using co‐occurrence models for placename disambiguation
paper_content:
This paper describes the generation of a model capturing information on how placenames co-occur together. The advantages of the co-occurrence model over traditional gazetteers are discussed and the problem of placename disambiguation is presented as a case study. We begin by outlining the problem of ambiguous placenames. We demonstrate how analysis of Wikipedia can be used in the generation of a co-occurrence model. The accuracy of our model is compared to a handcrafted ground truth; then we evaluate alternative methods of applying this model to the disambiguation of placenames in free text (using the GeoCLEF evaluation forum). We conclude by showing how the inclusion of placenames in both the text and geographic parts of a query provides the maximum mean average precision and outline the benefits of a co-occurrence model as a data source for the wider field of geographic information retrieval (GIR).
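A minimal sketch of the co-occurrence idea described above: each candidate referent is scored by how strongly its known co-occurring placenames overlap with the toponyms found in the document. The co-occurrence counts below are invented for illustration; in practice they would be mined from a corpus such as Wikipedia.

# Co-occurrence-based placename disambiguation: pick the candidate referent
# whose known co-occurring placenames best match the toponyms in the document.
cooccurrence = {
    "Cambridge, UK": {"Oxford": 120, "London": 300, "Ely": 45},
    "Cambridge, MA": {"Boston": 410, "Harvard": 250, "Somerville": 60},
}

def disambiguate(candidates, context_toponyms):
    def score(candidate):
        counts = cooccurrence.get(candidate, {})
        return sum(counts.get(t, 0) for t in context_toponyms)
    return max(candidates, key=score)

doc_context = ["Boston", "London", "Somerville"]
print(disambiguate(["Cambridge, UK", "Cambridge, MA"], doc_context))  # -> Cambridge, MA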
---
paper_title: Ontology-Based Spatial Query Expansion in Information Retrieval
paper_content:
Ontologies play a key role in Semantic Web research. A common use of ontologies in Semantic Web is to enrich the current Web resources with some well-defined meaning to enhance the search capabilities of existing web searching systems. This paper reports on how ontologies developed in the EU Semantic Web project SPIRIT are used to support retrieval of documents that are considered to be spatially relevant to users’ queries. The query expansion techniques presented in this paper are based on both a domain and a geographical ontology. The proposed techniques are distinguished from conventional ones in that a query is expanded by derivation of its geographical query footprint. The techniques are specially designed to resolve a query (such as castles near Edinburgh) that involves spatial terms (e.g. Edinburgh) and fuzzy spatial relationships (e.g. near) that qualify the spatial terms. Various factors are taken into account to support intelligent expansion of a spatial query, including, spatial terms as encoded in the geographical ontology, non-spatial terms as encoded in the domain ontology, as well as the semantics of the spatial relationships and their context of use. Some experiments have been carried out to evaluate the performance of the proposed techniques using sample realistic ontologies.
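To illustrate footprint-based expansion of a query such as "castles near Edinburgh", the sketch below derives a circular footprint around the anchor place and keeps only candidates inside it; the coordinates, the 30 km interpretation of "near", and the candidate list are assumptions, not SPIRIT's actual ontology or parameters.

# A minimal sketch of expanding a spatial query into a geographic footprint
# and filtering candidate results by it. Radius and coordinates are assumed.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

edinburgh = (55.953, -3.189)
near_radius_km = 30.0            # one possible interpretation of the fuzzy "near"

candidates = {
    "Edinburgh Castle": (55.949, -3.200),
    "Stirling Castle": (56.124, -3.947),
    "Tantallon Castle": (56.056, -2.651),
}
expanded = [name for name, (lat, lon) in candidates.items()
            if haversine_km(*edinburgh, lat, lon) <= near_radius_km]
print(expanded)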
---
paper_title: Approaches to disambiguating toponyms
paper_content:
Many approaches have been proposed in recent years in the context of Geographic Information Retrieval (GIR), mostly in order to deal with geographically constrained information in unstructured texts. Most of these approaches share a common scheme: in order to disambiguate a toponym t with n possible referents in a document d, they find a certain number of context toponyms c0, ..., ck that are contained in d. A score for each referent is calculated according to the context toponyms, and the referent with the highest score is selected. According to the method used to calculate the score, Toponym Disambiguation (TD) methods may be grouped into three main categories, as proposed by [7]: • map-based: methods that use an explicit representation of toponyms on a map, for instance to calculate the average distance of unambiguous context toponyms from referents; • knowledge-based: methods that exploit external knowledge sources such as gazetteers, Wikipedia or ontologies to find disambiguation clues; • data-driven or supervised: methods based on machine learning techniques.
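The "map-based" category above can be illustrated in a few lines of Python: the referent chosen is the one closest on average to the unambiguous context toponyms. Coordinates are approximate and a planar distance is used purely for brevity.

# Map-based toponym disambiguation: pick the referent with the smallest
# average distance to the unambiguous context toponyms in the document.
paris_referents = {
    "Paris, France": (48.857, 2.352),
    "Paris, Texas": (33.661, -95.556),
}
context = {"Berlin": (52.520, 13.405), "Lyon": (45.764, 4.836)}  # unambiguous toponyms

def avg_dist(referent_coord):
    lat0, lon0 = referent_coord
    return sum(((lat - lat0) ** 2 + (lon - lon0) ** 2) ** 0.5
               for lat, lon in context.values()) / len(context)

best = min(paris_referents, key=lambda name: avg_dist(paris_referents[name]))
print(best)  # -> Paris, France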
---
paper_title: Geographical classification of documents using evidence from Wikipedia
paper_content:
Obtaining or approximating a geographic location for search results often motivates users to include place names and other geography-related terms in their queries. Previous work shows that queries that include geography-related terms correspond to a significant share of the users' demand. Therefore, it is important to recognize the association of documents to places in order to adequately respond to such queries. This paper describes strategies for text classification into geography-related categories, using evidence extracted from Wikipedia. We use terms that correspond to entry titles and the connections between entries in Wikipedia's graph to establish a semantic network from which classification features are generated. Results of experiments using a news data-set, classified over Brazilian states, show that such terms constitute valid evidence for the geographical classification of documents, and demonstrate the potential of this technique for text classification.
---
paper_title: Geographical Information Retrieval with Ontologies of Place
paper_content:
Geographical context is required of many information retrieval tasks in which the target of the search may be documents, images or records which are referenced to geographical space only by means of place names. Often there may be an imprecise match between the query name and the names associated with candidate sources of information. There is a need therefore for geographical information retrieval facilities that can rank the relevance of candidate information with respect to geographical closeness as well as semantic closeness with respect to the topic of interest. Here we present an ontology of place that combines limited coordinate data with qualitative spatial relationships between places. This parsimonious model of place is intended to support information retrieval tasks that may be global in scope. The ontology has been implemented with a semantic modelling system linking non-spatial conceptual hierarchies with the place ontology. A hierarchical distance measure is combined with Euclidean distance between place centroids to create a hybrid spatial distance measure. This can be combined with thematic distance, based on classification semantics, to create an integrated semantic closeness measure that can be used for a relevance ranking of retrieved objects.
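A minimal sketch of the hybrid distance idea, combining a hierarchical distance over a toy place hierarchy with the Euclidean distance between centroids; the hierarchy, coordinates, and the equal weighting are assumptions made for illustration, not the paper's actual data or parameters.

# Hybrid spatial distance: hierarchical (ontology) distance combined with
# Euclidean distance between place centroids, on a toy place hierarchy.
parent = {"Cardiff": "Wales", "Swansea": "Wales", "Bristol": "England",
          "Wales": "UK", "England": "UK", "UK": None}
centroid = {"Cardiff": (51.48, -3.18), "Swansea": (51.62, -3.94), "Bristol": (51.45, -2.59)}

def ancestors(place):
    chain = []
    while place is not None:
        chain.append(place)
        place = parent.get(place)
    return chain

def hierarchical_distance(a, b):
    """Steps up to the lowest common ancestor, counted from both sides."""
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)
    return pa.index(common) + pb.index(common)

def euclidean(a, b):
    (x1, y1), (x2, y2) = centroid[a], centroid[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def hybrid_distance(a, b, alpha=0.5):
    return alpha * hierarchical_distance(a, b) + (1 - alpha) * euclidean(a, b)

print(hybrid_distance("Cardiff", "Swansea"))  # same parent region, nearby centroids
print(hybrid_distance("Cardiff", "Bristol"))  # different region, still close in space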
---
paper_title: Semantic Rules for Context-Aware Geographical Information Retrieval
paper_content:
Geographical information retrieval (GIR) can benefit from context information to adapt the results to a user's current situation and personal preferences. In this respect, semantics-based GIR is especially challenging because context information - such as collected from sensors - is often provided through numeric values, which need to be mapped to ontological representations based on nominal symbols. The Web Ontology Language (OWL) lacks mathematical processing capabilities that require free variables, so that even basic comparisons and distance calculations are not possible. Therefore, the context information cannot be interpreted with respect to the task and the current user's preferences. In this paper, we introduce an approach based on semantic rules that adds these processing capabilities to OWL ontologies. The task of recommending personalized surf spots based on user location and preferences serves as a case study to evaluate the capabilities of semantic rules for context-aware geographical information retrieval. We demonstrate how the Semantic Web Rule Language (SWRL) can be utilized to model user preferences and how execution of the rules successfully retrieves surf spots that match these preferences. While SWRL itself enables free variables, mathematical functions are added via built-ins - external libraries that are dynamically loaded during rule execution. Utilizing the same mechanism, we demonstrate how SWRL built-ins can query the Semantic Sensor Web to enable the consideration of real-time measurements and thus make geographical information retrieval truly context-aware.
---
paper_title: Description Logics as Ontology Languages for the Semantic Web
paper_content:
The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality.
---
paper_title: Linked Data -- The story so far
paper_content:
The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
---
paper_title: Freebase: a collaboratively created graph database for structuring human knowledge
paper_content:
Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.
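The JSON-based query style mentioned above can be illustrated as follows; since Freebase's public API has been retired, the sketch only builds the query envelope as a data structure, and the type and property names are typical examples rather than guaranteed schema identifiers.

# A minimal illustration of the MQL query style: a JSON template where null
# and [] mark the values to be filled in by the service. Names are assumed.
import json

mql_query = [{
    "type": "/music/artist",
    "name": "The Police",
    "album": []          # ask the service to fill in the list of albums
}]
print(json.dumps({"query": mql_query}, indent=2))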
---
paper_title: The semantics of similarity in geographic information retrieval
paper_content:
Similarity measures have a long tradition in fields such as information retrieval, artificial intelligence, and cognitive science. Within the last years, these measures have been extended and reused to measure semantic similarity; i.e., for comparing meanings rather than syntactic differences. Various measures for spatial applications have been developed, but a solid foundation for answering what they measure; how they are best applied in information retrieval; which role contextual information plays; and how similarity values or rankings should be interpreted is still missing. It is therefore difficult to decide which measure should be used for a particular application or to compare results from different similarity theories. Based on a review of existing similarity measures, we introduce a framework to specify the semantics of similarity. We discuss similarity-based information retrieval paradigms as well as their implementation in web-based user interfaces for geographic information retrieval to demonstrate the applicability of the framework. Finally, we formulate open challenges for similarity research.
---
paper_title: LinkedGeoData -- Adding a Spatial Dimension to the Web of Data
paper_content:
In order to employ the Web as a medium for data and information integration, comprehensive datasets and vocabularies are required as they enable the disambiguation and alignment of other data and information. Many real-life information integration and aggregation tasks are impossible without comprehensive background knowledge related to spatial features of the ways, structures and landscapes surrounding us. In this paper we contribute to the generation of a spatial dimension for the Data Web by elaborating on how the collaboratively collected OpenStreetMap data can be transformed and represented adhering to the RDF data model. We describe how this data can be interlinked with other spatial data sets, how it can be made accessible for machines according to the linked data paradigm and for humans by means of a faceted geo-data browser.
---
paper_title: GeoWordNet: a resource for geo-spatial applications
paper_content:
Geo-spatial ontologies provide knowledge about places in the world and spatial relations between them. They are fundamental in order to build semantic information retrieval systems and to achieve semantic interoperability in geo-spatial applications. In this paper we present GeoWordNet, a semantic resource we created from the full integration of GeoNames, other high quality resources and WordNet. The methodology we followed was largely automatic, with manual checks when needed. This allowed us to achieve, at the same time, a previously unattained level of accuracy and a very satisfactory quantitative result, both in terms of concepts and geographical entities.
---
paper_title: OpenStreetMap: User-Generated Street Maps
paper_content:
The OpenStreetMap project is a knowledge collective that provides user-generated street maps. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes. A considerable number of contributors edit the world map collaboratively using the OSM technical infrastructure, and a core group, estimated at approximately 40 volunteers, dedicate their time to creating and improving OSM's infrastructure, including maintaining the server, writing the core software that handles the transactions with the server, and creating cartographical outputs. There's also a growing community of software developers who develop software tools to make OSM data available for further use across different application domains, software platforms, and hardware devices. The OSM project's hub is the main OSM Web site.
---
paper_title: Semantically enriching VGI in support of implicit feedback analysis
paper_content:
In recent years, the proliferation of Volunteered Geographic Information (VGI) has enabled many Internet users to contribute to the construction of rich and increasingly complex spatial datasets. This growth of geo-referenced information and the often loose semantic structure of such data have resulted in spatial information overload. For this reason, a semantic gap has emerged between unstructured geo-spatial datasets and high-level ontological concepts. Filling this semantic gap can help reduce spatial information overload, therefore facilitating both user interactions and the analysis of such interaction. Implicit Feedback analysis is the focus of our work. In this paper we address this problem by proposing a system that executes spatial discovery queries. Our system combines a semantically-rich and spatially-poor ontology (DBpedia) with a spatially-rich and semantically-poor VGI dataset (OpenStreetMap). This technique differs from existing ones, such as the aggregated dataset LinkedGeoData, as it is focused on user interest analysis and takes map scale into account. System architecture, functionality and preliminary results gathered about the system performance are discussed.
---
paper_title: Ontology Quality by Detection of Conflicts in Metadata
paper_content:
Ontologies continue to be an important component in techniques and applications in semantic technologies. Thus, it is necessary to evaluate their quality. Our focus is the detection of conflicting information (within an ontology) as a criterion to improve the quality of an ontology. We describe different types of conflicts and propose a rule-based approach by which human experts can define conditions that signal a conflict in data. These rules (represented using RuleML) are used to automatically detect conflicts in populated ontologies. We describe a prototype application and evaluate the applicability of this approach.
---
paper_title: Modelling Ontology Evaluation and Validation
paper_content:
We present a comprehensive approach to ontology evaluation and validation, which have become a crucial problem for the development of semantic technologies. Existing evaluation methods are integrated into one single framework by means of a formal model. This model consists, firstly, of a meta-ontology called O2, that characterises ontologies as semiotic objects. Based on O2 and an analysis of existing methodologies, we identify three main types of measures for evaluation: structural measures, that are typical of ontologies represented as graphs; functional measures, that are related to the intended use of an ontology and of its components; and usability-profiling measures, that depend on the level of annotation of the considered ontology. The meta-ontology is then complemented with an ontology of ontology validation called oQual, which provides the means to devise the best set of criteria for choosing an ontology over others in the context of a given project. Finally, we provide a small example of how to apply oQual-derived criteria to a validation case.
---
paper_title: On Trusting Wikipedia
paper_content:
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
---
paper_title: YAGO : A Core of Semantic Knowledge Unifying WordNet and Wikipedia
paper_content:
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASWONPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
---
paper_title: A semiotic metrics suite for assessing the quality of ontologies
paper_content:
A suite of metrics is proposed to assess the quality of an ontology. Drawing upon semiotic theory, the metrics assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the Ontology Auditor. An initial validation of the Ontology Auditor on the DARPA Agent Markup Language (DAML) library of domain ontologies indicates that the metrics are feasible and highlights the wide variation in quality among ontologies in the library. The contribution of the research is to provide a theory-based framework that developers can use to develop high quality ontologies and that applications can use to choose appropriate ontologies for a given task.
---
paper_title: Spatial Data Quality - Problems and Prospects
paper_content:
This paper reflects upon the topic of spatial data quality and the progress made in this field over the past 20-30 years. While international standards have been established, theoretical models of error developed, new visualization techniques introduced, and metadata now routinely documented for spatial datasets, difficulties nevertheless exist with the way data quality information is being described, communicated and applied in practice by users. These problems are identified and the paper suggests how the spatial information community might move forward to overcome these obstacles.
---
paper_title: Geospatial Information Bottom-Up: A Matter of Trust and Semantics
paper_content:
Geographic Information Science and business are facing a new challenge: understanding and exploiting data and services emerging from online communities. In the emerging technologies of the social web, GI user roles have switched from data consumers to data producers; the challenge, we argue, is in making this generated GI usable. As a use case we point to the increasing demands for up-to-date geographic information which, combined with the high cost of maintenance, present serious challenges to data providers. In this paper we argue that the social web combined with social network science presents a unique opportunity to achieve the goal of reducing the cost of maintenance and update of geospatial data and providing a platform for bottom-up approaches to GI. We propose to focus on web-based trust as a proxy measure for quality and to study its spatio-temporal dimensions. We also point to work on combining folksonomies with ontologies, allowing for alternative models of metadata and semantics as components of our proposed vision.
---
paper_title: OntoQA: Metric-based ontology quality analysis
paper_content:
As the Semantic Web gains importance for sharing knowledge on the Internet, many ontologies have been developed and published in different domains. When trying to reuse existing ontologies in their applications, users are faced with the problem of determining whether an ontology is suitable for their needs. In this paper, we introduce OntoQA, an approach that analyzes ontology schemas and their populations (i.e. knowledge bases) and describes them through a well-defined set of metrics. These metrics can highlight key characteristics of an ontology schema as well as its population and enable users to make an informed decision quickly. We present an evaluation of several ontologies using these metrics to demonstrate their applicability.
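One schema-level metric in the spirit of OntoQA, relationship richness, can be sketched in a few lines; the counts are invented and the exact definitions in the paper may differ in detail.

# Relationship richness: the share of non-inheritance relationships among all
# schema relationships (object properties vs. subclass links).
def relationship_richness(num_relationships, num_subclass_links):
    total = num_relationships + num_subclass_links
    return num_relationships / total if total else 0.0

# e.g. a schema with 40 object properties and 120 subclass links
print(relationship_richness(40, 120))   # -> 0.25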
---
paper_title: Spatial Data Quality and Error Analysis Issues: GIS Functions and Environmental Modeling
paper_content:
---
paper_title: Internet encyclopaedias go head to head
paper_content:
Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds. (The results reported in this news story and their interpretation were later disputed by Encyclopaedia Britannica, and Nature responded to those objections.)
---
paper_title: Towards quality metrics for OpenStreetMap
paper_content:
Volunteered Geographic Information (VGI) is currently a "hot topic" in the GIS community. The OpenStreetMap (OSM) project is one of the most popular and well-supported examples of VGI. Traditional measures of spatial data quality are often not applicable to OSM as in many cases it is not possible to access ground-truth spatial data for all regions mapped by OSM. We investigate the development of quality measures for OSM which operate in an unsupervised manner, without reference to a "trusted" source of ground-truth data. We provide results of analysis of OSM data from several European countries. The results highlight specific quality issues in OSM. Results of comparing OSM with ground-truth data for Ireland are also presented.
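As a toy example of an unsupervised indicator of the kind discussed above, the sketch below computes the share of ways carrying a name tag in a sample region; the sample data and the choice of indicator are assumptions for illustration only, not the paper's metrics.

# One plausible unsupervised quality proxy: how many ways in a region are named.
ways = [
    {"id": 1, "tags": {"highway": "residential", "name": "Main Street"}},
    {"id": 2, "tags": {"highway": "residential"}},            # unnamed way
    {"id": 3, "tags": {"highway": "primary", "name": "N11"}},
]

named = sum(1 for w in ways if "name" in w["tags"])
naming_completeness = named / len(ways)
print(f"{naming_completeness:.0%} of ways are named")   # -> 67% of ways are named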
---
paper_title: Empirical Insights on a Value of Ontology Quality in Ontology-Driven Web Search
paper_content:
Nowadays ontologies are often used to improve search applications. Quality of ontology plays an important role in these applications. An important body of work exists in both information retrieval evaluation and ontology quality assessment areas. However, there is a lack of task- and scenario-based quality assessment methods. In this paper we discuss a framework to assess fitness of ontology for use in ontology-driven Web search. We define metrics for ontology fitness to particular search tasks and metrics for ontology capability to enhance recall and precision. Further, we discuss results of a preliminary experiment showing applicability of the proposed framework and a value of ontology quality in ontology-driven Web search.
---
paper_title: Toward a Formal Evaluation of Ontology Quality
paper_content:
---
paper_title: Consuming Multiple Linked Data Sources: Challenges and Experiences
paper_content:
Linked Data has provided the means for a large number of considerable knowledge resources to be published and interlinked utilising Semantic Web technologies. However it remains difficult to make use of this 'Web of Data' fully, due to its inherently distributed and often inconsistent nature. In this paper we introduce core challenges faced when consuming multiple sources of Linked Data, focussing in particular on the problem of querying. We compare both URI resolution and federated query approaches, and outline the experiences gained in the development of an application which utilises a hybrid approach to consume Linked Data from the unbounded web.
---
paper_title: Challenges and resources for evaluating geographical IR
paper_content:
This paper discusses evaluation of Geo-IR systems, arguing for a separate study of the different algorithmic components involved. It presents existing resources for evaluating the different components, together with a review on previous results in the area.
---
paper_title: When owl:sameAs isn't the Same: An Analysis of Identity Links on the Semantic Web
paper_content:
In Linked Data, the use of owl:sameAs is ubiquitous in ‘inter-linking’ data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of ‘identity link,’ a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
---
paper_title: What OWL Has Done for Geography and Why We Don't Need it to Map Read
paper_content:
This position paper describes our experiences of authoring a geospatial ontology in the domain of hydrology and topography, giving examples of where we find the additional functionality of OWL 1.1 useful. We also comment on approaches to combining spatial and semantic reasoning. We contend that spatial reasoning can be best achieved through a combination of semantic and spatial database technologies working together, and that spatial relationships expressed in OWL need to represent more than just spatial logics.
---
paper_title: Challenges for indexing in GIR
paper_content:
Geographic information retrieval (GIR) has been evaluated in campaigns such as GeoCLEF, GikiP, and GikiCLEF [8, 12, 11]. Surprisingly, most results from these evaluations showed that adding more geographic knowledge typically had little or no effect on performance of GIR systems or that it even decreases performance compared to traditional (textual) information retrieval baselines (see e.g. [4]). In this position paper, current challenges of how to further improve the creation, structure and access to geographic resources (for simplicity, called the geographic index in the rest of this note) are discussed. The major challenges for indexing in GIR discussed in this note are applying methods beyond named entity recognition to identify geographic references, integrating additional, proven methods from related research areas such as question answering for semantic indexing, and aiming for better index support to interpret geographic relations. After summarizing the state of the art in indexing for GIR as it has evolved from GIR evaluation campaigns, research challenges and directions for future research are presented.
---
paper_title: Evaluating GIR: geography-oriented or user-oriented?
paper_content:
Geographic terms appear very often in user queries. Many geographic references are present in query log-files. Examples found in different log files [6] include question answering style queries such as "What time is it in West Samoa?", terms within book titles like "Death in Venice", detailed street addresses and single term city names.
---
paper_title: Building place ontologies for the semantic web: issues and approaches
paper_content:
Place geo-ontologies have a key role to play in the development of the geospatial-semantic web, with regard to facilitating the search for geographical information and resources. They normally hold large amounts of geographic information and undergo a continuous process of revision and update. This paper reviews the limitations of the OWL ontology language for the representation of Place and proposes two novel approaches to frameworks that combine rules and OWL for building and managing Place ontologies.
---
paper_title: Towards quality metrics for OpenStreetMap
paper_content:
Volunteered Geographic Information (VGI) is currently a "hot topic" in the GIS community. The OpenStreetMap (OSM) project is one of the most popular and well-supported examples of VGI. Traditional measures of spatial data quality are often not applicable to OSM as in many cases it is not possible to access ground-truth spatial data for all regions mapped by OSM. We investigate the development of quality measures for OSM which operate in an unsupervised manner, without reference to a "trusted" source of ground-truth data. We provide results of analysis of OSM data from several European countries. The results highlight specific quality issues in OSM. Results of comparing OSM with ground-truth data for Ireland are also presented.
---
paper_title: The place of place in geographical IR
paper_content:
Let us define geographical IR (GIR) as the activity whose purpose is to retrieve information in a geographically-aware way. In other words, considering the geographical dimension as special. GIR presupposes two things: • the possibility to associate to (possibly retrieve from) the collection geographical information; • the existence (or the possibility of creation) of semantic repositories that allow geographical reasoning, henceforth called geo-ontologies. The most common kind of collections for GIR so far are the Web and other document collections, which are mainly textual. This paper is concerned with the non-trivial relationship between reference to place in natural language (NL) and common GIR assumptions. There are two main ways in which NL texts and GIR meet: in the attempt to derive or populate geo-ontologies from text itself, and in the attempt to label Web pages with what is called geo-scopes, deriving these from clues in the pages themselves. We will survey briefly the two, noting in passing that both approaches are bottom-up in the sense that they look at the texts, but the second makes use of a prior information source, a geo-ontology, which is typically top-down (see Geo-Net-PT01 in Table 1).
---
paper_title: Empirical Insights on a Value of Ontology Quality in Ontology-Driven Web Search
paper_content:
Nowadays ontologies are often used to improve search applications. Quality of ontology plays an important role in these applications. An important body of work exists in both information retrieval evaluation and ontology quality assessment areas. However, there is a lack of task- and scenario-based quality assessment methods. In this paper we discuss a framework to assess fitness of ontology for use in ontology-driven Web search. We define metrics for ontology fitness to particular search tasks and metrics for ontology capability to enhance recall and precision. Further, we discuss results of a preliminary experiment showing applicability of the proposed framework and a value of ontology quality in ontology-driven Web search.
---
paper_title: The digital earth: Understanding our planet in the 21st century
paper_content:
The hard part of taking advantage of this flood of geospatial information will be making sense of it: turning raw data into understandable information. Today, we often find that we have more information than we know what to do with. The Landsat program, designed to help us understand the global environment, is a good example. The Landsat satellite is capable of taking a complete photograph of the entire planet every two weeks, and it's been collecting data for more than 20 years. In spite of the great need for that information, the vast majority of those images have never fired a single neuron in a single human brain. Instead, they are stored in electronic silos of data. We used to have an agricultural policy where we stored grain in Midwestern silos and let it rot while millions of people starved to death. Now we have an insatiable hunger for knowledge. Yet a great deal of data remains unused.
---
paper_title: Citizens as Sensors: The World of Volunteered Geography
paper_content:
In recent months there has been an explosion of interest in using the Web to create, assemble, and disseminate geographic information provided voluntarily by individuals. Sites such as Wikimapia and OpenStreetMap are empowering citizens to create a global patchwork of geographic information, while Google Earth and other virtual globes are encouraging volunteers to develop interesting applications using their own data. I review this phenomenon, and examine associated issues: what drives people to do this, how accurate are the results, will they threaten individual privacy, and how can they augment more conventional sources? I compare this new phenomenon to more traditional citizen science and the role of the amateur in geographic observation.
---
paper_title: Toward the semantic geospatial web
paper_content:
With the growth of the World Wide Web has come the insight that currently available methods for finding and using information on the web are often insufficient. In order to move the Web from a data repository to an information resource, a totally new way of organizing information is needed. The advent of the Semantic Web promises better retrieval methods by incorporating the data's semantics and exploiting the semantics during the search process. Such a development needs special attention from the geospatial perspective so that the particularities of geospatial meaning are captured appropriately. The creation of the Semantic Geospatial Web needs the development of multiple spatial and terminological ontologies, each with a formal semantics; the representation of those semantics such that they are available both to machines for processing and to people for understanding; and the processing of geospatial queries against these ontologies and the evaluation of the retrieval results based on the match between the semantics of the expressed information need and the available semantics of the information resources and search systems. This will lead to a new framework for geospatial information retrieval based on the semantics of spatial and terminological ontologies. By explicitly representing the role of semantics in different components of the information retrieval process (people, interfaces, search systems, and information resources), the Semantic Geospatial Web will enable users to retrieve more precisely the data they need, based on the semantics associated with these data.
---
paper_title: Evaluating GIR: geography-oriented or user-oriented?
paper_content:
Geographic terms appear very often in user queries. Many geographic references are present in query log-files. Examples found in different log files [6] include question answering style queries such as "What time is it in West Samoa?", terms within book titles like "Death in Venice", detailed street addresses and single term city names.
---
| Title: A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web
Section 1: Introduction
Description 1: Introduce the concept of geo-wikification, the impact of neogeography on various sectors, and the emergence of the Semantic Geospatial Web. Emphasize the synergy between crowdsourcing, VGI, and the Semantic Geospatial Web, and outline the contents of the survey.
Section 2: Survey of Open Linked Geo-Knowledge Bases
Description 2: Provide a comprehensive survey of open, collaborative geo-knowledge bases, including definitions of related terms and descriptions of prominent projects such as ConceptNet, DBpedia, Freebase, GeoNames, GeoWordNet, LinkedGeoData, OpenCyc, and others. Summarize the characteristics and contributions of each knowledge base.
Section 3: Mapping, Aligning, and Merging Geo-Knowledge Bases
Description 3: Discuss the challenges and methods for integrating heterogeneous geo-knowledge bases, including ontology alignment and merging techniques. Highlight specific integration projects and methodologies.
Section 4: Ontology-powered Geographic Information Retrieval (GIR)
Description 4: Survey recent work in the use of geo-knowledge bases to enhance Geographic Information Retrieval (GIR), covering key areas such as Named Entity Recognition and Classification, toponym disambiguation, and toponym resolution. Discuss the benefits and challenges of using ontologies in GIR.
Section 5: The OSM Semantic Network
Description 5: Describe the creation and application of the OSM Semantic Network, including the development of a web crawler to extract semantic data from OpenStreetMap (OSM) and linking it to DBpedia. Discuss the significance of this resource in the context of geo-knowledge bases.
Section 6: The Quality of Crowdsourced Geo-Knowledge Bases
Description 6: Examine the methods for assessing the quality of geo-knowledge bases, particularly those that are crowdsourced. Discuss the trade-offs between coverage and precision, and present approaches for evaluating the quality of both class-level schemas and instance-level data.
Section 7: Current Limitations of Geo-Knowledge Bases and GIR
Description 7: Identify and analyze the current limitations and challenges of geo-knowledge bases and GIR, including issues of ambiguity, coverage, quality, expressivity, and complexity. Discuss the implications of these challenges for the future of geo-information systems.
Section 8: Conclusions and Future Work
Description 8: Summarize the findings of the survey and discuss the future directions for research in open geo-knowledge bases and GIR. Emphasize the importance of developing usable applications and interfaces to leverage these knowledge bases effectively. |
Tunable Microfluidic Devices for Hydrodynamic Fractionation of Cells and Beads: A Review | 11 | ---
paper_title: Microfluidics for cell separation
paper_content:
The need for efficient cell separation, an essential preparatory step in many biological and medical assays, has led to the recent development of numerous microscale separation techniques. This review describes the current state-of-the-art in microfluidics-based cell separation techniques. Microfluidics-based sorting offers numerous advantages, including reduced sample volumes, faster sample processing, high sensitivity and spatial resolution, low device cost, and increased portability. The techniques presented are broadly classified as being active or passive depending on the operating principles. The various separation principles are explained in detail along with popular examples demonstrating their application toward cell separation. Common separation metrics, including separation markers, resolution, efficiency, and throughput, of these techniques are discussed. Developing efficient microscale separation methods that offer greater control over cell population distribution will be important in realizing true point-of-care (POC) lab-on-a-chip (LOC) systems.
---
paper_title: Chip integrated strategies for acoustic separation and manipulation of cells and particles
paper_content:
Acoustic standing wave technology combined with microtechnology opens up new areas for the development of advanced particle and cell separating microfluidic systems. This tutorial review outlines the fundamental work performed on continuous flow acoustic standing wave separation of particles in macro scale systems. The transition to the microchip format is further surveyed, where both fabrication and design issues are discussed. The acoustic technology offers attractive features, such as reasonable throughput and the ability to separate particles in a size domain of about tenths of micrometers to tens of micrometers. Examples of different particle separation modes enabled in microfluidic chips utilizing standing wave technology are described, along with a discussion of several potential applications in life science research and in the medical clinic. Chip integrated acoustic standing wave separation technology is still in its infancy and it can be anticipated that new laboratory standards very well may emerge from the current research.
---
paper_title: Cell manipulation in microfluidics
paper_content:
Recent advances in the lab-on-a-chip field in association with nano/microfluidics have been made for new applications and functionalities in the fields of molecular biology, genetic analysis and proteomics, enabling the expansion of the cell biology field. Specifically, microfluidics has provided promising tools for enhancing cell biological research, since it has the ability to precisely control the cellular environment, to easily mimic heterogeneous cellular environments by multiplexing, and to analyze sub-cellular information by high-content screening assays at the single-cell level. Various cell manipulation techniques in microfluidics have been developed in accordance with specific objectives and applications. In this review, we examine the latest achievements of cell manipulation techniques in microfluidics by categorizing the externally applied forces used for manipulation: (i) optical, (ii) magnetic, (iii) electrical, (iv) mechanical and (v) other manipulations. We furthermore trace the history from which these manipulation techniques originate and also discuss future perspectives with key examples where available.
---
paper_title: Microfluidics technology for manipulation and analysis of biological cells
paper_content:
Analysis of the profiles and dynamics of molecular components and sub-cellular structures in living cells using microfluidic devices has become a major branch of bioanalytical chemistry during the past decades. Microfluidic systems have shown unique advantages in performing analytical functions such as controlled transportation, immobilization, and manipulation of biological molecules and cells, as well as separation, mixing, and dilution of chemical reagents, which enables the analysis of intracellular parameters and detection of cell metabolites, even on a single-cell level. This article provides an in-depth review on the applications of microfluidic devices for cell-based assays in recent years (2002–2005). Various cell manipulation methods for microfluidic applications, based on magnetic, optical, mechanical, and electrical principles, are described with selected examples of microfluidic devices for cell-based analysis. Microfluidic devices for cell treatment, including cell lysis, cell culture, and cell electroporation, are surveyed and their unique features are introduced. Special attention is devoted to a number of microfluidic devices for cell-based assays, including micro cytometer, microfluidic chemical cytometry, biochemical sensing chip, and whole cell sensing chip.
---
paper_title: Consideration of Nonuniformity in Elongation of Microstructures in a Mechanically Tunable Microfluidic Device for Size-Based Isolation of Microparticles
paper_content:
This paper presents an investigation of the nonlinearity behavior in deformation that appears during the linear stretching of the elastomeric microstructures in a pillar-based microfilter. Determining the impact of the nonuniformity in strain on the geometry and performance of such a planar device under in-plane stretch is the motivation for this study. A semiempirical model is used to explain the physical strain–stress behavior from the root to the tip of the micropillars in the linear arrays in the device. For microfabrication of the device, the main substrate is elastomeric polyurethane methacrylate, which is utilized in an ultraviolet-molding method. Optical imaging and scanning electron microscopy were used to evaluate the deformation of the microstructures under different loading conditions. It was demonstrated that by applying mechanical strains ($\Delta L/L_{o}$) on the elastomeric device using a modified syringe pump, the spacing of the pillars is increased effectively to about three times the size of the initial setting of 5.5 $\mu$m, which corresponds to a strain of above 180% in the absence of nonuniformity effects. This simple yet interesting behavior can be exploited to rapidly adjust a microfluidic device for application to the separation of microbeads or blood cells, which would normally require the geometrical redesign and fabrication of a new device.
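As a rough illustration of the tuning principle summarized above, the sketch below applies an idealized uniform-strain estimate of the pillar gap; the 5.5 μm initial spacing and ~180% strain are taken from the abstract, while the assumption of perfectly uniform elongation deliberately ignores the root-to-tip nonuniformity that the paper actually quantifies:

```python
def nominal_gap_um(gap0_um: float, strain: float) -> float:
    """Pillar spacing under an idealized, perfectly uniform tensile strain (strain = dL/L0)."""
    return gap0_um * (1.0 + strain)

gap0 = 5.5  # initial pillar spacing in micrometres, from the abstract above
for strain in (0.0, 0.5, 1.0, 1.8):
    print(f"strain = {strain:4.1f}  ->  nominal gap = {nominal_gap_um(gap0, strain):5.2f} um")

# At ~180% strain the idealized gap is ~15.4 um, i.e. close to three times the
# initial setting; real devices open less, because the elongation is not uniform
# from the root to the tip of the pillars -- the effect the paper quantifies.
```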
---
paper_title: Inertial microfluidics for continuous particle filtration and extraction
paper_content:
In this paper, we describe a simple passive microfluidic device with rectangular microchannel geometry for continuous particle filtration. The design takes advantage of preferential migration of particles in rectangular microchannels based on shear-induced inertial lift forces. These dominant inertial forces cause particles to move laterally and occupy equilibrium positions along the longer vertical microchannel walls. Using this principle, we demonstrate extraction of 590 nm particles from a mixture of 1.9 μm and 590 nm particles in a straight microfluidic channel with rectangular cross-section. Based on the theoretical analysis and experimental data, we describe conditions required for predicting the onset of particle equilibration in square and rectangular microchannels. The microfluidic channel design has a simple planar structure and can be easily integrated with on-chip microfluidic components for filtration and extraction of wide range of particle sizes. The ability to continuously and differentially equilibrate particles of different size without external forces in microchannels is expected to have numerous applications in filtration, cytometry, and bioseparations.
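A commonly used rule of thumb for when the shear-induced inertial focusing exploited above becomes effective is that the particle-to-channel size ratio a/Dh exceeds roughly 0.07. The sketch below (our illustration, not the authors' design calculation) evaluates that criterion and the particle Reynolds number for the two particle sizes mentioned in the abstract, assuming an illustrative 20 μm × 20 μm cross-section and channel Reynolds number:

```python
def hydraulic_diameter_um(width_um: float, height_um: float) -> float:
    return 2.0 * width_um * height_um / (width_um + height_um)

def particle_reynolds(re_channel: float, a_um: float, dh_um: float) -> float:
    # Re_p = Re_c * (a / Dh)^2, the particle Reynolds number used in inertial microfluidics
    return re_channel * (a_um / dh_um) ** 2

width_um, height_um = 20.0, 20.0   # assumed cross-section (illustrative, not from the paper)
re_channel = 30.0                  # assumed channel Reynolds number
dh = hydraulic_diameter_um(width_um, height_um)

for a in (1.9, 0.59):              # particle diameters from the abstract, in um
    ratio = a / dh
    verdict = "expected to focus" if ratio > 0.07 else "expected to stay unfocused"
    print(f"a = {a:4.2f} um: a/Dh = {ratio:.3f} ({verdict}), "
          f"Re_p = {particle_reynolds(re_channel, a, dh):.3f}")
```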
---
paper_title: A technique of optimization of microfiltration using a tunable platform
paper_content:
The optimum efficiency of size-based filtration in microfluidic devices is highly dependent on the characteristics of the design, the deformability of the microparticles/cells, and the fluid flow. The effects of filter pore size and flow rate, the two major and interrelated factors governing the separation of particles and cells, are investigated in this work. An elastomeric microfluidic device consisting of parallel arrays of pillars with mechanically tunable spacings is employed as an adjustable microfiltration platform. The tunable filtration system is used for finding the best conditions for separation of solid microbeads or deformable blood cells in a crossflow pillar-based method. It is demonstrated that increasing the flow rate in the range of 1.0–80.0 µl min⁻¹ has an adverse effect on the device performance in terms of decreased separation efficiency of deformable blood cells. However, by tuning the gap size in the range of 2.5–7.5 µm, the selectivity of the separation is controlled from about 5.0 to 95.0% for white blood cells (WBCs) and 40.0 to 95.0% for red blood cells (RBCs). Finally, the best range of trapping and passing efficiencies of ~70–80.0% simultaneously for WBCs and RBCs in whole blood samples is achieved at an optimum gap size of ~3.5–4.0 µm.
---
paper_title: Optical Manipulation of Objects and Biological Cells in Microfluidic Devices
paper_content:
In this paper, we review optical techniques used for micro-manipulation of small particles and cells in microfluidic devices. These techniques are based on the object's interaction with focused laser light (consequential forces of scattering and gradient). Inorganic objects including polystyrene spheres and organic objects including biological cells were manipulated and switched in and between fluidic channels using these forces that can typically be generated by vertical cavity surface emitting laser (VCSEL) arrays, with only a few mW optical powers. T-, Y-, and multi-layered X fluidic channel devices were fabricated by polydimethylsiloxane (PDMS) elastomer molding of channel structures over photolithographically defined patterns using a thick negative photoresist. We have also shown that this optical manipulation technique can be extended to smaller multiple objects by using an optically trapped particle as a handle, or an “optical handle”. Ultimately, optical manipulation of small particles and biological cells could have applications in biomedical devices for drug discovery, cytometry and cell biology research.
---
paper_title: Cell manipulation with magnetic particles toward microfluidic cytometry
paper_content:
Magnetic particles have become a promising tool for nearly all major lab-on-a-chip (LOC) applications, from sample capturing, purification, enrichment, and transport to detection. For biological applications, the use of magnetic particles is especially well established for immunomagnetic separation. There is a great amount of interest in the automation of cell sorting and counting with magnetic particles in LOC platforms. So far, despite great efforts, only a few fully functional LOC devices have been described and further integration is necessary. In this review, we will describe the physics of magnetic cell sorting and counting in LOC formats with a special focus on recent progress in the field.
---
paper_title: Tunable liquid-filled microlens array integrated with microfluidic network
paper_content:
An elastomer-based tunable liquid-filled microlens array integrated on top of a microfluidic network is fabricated using soft lithographic techniques. The simultaneous control of the focal length of all the microlenses composing the elastomeric array is accomplished by pneumatically regulating the pressure of the microfluidic network. A focal length tuning range of hundreds of microns to several millimeters is achieved. Such an array can be used potentially in dynamic imaging systems and adaptive optics.
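In the thin-lens limit, the pneumatic tuning described above can be pictured as changing the radius of curvature R of the bulged liquid-filled membrane; a minimal plano-convex estimate (our simplification, not the authors' optical model) is

```latex
\frac{1}{f} \;\approx\; \frac{n_{\text{liquid}} - 1}{R}
\quad\Longrightarrow\quad
f \;\approx\; \frac{R}{n_{\text{liquid}} - 1},
```

so a lower chamber pressure (flatter membrane, larger R) gives a longer focal length, consistent with the hundreds-of-microns-to-millimetres tuning range quoted above.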
---
paper_title: A tunable microfluidic-based filter modulated by pneumatic pressure for separation of blood cells
paper_content:
This article presents a new microfluidic-based filter for the separation of microbeads or blood cells with a high filtration rate. The device was composed of a circular micropump for automatic liquid transport, and a normally closed valve located at the filter zone for separation of beads or cells. The filtration mechanism was based on the tunable deformation of polydimethylsiloxane (PDMS) membranes, which defined the gap between a floating block structure and the substrate and thereby determined the maximum diameter of the beads/cells that could pass through the filter. Another unique feature of this filter is an unclogging mechanism using a suction force, resulting in a back flow that removes any trapped beads/cells in the filter zone when the PDMS membrane is restored to its initial state. The separation performance of the proposed device was first experimentally evaluated using microbeads. The results showed that this device was capable of providing size-tunable filtration with a high recovery efficiency (95.25–96.21%) for microbeads with sizes smaller than the defined gap in the filter zone. Furthermore, the proposed device was also capable of performing separation of blood cells and blood plasma from human whole blood. Experimental results showed that optimum filtration rates of 21.40 and 3.00 μl/min corresponded to high recovery efficiencies of 86.69 and 80.66%, respectively, for red blood cells (RBCs) and blood plasma. The separation method developed in this work could be used for various point-of-care diagnostic applications involving separation of plasma and blood cells.
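To first order, the pressure-controlled gap in a membrane filter of this kind can be estimated from small-deflection plate theory; the sketch below is a generic, hedged estimate with assumed dimensions and typical PDMS properties, and is not taken from the paper:

```python
def plate_center_deflection_m(p_pa: float, radius_m: float, thickness_m: float,
                              E_pa: float = 1.5e6, nu: float = 0.49) -> float:
    """Small-deflection centre displacement of a clamped circular plate under uniform
    pressure: w0 = p * a**4 / (64 * D), with flexural rigidity D = E t**3 / (12 (1 - nu**2)).
    Default E and nu are typical PDMS values (assumed, not from the paper)."""
    D = E_pa * thickness_m ** 3 / (12.0 * (1.0 - nu ** 2))
    return p_pa * radius_m ** 4 / (64.0 * D)

# Illustrative numbers only: 10 kPa of control pressure on a 200-um-radius, 50-um-thick membrane.
w0 = plate_center_deflection_m(10e3, 200e-6, 50e-6)
print(f"centre deflection ~ {w0 * 1e6:.1f} um")  # on the order of ten micrometres here
```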
---
paper_title: A pillar-based microfilter for isolation of white blood cells on elastomeric substrate
paper_content:
Our goal is to design, fabricate, and characterize a pillar-based microfluidic device for size-based separation of human blood cells on an elastomeric substrate, with application in the low-cost rapid prototyping of lab-chip devices. The single-inlet, single-outlet device uses parallel U-shaped arrays of pillars with a cutoff size of 5.5 μm for trapping white blood cells (WBCs) in a pillar chamber with an internal dead volume of less than 1.0 μl. The microstructures are designed to limit the elastomeric deformation against fluid pressures. Numerical analysis showed that at a maximum pressure loss of 15 kPa, which is lower than the device conformal bonding strength, the pillar elastomeric deformation is less than 5% for flow rates of up to 1.0 ml min⁻¹. A molding technique was employed for device prototyping using polyurethane methacrylate (PUMA) resin and a polydimethylsiloxane (PDMS) mold. Characterization of the dual-layer device with beads and blood samples was performed. Tests with blood injection showed that ∼18%–25% of WBCs are trapped and ∼84%–89% of red blood cells (RBCs) are passed at flow rates of 15–50 μl min⁻¹, with a slight decrease in WBC trapping and an improvement in RBC passage at higher flow rates. Similar results were obtained for the separation of mixed microspheres of different sizes injected at flow rates of up to 400 μl min⁻¹. Tests with blood samples stained by fluorescent gel demonstrated that the WBCs accumulate in the arrays of pillars, which eventually leads to blockage of the device. Filtration results using the elastomeric substrate show good consistency with the trend of separation efficiencies of similar silicon-based filters.
---
paper_title: Review of cell and particle trapping in microfluidic systems
paper_content:
The ability to obtain ideal conditions for well-defined chemical microenvironments and controlled temporal chemical and/or thermal variations holds promise for high-resolution cell response studies, cell-cell interactions or, e.g., proliferation conditions for stem cells. It is a major motivation for the rapid increase of lab-on-a-chip based cell biology research. In view of this, new chip-integrated technologies are at an increasing rate being presented to the research community as potential tools to offer spatial control and manipulation of cells in microfluidic systems. This is becoming a key area of interest in the emerging lab-on-a-chip based cell biology research field. This review focuses on the different technical approaches presented to enable trapping of particles and cells in microfluidic systems.
---
paper_title: Optical trapping, manipulation, and sorting of cells and colloids in microfluidic systems with diode laser bars
paper_content:
We demonstrate a new technique for trapping, sorting, and manipulating cells and micrometer-sized particles within microfluidic systems, using a diode laser bar. This approach overcomes the scaling limitations of conventional scanned laser traps, while avoiding the computational and optical complexity inherent to holographic optical trapping schemes. The diode laser bar enables us to control a large trapping zone, 1 μm by 100 μm, without the necessity of scanning or altering the phase of the beam.
---
paper_title: Deformability considerations in filtration of biological cells
paper_content:
Biological cells are highly sensitive to variation in local pressure because cellular membranes are not rigid. Unlike microbeads, cells deform under pressure or even lyse. In isolating or enriching cells by mechanical filtration, pressure-induced lysis is exacerbated when high local fluidic velocity is present or when a filter reaches its intended capacity. Microfabrication offers new possibilities to design fluidic environments to reduce cellular stress during the filtration process. We describe the underlying biophysics of cellular stress and general solutions to scale up filtration processes for biological cells.
---
paper_title: Manipulation and sorting of magnetic particles by a magnetic force microscope on a microfluidic magnetic trap platform
paper_content:
We have integrated a microfluidic magnetic trap platform with an external magnetic force microscope (MFM) cantilever. The MFM cantilever tip serves as a magnetorobotic arm that provides a translatable local magnetic field gradient to capture and move magnetic particles with nanometer precision. The MFM electronics have been programmed to sort an initially random distribution of particles by moving them within an array of magnetic trapping elements. We measured the maximum velocity at which the particles can be translated to be 2.2 mm/s ± 0.1 mm/s, which can potentially permit a sorting rate of approximately 5500 particles/min. We determined a magnetic force of 35.3 ± 2.0 pN acting on a 1 μm diameter particle by measuring the hydrodynamic drag force necessary to free the particle. Release of the particles from the MFM tip is made possible by a nitride membrane that separates the arm and magnetic trap elements from the particle solution. This platform has potential applications for magnetic-based sorting, manipulation ...
---
paper_title: Cell Separation by Non-Inertial Force Fields in Microfluidic Systems.
paper_content:
Cell and microparticle separation in microfluidic systems has recently gained significant attention in sample preparations for biological and chemical studies. Microfluidic separation is typically achieved by applying differential forces on the target particles to guide them into different paths. This paper reviews basic concepts and novel designs of such microfluidic separators with emphasis on the use of non-inertial force fields, including dielectrophoretic force, optical gradient force, magnetic force, and acoustic primary radiation force. Comparisons of separation performances with discussions on physiological effects and instrumentation issues toward point-of-care devices are provided as references for choosing appropriate separation methods for various applications.
---
paper_title: A microfabricated deformability-based flow cytometer with application to malaria
paper_content:
Malaria resulting from Plasmodium falciparum infection is a major cause of human suffering and mortality. Red blood cell (RBC) deformability plays a major role in the pathogenesis of malaria. Here we introduce an automated microfabricated "deformability cytometer" that measures dynamic mechanical responses of 10³ to 10⁴ individual RBCs in a cell population. Fluorescence measurements of each RBC are simultaneously acquired, resulting in a population-based correlation between biochemical properties, such as cell surface markers, and dynamic mechanical deformability. This device is especially applicable to heterogeneous cell populations. We demonstrate its ability to mechanically characterize a small number of P. falciparum-infected (ring stage) RBCs in a large population of uninfected RBCs. Furthermore, we are able to infer quantitative mechanical properties of individual RBCs from the observed dynamic behavior through a dissipative particle dynamics (DPD) model. These methods collectively provide a systematic approach to characterize the biomechanical properties of cells in a high-throughput manner.
---
paper_title: Cell research with physically modified microfluidic channels: A review
paper_content:
An overview of the use of physically modified microfluidic channels towards cell research is presented. The physical modification can be realized by combining the channel with either embedded physical micro/nanostructures or a topographically patterned substrate at the micro- or nanoscale. After a brief description of the background and the importance of the physically modified microfluidic system, various fabrication methods are described based on the materials and geometries of the physical structures and channels. Of the many operational principles for microfluidics (electrical, magnetic, optical, mechanical, and so on), this review primarily focuses on mechanical operation principles aided by structural modification of the channels. The mechanical forces are classified into (i) hydrodynamic, (ii) gravitational, (iii) capillary, (iv) wetting, and (v) adhesion forces. Throughout this review, we will specify examples where necessary and provide trends and future directions in the field.
---
paper_title: Elastomer based tunable optofluidic devices
paper_content:
The synergetic integration of photonics and microfluidics has enabled a wide range of optofluidic devices that can be tuned based on various physical mechanisms. One such tuning mechanism can be realized based on the elasticity of polydimethylsiloxane (PDMS). The mechanical tuning of these optofluidic devices was achieved by modifying the geometry of the device upon applying internal or external forces. External or internal forces can deform the elastomeric components that in turn can alter the optical properties of the device or directly induce flow. In this review, we discuss recent progress in tunable optofluidic devices, where tunability is enabled by the elasticity of the construction material. Different subtypes of such tuning methods will be summarized, namely tuning based on bulk or membrane deformations, and pneumatic actuation.
---
paper_title: Mechanically tunable optofluidic distributed feedback dye laser
paper_content:
We demonstrated a continuously tunable optofluidic distributed feedback (DFB) dye laser on a monolithic poly(dimethylsiloxane) (PDMS) chip. We obtained a ∼60 nm tuning range by mechanically varying the grating period. Single-mode operation was maintained with <0.1 nm linewidth.
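The mechanical tuning reported above follows directly from the Bragg condition of a DFB cavity: stretching the chip changes the grating period Λ and hence the lasing wavelength. Restating the standard relation (not a result specific to this paper),

```latex
\lambda_m \;=\; \frac{2\, n_{\mathrm{eff}}\, \Lambda}{m},
\qquad
\frac{\Delta\lambda}{\lambda} \;\approx\; \frac{\Delta\Lambda}{\Lambda}
\quad \text{(fixed $n_{\mathrm{eff}}$ and order $m$)},
```

so an elongation of roughly 10% shifts the emission by about 10% of the wavelength, which for visible dye emission is on the order of the ~60 nm range quoted above.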
---
paper_title: A Mechanically Tunable Microfluidic Cell-Trapping Device.
paper_content:
Controlled manipulation, such as isolation, positioning, and trapping of cells, is important in basic biological research and clinical diagnostics. Micro/nanotechnologies have been enabling more effective and efficient cell trapping than possible with conventional platforms. Currently available micro/nanoscale methods for cell trapping, however, still lack flexibility in precisely controlling the number of trapped cells. We exploited the large compliance of elastomers to create an array of cell-trapping microstructures, whose dimensions can be mechanically modulated by inducing uniformly distributed strain via application of external force on the chip. The device consists of two elastomer polydimethylsiloxane (PDMS) sheets, one of which bears dam-like, cup-shaped geometries to physically capture cells. The mechanical modulation is used to tune the characteristics of cell trapping to capture a predetermined number of cells, from single cells to multiple cells. Thus, enhanced utility and flexibility for practical applications can be attained, as demonstrated by tunable trapping of MCF-7 cells, a human breast cancer cell line.
---
paper_title: Tuneable separation in elastomeric microfluidics devices.
paper_content:
We describe how the elastomeric properties of PDMS (polydimethylsiloxane) can be utilised to achieve tuneable particle separation in Deterministic Lateral Displacement devices via strain controlled alteration of inter-obstacle distances, a development that opens up new avenues toward more effective separation of particles in microfluidics devices.
---
paper_title: Electrokinetic motion of particles and cells in microchannels
paper_content:
This paper provides an overview of the electrokinetic phenomena associated with particles and cells in microchannel systems. The most important phenomena covered include electrophoresis, dielectrophoresis, and induced-charge electrokinetics. The latest developments of these electrokinetic techniques for particle or cell manipulation in microfluidic systems are reviewed, in terms of the basic theories, mathematical models, numerical and experimental methods, and the key results/findings from the published literature in the most recent decades. Some of the limitations associated with the negative field effects are discussed and perspectives for future investigations are summarized.
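For orientation, the electrophoretic and electroosmotic velocities underlying most of the phenomena surveyed above are commonly estimated with the Helmholtz–Smoluchowski relations (standard expressions, restated here rather than taken from the paper):

```latex
u_{\mathrm{EP}} \;=\; \frac{\varepsilon\, \zeta_{p}\, E}{\mu},
\qquad
u_{\mathrm{EO}} \;=\; -\frac{\varepsilon\, \zeta_{w}\, E}{\mu},
```

where ε is the permittivity of the medium, ζ_p and ζ_w are the particle and wall zeta potentials, E is the applied field, and μ is the dynamic viscosity.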
---
paper_title: Deformability-based cell classification and enrichment using inertial microfluidics
paper_content:
The ability to detect and isolate rare target cells from heterogeneous samples is in high demand in cell biology research, immunology, tissue engineering and medicine. Techniques allowing label-free cell enrichment or detection are especially important to reduce the complexity and costs towards clinical applications. Single-cell deformability has recently been recognized as a unique label-free biomarker for cell phenotype with implications for assessment of cancer invasiveness. Using a unique combination of fluid dynamic effects in a microfluidic system, we demonstrate high-throughput continuous label-free cell classification and enrichment based on cell size and deformability. The system takes advantage of a balance between deformability-induced and inertial lift forces as cells travel in a microchannel flow. Particles and droplets with varied elasticity and viscosity were found to have separate lateral dynamic equilibrium positions due to this balance of forces. We applied this system to successfully classify various cell types using cell size and deformability as distinguishing markers. Furthermore, using differences in dynamic equilibrium positions, we adapted the system to conduct passive, label-free and continuous cell enrichment based on these markers, enabling off-chip sample collection without significant gene expression changes. The presented method has practical potential for high-throughput deformability measurements and cost-effective cell separation to obtain viable target cells of interest in cancer research, immunology, and regenerative medicine.
---
paper_title: Inertial microfluidics for continuous particle separation in spiral microchannels.
paper_content:
In this work we report on a simple inertial microfluidic device that achieves continuous multi-particle separation using the principle of Dean-coupled inertial migration in spiral microchannels. The dominant inertial forces coupled with the Dean rotational force due to the curvilinear microchannel geometry cause particles to occupy a single equilibrium position near the inner microchannel wall. The position at which particles equilibrate is dependent on the ratio of the inertial lift to Dean drag forces. Using this concept, we demonstrate, for the first time, a spiral lab-on-a-chip (LOC) for size-dependent focusing of particles at distinct equilibrium positions across the microchannel cross-section from a multi-particle mixture. The individual particle streams can be collected with an appropriately designed outlet system. To demonstrate this principle, a 5-loop Archimedean spiral microchannel with a fixed width of 500 μm and a height of 130 μm was used to simultaneously and continuously separate 10 μm, 15 μm, and 20 μm polystyrene particles. The device exhibited 90% separation efficiency. The versatility of the device was demonstrated by separating neuroblastoma and glioma cells with 80% efficiency and high relative viability (>90%). The achieved throughput of approximately 1 million cells/min is substantially higher than the sorting rates reported by other microscale sorting methods and is comparable to the rates obtained with commercial macroscale flow cytometry techniques. The simple planar structure and high throughput offered by this passive microfluidic approach make it attractive for LOC devices in biomedical and environmental applications.
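The balance invoked above between inertial lift and Dean drag is usually parameterized through the Dean number; the sketch below evaluates it for the channel cross-section given in the abstract, with the flow speed and loop radius of curvature treated as assumed, illustrative values:

```python
import math

def dean_number(re: float, dh_m: float, radius_m: float) -> float:
    """De = Re * sqrt(Dh / (2 R)) for pressure-driven flow in a curved channel."""
    return re * math.sqrt(dh_m / (2.0 * radius_m))

width, height = 500e-6, 130e-6                 # channel cross-section quoted in the abstract, m
dh = 2.0 * width * height / (width + height)   # hydraulic diameter
u_mean = 0.5                                   # assumed mean velocity, m/s (illustrative)
nu_water = 1e-6                                # kinematic viscosity of water, m^2/s
loop_radius = 5e-3                             # assumed loop radius of curvature, m

re = u_mean * dh / nu_water
print(f"Dh = {dh*1e6:.0f} um, Re = {re:.0f}, De = {dean_number(re, dh, loop_radius):.1f}")
```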
---
paper_title: Cellular enrichment through microfluidic fractionation based on cell biomechanical properties
paper_content:
The biomechanical properties of populations of diseased cells are shown to have differences from healthy populations of cells, yet the overlap of these biomechanical properties can limit their use in disease cell enrichment and detection. We report a new microfluidic cell enrichment technology that continuously fractionates cells through differences in biomechanical properties, resulting in highly pure cellular subpopulations. Cell fractionation is achieved in a microfluidic channel with an array of diagonal ridges that are designed to segregate biomechanically distinct cells to different locations in the channel. Due to the imposition of elastic and viscous forces during cellular compression, which are a function of cell biomechanical properties including size and viscoelasticity, larger, stiffer and less viscous cells migrate parallel to the diagonal ridges and exhibit positive lateral displacement. On the other hand, smaller, softer and more viscous cells migrate perpendicular to the diagonal ridges due to circulatory flow induced by the ridges and exhibit negative lateral displacement. Multiple outlets are then utilized to collect cells with finer gradation of differences in cell biomechanical properties. The result is that cell fractionation dramatically improves cell separation efficiency compared to binary outputs and enables the measurement of subtle biomechanical differences within a single cell type. As a proof-of-concept demonstration, we mix two different leukemia cell lines (K562 and HL60) and utilize cell fractionation to achieve over 45-fold enhancement of cell populations, with high-purity cellular enrichment (90% to 99%) of each cell line. In addition, we demonstrate cell fractionation of a single cell type (K562 cells) into subpopulations and characterize the variations of biomechanical properties of the separated cells with atomic force microscopy. These results will be beneficial for obtaining label-free separation of cellular mixtures, and for better investigating the origins of biomechanical differences in a single cell type.
---
paper_title: A tunable micro filter modulated by pneumatic pressure for cell separation
paper_content:
This study reports a new microfluidic-based filter for size-tunable separation of microbeads or cells. The filtration separation mechanism is based on the pneumatically tunable deformation of polydimethylsiloxane (PDMS) membranes, which block the fluid channel to a varied degree. This defines the dimensions of the open area of the fluid channel and thus determines the maximum diameter of the microbeads or cells which can pass through. The proposed device incorporates pneumatic micropumps for automatic liquid handling. Another unique feature of this filter is an unclogging mechanism using a back-flush operating mode, by which a reverse-directional flow is utilized to flush the clogged filter zone. The separation performance of the proposed device has been experimentally evaluated. Results show that this developed device is able to provide precise size-dependent filtration, with a high passage efficiency (82–89%) for microbeads with sizes smaller than the defined void space in the filter zone. Also, the proposed separation mechanism is capable of providing a reasonable filtration rate (14.9–3.3 μl/min). Furthermore, the separation of chondrocytes from a 30 μl suspension of enzymatically digested tissue is successfully demonstrated, showing an excellent cell passage efficiency of 93% and a cell viability of 96%. The proposed device is therefore capable of performing cell separation in situations where either the harvested specimen is limited or the sample cell content is sparse. It also paves a new route to delicately separate or isolate cells in a simple and controllable manner.
---
paper_title: Microfluidics for flow cytometric analysis of cells and particles.
paper_content:
This review describes recent developments in microfabricated flow cytometers and related microfluidic devices that can detect, analyze, and sort cells or particles. The high-speed analytical capabilities of flow cytometry depend on the cooperative use of microfluidics, optics and electronics. Along with the improvement of other components, replacement of conventional glass capillary-based fluidics with microfluidic sample handling systems operating in microfabricated structures enables volume- and power-efficient, inexpensive and flexible analysis of particulate samples. In this review, we present various efforts that take advantage of novel microscale flow phenomena and microfabrication techniques to build microfluidic cell analysis systems.
---
paper_title: Particle focusing in microfluidic devices
paper_content:
Focusing particles (both biological and synthetic) into a tight stream is usually a necessary step prior to counting, detecting, and sorting them. The various particle focusing approaches in microfluidic devices may be conveniently classified as sheath flow focusing and sheathless focusing. Sheath flow focusers use one or more sheath fluids to pinch the particle suspension and thus focus the suspended particles. Sheathless focusers typically rely on a force to manipulate particles laterally to their equilibrium positions. This force can be either externally applied or internally induced by channel topology. Therefore, the sheathless particle focusing methods may be further classified as active or passive by the nature of the forces involved. The aim of this article is to introduce and discuss the recent developments in both sheath flow and sheathless particle focusing approaches in microfluidic devices.
---
paper_title: Mechanical characterization of bulk Sylgard 184 for microfluidics and microengineering
paper_content:
Polydimethylsiloxane (PDMS) elastomers are extensively used for soft lithographic replication of microstructures in microfluidic and micro-engineering applications. Elastomeric microstructures are commonly required to fulfil an explicit mechanical role and accordingly their mechanical properties can critically affect device performance. The mechanical properties of elastomers are known to vary with both curing and operational temperatures. However, even for the elastomer most commonly employed in microfluidic applications, Sylgard 184, only a very limited range of data exists regarding the variation in mechanical properties of bulk PDMS with curing temperature. We report an investigation of the variation in the mechanical properties of bulk Sylgard 184 with curing temperature, over the range 25 °C to 200 °C. PDMS samples for tensile and compressive testing were fabricated according to ASTM standards. Data obtained indicates variation in mechanical properties due to curing temperature for Young's modulus of 1.32–2.97 MPa, ultimate tensile strength of 3.51–7.65 MPa, compressive modulus of 117.8–186.9 MPa and ultimate compressive strength of 28.4–51.7 GPa in a range up to 40% strain and hardness of 44–54 ShA.
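For convenience, the property ranges reported above can be collected in a small lookup structure; the snippet below only restates the figures from the abstract (the mapping of each extreme to a specific curing temperature is deliberately not asserted here):

```python
# Property ranges for bulk Sylgard 184 over 25-200 C curing temperatures, as reported above.
SYLGARD_184_RANGES = {
    "youngs_modulus_MPa":            (1.32, 2.97),
    "ultimate_tensile_strength_MPa": (3.51, 7.65),
    "compressive_modulus_MPa":       (117.8, 186.9),
    "hardness_ShA":                  (44, 54),
}

for prop, (low, high) in SYLGARD_184_RANGES.items():
    print(f"{prop}: {low} - {high}")
```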
---
paper_title: Hydrodynamic filtration for on-chip particle concentration and classification utilizing microfluidics
paper_content:
We propose here a new method for continuous concentration and classification of particles in microfluidic devices, named hydrodynamic filtration. When a particle is flowing in a microchannel, the center position of the particle cannot come closer to the sidewalls than a certain distance, which is equal to the particle radius. The proposed method utilizes this fact, and is performed using a microchannel having multiple side branch channels. By withdrawing a small amount of liquid repeatedly from the main stream through the side channels, particles are concentrated and aligned onto the sidewalls. Then the concentrated and aligned particles can be collected according to size through other side channels (selection channels) downstream of the microchannel. Therefore, continuous introduction of a particle suspension into the microchannel enables both particle concentration and classification at the same time. In this method, the flow profile inside a precisely fabricated microchannel determines the size limit of the filtered substances, so the filtration can be performed even when the channel widths are much larger than the particle size, without the problem of channel clogging. In this study, concentrations of polymer microspheres with diameters of 1–3 µm were increased 20–50-fold, and they were collected independently according to size. In addition, selective enrichment of leukocytes from blood was successfully performed.
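The cut-off size of one hydrodynamic-filtration branch can be sketched from the fraction of the main-channel flow withdrawn through it; the helper below assumes a simplified 2D plane-Poiseuille profile across the channel width (the real device has a 3D profile, so this is an illustration of the principle, not the authors' design formula):

```python
def cutoff_diameter_um(side_flow_fraction: float, channel_width_um: float) -> float:
    """Cut-off particle diameter for a side branch in hydrodynamic filtration,
    assuming a 2D plane-Poiseuille profile across the main-channel width W.
    The flow fraction carried between the wall (s = 0) and the dividing streamline
    at s = y*/W is f(s) = 3 s**2 - 2 s**3; particles whose radius exceeds y* cannot
    place their centre inside that layer and therefore cannot enter the branch."""
    f = lambda s: 3.0 * s ** 2 - 2.0 * s ** 3
    lo, hi = 0.0, 0.5
    for _ in range(60):                       # bisection for f(s) = side_flow_fraction
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < side_flow_fraction else (lo, mid)
    s_star = 0.5 * (lo + hi)
    return 2.0 * s_star * channel_width_um    # cut-off diameter = 2 * y*

# Example (illustrative numbers): withdrawing 1% of the flow from a 20-um-wide channel.
print(f"cut-off diameter ~ {cutoff_diameter_um(0.01, 20.0):.2f} um")
```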
---
paper_title: Free Flow Acoustophoresis: Microfluidic-Based Mode of Particle and Cell Separation
paper_content:
A novel method, free flow acoustophoresis (FFA), capable of continuous separation of mixed particle suspensions into multiple outlet fractions is presented. Acoustic forces are utilized to separate particles based on their size and density. The method is shown to be suitable for both biological and nonbiological suspended particles. The microfluidic separation chips were fabricated using conventional microfabrication methods. Particle separation was accomplished by combining laminar flow with the axial acoustic primary radiation force in an ultrasonic standing wave field. Dissimilar suspended particles flowing through the 350-μm-wide channel were thereby laterally translated to different regions of the laminar flow profile, which was split into multiple outlets for continuous fraction collection. Using four outlets, a mixture of 2-, 5-, 8-, and 10-μm polystyrene particles was separated with between 62 and 94% of each particle size ending up in separate fractions. Using three outlets and three particle sizes (3, 7, and 10 μm) the corresponding results ranged between 76 and 96%. It was also proven possible to separate normally acoustically inseparable particle types by manipulating the density of the suspending medium with cesium chloride. The medium manipulation, in combination with FFA, was further used to enable the fractionation of red cells, platelets, and leukocytes. The results show that free flow acoustophoresis can be used to perform complex separation tasks, thereby offering an alternative to expensive and time-consuming methods currently in use.
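The direction in which a particle moves in the standing-wave field above is set by its acoustic contrast factor; the helper below evaluates the standard prefactor of the primary radiation force, using typical literature values for polystyrene in water rather than figures from the paper:

```python
def acoustic_contrast_factor(rho_p: float, rho_f: float,
                             kappa_p: float, kappa_f: float) -> float:
    """Contrast factor Phi in the standard primary radiation force expression
    F = 4*pi*Phi*k*a**3*E_ac*sin(2*k*z); Phi > 0 means migration toward pressure nodes."""
    rho_term = (5.0 * rho_p - 2.0 * rho_f) / (2.0 * rho_p + rho_f)
    return (rho_term - kappa_p / kappa_f) / 3.0

# Typical (assumed) values: polystyrene particles suspended in water.
phi = acoustic_contrast_factor(rho_p=1050.0, rho_f=1000.0,
                               kappa_p=2.2e-10, kappa_f=4.5e-10)
print(f"contrast factor ~ {phi:+.2f} (positive -> particles collect at the pressure node)")
```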
---
paper_title: Tuneable hydrophoretic separation using elastic deformation of poly(dimethylsiloxane)
paper_content:
This paper demonstrates a method for tuning elastomeric microchannels for hydrophoretic separation made in poly(dimethylsiloxane) (PDMS). Uniform compressive strain is imposed on the elastomeric microchannel between two acrylic substrates by fastening the bolts. The elastomeric microchannel can change its cross-section during compression, simultaneously tuning the criterion for hydrophoretic ordering. The change of the channel cross-section under compression is studied using a confocal microscope and finite element method (FEM). By pressing the channel for hydrophoretic separation, we achieved tuning of the separation criterion from 7 to 2.5 µm in particle diameter.
---
paper_title: Hydrodynamic mechanisms of cell and particle trapping in microfluidics
paper_content:
Focusing and sorting cells and particles utilizing microfluidic phenomena have been flourishing areas of development in recent years. These processes are largely beneficial in biomedical applications and fundamental studies of cell biology as they provide cost-effective and point-of-care miniaturized diagnostic devices and rare cell enrichment techniques. Due to inherent problems of isolation methods based on the biomarkers and antigens, separation approaches exploiting physical characteristics of cells of interest, such as size, deformability, and electric and magnetic properties, have gained currency in many medical assays. Here, we present an overview of the cell/particle sorting techniques by harnessing intrinsic hydrodynamic effects in microchannels. Our emphasis is on the underlying fluid dynamical mechanisms causing cross stream migration of objects in shear and vortical flows. We also highlight the advantages and drawbacks of each method in terms of throughput, separation efficiency, and cell viability. Finally, we discuss the future research areas for extending the scope of hydrodynamic mechanisms and exploring new physical directions for microfluidic applications.
---
paper_title: A particle trapping chip using the wide and uniform slit formed by a deformable membrane and air bubble plugs
paper_content:
We present a highly efficient particle trapping chip, in which a wide and uniform slit is formed by a deformable membrane barrier with air bubble plugs. The air bubble plugs, which remain in the extended microchannel during the sample filling process, block the particle passage at both side ends of the membrane, so that all particles flow through the uniform slit gap. Therefore, highly efficient particle trapping without particle loss can be achieved. In an experimental study using 10.3-μm-diameter polystyrene beads, the membrane barrier with the air bubble plugs successfully trapped the injected beads with a trapping efficiency of 100% at a flow rate of 10 μl/min, while the barrier without the air bubble plugs showed a low efficiency of 20%. The present simple and effective particle trapping device is applicable for highly efficient bioparticle isolation and recovery in micro total analysis systems.
---
paper_title: Tunable Open-Channel Microfluidics on Soft Poly(dimethylsiloxane) (PDMS) Substrates with Sinusoidal Grooves
paper_content:
On soft poly(dimethylsiloxane) (PDMS) substrates with 1D sinusoidal wrinkle patterns, we study the anisotropic wetting behavior and fluidic transport as a function of surface energy and groove geometry. On grooved substrates with a contact angle greater than 90°, liquids form droplet-like morphology, and their contact angle in the direction perpendicular to the grooves is larger than that parallel to the grooves. This wetting anisotropy, for a fixed Young's contact angle, is found to increase when the grooves become deeper. On substrates with a contact angle smaller than 90° and deep grooves (aspect ratio ≥0.3), liquids form filament-like morphology. When the groove depth is further increased by compressing the PDMS film beyond a threshold value, which depends on the surface wettability, fluid starts imbibing the grooves spontaneously. The dynamics of the liquid imbibition of the grooves is studied, and a square-law dependence between the length of the liquid filament and time is found, which obeys Washburn's law.
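The square-law filament growth mentioned above is the classical Washburn behaviour; for a capillary of effective radius r the standard form (restated here for context) is

```latex
L^2(t) \;=\; \frac{\gamma\, r \cos\theta}{2\,\eta}\, t,
```

where γ is the surface tension, θ the contact angle and η the liquid viscosity; open rectangular grooves follow the same L ∝ √t scaling with a geometry-dependent prefactor.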
---
paper_title: Clog-free cell filtration using resettable cell traps
paper_content:
The separation of cells by filtration through microstructured constrictions is limited by clogging and adsorption, which reduce selectivity and prevent the extraction of separated cells. To address this key challenge, we developed a mechanism for simply and reliably adjusting the cross-section of a microfluidic channel to selectively capture cells based on a combination of size and deformability. After a brief holding period, trapped cells can then be released back into flow, and if necessary, extracted for subsequent analysis. Periodically clearing filter constrictions of separated cells greatly improves selectivity and throughput, and minimizes adsorption of cells to the filter microstructure. This mechanism is capable of discriminating cell-sized polystyrene microspheres with <1 μm resolution. Rare cancer cells doped into leukocytes can be enriched ~1800× with ~90% yield despite a significant overlap in size between these cell types. An important characteristic of this process is that contaminant leukocytes are captured by non-specific adsorption and not mechanical constraint, enabling repeated filtration to improve performance. The throughput of this mechanism is 900 000 cells per hour for 32 multiplexed microchannels, or ~1 200 000 cells cm⁻² h⁻¹ on a per area basis, which exceeds existing micropore filtration mechanisms by a factor of 20.
---
paper_title: Squeeze-chip: a finger-controlled microfluidic flow network device and its application to biochemical assays
paper_content:
We designed and fabricated a novel microfluidic device that can be operated through simple finger squeezing. On-chip microfluidic flow control is enabled through an optimized network of check-valves and squeeze-pumps. The sophisticated flow system can be easily constructed by combining a few key elements. We implemented this device to perform quantitative biochemical assays with no requirement for precision instruments.
---
paper_title: Microfluidic applications of magnetic particles for biological analysis and catalysis.
paper_content:
Keywords: Spin-Valve Sensors; Cell Tracking Velocimetry; On-A-Chip; Polymerase-Chain-Reaction; Total Analysis Systems; Iron-Oxide Nanoparticles; Field-Flow Fractionation; Cross-Coupling Reactions; Circulating Tumor-Cells; Mode Magnetophoretic Microseparator. doi:10.1021/cr9001929 (abstract not available; indexing keywords only).
---
paper_title: Mechanically switchable wetting on wrinkled elastomers with dual-scale roughness
paper_content:
We report the fabrication of a new superhydrophobic surface with dual-scale roughness by coating silica nanoparticles on a poly(dimethylsiloxane) (PDMS) elastomer bilayer film with micro-scaled ripples. The wetting behavior of the surface can be reversibly tuned by applying a mechanical strain, which induces a change in the micro-scale roughness determined by the ripples. The dual-scale roughness promotes the wetting transition of the final dual-structure surface from the Wenzel region into the Cassie region, thus reducing the sliding angle to at least one third of that of surfaces with single-scale roughness (either the nanoparticle film or the wrinkled PDMS film alone). In addition, a three-fold, fast-response tunability of the sliding angle by applying mechanical strain on this dual-roughness surface is demonstrated.
---
paper_title: Microfluidic immunoaffinity separations for bioanalysis.
paper_content:
Microfluidic devices often rely on antibody-antigen interactions as a means of separating analytes of interest from sample matrices. Immunoassays and immunoaffinity separations performed in miniaturized formats offer selective target isolation with minimal reagent consumption and reduced analysis times. The introduction of biological fluids and other complicated matrices often requires sample pretreatment or system modifications for compatibility with small-scale devices. Miniaturization of external equipment facilitates the potential for portable use such as in patient point-of-care settings. Microfluidic immunoaffinity systems including capillary and chip platforms have been assembled from basic instrument components for fluid control, sample introduction, and detection. The current review focuses on the use of immunoaffinity separations in microfluidic devices with an emphasis on pump-based flow and biological sample analysis.
---
paper_title: A Mechanically Tunable Microfluidic Cell-Trapping Device.
paper_content:
Controlled manipulation, such as isolation, positioning, and trapping of cells, is important in basic biological research and clinical diagnostics. Micro/nanotechnologies have been enabling more effective and efficient cell trapping than possible with conventional platforms. Currently available micro/nanoscale methods for cell trapping, however, still lack flexibility in precisely controlling the number of trapped cells. We exploited the large compliance of elastomers to create an array of cell-trapping microstructures, whose dimensions can be mechanically modulated by inducing uniformly distributed strain via application of external force on the chip. The device consists of two elastomer polydimethylsiloxane (PDMS) sheets, one of which bears dam-like, cup-shaped geometries to physically capture cells. The mechanical modulation is used to tune the characteristics of cell trapping to capture a predetermined number of cells, from single cells to multiple cells. Thus, enhanced utility and flexibility for practical applications can be attained, as demonstrated by tunable trapping of MCF-7 cells, a human breast cancer cell line.
---
paper_title: What is the Young's Modulus of Silicon?
paper_content:
The Young's modulus (E) of a material is a key parameter for mechanical engineering design. Silicon, the most common single material used in microelectromechanical systems (MEMS), is an anisotropic crystalline material whose material properties depend on orientation relative to the crystal lattice. This fact means that the correct value of E for analyzing two different designs in silicon may differ by up to 45%. However, perhaps because of the perceived complexity of the subject, many researchers oversimplify silicon elastic behavior and use inaccurate values for design and analysis. This paper presents the best known elasticity data for silicon, both in depth and in a summary form, so that it may be readily accessible to MEMS designers.
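For quick reference, commonly quoted direction-dependent values consistent with the ~45% spread mentioned above are collected below; these are standard literature figures for room-temperature single-crystal silicon, not values derived here:

```python
# Young's modulus of single-crystal silicon along common crystal directions (GPa).
E_SILICON_GPA = {
    "<100>": 130,
    "<110>": 169,
    "<111>": 188,
}

spread = (max(E_SILICON_GPA.values()) - min(E_SILICON_GPA.values())) / min(E_SILICON_GPA.values())
print(f"max/min spread ~ {spread:.0%}")  # ~45%, matching the figure quoted above
```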
---
paper_title: Consideration of Nonuniformity in Elongation of Microstructures in a Mechanically Tunable Microfluidic Device for Size-Based Isolation of Microparticles
paper_content:
This paper presents an investigation of the nonlinearity behavior in deformation that appears during the linear stretching of the elastomeric microstructures in a pillar-based microfilter. Determining the impact of the nonuniformity in strain on the geometry and performance of such a planar device under in-plane stretch is the motivation for this study. A semiempirical model is used to explain the physical strain–stress behavior from the root to the tip of the micropillars in the linear arrays in the device. For microfabrication of the device, the main substrate is elastomeric polyurethane methacrylate, which is utilized in an ultraviolet-molding method. Optical imaging and scanning electron microscopy were used to evaluate the deformation of the microstructures under different loading conditions. It was demonstrated that by applying mechanical strains ($\Delta L/L_{o}$) on the elastomeric device using a modified syringe pump, the spacing of the pillars is increased effectively to about three times the size of the initial setting of 5.5 $\mu$m, which corresponds to a strain of above 180% in the absence of nonuniformity effects. This simple yet interesting behavior can be exploited to rapidly adjust a microfluidic device for application to the separation of microbeads or blood cells, which would normally require the geometrical redesign and fabrication of a new device.
---
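The nonuniformity study above reports that stretching the elastomeric filter enlarges the pillar spacing roughly in proportion to the applied strain (about three times the initial 5.5 µm setting at strains above 180% in the idealized case). A minimal sketch under that idealized, uniform-elongation assumption, deliberately ignoring the root-to-tip nonuniformity the paper actually quantifies.

```python
def tuned_gap(g0_um: float, strain: float) -> float:
    """Pillar gap after stretching, assuming perfectly uniform elongation."""
    return g0_um * (1.0 + strain)

def strain_for_gap(g0_um: float, g_target_um: float) -> float:
    """Strain needed to reach a target gap under the same assumption."""
    return g_target_um / g0_um - 1.0

if __name__ == "__main__":
    g0 = 5.5  # initial gap reported in the paper (um)
    print(f"gap at 180% strain: {tuned_gap(g0, 1.8):.1f} um")        # ~15.4 um, about 3x
    print(f"strain to open 10 um: {strain_for_gap(g0, 10.0):.0%}")   # ~82%
```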
paper_title: A pillar-based microfilter for isolation of white blood cells on elastomeric substrate
paper_content:
Our goal is to design, fabricate, and characterize a pillar-based microfluidic device for size-based separation of human blood cells on an elastomeric substrate, with application in the low-cost rapid prototyping of lab-chip devices. The single-inlet, single-outlet device uses parallel U-shape arrays of pillars with a cutoff size of 5.5 μm for trapping white blood cells (WBCs) in a pillar chamber with an internal dead volume of less than 1.0 μl. The microstructures are designed to limit the elastomeric deformation against fluid pressures. Numerical analysis showed that at a maximum pressure loss of 15 kPa, which is lower than the device's conformal bonding strength, the pillar elastomeric deformation is less than 5% for flow rates of up to 1.0 ml min⁻¹. A molding technique was employed for device prototyping using polyurethane methacrylate (PUMA) resin and a polydimethylsiloxane (PDMS) mold. Characterization of the dual-layer device with beads and blood samples was performed. Tests with blood injection showed that ∼18%–25% of WBCs are trapped and ∼84%–89% of red blood cells (RBCs) are passed at flow rates of 15–50 μl min⁻¹, with a slight decrease in WBC trapping and improvement in RBC passage at higher flow rates. Similar results were obtained by separation of mixed microspheres of different sizes injected at flow rates of up to 400 μl min⁻¹. Tests with blood samples stained by fluorescent gel demonstrated that the WBCs accumulate in the arrays of pillars, which eventually leads to blockage of the device. Filtration results using the elastomeric substrate are consistent with the trend of separation efficiencies of similar silicon-based filters.
---
paper_title: A new UV-curing elastomeric substrate for rapid prototyping of microfluidic devices
paper_content:
Rapid prototyping in the design cycle of new microfluidic devices is very important for shortening time-to-market. Researchers are facing the challenge to explore new and suitable substrates with simple and efficient microfabrication techniques. In this paper, we introduce and characterize a UV-curing elastomeric polyurethane methacrylate (PUMA) for rapid prototyping of microfluidic devices. The swelling and solubility of PUMA in different chemicals is determined. Time-dependent measurements of water contact angle show that the native PUMA is hydrophilic without surface treatment. The current monitoring method is used for measurement of the electroosmotic flow mobility in the microchannels made from PUMA. The optical, physical, thermal and mechanical properties of PUMA are evaluated. The UV-lithography and molding process is used for making micropillars and deep channel microfluidic structures integrated to the supporting base layer. Spin coating is characterized for producing different layer thicknesses of PUMA resin. A device is fabricated and tested for examining the strength of different bonding techniques such as conformal, corona treating and semi-curing of two PUMA layers in microfluidic application and the results show that the bonding strengths are comparable to that of PDMS. We also report fabrication and testing of a three-layer multi inlet/outlet microfluidic device including a very effective fluidic interconnect for application demonstration of PUMA as a promising new substrate. A simple micro-device is developed and employed for observing the pressure deflection of membrane made from PUMA as a very effective elastomeric valve in microfluidic devices.
---
paper_title: Construction of Stress-Strain Curves for Brittle Materials by Indentation in a Wide Temperature Range
paper_content:
A test method procedure for constructing stress-strain curves by indentation of brittle and low-plasticity materials at temperatures ranging from 20 to 900°C was developed recently by Yu. Milman, B. Galanov et al. According to this test method procedure, stress-strain curves σ–ε for Si, Ge, SiC, TiB2 and WC/Co hard alloy were constructed in the above temperature region, and mechanical parameters such as the elastic point, σe, yield stress, σs, etc. were extracted using the measurement results obtained with a set of trihedral pyramid indenters with different tip angles, γ1, ranging from 45° to 85°.
---
paper_title: Poisson's ratio and modern materials
paper_content:
Poisson's ratio describes the resistance of a material to distort under mechanical load rather than to alter in volume. On the bicentenary of the publication of Poisson's Traité de Mécanique, the continuing relevance of Poisson's ratio in the understanding of modern materials is reviewed.
---
paper_title: Rapid prototyping polymers for microfluidic devices and high pressure injections.
paper_content:
Multiple methods of fabrication exist for microfluidic devices, with different advantages depending on the end goal of industrial mass production or rapid prototyping for the research laboratory. Polydimethylsiloxane (PDMS) has been the mainstay for rapid prototyping in the academic microfluidics community, because of its low cost, robustness and straightforward fabrication, which are particularly advantageous in the exploratory stages of research. However, despite its many advantages and its broad use in academic laboratories, its low elastic modulus becomes a significant issue for high pressure operation as it leads to a large alteration of channel geometry. Among other consequences, such deformation makes it difficult to accurately predict the flow rates in complex microfluidic networks, change flow speed quickly for applications in stop-flow lithography, or to have predictable inertial focusing positions for cytometry applications where an accurate alignment of the optical system is critical. Recently, other polymers have been identified as complementary to PDMS, with similar fabrication procedures being characteristic of rapid prototyping but with higher rigidity and better resistance to solvents; Thermoset Polyester (TPE), Polyurethane Methacrylate (PUMA) and Norland Adhesive 81 (NOA81). In this review, we assess these different polymer alternatives to PDMS for rapid prototyping, especially in view of high pressure injections with the specific example of inertial flow conditions. These materials are compared to PDMS, for which magnitudes of deformation and dynamic characteristics are also characterized. We provide a complete and systematic analysis of these materials with side-by-side experiments conducted in our lab that also evaluate other properties, such as biocompatibility, solvent compatibility, and ease of fabrication. We emphasize that these polymer alternatives, TPE, PUMA and NOA, have some considerable strengths for rapid prototyping when bond strength, predictable operation at high pressure, or transitioning to commercialization are considered important for the application.
---
paper_title: Stress-Strain Response of PMMA as a Function of Strain-Rate and Temperature
paper_content:
The strain rate response of PMMA was investigated under uniaxial compression at room temperature at strain-rates ranging from 0.0001/sec to about 4300/sec. In addition, the temperature response of PMMA was investigated at strain-rates of 1/sec and 0.001/sec at temperatures ranging from 0°C to 115°C (below Tg). High rate experiments at room temperature (greater than 1/sec rates) were conducted using a split-Hopkinson Pressure bar (SHPB) with pulse-shaping. This is necessary to induce a compressive loading on the specimen at a constant strain rate to achieve dynamic stress equilibrium. Results conducted at room temperature show that PMMA is strain rate sensitive from quasi-static to dynamic loading. Additionally, the stress-strain response exhibits a decrease in the flow stress with an increase in temperature. These experimental data are being used to develop constitutive behavior models of PMMA.
---
paper_title: Mechanical properties of silicones for MEMS
paper_content:
This paper focuses on the mechanical properties of polydimethylsiloxane (PDMS) relevant for microelectromechanical system (MEMS) applications. In view of the limited amount of published data, we analyzed the two products most commonly used in MEMS, namely RTV 615 from Bayer Silicones and Sylgard 184 from Dow Corning. With regard to mechanical properties, we focused on the dependence of the elastic modulus on the thinner concentration, temperature and strain rate. In addition, creep and thermal aging were analyzed. We conclude that the isotropic and constant elastic modulus has strong dependence on the hardening conditions. At high hardening temperatures and long hardening time, RTV 615 displays an elastic modulus of 1.91 MPa and Sylgard 184 of 2.60 MPa in a range up to 40% strain.
---
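The silicone characterization above reports elastic moduli of roughly 1.91 MPa (RTV 615) and 2.60 MPa (Sylgard 184) for strains up to about 40%. A minimal sketch estimating the in-plane force needed to impose a given strain on a PDMS sheet, assuming linear elasticity and uniform uniaxial stress; the sheet cross-section used below is illustrative, not taken from any of the cited papers.

```python
def stretch_force(E_pa: float, strain: float, width_m: float, thickness_m: float) -> float:
    """Engineering estimate: F = E * strain * cross-sectional area (linear elasticity)."""
    return E_pa * strain * width_m * thickness_m

if __name__ == "__main__":
    E = 2.6e6                         # Sylgard 184 modulus from the paper (Pa)
    width, thickness = 10e-3, 2e-3    # illustrative sheet cross-section (m)
    for strain in (0.1, 0.2, 0.4):
        print(f"{strain:.0%} strain -> {stretch_force(E, strain, width, thickness):.1f} N")
```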
paper_title: Low cost prototyping of microfluidic structure
paper_content:
An elastomeric microfluidic structure was developed by polymerizing a liquid photopolymer using a UV lithography process. Different components of a microfluidic device, including a microchannel, micromixer, and microchamber, were integrated on one layer using a single photomask. More complicated geometries and a two-layer chip were also examined in this research and gave promising results. The process enables good control over the thicknesses of the layers and the dimensions of the components. The proposed simple process benefits from low cost in both development time and material expenses.
---
paper_title: Design of sealed cavity microstructures formed by silicon wafer bonding
paper_content:
Three fabrication issues related to the design and fabrication of micromechanical devices using sealed cavities within bonded silicon wafers are discussed. The first concerns the resultant residual gas pressure within a sealed cavity between two bonded wafers after bonding and a high-temperature anneal. The second concerns the prediction of plastic deformation in capping layers of single-crystal silicon over sealed cavities. Exposure of sealed cavity structures to a high-temperature environment causes the trapped residual gas to expand, which can result in the plastic deformation of the capping layer. A model for analytically predicting the occurrence of plastic deformation in these silicon capping layers has been developed. The third fabrication issue concerns the prediction of the resultant height of plastically deformed capping layers of silicon after cooling. A model which gives a lower and an upper bound on the height, based on an analytical spherical shell membrane stress equation, has been developed.
---
paper_title: Exploiting the Oxygen Inhibitory Effect on UV Curing in Microfabrication: A Modified Lithography Technique
paper_content:
Rapid prototyping (RP) of microfluidic channels in liquid photopolymers using standard lithography (SL) involves multiple deposition steps and curing by ultraviolet (UV) light for the construction of a microstructure layer. In this work, the conflicting effect of oxygen diffusion and UV curing of liquid polyurethane methacrylate (PUMA) is investigated in microfabrication and utilized to reduce the deposition steps and to obtain a monolithic product. The conventional fabrication process is altered to control for the best use of the oxygen presence in polymerization. A novel and modified lithography technique is introduced in which a single step of PUMA coating and two steps of UV exposure are used to create a microchannel. The first exposure is maskless and incorporates oxygen diffusion into PUMA for inhibition of the polymerization of a thin layer from the top surface while the UV rays penetrate the photopolymer. The second exposure is for transferring the patterns of the microfluidic channels from the contact photomask onto the uncured material. The UV curing of PUMA as the main substrate in the presence of oxygen is characterized analytically and experimentally. A few typical elastomeric microstructures are manufactured. It is demonstrated that the obtained heights of the fabricated structures in PUMA are associated with the oxygen concentration and the UV dose. The proposed technique is promising for the RP of molds and microfluidic channels in terms of shorter processing time, fewer fabrication steps and creation of microstructure layers with higher integrity.
---
paper_title: A new USP Class VI-compliant substrate for manufacturing disposable microfluidic devices.
paper_content:
As microfluidic systems transition from research tools to disposable clinical-diagnostic devices, new substrate materials are needed to meet both the regulatory requirement as well as the economics of disposable devices. This paper introduces a UV-curable polyurethane-methacrylate (PUMA) substrate that has been qualified for medical use and meets all of the challenges of manufacturing microfluidic devices. PUMA is optically transparent, biocompatible, and exhibits high electroosmotic mobility without surface modification. We report two production processes that are compatible with the existing methods of rapid prototyping and present characterizations of the resultant PUMA microfluidic devices.
---
paper_title: What is the Young's Modulus of Silicon?
paper_content:
The Young's modulus (E) of a material is a key parameter for mechanical engineering design. Silicon, the most common single material used in microelectromechanical systems (MEMS), is an anisotropic crystalline material whose material properties depend on orientation relative to the crystal lattice. This fact means that the correct value of E for analyzing two different designs in silicon may differ by up to 45%. However, perhaps, because of the perceived complexity of the subject, many researchers oversimplify silicon elastic behavior and use inaccurate values for design and analysis. This paper presents the best known elasticity data for silicon, both in depth and in a summary form, so that it may be readily accessible to MEMS designers.
---
paper_title: Consideration of Nonuniformity in Elongation of Microstructures in a Mechanically Tunable Microfluidic Device for Size-Based Isolation of Microparticles
paper_content:
This paper presents an investigation of the nonlinearity behavior in deformation that appears during the linear stretching of the elastomeric microstructures in a pillar-based microfilter. Determining the impact of the nonuniformity in strain on the geometry and performance of such a planar device under in-plane stretch is the motivation for this study. A semiempirical model is used to explain the physical strain–stress behavior from the root to the tip of the micropillars in the linear arrays in the device. For microfabrication of the device, the main substrate is elastomeric polyurethane methacrylate, which is utilized in an ultraviolet-molding method. Optical imaging and scanning electron microscopy were used to evaluate the deformation of the microstructures under different loading conditions. It was demonstrated that by applying mechanical strains ($\Delta L/L_{o}$) on the elastomeric device using a modified syringe pump, the spacing of the pillars is increased effectively to about three times the size of the initial setting of 5.5 $\mu$m, which corresponds to a strain of above 180% in the absence of nonuniformity effects. This simple yet interesting behavior can be exploited to rapidly adjust a microfluidic device for application to the separation of microbeads or blood cells, which would normally require the geometrical redesign and fabrication of a new device.
---
paper_title: A tunable microfluidic-based filter modulated by pneumatic pressure for separation of blood cells
paper_content:
This article presents a new microfluidic-based filter for the separation of microbeads or blood cells with a high filtration rate. The device was composed of a circular micropump for automatic liquid transport, and a normally closed valve located at the filter zone for separation of beads or cells. The filtration mechanism was based on the tunable deformation of polydimethylsiloxane (PDMS) membranes that defined the gap between a floating block structure and the substrate which determined the maximum diameter of the beads/cells that can pass through the filter. Another unique feature of this filter is an unclogging mechanism using a suction force, resulting in a back flow to remove any trapped beads/cells in the filter zone when the PDMS membrane was restored to its initial state. The separation performance of the proposed device was first experimentally evaluated by using microbeads. The results showed that this device was capable of providing size-tunable filtration with a high recovery efficiency (95.25–96.21%) for microbeads with sizes smaller than the defined gap in the filter zone. Furthermore, the proposed device was also capable of performing separation of blood cells and blood plasma from human whole blood. Experimental results showed that an optimum filtration rate of 21.40 and 3.00 μl/min correspond to high recovery efficiencies of 86.69 and 80.66%, respectively, for red blood cells (RBCs) and blood plasma. The separation method developed in this work could be used for various point-of-care diagnostic applications involving separation of plasma and blood cells.
---
paper_title: Tuneable separation in elastomeric microfluidics devices.
paper_content:
We describe how the elastomeric properties of PDMS (polydimethylsiloxane) can be utilised to achieve tuneable particle separation in Deterministic Lateral Displacement devices via strain controlled alteration of inter-obstacle distances, a development that opens up new avenues toward more effective separation of particles in microfluidics devices.
---
paper_title: Development and multiplexed control of latching pneumatic valves using microfluidic logical structures.
paper_content:
Novel latching microfluidic valve structures are developed, characterized, and controlled independently using an on-chip pneumatic demultiplexer. These structures are based on pneumatic monolithic membrane valves and depend upon their normally-closed nature. Latching valves consisting of both three- and four-valve circuits are demonstrated. Vacuum or pressure pulses as short as 120 ms are adequate to hold these latching valves open or closed for several minutes. In addition, an on-chip demultiplexer is demonstrated that requires only n pneumatic inputs to control 2^(n-1) independent latching valves. These structures can reduce the size, power consumption, and cost of microfluidic analysis devices by decreasing the number of off-chip controllers. Since these valve assemblies can form the standard logic gates familiar in electronic circuit design, they should be useful in developing complex pneumatic circuits.
---
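The latching-valve entry above states that an on-chip pneumatic demultiplexer with n control inputs can address 2^(n-1) independent latching valves. A minimal sketch of that bookkeeping: given a desired number of independently addressable valves, estimate the number of off-chip pneumatic lines this scheme would require; the helper names are illustrative.

```python
import math

def valves_addressable(n_inputs: int) -> int:
    """Independent latching valves reachable with n pneumatic inputs: 2^(n-1)."""
    return 2 ** (n_inputs - 1)

def inputs_required(n_valves: int) -> int:
    """Smallest n such that 2^(n-1) >= n_valves."""
    return max(1, math.ceil(math.log2(n_valves)) + 1)

if __name__ == "__main__":
    for n in (2, 4, 6):
        print(f"{n} inputs -> {valves_addressable(n)} valves")
    print(f"64 valves need {inputs_required(64)} pneumatic inputs")  # 7
```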
paper_title: A tunable micro filter modulated by pneumatic pressure for cell separation
paper_content:
Abstract This study reports a new microfluidic-based filter for size-tunable separation of microbeads or cells. The filtration separation mechanism is based on the pneumatically tunable deformation of poly-dimethylsiloxane (PDMS) membranes, which block the fluid channel to a varied degree. This defines the dimensions of the open area of the fluid channel and thus determines the maximum diameter of the microbeads or cells that can pass through. The proposed device incorporates pneumatic micropumps for automatic liquid handling. Another unique feature of this filter is an unclogging mechanism using a back-flush operating mode, by which a reverse-directional flow is utilized to flush the clogged filter zone. The separation performance of the proposed device has been experimentally evaluated. Results show that this developed device is able to provide precise size-dependent filtration, with a high passage efficiency (82–89%) for microbeads with sizes smaller than the defined void space in the filter zone. The proposed separation mechanism is also capable of providing a reasonable filtration rate (14.9–3.3 μl/min). Furthermore, the separation of chondrocytes from a 30 μl suspension of enzymatically digested tissue is successfully demonstrated, showing an excellent cell passage efficiency of 93% and a cell viability of 96%. The proposed device is therefore capable of performing cell separation in situations where either the harvested specimen is limited or the sample cell content is sparse. It also paves a new route to delicately separate or to isolate cells in a simple and controllable manner.
---
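The pneumatically tuned filter above sets the open gap by deflecting a PDMS membrane with applied pressure. A minimal order-of-magnitude sketch using the small-deflection, clamped circular plate solution (w0 = p a^4 / (64 D), D = E t^3 / (12 (1 - nu^2))); elastomer membranes at these pressures typically leave the small-deflection regime, so this only illustrates the scaling, and the membrane dimensions below are illustrative rather than taken from the paper.

```python
def flexural_rigidity(E_pa: float, t_m: float, poisson: float = 0.5) -> float:
    """Plate flexural rigidity D = E t^3 / (12 (1 - nu^2)), nu ~ 0.5 for elastomers."""
    return E_pa * t_m ** 3 / (12.0 * (1.0 - poisson ** 2))

def center_deflection(p_pa: float, radius_m: float, E_pa: float, t_m: float) -> float:
    """Small-deflection center deflection of a clamped circular plate: w0 = p a^4 / (64 D)."""
    return p_pa * radius_m ** 4 / (64.0 * flexural_rigidity(E_pa, t_m))

if __name__ == "__main__":
    E, t, a = 2.0e6, 30e-6, 200e-6   # illustrative PDMS membrane: 2 MPa, 30 um thick, 200 um radius
    for p_kpa in (5, 10, 20):
        w = center_deflection(p_kpa * 1e3, a, E, t)
        print(f"{p_kpa} kPa -> ~{w * 1e6:.0f} um (small-deflection estimate only)")
```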
paper_title: Dynamic trapping and high-throughput patterning of cells using pneumatic microstructures in an integrated microfluidic device.
paper_content:
Microfluidic trapping methods create significant opportunities to establish highly controlled cell positioning and arrangement for the microscale study of numerous cellular physiological and pathological activities. However, a simple, straightforward, dynamic, and high-throughput method for cell trapping is not yet well established. In the present paper, we report a direct active trapping method using an integrated microfluidic device with pneumatic microstructures (PμSs) for both operationally and quantitatively dynamic localization of cells, as well as for high-throughput cell patterning. We designed and fabricated U-shape PμS arrays to replace the conventional fixed microstructures for reversible trapping. Multidimensional dynamics and spatial consistency of the PμSs were optically characterized and quantitatively demonstrated. Furthermore, we performed a systematic trapping investigation of the PμSs actuated at a pressure range of 0 psi to 20 psi using three types of popularly applied mammalian cells, namely, human lung adenocarcinoma A549 cells, human hepatocellular liver carcinoma HepG2 cells, and human breast adenocarcinoma MCF-7 cells. The cells were quantitatively trapped and controlled by the U-shape PμSs in a programmatic and parallel manner, and could be opportunely released. The trapped cells with high viability were hydrodynamically protected by the real-time actuation of specifically designed umbrella-like PμSs. We demonstrate that PμSs can be applied as an active microfluidic component for large-scale cell patterning and manipulation, which could be useful in many cell-based tissue organization, immunosensor, and high-throughput imaging and screening.
---
paper_title: Tuneable hydrophoretic separation using elastic deformation of poly(dimethylsiloxane)
paper_content:
This paper demonstrates a method for tuning elastomeric microchannels for hydrophoretic separation made in poly(dimethylsiloxane) (PDMS). Uniform compressive strain is imposed on the elastomeric microchannel between two acrylic substrates by fastening the bolts. The elastomeric microchannel can change its cross-section during compression, simultaneously tuning the criterion for hydrophoretic ordering. The change of the channel cross-section under compression is studied using a confocal microscope and finite element method (FEM). By pressing the channel for hydrophoretic separation, we achieved tuning of the separation criterion from 7 to 2.5 µm in particle diameter.
---
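The hydrophoretic tuning entry above reduces the separation criterion from 7 to 2.5 µm by compressing the PDMS channel between acrylic plates. A minimal sketch assuming the criterion scales in simple proportion to the compressed channel height; that proportionality is a deliberate simplification of the cross-section changes the paper characterizes by confocal microscopy and FEM.

```python
def compressed_criterion(d0_um: float, compressive_strain: float) -> float:
    """Separation criterion after compression, assuming proportionality to channel height."""
    return d0_um * (1.0 - compressive_strain)

def strain_for_criterion(d0_um: float, d_target_um: float) -> float:
    """Compressive strain needed for a target criterion under the same assumption."""
    return 1.0 - d_target_um / d0_um

if __name__ == "__main__":
    d0 = 7.0  # uncompressed criterion reported in the paper (um)
    print(f"strain for a 2.5 um criterion: {strain_for_criterion(d0, 2.5):.0%}")
    print(f"criterion at 30% compression: {compressed_criterion(d0, 0.30):.1f} um")
```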
paper_title: A particle trapping chip using the wide and uniform slit formed by a deformable membrane and air bubble plugs
paper_content:
We present a highly efficient particle trapping chip, in which a wide and uniform slit is formed by a deformable membrane barrier with air bubble plugs. The air bubble plugs, which remain in the extended microchannel during the sample filling process, block particle passage at both side ends of the membrane, so that all particles flow through the uniform slit gap. Therefore, highly efficient particle trapping without particle loss can be achieved. In an experimental study using 10.3 μm-diameter polystyrene beads, the membrane barrier with the air bubble plugs successfully trapped the injected beads with a trapping efficiency of 100% at a flow rate of 10 μl/min, while the barrier without the air bubble plugs showed a low efficiency of 20%. This simple and effective particle trapping device is applicable to high-efficiency bioparticle isolation and recovery in micro total analysis systems.
---
paper_title: A Dynamic Microarray with Pneumatic Valves for Selective Trapping and Releasing of Microbeads
paper_content:
This paper describes a dynamic microarray device with pneumatic valves for trapping and releasing microbeads selectively. We fabricated thin membranes of polydimethylsiloxane (PDMS) inside our conventional dynamic microfluidic device. The membrane works as a pneumatically driven valve that changes the fluidic resistance, which ultimately determines the modes of the device: trapping, passing, or releasing. Using this device, we successfully controlled these three modes by using 100 μm polystyrene microbeads. Moreover, we succeeded in arraying two different types of microbeads alternately in a single device by selectively activating and deactivating the pneumatic valves.
---
paper_title: Consideration of Nonuniformity in Elongation of Microstructures in a Mechanically Tunable Microfluidic Device for Size-Based Isolation of Microparticles
paper_content:
This paper presents an investigation of the nonlinearity behavior in deformation that appears during the linear stretching of the elastomeric microstructures in a pillar-based microfilter. Determining the impact of the nonuniformity in strain on the geometry and performance of such a planar device under in-plane stretch is the motivation for this study. A semiempirical model is used to explain the physical strain–stress behavior from the root to the tip of the micropillars in the linear arrays in the device. For microfabrication of the device, the main substrate is elastomeric polyurethane methacrylate, which is utilized in an ultraviolet-molding method. Optical imaging and scanning electron microscopy were used to evaluate the deformation of the microstructures under different loading conditions. It was demonstrated that by applying mechanical strains ($\Delta L/L_{o}$) on the elastomeric device using a modified syringe pump, the spacing of the pillars is increased effectively to about three times the size of the initial setting of 5.5 $\mu$m, which corresponds to a strain of above 180% in the absence of nonuniformity effects. This simple yet interesting behavior can be exploited to rapidly adjust a microfluidic device for application to the separation of microbeads or blood cells, which would normally require the geometrical redesign and fabrication of a new device.
---
paper_title: Tuneable separation in elastomeric microfluidics devices.
paper_content:
We describe how the elastomeric properties of PDMS (polydimethylsiloxane) can be utilised to achieve tuneable particle separation in Deterministic Lateral Displacement devices via strain controlled alteration of inter-obstacle distances, a development that opens up new avenues toward more effective separation of particles in microfluidics devices.
---
paper_title: A microfluidic-based hydrodynamic trap: design and implementation.
paper_content:
We report an integrated microfluidic device for fine-scale manipulation and confinement of micro- and nanoscale particles in free-solution. Using this device, single particles are trapped in a stagnation point flow at the junction of two intersecting microchannels. The hydrodynamic trap is based on active flow control at a fluid stagnation point using an integrated on-chip valve in a monolithic PDMS-based microfluidic device. In this work, we characterize device design parameters enabling precise control of stagnation point position for efficient trap performance. The microfluidic-based hydrodynamic trap facilitates particle trapping using the sole action of fluid flow and provides a viable alternative to existing confinement and manipulation techniques based on electric, optical, magnetic or acoustic force fields. Overall, the hydrodynamic trap enables non-contact confinement of fluorescent and non-fluorescent particles for extended times and provides a new platform for fundamental studies in biology, biotechnology and materials science.
---
paper_title: Hydrodynamic trap-and-release of single particles using dual-function elastomeric valves: design, fabrication, and characterization
paper_content:
This paper introduces a simple method for trapping and releasing single particles, such as microbeads and living cells, using dual-function elastomeric valves. Our key technique is the utilization of the elastomeric valve as a dual-function removable trap instead of a fixed trap and a separate component for releasing trapped particles, thereby enabling a simple yet effective trap-and-release of particles. We designed, fabricated, and characterized a microfluidic-based device for trapping and releasing single beads by controlling elastomeric valves driven by pneumatic pressure and a fluid flow action. The fluid flow is controlled to ensure that beads flowing in a main stream enter into a branch channel. A bead is trapped by deflected elastomeric valves positioned at the entrance of a branch channel. The trapped bead is easily released by removing the applied pressure. The trapping and releasing of single beads of 21 μm in diameter were successfully performed under an optimized pressure and flow rate ratio. Moreover, we confirmed that continuous trapping and releasing of single beads by repeatedly switching elastomeric valves enables the collection of a controllable number of beads. Our simple method can be integrated into microfluidic systems that require single or multiple particle arrays for quantitative and high-throughput assays in applications within the fields of biology and chemistry.
---
paper_title: Hydrodynamic mechanisms of cell and particle trapping in microfluidics
paper_content:
Focusing and sorting cells and particles utilizing microfluidic phenomena have been flourishing areas of development in recent years. These processes are largely beneficial in biomedical applications and fundamental studies of cell biology as they provide cost-effective and point-of-care miniaturized diagnostic devices and rare cell enrichment techniques. Due to inherent problems of isolation methods based on the biomarkers and antigens, separation approaches exploiting physical characteristics of cells of interest, such as size, deformability, and electric and magnetic properties, have gained currency in many medical assays. Here, we present an overview of the cell/particle sorting techniques by harnessing intrinsic hydrodynamic effects in microchannels. Our emphasis is on the underlying fluid dynamical mechanisms causing cross stream migration of objects in shear and vortical flows. We also highlight the advantages and drawbacks of each method in terms of throughput, separation efficiency, and cell viability. Finally, we discuss the future research areas for extending the scope of hydrodynamic mechanisms and exploring new physical directions for microfluidic applications.
---
paper_title: Consideration of Nonuniformity in Elongation of Microstructures in a Mechanically Tunable Microfluidic Device for Size-Based Isolation of Microparticles
paper_content:
This paper presents an investigation of the nonlinearity behavior in deformation that appears during the linear stretching of the elastomeric microstructures in a pillar-based microfilter. Determining the impact of the nonuniformity in strain on the geometry and performance of such a planar device under in-plane stretch is the motivation for this study. A semiempirical model is used to explain the physical strain–stress behavior from the root to the tip of the micropillars in the linear arrays in the device. For microfabrication of the device, the main substrate is elastomeric polyurethane methacrylate, which is utilized in an ultraviolet-molding method. Optical imaging and scanning electron microscopy were used to evaluate the deformation of the microstructures under different loading conditions. It was demonstrated that by applying mechanical strains ($\Delta L/L_{o}$) on the elastomeric device using a modified syringe pump, the spacing of the pillars is increased effectively to about three times the size of the initial setting of 5.5 $\mu$m, which corresponds to a strain of above 180% in the absence of nonuniformity effects. This simple yet interesting behavior can be exploited to rapidly adjust a microfluidic device for application to the separation of microbeads or blood cells, which would normally require the geometrical redesign and fabrication of a new device.
---
paper_title: A technique of optimization of microfiltration using a tunable platform
paper_content:
The optimum efficiency of size-based filtration in microfluidic devices is highly dependent on the characteristics of the design, the deformability of the microparticles/cells, and the fluid flow. The effects of filter pores and flow rate, which are the two major determining and related factors in the separation of particles and cells, are investigated in this work. An elastomeric microfluidic device consisting of parallel arrays of pillars with mechanically tunable spacings is employed as an adjustable microfiltration platform. The tunable filtration system is used for finding the best conditions for separation of solid microbeads or deformable blood cells in a crossflow pillar-based method. It is demonstrated that increasing the flow rate in the range of 1.0–80.0 µl min⁻¹ has an adverse effect on the device performance in terms of decreased separation efficiency of deformable blood cells. However, by tuning the gap size in the range of 2.5–7.5 µm, the selectivity of the separation is controlled from about 5.0 to 95.0% for white blood cells (WBCs) and 40.0 to 95.0% for red blood cells (RBCs). Finally, the best range of trapping and passing efficiencies of ~70–80.0% simultaneously for WBCs and RBCs in a whole blood sample is achieved at an optimum gap size of ~3.5–4.0 µm.
---
paper_title: A pillar-based microfilter for isolation of white blood cells on elastomeric substrate
paper_content:
Our goal is to design, fabricate, and characterize a pillar-based microfluidic device for size-based separation of human blood cells on an elastomeric substrate, with application in the low-cost rapid prototyping of lab-chip devices. The single-inlet, single-outlet device uses parallel U-shape arrays of pillars with a cutoff size of 5.5 μm for trapping white blood cells (WBCs) in a pillar chamber with an internal dead volume of less than 1.0 μl. The microstructures are designed to limit the elastomeric deformation against fluid pressures. Numerical analysis showed that at a maximum pressure loss of 15 kPa, which is lower than the device's conformal bonding strength, the pillar elastomeric deformation is less than 5% for flow rates of up to 1.0 ml min⁻¹. A molding technique was employed for device prototyping using polyurethane methacrylate (PUMA) resin and a polydimethylsiloxane (PDMS) mold. Characterization of the dual-layer device with beads and blood samples was performed. Tests with blood injection showed that ∼18%–25% of WBCs are trapped and ∼84%–89% of red blood cells (RBCs) are passed at flow rates of 15–50 μl min⁻¹, with a slight decrease in WBC trapping and improvement in RBC passage at higher flow rates. Similar results were obtained by separation of mixed microspheres of different sizes injected at flow rates of up to 400 μl min⁻¹. Tests with blood samples stained by fluorescent gel demonstrated that the WBCs accumulate in the arrays of pillars, which eventually leads to blockage of the device. Filtration results using the elastomeric substrate are consistent with the trend of separation efficiencies of similar silicon-based filters.
---
paper_title: A Mechanically Tunable Microfluidic Cell-Trapping Device.
paper_content:
Abstract Controlled manipulation, such as isolation, positioning, and trapping of cells, is important in basic biological research and clinical diagnostics. Micro/nanotechnologies have been enabling more effective and efficient cell trapping than possible with conventional platforms. Currently available micro/nanoscale methods for cell trapping, however, still lack flexibility in precisely controlling the number of trapped cells. We exploited the large compliance of elastomers to create an array of cell-trapping microstructures, whose dimensions can be mechanically modulated by inducing uniformly distributed strain via application of external force on the chip. The device consists of two elastomer polydimethylsiloxane (PDMS) sheets, one of which bears dam-like, cup-shaped geometries to physically capture cells. The mechanical modulation is used to tune the characteristics of cell trapping to capture a predetermined number of cells, from single cells to multiple cells. Thus, enhanced utility and flexibility for practical applications can be attained, as demonstrated by tunable trapping of MCF-7 cells, a human breast cancer cell line.
---
paper_title: Tuneable separation in elastomeric microfluidics devices.
paper_content:
We describe how the elastomeric properties of PDMS (polydimethylsiloxane) can be utilised to achieve tuneable particle separation in Deterministic Lateral Displacement devices via strain controlled alteration of inter-obstacle distances, a development that opens up new avenues toward more effective separation of particles in microfluidics devices.
---
paper_title: Continuous Particle Separation Through Deterministic Lateral Displacement
paper_content:
We report on a microfluidic particle-separation device that makes use of the asymmetric bifurcation of laminar flow around obstacles. A particle chooses its path deterministically on the basis of its size. All particles of a given size follow equivalent migration paths, leading to high resolution. The microspheres of 0.8, 0.9, and 1.0 micrometers that were used to characterize the device were sorted in 40 seconds with a resolution of ∼10 nanometers, which was better than the time and resolution of conventional flow techniques. Bacterial artificial chromosomes could be separated in 10 minutes with a resolution of ∼12%.
---
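The entry above describes deterministic lateral displacement (DLD), where a particle's path through an obstacle array bifurcates based on its size; the elastomeric DLD work cited earlier tunes this by stretching the inter-obstacle gaps. A minimal sketch combining the widely quoted empirical fit for the DLD critical diameter, D_c ≈ 1.4 g ε^0.48 (g the gap, ε the row-shift fraction), which comes from later DLD literature rather than from these abstracts, with an affine gap-stretch assumption; the gap and row-shift values are illustrative.

```python
def critical_diameter(gap_um: float, row_shift_fraction: float) -> float:
    """Empirical DLD critical diameter: D_c ~= 1.4 * g * eps^0.48."""
    return 1.4 * gap_um * row_shift_fraction ** 0.48

def stretched_gap(g0_um: float, strain: float) -> float:
    """Gap after stretching an elastomeric array, assuming affine elongation."""
    return g0_um * (1.0 + strain)

if __name__ == "__main__":
    g0, eps = 4.0, 0.1   # illustrative gap (um) and row-shift fraction
    for strain in (0.0, 0.2, 0.4):
        g = stretched_gap(g0, strain)
        print(f"strain {strain:.0%}: gap {g:.1f} um -> D_c ~ {critical_diameter(g, eps):.2f} um")
```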
paper_title: Tuneable hydrophoretic separation using elastic deformation of poly(dimethylsiloxane)
paper_content:
This paper demonstrates a method for tuning elastomeric microchannels for hydrophoretic separation made in poly(dimethylsiloxane) (PDMS). Uniform compressive strain is imposed on the elastomeric microchannel between two acrylic substrates by fastening the bolts. The elastomeric microchannel can change its cross-section during compression, simultaneously tuning the criterion for hydrophoretic ordering. The change of the channel cross-section under compression is studied using a confocal microscope and finite element method (FEM). By pressing the channel for hydrophoretic separation, we achieved tuning of the separation criterion from 7 to 2.5 µm in particle diameter.
---
paper_title: Single-Cell Enzyme Concentrations, Kinetics, and Inhibition Analysis Using High-Density Hydrodynamic Cell Isolation Arrays
paper_content:
High-quality single-cell data are required for a quantitative systems biology description of cellular function. However, data of this type are difficult and time-consuming to collect using traditional techniques. We present a robust and simple microfluidic method for trapping single cells in large arrays to address this problem. Ordered single-cell isolation arrays allow for high-density microscopic analysis with simplified image processing. Moreover, for fluorescent assays, on-chip sample preparation (e.g., fluorescent labeling, washing) can be performed, as opposed to the manual, labor-intensive operations of incubation, centrifugation, and resuspension in previous techniques, saving time and reagents. This technology was applied to determine novel single-cell enzyme kinetics for three different cell types (HeLa, 293T, Jurkat). A kinetic model of this process predicted this varied response was due to variation in the concentration of carboxylesterase between cell types. Nordihydroguaiaretic acid (NDGA) was also char...
---
paper_title: A Mechanically Tunable Microfluidic Cell-Trapping Device.
paper_content:
Abstract Controlled manipulation, such as isolation, positioning, and trapping of cells, is important in basic biological research and clinical diagnostics. Micro/nanotechnologies have been enabling more effective and efficient cell trapping than possible with conventional platforms. Currently available micro/nanoscale methods for cell trapping, however, still lack flexibility in precisely controlling the number of trapped cells. We exploited the large compliance of elastomers to create an array of cell-trapping microstructures, whose dimensions can be mechanically modulated by inducing uniformly distributed strain via application of external force on the chip. The device consists of two elastomer polydimethylsiloxane (PDMS) sheets, one of which bears dam-like, cup-shaped geometries to physically capture cells. The mechanical modulation is used to tune the characteristics of cell trapping to capture a predetermined number of cells, from single cells to multiple cells. Thus, enhanced utility and flexibility for practical applications can be attained, as demonstrated by tunable trapping of MCF-7 cells, a human breast cancer cell line.
---
paper_title: Continuous blood cell separation by hydrophoretic filtration.
paper_content:
We propose a new hydrophoretic method for continuous blood cell separation using a microfluidic device composed of slanted obstacles and filtration obstacles. The slanted obstacles have a larger height and gap than the particles in order to focus them to a sidewall by hydrophoresis. In the successive structure, the height and gap of the filtration obstacles with a filtration pore are set between the diameters of small and large particles, which defines the critical separation diameter. Accordingly, the particles smaller than the criterion freely pass through the gap and keep their focused position. In contrast, the particles larger than the criterion collide against the filtration obstacle and move into the filtration pore. The microfluidic device was characterized with polystyrene beads with a minimum diameter difference of 7.3%. We completely separated polystyrene microbeads of 9 and 12 μm diameter with a separation resolution of approximately 6.2. This resolution is increased by 6.4-fold compared with our previous separation method based on hydrophoresis (S. Choi and J.-K. Park, Lab Chip, 2007, 7, 890, ref. 1). In the isolation of white blood cells (WBCs) from red blood cells (RBCs), the microfluidic device isolated WBCs with 210-fold enrichment within a short filtration time of approximately 0.3 s. These results show that the device can be useful for the binary separation of a wide range of biological particles by size. The hydrophoretic filtration as a sample preparation unit offers potential for a power-free cell sorter to be integrated into disposable lab-on-a-chip devices.
---
paper_title: Tuneable hydrophoretic separation using elastic deformation of poly(dimethylsiloxane)
paper_content:
This paper demonstrates a method for tuning elastomeric microchannels for hydrophoretic separation made in poly(dimethylsiloxane) (PDMS). Uniform compressive strain is imposed on the elastomeric microchannel between two acrylic substrates by fastening the bolts. The elastomeric microchannel can change its cross-section during compression, simultaneously tuning the criterion for hydrophoretic ordering. The change of the channel cross-section under compression is studied using a confocal microscope and finite element method (FEM). By pressing the channel for hydrophoretic separation, we achieved tuning of the separation criterion from 7 to 2.5 µm in particle diameter.
---
paper_title: Cell handling using microstructured membranes.
paper_content:
Gentle and precise handling of cell suspensions is essential for scientific research and clinical diagnostic applications. Although different techniques for cell analysis at the micro-scale have been proposed, many still require that preliminary sample preparation steps be performed off the chip. Here we present a microstructured membrane as a new microfluidic design concept, enabling the implementation of common sample preparation procedures for suspensions of eukaryotic cells in lab-on-a-chip devices. We demonstrate the novel capabilities for sample preparation procedures by the implementation of metered sampling of nanoliter volumes of whole blood, concentration increase up to three orders of magnitude of sparse cell suspension, and circumferentially uniform, sequential exposure of cells to reagents. We implemented these functions by using microstructured membranes that are pneumatically actuated and allowed to reversibly decouple the flow of fluids and the displacement of eukaryotic cells in suspensions. Furthermore, by integrating multiple structures on the same membrane, complex sequential procedures are possible using a limited number of control steps.
---
paper_title: A microfluidic-based hydrodynamic trap: design and implementation.
paper_content:
We report an integrated microfluidic device for fine-scale manipulation and confinement of micro- and nanoscale particles in free-solution. Using this device, single particles are trapped in a stagnation point flow at the junction of two intersecting microchannels. The hydrodynamic trap is based on active flow control at a fluid stagnation point using an integrated on-chip valve in a monolithic PDMS-based microfluidic device. In this work, we characterize device design parameters enabling precise control of stagnation point position for efficient trap performance. The microfluidic-based hydrodynamic trap facilitates particle trapping using the sole action of fluid flow and provides a viable alternative to existing confinement and manipulation techniques based on electric, optical, magnetic or acoustic force fields. Overall, the hydrodynamic trap enables non-contact confinement of fluorescent and non-fluorescent particles for extended times and provides a new platform for fundamental studies in biology, biotechnology and materials science.
---
paper_title: Hydrodynamic trap-and-release of single particles using dual-function elastomeric valves: design, fabrication, and characterization
paper_content:
This paper introduces a simple method for trapping and releasing single particles, such as microbeads and living cells, using dual-function elastomeric valves. Our key technique is the utilization of the elastomeric valve as a dual-function removable trap instead of a fixed trap and a separate component for releasing trapped particles, thereby enabling a simple yet effective trap-and-release of particles. We designed, fabricated, and characterized a microfluidic-based device for trapping and releasing single beads by controlling elastomeric valves driven by pneumatic pressure and a fluid flow action. The fluid flow is controlled to ensure that beads flowing in a main stream enter into a branch channel. A bead is trapped by deflected elastomeric valves positioned at the entrance of a branch channel. The trapped bead is easily released by removing the applied pressure. The trapping and releasing of single beads of 21 μm in diameter were successfully performed under an optimized pressure and flow rate ratio. Moreover, we confirmed that continuous trapping and releasing of single beads by repeatedly switching elastomeric valves enables the collection of a controllable number of beads. Our simple method can be integrated into microfluidic systems that require single or multiple particle arrays for quantitative and high-throughput assays in applications within the fields of biology and chemistry.
---
paper_title: Chromatographic behaviour of single cells in a microchannel with dynamic geometry.
paper_content:
We present the design of a microchannel with dynamic geometry that imparts different flow rates to different cells based on their physical properties. This dynamic microchannel is formed between a textured surface and a flexible membrane. As cells flow through the microchannel, the height of the channel oscillates, causing periodic entrapment of the larger cells and, as a result, attenuating their velocity relative to the bulk liquid. The smaller cells are not slowed by the moving microstructure, and move synchronously with the bulk liquid. The ability of the dynamic microchannel to selectively attenuate the flow rate of eukaryotic cells is similar to a size-exclusion chromatography column, but with the opposite behavior: the speed of smaller substances is attenuated relative to larger substances in traditional size-exclusion chromatography columns, whereas it is the speed of the larger substances that is attenuated in the dynamic microchannel. We verified this property by tracking the flow of single cells through the dynamic microchannel. L1210 mouse lymphoma cells (MLCs), peripheral blood mononuclear cells (PBMCs), and red blood cells (RBCs) were used as model cells. We showed that the flow rate of MLCs is slowed by more than 50% compared to PBMCs and RBCs. We characterized the operation of the microchannel by measuring the velocity of each of the three cell types as a function of the pressures used to oscillate the membrane position, as well as the duty cycle of the oscillation.
---
paper_title: A tunable micro filter modulated by pneumatic pressure for cell separation
paper_content:
Abstract This study reports a new microfluidic-based filter for size-tunable separation of microbeads or cells. The filtration separation mechanism is based on the pneumatically tunable deformation of poly-dimethylsiloxane (PDMS) membranes, which block the fluid channel to a varied degree. This defines the dimensions of the open area of the fluid channel and thus determines the maximum diameter of the microbeads or cells that can pass through. The proposed device incorporates pneumatic micropumps for automatic liquid handling. Another unique feature of this filter is an unclogging mechanism using a back-flush operating mode, by which a reverse-directional flow is utilized to flush the clogged filter zone. The separation performance of the proposed device has been experimentally evaluated. Results show that this developed device is able to provide precise size-dependent filtration, with a high passage efficiency (82–89%) for microbeads with sizes smaller than the defined void space in the filter zone. The proposed separation mechanism is also capable of providing a reasonable filtration rate (14.9–3.3 μl/min). Furthermore, the separation of chondrocytes from a 30 μl suspension of enzymatically digested tissue is successfully demonstrated, showing an excellent cell passage efficiency of 93% and a cell viability of 96%. The proposed device is therefore capable of performing cell separation in situations where either the harvested specimen is limited or the sample cell content is sparse. It also paves a new route to delicately separate or to isolate cells in a simple and controllable manner.
---
paper_title: A particle trapping chip using the wide and uniform slit formed by a deformable membrane and air bubble plugs
paper_content:
We present a highly efficient particle trapping chip, in which a wide and uniform slit is formed by a deformable membrane barrier with air bubble plugs. The air bubble plugs, which remain in the extended microchannel during the sample filling process, block particle passage at both side ends of the membrane, so that all particles flow through the uniform slit gap. Therefore, highly efficient particle trapping without particle loss can be achieved. In an experimental study using 10.3 μm-diameter polystyrene beads, the membrane barrier with the air bubble plugs successfully trapped the injected beads with a trapping efficiency of 100% at a flow rate of 10 μl/min, while the barrier without the air bubble plugs showed a low efficiency of 20%. This simple and effective particle trapping device is applicable to high-efficiency bioparticle isolation and recovery in micro total analysis systems.
---
paper_title: Clog-free cell filtration using resettable cell traps
paper_content:
The separation of cells by filtration through microstructured constrictions is limited by clogging and adsorption, which reduce selectivity and prevent the extraction of separated cells. To address this key challenge, we developed a mechanism for simply and reliably adjusting the cross-section of a microfluidic channel to selectively capture cells based on a combination of size and deformability. After a brief holding period, trapped cells can then be released back into flow, and if necessary, extracted for subsequent analysis. Periodically clearing filter constrictions of separated cells greatly improves selectivity and throughput, and minimizes adsorption of cells to the filter microstructure. This mechanism is capable of discriminating cell-sized polystyrene microspheres with <1 μm resolution. Rare cancer cells doped into leukocytes can be enriched ~1800× with ~90% yield despite a significant overlap in size between these cell types. An important characteristic of this process is that contaminant leukocytes are captured by non-specific adsorption and not mechanical constraint, enabling repeated filtration to improve performance. The throughput of this mechanism is 900 000 cells per hour for 32 multiplexed microchannels, or ~1 200 000 cells cm⁻² h⁻¹ on a per-area basis, which exceeds existing micropore filtration mechanisms by a factor of 20.
---
paper_title: A tunable microfluidic-based filter modulated by pneumatic pressure for separation of blood cells
paper_content:
This article presents a new microfluidic-based filter for the separation of microbeads or blood cells with a high filtration rate. The device was composed of a circular micropump for automatic liquid transport, and a normally closed valve located at the filter zone for separation of beads or cells. The filtration mechanism was based on the tunable deformation of polydimethylsiloxane (PDMS) membranes that defined the gap between a floating block structure and the substrate which determined the maximum diameter of the beads/cells that can pass through the filter. Another unique feature of this filter is an unclogging mechanism using a suction force, resulting in a back flow to remove any trapped beads/cells in the filter zone when the PDMS membrane was restored to its initial state. The separation performance of the proposed device was first experimentally evaluated by using microbeads. The results showed that this device was capable of providing size-tunable filtration with a high recovery efficiency (95.25–96.21%) for microbeads with sizes smaller than the defined gap in the filter zone. Furthermore, the proposed device was also capable of performing separation of blood cells and blood plasma from human whole blood. Experimental results showed that an optimum filtration rate of 21.40 and 3.00 μl/min correspond to high recovery efficiencies of 86.69 and 80.66%, respectively, for red blood cells (RBCs) and blood plasma. The separation method developed in this work could be used for various point-of-care diagnostic applications involving separation of plasma and blood cells.
---
paper_title: Cell handling using microstructured membranes.
paper_content:
Gentle and precise handling of cell suspensions is essential for scientific research and clinical diagnostic applications. Although different techniques for cell analysis at the micro-scale have been proposed, many still require that preliminary sample preparation steps be performed off the chip. Here we present a microstructured membrane as a new microfluidic design concept, enabling the implementation of common sample preparation procedures for suspensions of eukaryotic cells in lab-on-a-chip devices. We demonstrate the novel capabilities for sample preparation procedures by the implementation of metered sampling of nanoliter volumes of whole blood, concentration increase up to three orders of magnitude of sparse cell suspension, and circumferentially uniform, sequential exposure of cells to reagents. We implemented these functions by using microstructured membranes that are pneumatically actuated and allowed to reversibly decouple the flow of fluids and the displacement of eukaryotic cells in suspensions. Furthermore, by integrating multiple structures on the same membrane, complex sequential procedures are possible using a limited number of control steps.
---
paper_title: A Mechanically Tunable Microfluidic Cell-Trapping Device.
paper_content:
Abstract Controlled manipulation, such as isolation, positioning, and trapping of cells, is important in basic biological research and clinical diagnostics. Micro/nanotechnologies have been enabling more effective and efficient cell trapping than possible with conventional platforms. Currently available micro/nanoscale methods for cell trapping, however, still lack flexibility in precisely controlling the number of trapped cells. We exploited the large compliance of elastomers to create an array of cell-trapping microstructures, whose dimensions can be mechanically modulated by inducing uniformly distributed strain via application of external force on the chip. The device consists of two elastomer polydimethylsiloxane (PDMS) sheets, one of which bears dam-like, cup-shaped geometries to physically capture cells. The mechanical modulation is used to tune the characteristics of cell trapping to capture a predetermined number of cells, from single cells to multiple cells. Thus, enhanced utility and flexibility for practical applications can be attained, as demonstrated by tunable trapping of MCF-7 cells, a human breast cancer cell line.
---
paper_title: A tunable micro filter modulated by pneumatic pressure for cell separation
paper_content:
This study reports a new microfluidic-based filter for size-tunable separation of microbeads or cells. The filtration separation mechanism is based on the pneumatically tunable deformation of poly-dimethylsiloxane (PDMS) membranes, which block the fluid channel to a varying degree. This defines the dimensions of the open area of the fluid channel and thus determines the maximum diameter of the microbeads or cells that can pass through. The proposed device incorporates pneumatic micropumps for automatic liquid handling. Another unique feature of this filter is an unclogging mechanism using a back-flush operating mode, by which a reverse-directional flow is utilized to flush the clogged filter zone. The separation performance of the proposed device has been experimentally evaluated. Results show that this developed device is able to provide precise size-dependent filtration, with a high passage efficiency (82–89%) for microbeads with sizes smaller than the defined void space in the filter zone. The proposed separation mechanism is also capable of providing a reasonable filtration rate (14.9–3.3 μl/min). Furthermore, the separation of chondrocytes from a 30 μl suspension of enzymatically digested tissue is successfully demonstrated, showing an excellent cell passage efficiency of 93% and a cell viability of 96%. The proposed device is therefore capable of performing cell separation in situations where either the harvested specimen is limited or the sample cell content is sparse. It also paves a new route to delicately separate or isolate cells in a simple and controllable manner.
---
paper_title: Dynamic trapping and high-throughput patterning of cells using pneumatic microstructures in an integrated microfluidic device.
paper_content:
Microfluidic trapping methods create significant opportunities to establish highly controlled cell positioning and arrangement for the microscale study of numerous cellular physiological and pathological activities. However, a simple, straightforward, dynamic, and high-throughput method for cell trapping is not yet well established. In the present paper, we report a direct active trapping method using an integrated microfluidic device with pneumatic microstructures (PμSs) for both operationally and quantitatively dynamic localization of cells, as well as for high-throughput cell patterning. We designed and fabricated U-shape PμS arrays to replace the conventional fixed microstructures for reversible trapping. Multidimensional dynamics and spatial consistency of the PμSs were optically characterized and quantitatively demonstrated. Furthermore, we performed a systematic trapping investigation of the PμSs actuated at a pressure range of 0 psi to 20 psi using three types of popularly applied mammalian cells, namely, human lung adenocarcinoma A549 cells, human hepatocellular liver carcinoma HepG2 cells, and human breast adenocarcinoma MCF-7 cells. The cells were quantitatively trapped and controlled by the U-shape PμSs in a programmatic and parallel manner, and could be opportunely released. The trapped cells with high viability were hydrodynamically protected by the real-time actuation of specifically designed umbrella-like PμSs. We demonstrate that PμSs can be applied as an active microfluidic component for large-scale cell patterning and manipulation, which could be useful in many cell-based tissue organization, immunosensor, and high-throughput imaging and screening.
---
paper_title: A particle trapping chip using the wide and uniform slit formed by a deformable membrane and air bubble plugs
paper_content:
We present a highly efficient particle trapping chip in which a wide and uniform slit is formed by a deformable membrane barrier with air bubble plugs. The air bubble plugs, which remain in the extended microchannel during the sample filling process, block particle passage at both side ends of the membrane, so that all particles flow through the uniform slit gap. Highly efficient particle trapping without particle loss can therefore be achieved. In an experimental study using 10.3 μm-diameter polystyrene beads, the membrane barrier with the air bubble plugs successfully trapped the injected beads with a trapping efficiency of 100% at a flow rate of 10 μl/min, while the barrier without the air bubble plugs showed a low efficiency of 20%. This simple and effective particle trapping device is applicable to highly efficient bioparticle isolation and recovery in micro total analysis systems.
---
paper_title: Continuous blood cell separation by hydrophoretic filtration.
paper_content:
We propose a new hydrophoretic method for continuous blood cell separation using a microfluidic device composed of slanted obstacles and filtration obstacles. The slanted obstacles have a larger height and gap than the particles in order to focus them to a sidewall by hydrophoresis. In the successive structure, the height and gap of the filtration obstacles with a filtration pore are set between the diameters of small and large particles, which defines the critical separation diameter. Accordingly, the particles smaller than the criterion freely pass through the gap and keep their focused position. In contrast, the particles larger than the criterion collide against the filtration obstacle and move into the filtration pore. The microfluidic device was characterized with polystyrene beads with a minimum diameter difference of 7.3%. We completely separated polystyrene microbeads of 9 and 12 μm diameter with a separation resolution of approximately 6.2. This resolution is increased by 6.4-fold compared with our previous separation method based on hydrophoresis (S. Choi and J.-K. Park, Lab Chip, 2007, 7, 890, ref. 1). In the isolation of white blood cells (WBCs) from red blood cells (RBCs), the microfluidic device isolated WBCs with 210-fold enrichment within a short filtration time of approximately 0.3 s. These results show that the device can be useful for the binary separation of a wide range of biological particles by size. The hydrophoretic filtration as a sample preparation unit offers potential for a power-free cell sorter to be integrated into disposable lab-on-a-chip devices.
---
paper_title: A pillar-based microfilter for isolation of white blood cells on elastomeric substrate
paper_content:
Our goal is to design, fabricate, and characterize a pillar-based microfluidic device for size-based separation of human blood cells on an elastomeric substrate with application in the low-cost rapid prototyping of lab-chip devices. The single inlet single outlet device is using parallel U-shape arrays of pillars with cutoff size of 5.5 μm for trapping white blood cells (WBCs) in a pillar chamber with internal dead-volume of less than 1.0 μl. The microstructures are designed to limit the elastomeric deformation against fluid pressures. Numerical analysis showed that at maximum pressure loss of 15 kPa which is lower than the device conformal bonding strength, the pillar elastomeric deformation is less than 5% for flow rates of up to 1.0 ml min−1. Molding technique was employed for device prototyping using polyurethane methacrylate (PUMA) resin and polydimethylsiloxane (PDMS) mold. Characterization of the dual-layer device with beads and blood samples is performed. Tests with blood injection showed that ∼18%–25% of WBCs are trapped and ∼84%–89% of red blood cells (RBCs) are passed at flow rates of 15–50 μl min−1 with a slight decrease of WBCs trap and improve of the RBCs pass at higher flow rates. Similar results were obtained by separation of mixed microspheres of different size injected at flow rates of up to 400 μl min−1. Tests with blood samples stained by fluorescent gel demonstrated that the WBCs are accumulated in the arrays of pillars that later end up to blockage of the device. Filtration results of using elastomeric substrate present a good consistency with the trend of separation efficiencies of the similar silicon-based filters.
---
paper_title: Single-Cell Enzyme Concentrations, Kinetics, and Inhibition Analysis Using High-Density Hydrodynamic Cell Isolation Arrays
paper_content:
High-quality single-cell data are required for a quantitative systems biology description of cellular function. However, data of this type are difficult and time-consuming to collect using traditional techniques. We present a robust and simple microfluidic method for trapping single cells in large arrays to address this problem. Ordered single-cell isolation arrays allow for high-density microscopic analysis with simplified image processing. Moreover, for fluorescent assays, on-chip sample preparation (e.g., fluorescent labeling, washing) can be performed, as opposed to the manually intensive operations of incubation, centrifugation, and resuspension in previous techniques, saving time and reagents. This technology was applied to determine novel single-cell enzyme kinetics for three different cell types (HeLa, 293T, Jurkat). A kinetic model of this process predicted that this varied response was due to variation in the concentration of carboxylesterase between cell types. Nordihydroguaiaretic acid (NDGA) was also char...
---
paper_title: Continuous Particle Separation Through Deterministic Lateral Displacement
paper_content:
We report on a microfluidic particle-separation device that makes use of the asymmetric bifurcation of laminar flow around obstacles. A particle chooses its path deterministically on the basis of its size. All particles of a given size follow equivalent migration paths, leading to high resolution. The microspheres of 0.8, 0.9, and 1.0 micrometers that were used to characterize the device were sorted in 40 seconds with a resolution of ∼10 nanometers, which was better than the time and resolution of conventional flow techniques. Bacterial artificial chromosomes could be separated in 10 minutes with a resolution of ∼12%.
---
| Title: Tunable Microfluidic Devices for Hydrodynamic Fractionation of Cells and Beads: A Review
Section 1: Introduction
Description 1: Introduce the significance of effective fractionation of beads and cells in microfluidic devices and outline the paper's focus on tunable microfluidic devices using mechanical actuators for hydrodynamic-based fractionation.
Section 2: Mechanical Tunability
Description 2: Discuss the concept of mechanical tunability in microfluidic devices, including the effect of forced deformation on microstructures and how it can alter fluid flow parameters and interaction with microbeads or cells.
Section 3: Elastomeric Substrate
Description 3: Review the materials used in fabrication of microfluidic devices, focusing on elastomeric substrates like PDMS and their importance for achieving substantial deformation under small forces.
Section 4: Microstructure Design
Description 4: Explore the design of microstructures for size-based hydrodynamic manipulation of beads and cells, detailing methods like compressing or stretching to achieve mechanical tuning.
Section 5: Device Tuning Methods
Description 5: Describe different models introduced for manipulating microbeads and cells via mechanical tuning, including bulk deformation and local or membrane-based deformation.
Section 6: Deformation of Microstructure Layer
Description 6: Examine how physical manipulation of cells and beads is implemented through bulk deformation by altering the overall dimensions of the microfluidic device.
Section 7: Separation by Arrays of Micropillars
Description 7: Detail the use of elastomeric microfluidic devices with pillar-based microstructures for separation of microbeads and cells, including specific examples and their tunable characteristics.
Section 8: Cup-Shaped Elements for Trapping
Description 8: Discuss tunable microfluidic devices with cup-shaped elements for cell trapping, including methods for adjusting the trapping capacity by stretching the device.
Section 9: Tuning of Hydrophoretic Effect
Description 9: Review the concept and methods of tunable hydrophoretic continuous separation by elastomeric deformation of microchannel cross-sections.
Section 10: Elastomeric Membrane Deformation
Description 10: Analyze methods of local deformation of thin membranes under pneumatic pressures for mechanical tuning in microfluidic devices, focusing on techniques like blockage of microchannel cross-sections and dynamic cup-shaped elements.
Section 11: Conclusions
Description 11: Summarize the advancements in mechanically tunable microfluidic devices for size-based fractionation of cells and beads, discussing their potential applications, limitations, and future development opportunities. |
History of materials used for recording static and dynamic occlusal contact marks: a literature review | 18 | ---
paper_title: In vivo and in vitro evaluation of occlusal indicator sensitivity.
paper_content:
Abstract Statement of Problem. Indicators used to locate and eliminate occlusal disharmonies have not demonstrated specific sensitivity and reliability. Purpose. The sensitivity and reliability of articulating papers, foils, silk strips, and T-Scan systems used as occlusal indicators were investigated. The effect of saliva on the materials also was determined. Material and Methods. In the in vitro part of the study, a test model (mounted in an articulator and in a universal testing machine) was established with the use of maxillary and mandibular dentate casts. Articulating papers, foils, silk strips, and the T-Scan system were used to examine the loss of sensitivity of the recording materials after 3 consecutive strokes. The differences in the contact points of the test model determined by each of the recording materials were evaluated both in the articulator and in a universal testing machine. In the in vivo part of the study, occlusal contact recordings of 3 subjects were made before and after drying their mouths. The significance of the differences between the strokes repeated more than once was evaluated with the Friedman 2-way analysis of variance and Kruskal-Wallis tests. To examine the effect of the oral environment, the Wilcoxon matched-pairs signed-ranks test was applied. In all statistical analyses, the level of significance was α=.05. Results. The results demonstrated significant differences in the sensitivity of the recording materials tested ( P P P P Conclusion. Within the limitations of this study, the results indicated that multiple use of the recording materials tested may lead to inaccurate occlusal analysis results. It is recommended that the recording materials be used only once and that the teeth be dry during occlusal analysis. (J Prosthet Dent 2002;88:522-6.)
---
paper_title: Evaluation of three occlusal examination methods used to record tooth contacts in lateral excursive movements.
paper_content:
Abstract Accurate and repeatable methods for recording tooth contacts are required for the clinical management of problems related to occlusion. A thorough understanding of the materials and procedures used in these methods is important to achieve desirable results in the treatment of such problems. This study compared three occlusal examination methods to determine the influence of materials and procedures on the number of tooth contacts recorded. Tooth contacts were analyzed at two lateral mandibular positions with each method. It was found that the method that uses black silicone recorded the highest number of tooth contacts. Thus the most frequent type of occlusal pattern observed was full-balanced occlusion. This study suggested that the disparities of results reported in literature on occlusal contact patterns could be the result of the different materials and methods used for occlusal registration.
---
| Title: History of materials used for recording static and dynamic occlusal contact marks: a literature review
Section 1: Introduction
Description 1: Provide an overview of the importance of detecting occlusal contacts and the issues associated with occlusal interferences in dentistry.
Section 2: Classification of tooth-contact patterns
Description 2: Explain the various classifications of tooth contact patterns and discuss how these patterns were traditionally recognized and evaluated.
Section 3: Occlusion indicating materials and techniques used in the past and present
Description 3: Describe the different materials and methods that have been employed over time for detecting occlusal contact points, including their accuracy, sensitivity, and reproducibility.
Section 4: Mylar Paper / Shimstock films
Description 4: Focus on the use of Mylar paper and Shimstock films in occlusal analysis, including their advantages and reliability.
Section 5: Polyether Silicon Impression Bites
Description 5: Discuss the utility of Polyether Silicon Impression materials in recording occlusal contact patterns and their practical applications.
Section 6: Silicon Putty Material
Description 6: Examine the use of silicone putty materials in occlusal studies and highlight their distinct methodologies and findings.
Section 7: Wax
Description 7: Address the use of wax materials in recording occlusal contacts, including both its advantages and limitations.
Section 8: Wax Articulation Paper
Description 8: Discuss the properties and clinical significance of wax articulation papers in the identification of occlusal high spots.
Section 9: Silk Strips
Description 9: Investigate the effectiveness of silk strips as an occlusal indicator, noting any challenges to their use.
Section 10: Foil
Description 10: Provide an overview of the application of foils in detecting occlusal contacts, emphasizing their accuracy and usage.
Section 11: Black Silicone
Description 11: Explain the use of black silicone in occlusal registration and its efficacy in identifying contact points.
Section 12: High Spot Indicator
Description 12: Describe the use of liquid high spot indicators for detecting occlusal contacts, and their effectiveness on highly polished surfaces.
Section 13: Occlusal Sprays
Description 13: Discuss the use and advantages of occlusal sprays as universal color indicators for occlusal contact detection.
Section 14: Photo-Occlusion
Description 14: Examine the photo-occlusion system and its application in determining tooth contact points, including the challenges associated with its use.
Section 15: Occlusion Sonography
Description 15: Explore the use of sound-based techniques in detecting occlusal contacts and their historical development.
Section 16: T-Scan
Description 16: Provide insights into the computerized T-Scan system for occlusal analysis, including its methodology, advantages, and limitations.
Section 17: Pressure Sensitive Films
Description 17: Evaluate the use of pressure-sensitive films for recording occlusal patterns, noting their reliability and application limitations.
Section 18: Conclusion
Description 18: Summarize the review findings, discuss the varying rates of effectiveness of different occlusal indicating materials, and conclude with remarks on the best practices based on different clinical situations. |
Modeling Industrial Lot Sizing Problems: A Review | 11 | ---
paper_title: Meta-heuristics for dynamic lot sizing: A review and comparison of solution approaches
paper_content:
Proofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search, genetic algorithms and simulated annealing have become popular and efficient tools for solving hard combinatorial optimization problems. We review the various meta-heuristics that have been specifically developed to solve lot sizing problems, discussing their main components such as representation, evaluation, neighborhood definition and genetic operators. Further, we briefly review other solution approaches, such as dynamic programming, cutting planes, Dantzig-Wolfe decomposition, Lagrange relaxation and dedicated heuristics. This allows us to compare these techniques. Understanding their respective advantages and disadvantages gives insight into how we can integrate elements from several solution approaches into more powerful hybrid algorithms. Finally, we discuss general guidelines for computational experiments and illustrate these with several examples.
---
paper_title: Sequencing and scheduling : algorithms and complexity
paper_content:
Sequencing and scheduling as a research area is motivated by questions that arise in production planning, in computer control, and generally in all situations in which scarce resources have to be allocated to activities over time. In this survey, we concentrate on the area of deterministic machine scheduling. We review complexity results and optimization and approximation algorithms for problems involving a single machine, parallel machines, open shops, flow shops and job shops. We also pay attention to two extensions of this area: resource-constrained project scheduling and stochastic machine scheduling.
---
paper_title: Inventory Management and Production Planning and Scheduling
paper_content:
THE CONTEXT AND IMPORTANCE OF INVENTORY MANAGEMENT AND PRODUCTION PLANNING AND SCHEDULING. The Importance of Inventory Management and Production Planning and Scheduling. Strategic Issues. Frameworks for Inventory Management and Production Planning and Scheduling. Forecasting. TRADITIONAL REPLENISHMENT SYSTEMS FOR MANAGING INDIVIDUAL-ITEM INVENTORIES. Order Quantities When Demand is Approximately Level. Lot Sizing for Individual Items with Time-Varying Demand. Individual Items with Probabilistic Demand. SPECIAL CLASSES OF ITEMS. Managing the Most Important (Class A) Inventories. Managing Routine (Class C) Inventories. Style Goods and Perishable Items. THE COMPLEXITIES OF MULTIPLE ITEMS AND MULTIPLE LOCATIONS. Coordinated Replenishments at a Single Stocking Point. Supply Chain Management and Multiechelon Inventories. PRODUCTION PLANNING AND SCHEDULING. An Overall Framework for Production Planning and Scheduling. Medium-Range Aggregate Production Planning. Material Requirements Planning and its Extensions. Just-in-Time and Optimized Production Technology. Short-Range Production Scheduling. Summary. Appendices. Indexes.
---
paper_title: Manufacturing Planning and Control Systems
paper_content:
The book is well-known for having the most current coverage available. A "non-numerical" approach is used with thoroughly integrated real applications. The Third Edition will provide complete integration of JIT concepts and techniques, continued use of real-world examples, and improved organization and style. There is more coverage of global factors, human issues, and strategic issues. The book also provides an introduction to production planning and control, as well as coverage of more advanced topics.
---
paper_title: Dynamic Version of the Economic Lot Size Model
paper_content:
(This article originally appeared in Management Science, October 1958, Volume 5, Number 1, pp. 89-96, published by The Institute of Management Sciences.) A forward algorithm for a solution to the following dynamic version of the economic lot size model is given: allowing the possibility of demands for a single item, inventory holding charges, and setup costs to vary over N periods, we desire a minimum total cost inventory management scheme which satisfies known demand in every period. Disjoint planning horizons are shown to be possible which eliminate the necessity of having data for the full N periods.
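For readers who want to see the recursion concretely, the following is a minimal Python sketch of the standard O(T^2) dynamic program for this model (single item, fixed setup cost K, linear holding cost h, no backlogging). It follows the textbook recursion rather than the original 1958 exposition, and the example data are invented.

def wagner_whitin(d, K, h):
    T = len(d)
    best = [0.0] + [float("inf")] * T        # best[t] = minimum cost of covering periods 1..t
    pred = [0] * (T + 1)
    for t in range(1, T + 1):
        for j in range(1, t + 1):            # last order is placed in period j and covers j..t
            # holding cost: the units demanded in period k are carried (k - j) periods
            hold = sum(h * (k - j) * d[k - 1] for k in range(j, t + 1))
            cost = best[j - 1] + K + hold
            if cost < best[t]:
                best[t], pred[t] = cost, j
    # recover the periods in which orders are placed
    orders, t = [], T
    while t > 0:
        orders.append(pred[t])
        t = pred[t] - 1
    return best[T], sorted(orders)

# Example (made-up demands): cost, order_periods = wagner_whitin([20, 50, 10, 50, 50], K=100, h=1)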
---
paper_title: A Backlogging Model and a Multi-Echelon Model of a Dynamic Economic Lot Size Production System---A Network Approach
paper_content:
Two dynamic economic lot size production systems are analyzed in this paper, the first being a single product model with backlogging and the second a multi-echelon model. In each model the objective is to find a production schedule that minimizes the total production and inventory costs. ::: ::: A key conceptual difficulty is that the mathematically perplexing problem of minimizing a concave function is being considered. It is shown that both models are naturally represented via single source networks. The network formulations reveal the underlying structure of the models, and facilitate development of efficient dynamic programming algorithms for calculating the optimal production schedules.
---
paper_title: Uncapacitated Lot-Sizing Problems with Start-Up Costs
paper_content:
We consider the uncapacitated economic lot-sizing problem with start-up costs as a mixed integer program. A family of strong valid inequalities is derived for the class as well as a polynomial separation algorithm. It is then shown how equivalent, or possibly stronger, formulations are obtained by the introduction of auxiliary variables. Finally, some limited computational results for a single item model and a multi-item model with changeover costs are reported.
---
paper_title: The discrete lot-sizing and scheduling problem
paper_content:
Abstract The discrete lot-sizing and scheduling problem consists in scheduling several products on a single machine so as to meet the known dynamic demand and to minimize the sum of inventory and setup cost. The planning interval is phased into many short periods, e.g. shifts or days, and setups may occur only at the beginning of a period. A branch-and-bound procedure is presented using Lagrangean relaxation for determining both lower bounds and feasible solutions. The relaxed problems are solved by dynamic programming. Computational results on a personal computer are reported for various examples from the literature with up to 12 products and 122 periods or 3 products and 250 periods. The procedure yields optimal solutions or at least feasible solutions with tight lower bounds in a few minutes. The results are compared with those obtained by solving the usual capacitated lot-sizing problem.
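In modern textbook notation (ours, not the paper's), the DLSP sketched above can be written as the following small MIP, where y_{jt} = 1 if the machine is set up for item j in (short) period t and production then occurs at full capacity C, z_{jt} flags a new setup, s_j and h_j are setup and holding costs, and d_{jt} is demand:

\begin{align}
\min\ & \sum_{j}\sum_{t}\bigl(s_j\, z_{jt} + h_j\, I_{jt}\bigr) \\
\text{s.t.}\ & I_{jt} = I_{j,t-1} + C\, y_{jt} - d_{jt} && \forall j,t \\
& \sum_{j} y_{jt} \le 1 && \forall t \\
& z_{jt} \ge y_{jt} - y_{j,t-1} && \forall j,t \\
& I_{jt} \ge 0, \quad y_{jt}, z_{jt} \in \{0,1\} && \forall j,t
\end{align}

The all-or-nothing production rule and the period-indexed setup variables are what distinguish this small-bucket model from the usual capacitated lot-sizing problem mentioned at the end of the abstract.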
---
paper_title: The capacitated dynamic lot-sizing problem with startup and reservation costs: A forward algorithm solution
paper_content:
Abstract Often in production environments, there are several products available to be processed on a machine. The decisions that must be made are: 1) which product should be processed next, and 2) how much time should elapse before a changeover occurs and a different product is chosen for processing. This problem is known as the multiple product cycling problem . The single product variation of this problem is addressed in this paper. It is assumed that the product to be processed next has already been chosen (decision #1). The second decision concerning production run length must now be determined. In the single product problem, there is a fixed startup cost associated with turning the machine on in a period when the machine was off in the previous period and a fixed reservation cost incurred in each period in which the machine is on. Ending inventory in a period is assessed a holding cost. The relationship of the single product problem to the multiple period problem is as follows. The startup cost is identical to the changeover cost in the multiple product problem. By processing a particular product, other products must wait. This effect is captured by the reservation cost which is the opportunity cost one would incur by having the machine dedicated to processing a particular product. A forward dynamic programming algorithm is presented which uses heuristic rules to detect empirical decision horizons. The entire problem is partitioned into smaller Subproblems which are then easily solved. The subproblems are created by the occurrence of startup regeneration points. These are periods where there is no ending inventory and the following period contains a machine startup. Computational experience is reported which shows that reductions in computational effort (measured in CPU time and the number of nodes in the decision tree evaluated) can be as high as 95 to 99% when using the decision horizon techniques and bounding procedures described in this paper. These results are based on the solution of 28 test problems that were each 20 periods in length. Because a forward algorithm was used, not only were the solutions to the 20-period problems obtained, but the solutions to the one-period problems through 19-period problems were obtained as well. Even though the procedure described in this paper cannot guarantee that it will always find the optimal solution, it did find the optimal solution in every problem tested. By being able to solve the single product problem presented in this paper quickly, the ability to rapidly generate optimal or near optimal solutions to the multiple product problem (which uses the procedure described in this paper as a subroutine) appears closer to realization.
---
paper_title: Discrete Lotsizing and Scheduling by Batch Sequencing
paper_content:
The discrete lotsizing and scheduling problem for one machine with sequence dependent setup times and setup costs is solved as a single machine scheduling problem, which we term the batch sequencing problem. The relationship between the lotsizing problem and the batch sequencing problem is analyzed. The batch sequencing problem is solved with a branch & bound algorithm which is accelerated by bounding and dominance rules. The algorithm is compared with recently published procedures for solving variants of the DLSP and is found to be more efficient if the number of items is not large.
---
paper_title: A dynamic lot-sizing model with multi-mode replenishments: polynomial algorithms for special cases with dual and multiple modes
paper_content:
Abstract This paper generalizes the classical dynamic lot-sizing model to consider the case where replenishment orders may be delivered by multiple shipment modes. Each mode may have a different lead time and is characterized by a different cost function. The model represents those applications in which products can be purchased through various suppliers or delivered from a single source using various transportation modes with different lead times and costs. The problem is challenging due to the consideration of cargo capacity constraints, i.e., the multiple set-ups cost structure, associated with a replenishment mode. The paper presents several structural optimality properties of the problem and develops efficient algorithms, based on the dynamic programming approach, to find the optimal solution. The special, yet practical, cases of the two-mode replenishment problem analyzed in this paper are analytically tractable, and hence, the respective problems can be solved in polynomial time.
---
paper_title: Rolling-horizon lot-sizing when set-up times are sequence-dependent
paper_content:
The challenging problem of efficient lot sizing on parallel machines with sequence-dependent set-up times is modelled using a new mixed integer programming (MIP) formulation that permits multiple set-ups per planning period. The resulting model is generally too large to solve optimally and, given that it will be used on a rolling horizon basis with imperfect demand forecasts, approximate models that only generate exact schedules for the immediate periods are developed. Both static and rolling horizon snapshot tests are carried out. The approximate models are tested and found to be practical rolling horizon proxies for the exact model, reducing the dimensionality of the problem and allowing for faster solution by MIP and metaheuristic methods. However, for large problems the approximate models can also consume an impractical amount of computing time and so a rapid solution approach is presented to generate schedules by solving a succession of fast MIP models. Tests show that this approach is able to produc...
---
paper_title: Multi-Item Lot Size Scheduling by Heuristic Part I: With Fixed Resources
paper_content:
The multi-item lot size problem has been a formal management issue for some decades. The lot size decision of how much to produce and when usually considers the trade-off between lost productivity from frequent set-ups and short runs and the higher inventory costs arising from longer runs. When the decision must also consider shared limited production resources, the problem becomes complex. ::: ::: The paper is presented in two parts. Part I outlines a heuristic for solving the multi-item lot size problem given fixed production resources. The problem is initially structured as a network of unlimited capacity. An arc-cutting criterion is suggested, successively paring the unconstrained lot size optimum in low-cost increments until a feasible integer solution occurs. ::: ::: In Part II of this 2 part paper, the heuristic is extended to include variable capacity constraints. Computational results for both the fixed and variable capacity configurations and the bibliography conclude the presentation.
---
paper_title: The Joint Replenishment Problem with Time-Varying Costs and Demands: Efficient, Asymptotic and ε-Optimal Solutions
paper_content:
We address the Joint Replenishment Problem (JRP) where, in the presence of joint setup costs, dynamic lot sizing schedules need to be determined for m items over a planning horizon of N periods, with general time-varying cost and demand parameters. We develop a new, so-called, partitioning heuristic for this problem, which partitions the complete horizon of N periods into several relatively small intervals, specifies an associated joint replenishment problem for each of these, and solves them via a new, efficient branch-and-bound method. The efficiency of the branch-and-bound method is due to the use of a new, tight lower bound to evaluate the nodes of the tree, a new branching rule, and a new upper bound for the cost of the entire problem. The partitioning heuristic can be implemented with complexity O(mN^2 log log N). It can be designed to guarantee an ε-optimal solution for any ε > 0, provided that some of the model parameters are uniformly bounded from above or below. In particular, the heuristic is asy...
---
paper_title: An effective heuristic for the CLSP with set-up times
paper_content:
The problem of multi-item, single level, capacitated, dynamic lot-sizing with set-up times (CLSP with set-up times) is considered. The difficulty of the problem compared with its counterpart without set-up times is explained. A lower bound on the value of the objective function is calculated by Lagrangian relaxation with subgradient optimisation. During the process, attempts are made to get good feasible solutions (ie. upper bounds) through a smoothing heuristic, followed by a local search with a variable neighbourhood. Solutions found in this way are further optimised by solving a capacitated transshipment problem. The paper describes the various elements of the solution procedure and presents the results of extensive numerical experimentation.
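For reference, a common statement of the CLSP with setup times that is being relaxed here is the following (our notation, not necessarily the paper's): x_{it} is the lot size of item i in period t, y_{it} the setup indicator, a_i and st_i the unit processing and setup times, and C_t the capacity:

\begin{align}
\min\ & \sum_{i}\sum_{t}\bigl(s_i\, y_{it} + h_i\, I_{it}\bigr) \\
\text{s.t.}\ & I_{it} = I_{i,t-1} + x_{it} - d_{it} && \forall i,t \\
& \sum_{i}\bigl(a_i\, x_{it} + st_i\, y_{it}\bigr) \le C_t && \forall t \\
& x_{it} \le M\, y_{it}, \quad x_{it}, I_{it} \ge 0, \quad y_{it}\in\{0,1\} && \forall i,t
\end{align}

Dualizing the capacity constraints with multipliers λ_t ≥ 0 makes the relaxed problem separate into single-item uncapacitated lot-sizing problems, which is why Wagner-Whitin-type routines can be used for the subproblems inside the subgradient loop.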
---
paper_title: Minimum Change-Over Scheduling of Several Products on One Machine
paper_content:
Given a finite horizon delivery schedule for n products we wish to schedule production on a single machine to meet deliveries and minimize the number of change-overs of the machine from one product to another. A state space is defined and in it a network is constructed such that the shortest distance through the network corresponds to the minimum number of production change-overs. Certain properties of the optimal path are deduced from the dynamic programming formulation of the shortest route problem, and these properties are utilized in the construction of an algorithm that finds the optimal path. A numerical example illustrates the method.
---
paper_title: Setup cost reduction in the dynamic lot-size model
paper_content:
Recently, one primary focus of Operations Management has turned to setup reduction because of the growth of Just-in-Time (JIT) manufacturing. Porteus [1] and Billington [2] have developed optimal policies for calculating investment in setup reduction and lot-size in the EOQ model when setup cost is some function of investment. Zangwill [3] has examined the effects of incremental setup cost reductions in the multi-facility dynamic demand environment on costs and zero-inventory facilities. His focus is on obtaining zero-inventory facilities and maximizing savings from lower setup costs without including the cost of such an event. As an extension of these developments, we model the Wagner-Whitin problem with a one-time opportunity to invest in setup reduction. Setup cost is treated as a policy variable and defined as a function of the decision variable representing the annual amortized investment in setup cost reduction. We use an exponential setup reduction function, but speculate that the results also hold for other functions as well. In contrast to Zangwill, we take a direct approach by explicitly finding the optimal investment in setup cost reduction while generating an optimal lot-sizing schedule. Solving a model that incorporates the trade-off between investment and savings results in more realistic solutions. We use a golden section search and the Wagner-Whitin algorithm to obtain solutions for lot-size, setup cost, and the investment in setup reduction. This model is also formulated as a network to better illustrate the interaction between the decision variables. The network formulation can also be exploited to solve the problem with linear programming or network techniques. Finally, we state and prove two theorems that hold for all setup reduction models whether they assume constant or dynamic demand. The first theorem asserts that optimal values for setup cost and lot-sizes stay fixed over a particular range of holding costs. The second theorem states that the optimal setup cost is independent of initial setup cost.
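A minimal sketch of the investment/lot-sizing trade-off described above, under stated simplifications: the paper couples the investment decision with the Wagner-Whitin schedule, whereas the toy code below uses the EOQ cost as the inner evaluation and a coarse grid search instead of a golden section search; all parameter values are invented.

import math

def total_cost(invest, K0=500.0, b=0.01, demand=1200.0, h=2.0, r=0.15):
    # Exponential setup-reduction function: setup cost falls as investment grows.
    K = K0 * math.exp(-b * invest)
    # Amortized investment plus the classical EOQ cost sqrt(2*K*D*h).
    return r * invest + math.sqrt(2 * K * demand * h)

best_invest = min(range(0, 501), key=total_cost)
print(best_invest, round(total_cost(best_invest), 2), round(total_cost(0), 2))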
---
paper_title: Algorithms for Capacitated, Multi-Item Lot-Sizing without Set-Ups
paper_content:
The multi-item lot-sizing problem considered here is concerned with finding the lot sizes over a horizon of discrete time periods to meet known future demand without incurring backlogs, such that the total cost of production and inventory holding is minimized. The capacity constraints arise because the production of each item consumes capacitated production resources at a given rate. Production is assumed to occur without set-ups. The problem is formulated as a capacitated trans-shipment problem. Use of modern, minimum-cost network flow algorithms, coupled with appropriate starting procedures, allows realistically large problem instances to be solved efficiently; thus obviating the need for specialized algorithms based on restrictive assumptions regarding cost structures.
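The trans-shipment view described above can be prototyped directly with an off-the-shelf network-flow solver. The sketch below (single item, invented data, networkx assumed available) builds one production arc per period and holding arcs between consecutive periods, then calls a min-cost-flow routine; the paper itself handles the multi-item case.

import networkx as nx

demand   = [30, 40, 20]        # demand per period (made-up data)
capacity = [50, 50, 50]        # production capacity per period
c_prod, c_hold = 5, 1          # unit production and holding costs

G = nx.DiGraph()
G.add_node("S", demand=-sum(demand))             # a single source supplies total demand
for t, d in enumerate(demand):
    G.add_node(t, demand=d)                      # each period node absorbs its own demand
    G.add_edge("S", t, capacity=capacity[t], weight=c_prod)   # production arc
    if t + 1 < len(demand):
        G.add_edge(t, t + 1, weight=c_hold)      # inventory carried into the next period

flow = nx.min_cost_flow(G)
print(flow["S"])                                 # production quantity chosen in each period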
---
paper_title: The Capacitated Lot-Sizing Problem with Linked Lot Sizes
paper_content:
In this paper a new mixed integer programming (MIP) model formulation and its incorporation into a time-oriented decomposition heuristic for the capacitated lot-sizing problem with linked lot sizes (CLSPL) is proposed. The solution approach is based on an extended model formulation and valid inequalities to yield a tight formulation. Extensive computational tests prove the capability of this approach and show a superior solution quality with respect to other solution algorithms published so far.
---
paper_title: A framework for modelling setup carryover in the capacitated lot sizing problem
paper_content:
This paper addresses the single-level capacitated lot sizing problem (CLSP) with setup carryover. Specifically, we consider a class of production planning problems in which multiple products can be produced within a time period and significant setup times are incurred when changing from one product to another. Hence, there might be instances where developing a feasible schedule becomes possible only if setups are carried over from one period to another. We develop a modelling framework to formulate the CLSP with setup times and setup carryovers. We then extend the modelling framework to include multiple machines and tool requirements planning. The need for such a model that integrates both planning and lot sizing decisions is motivated by the existence of a similar problem in a paper mill. We apply the modelling framework to solve optimally, an instance of the paper mill's problem.
---
paper_title: The discrete lot-sizing and scheduling problem with sequence-dependent setup costs
paper_content:
Abstract We consider the problem of scheduling several products on a single machine so as to meet the known dynamic demand and to minimize the sum of inventory costs and sequence-dependent setup costs. The planning interval is subdivided into many short periods, e.g. shifts or days, and any lot must last one or several full periods. We formulate this problem as a travelling salesman problem with time windows and present a new procedure for determining lower bounds using Lagrangean relaxation as well as a heuristic. Computational results for problems with up to 10 products and 150 periods are reported.
---
paper_title: A Dual Ascent Procedure for Multiproduct Dynamic Demand Coordinated Replenishment with Backlogging
paper_content:
This paper describes a mixed-integer programming formulation and dual ascent based branch-and-bound algorithm for the multiproduct dynamic demand coordinated replenishment problem with backlogging. The single sourcing properties of the formulation and the hierarchical structure of the fixed-charge and continuous variables yield an extremely tight linear programming relaxation for the problem. A branch-and-bound algorithm based on Erlenkotter's dual ascent, dual adjustment, and primal construction concepts exploits these properties to obtain an efficient solution procedure. Computational results indicate that the new procedures find optimal solutions in less than five percent of the computational time of the most efficient previous algorithm. The heuristic performance of the procedures also demonstrate their superiority over existing approaches. We solved problems with 12 time periods and 20 products in 0.41 CPU seconds, and heuristic solutions with a worst-case three-percent optimality gap are found in 0.068 CPU seconds. The efficiency and large-scale capability of the procedures make their potential application in inventory requirements planning systems promising.
---
paper_title: A Dynamic Lot Sizing Model with Learning in Setups
paper_content:
This paper considers the dynamic lot sizing problem of H. M. Wagner and T. M. Whitin with the assumption that the total cost of n setups is a concave nondecreasing function of n. Such setup costs could arise from the worker learning in setups and/or technological improvements in setup methods. An efficient dynamic programming algorithm is developed to solve a finite horizon problem and results are presented to find decision/forecast horizons. Several new results presented in the paper have potential use in solving other related problems.
---
paper_title: Production Smoothing of Economic Lot Sizes with Non-Decreasing Requirements
paper_content:
This paper considers the problem of finding a production schedule, in terms of how much to produce in each period, that minimizes the total cost of supplying known market requirements for a single product. The costs include a concave production cost, a concave inventory cost, and a piecewise concave cost of changing the production level from one period to the next. Assuming that there is no backlogging of requirements and that the market requirements are monotone increasing, i.e., not decreasing from period to period, the form of the minimum cost production schedule is obtained. This form is then exploited in a dynamic programming algorithm to provide an efficient means of exactly determining the minimum cost schedule. An interesting calculation reducing theorem is also developed to further enhance the efficiency of the dynamic programming algorithm.
---
paper_title: Lot Sizing and Scheduling with Sequence Dependent Setup Costs and Times and Efficient Rescheduling Opportunities
paper_content:
Abstract This paper deals with lot sizing and scheduling for a single-stage, single-machine production system where setup costs and times are sequence dependent. A large-bucket mixed integer programming (MIP) model is formulated which considers only efficient sequences. A tailor-made enumeration method of the branch-and-bound type solves problem instances optimally and efficiently. The size of solvable cases ranges from 3 items and 15 periods to 10 items and 3 periods. Furthermore, it will become clear that rescheduling can neatly be done.
---
paper_title: Simultaneous lotsizing and scheduling by combining local search with dual reoptimization
paper_content:
Abstract The contribution of this paper is twofold. On the one hand, the particular problem of integrating lotsizing and scheduling of several products on a single, capacitated production line is modelled and solved, taking into account sequence-dependent setup times. Thereby, continuous lotsizes, meeting deterministic dynamic demands, are to be determined and scheduled with the objective of minimizing inventory holding costs and sequence-dependent setup costs. On the other hand, a new general algorithmic approach is presented: A dual reoptimization algorithm is combined with a local search heuristic for solving a mixed integer programming problem. This idea is applied to the above lotsizing and scheduling problem by embedding a dual network flow algorithm into threshold accepting and simulated annealing, respectively. Computational tests show the effectiveness of the new solution method.
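For orientation, these are the two textbook acceptance rules behind the local-search schemes named above (threshold accepting and simulated annealing); the paper's actual contribution is embedding a dual network-flow reoptimization inside them, which is not shown here.

import math, random

def accept_simulated_annealing(delta, temperature):
    # Always accept improvements; accept a worsening move of size delta > 0
    # with probability exp(-delta / temperature).
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

def accept_threshold_accepting(delta, threshold):
    # Accept any move that worsens the objective by no more than the
    # (gradually decreasing) threshold.
    return delta <= threshold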
---
paper_title: The coordinated replenishment dynamic lot-sizing problem with quantity discounts
paper_content:
Abstract In the classical coordinated replenishment dynamic lot-sizing problem, the primary motivation for coordination is in the presence of the major and minor setup costs. In this paper, a separate element of coordination made possible by the offer of quantity discounts is considered. A mathematical programming formulation for the extended problem under the all-units discount price structure and the incremental discount price structure is provided. Then, using variable redefinitions, tighter formulations are presented in order to obtain tight lower bounds for reasonable size problems. More significantly, as the problem is NP-hard, we present an effective polynomial time heuristic procedure, for the incremental discount version of the problem, that is capable of solving reasonably large size problems. Computational results for the heuristic procedure are reported in the paper.
---
paper_title: The capacitated dynamic lot size problem with variable technology
paper_content:
Traditionally, the technological coefficients in production models were assumed to be fixed. In recent years however, researchers have used the learning curve model to represent nonlinear technological coefficients, and the dynamic lot-sizing problem with learning in setups has received attention. This article extends the research to consider capacity restrictions in the single-level, multi-item case. The research has two goals, first, to analyze the effects of setup learning on a production schedule, and second, to investigate efficient ways of solving the resulting nonlinear integer model. Previously derived algorithms do not address the issue of capacity; thus a heuristic is developed and its solution is compared with the optimal solution, where possible, or to a lower bound solution.
---
paper_title: Capacitated lot-sizing with sequence dependent setup costs
paper_content:
A new model is presented for capacitated lot-sizing with sequence dependent setup costs. The model is solved heuristically with a backward oriented method; the sequence and lot-size decisions are based on a priority rule which consists of a convex combination of setup and holding costs. A computational study is performed where the heuristic is compared with the Fleischmann approach for the discrete lot-sizing and scheduling problem with sequence dependent setup costs.
---
paper_title: Coordinated Capacitated Lot-Sizing Problem with Dynamic Demand: A Lagrangian Heuristic
paper_content:
Coordinated replenishment problems are common in manufacturing and distribution when a family of items shares a common production line, supplier, or a mode of transportation. In these situations the coordination of shared, and often limited, resources across items is economically attractive. This paper describes a mixed-integer programming formulation and Lagrangian relaxation solution procedure for the single-family coordinated capacitated lot-sizing problem with dynamic demand. The problem extends both the multi-item capacitated dynamic demand lot-sizing problem and the uncapacitated coordinated dynamic demand lot-sizing problem. We provide the results of computational experiments investigating the mathematical properties of the formulation and the performance of the Lagrangian procedures. The results indicate the superiority of the dual-based heuristic over linear programming-based approaches to the problem. The quality of the Lagrangian heuristic solution improved in most instances with increases in problem size. Heuristic solutions averaged 2.52% above optimal. The procedures were applied to an industry test problem yielding a 22.5% reduction in total costs.
---
paper_title: Programming of Economic Lot Sizes
paper_content:
This paper studies the planning problem faced by a machine shop required to produce many different items so as to meet a rigid delivery schedule, remain within capacity limitations, and at the same time minimize the use of premium-cost overtime labor. It differs from alternative approaches to this well-known problem by allowing for setup cost indivisibilities. ::: ::: As an approximation, the following linear programming model is suggested: Let an activity be defined as a sequence of the inputs required to satisfy the delivery requirements for a single item over time. The input coefficients for each such activity may then be constructed so as to allow for all setup costs incurred when the activity is operated at the level of unity or at zero. It is then shown that in any solution to this problem, all activity levels will turn out to be either unity or zero, except for those related to a group of items which, in number, must be equal to or less than the original number of capacity constraints. This result means that the linear programming solution should provide a good approximation whenever the number of items being manufactured is large in comparison with the number of capacity constraints.
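In later notation, the column-oriented LP idea described above is usually written as follows: each item i has a set of candidate production sequences k with total cost c_{ik} and capacity usage a_{ikt} in period t, and the LP picks convex weights θ_{ik} (this is a paraphrase of the construction, not Manne's original symbols):

\begin{align}
\min\ & \sum_{i}\sum_{k} c_{ik}\,\theta_{ik} \\
\text{s.t.}\ & \sum_{k} \theta_{ik} = 1 && \forall i \\
& \sum_{i}\sum_{k} a_{ikt}\,\theta_{ik} \le C_t && \forall t \\
& \theta_{ik} \ge 0 && \forall i,k
\end{align}

A basic optimal solution has at most (number of items + number of capacity constraints) positive variables, and since every item needs at least one positive weight, the number of items with a fractional schedule is bounded by the number of capacity constraints, which is the observation behind the abstract's claim.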
---
paper_title: Lotsizing and Scheduling on Parallel Machines with Sequence-Dependent Setup Costs
paper_content:
Industrial lotsizing and scheduling pose very difficult analytical problems. We propose an unconventional model that deals with sequence-dependent setup costs in a multiple-machine environment. The sequence-splitting model splits an entire schedule into subsequences, leading to tractable subproblems. An optimization approach based on a column generation/branch and bound methodology is developed, and heuristically adapted to test problems including five real-world problem instances gathered from industry.
---
paper_title: Multiperiod Production Planning Carrying over Set-up Time
paper_content:
Set-ups eat production capacity time and continue troubling production planning, especially on bottlenecks. The shortening of production planning periods to days, shifts or even less has increased the relative length of set-up times against the periods. Yet, many production planning models either ignore set-up times or, paradoxically, split longer multiperiod batches by adding set-ups at breaks between planning periods. The MILP-based capacitated lot-sizing models that include set-up carry-overs, i.e. allow a carry-over of a set-up of a product to the next period in case a product can be produced in subsequent periods, have incorporated fixed set-up fees without consideration of capacity consumed by set-up time. Inspired by production planning in process industries where set-up times still remain substantial, we incorporated set-up time with associated cost in two modifications of carry-over models. Comparison with an earlier benchmark model without set-up carry-over shows that substantial savings can be derived from the fundamentally different production plans enforced by carry-overs. Moreover, we show that heuristic inclusion of carry-overs by removal of set-ups from non-carry-over solutions is inefficient.
---
paper_title: Some Optimum Algorithms for Scheduling Problems with Changeover Costs
paper_content:
We consider a production line that can produce one of n items per day. The demand schedule for all items is known in advance, and all items must be produced on or before their deadlines. We want a production schedule that meets all demand deadlines and minimizes the total changeover cost. The changeover cost has a special structure: it is (i) one dollar if the production line changes from producing item i to item j and i is less than j, and (ii) zero if i is greater than or equal to j. We also consider multiple identical production lines with all demands due at the end of every month, and assume that there is exactly enough demand at the end of every month. We obtain optimum production schedules for both the single-line and multiple-line case.
---
paper_title: The capacitated lot-sizing and scheduling problem with sequence-dependent setup costs and setup times
paper_content:
We consider the single machine capacitated lot-sizing and scheduling problem (CLSP) with sequence-dependent setup costs and non-zero setup times, with the additional feature that setups may be carried over from one period to the next, and that setups are preserved over idle periods. We provide an exact formulation of this problem as a mixed-integer program. It is well known that the CLSP is NP-hard. Therefore, we have also developed a heuristic for solving large problem instances. This is coupled with a procedure for obtaining a lower bound on the optimal solution. We carry out a computational study to test the accuracy of several different lower bounding linear relaxations and the approximate solution obtained by the heuristic. In our study, the average deviation of the heuristic solution from the corresponding exact solution depends on the size of the problem and ranges from 10 to 16%. The heuristic is more effective when there are many more products than there are planning periods. This is a desirable property from a practical viewpoint since most firms are likely to implement such a procedure on a rolling horizon basis, solving the problem repeatedly for a few periods at a time.
---
paper_title: A heuristic for production scheduling and inventory control in the presence of sequence-dependent setup times
paper_content:
The consideration of sequence-dependent setup times is one of the most difficult aspects of production scheduling problems. This paper reports on the development of a heuristic procedure to address a realistic production and inventory control problem in the presence of sequence-dependent setup times. The problem considers known monthly demands, variable production rates, holding costs, minimum and maximum inventory levels per product, and regular and overtime capacity limits. The problem is formulated as a Mixed-Integer Program (MIP), where subtour elimination constraints are needed to enforce the generation of job sequences in each month. By relaxing the subtour elimination constraints, the MIP formulation can be used to find a lower bound on the optimal solution. CPLEX 3.0 is used to calculate lower bounds for relatively small instances of this production problem, which are then used to assess the merit of a proposed heuristic. The heuristic is based on a simple short-term memory tabu search method that coordinates linear programming and traveling salesperson solvers in the search for optimal or near-optimal production plans.
---
paper_title: A modified framework for modelling set-up carryover in the capacitated lotsizing problem
paper_content:
In this note, a modified framework to model set-up carryovers in the capacitated lotsizing problem is presented. The proposed framework allows product dependent set-up times and costs to be incorporated. This is an extension of an earlier published work on modelling set-up carryovers for the constant set-up time scenario. An example to illustrate the modified framework is also provided.
---
paper_title: An integrated model for job-shop planning and scheduling
paper_content:
We consider an integrated job-shop planning and scheduling model. To solve the problem we use a multi-pass decomposition approach which alternates between solving a planning problem with a fixed sequence of products on the machines, and a job-shop scheduling problem for a fixed choice of the production plan. The generated production plans are feasible i.e., there exists at least one feasible schedule to realize that plan. Quality of the solution is investigated and numerical results are presented.
---
paper_title: An economic lot-sizing problem with perishable inventory and economies of scale costs: Approximation solutions and worst case analysis
paper_content:
The costs of many economic activities such as production, purchasing, distribution, and inventory exhibit economies of scale under which the average unit cost decreases as the total volume of the activity increases. In this paper, we consider an economic lot-sizing problem with general economies of scale cost functions. Our model is applicable to both nonperishable and perishable products. For perishable products, the deterioration rate and inventory carrying cost in each period depend on the age of the inventory. Realizing that the problem is NP-hard, we analyze the effectiveness of easily implementable policies. We show that the cost of the best Consecutive-Cover-Ordering (CCO) policy, which can be found in polynomial time, is guaranteed to be no more than (4√2 + 5)/7 ≈ 1.52 times the optimal cost. In addition, if the ordering cost function does not change from period to period, the cost of the best CCO policy is no more than 1.5 times the optimal cost. © 2005 Wiley Periodicals, Inc. Naval Research Logistics 52: 536-548, 2005.
---
paper_title: Rolling-horizon lot-sizing when set-up times are sequence-dependent
paper_content:
The challenging problem of efficient lot sizing on parallel machines with sequence-dependent set-up times is modelled using a new mixed integer programming (MIP) formulation that permits multiple set-ups per planning period. The resulting model is generally too large to solve optimally and, given that it will be used on a rolling horizon basis with imperfect demand forecasts, approximate models that only generate exact schedules for the immediate periods are developed. Both static and rolling horizon snapshot tests are carried out. The approximate models are tested and found to be practical rolling horizon proxies for the exact model, reducing the dimensionality of the problem and allowing for faster solution by MIP and metaheuristic methods. However, for large problems the approximate models can also consume an impractical amount of computing time and so a rapid solution approach is presented to generate schedules by solving a succession of fast MIP models. Tests show that this approach is able to produc...
---
paper_title: MRP lot sizing with variable production/purchasing costs: formulation and solution
paper_content:
SUMMARY The research on lot sizing is extensive; however, no author in the literature reviewed to date provides an optimal solution algorithm to a prevalent problem which is found in manufacturing. A multi-level, general product-structure, variable-cost model is presented which follows the procedure of a closed-loop material requirements planning (MRP) system, and incorporates many conditions that production and material managers find in practice. A branch and bound (B&B7) algorithm is developed. The efficiency of B&B is derived from effective lower bounds and solution procedures which are determined on the basis of the space-time structure of the MRP lot-sizing problem and its non-convex total-cost function. This path-dependent lower bound is computationally efficient and guarantees an optimal solution. The B&B algorithm is tested on problems and compared to heuristics in the literature.
---
paper_title: Integration of lotsizing and scheduling decisions in a job-shop
paper_content:
Abstract In this paper, we consider an integrated model for job-shop lotsizing and scheduling in order to determine a feasible plan, i.e., a plan with at least one feasible schedule. Our method consists in alternatively solving problems at two different levels, one in which lotsizes are computed for a given sequence of jobs on each machine, and one in which a sequence is computed given fixed lotsizes. Different approaches are investigated, and computational results are reported.
---
paper_title: The coordinated replenishment dynamic lot-sizing problem with quantity discounts
paper_content:
Abstract In the classical coordinated replenishment dynamic lot-sizing problem, the primary motivation for coordination is in the presence of the major and minor setup costs. In this paper, a separate element of coordination made possible by the offer of quantity discounts is considered. A mathematical programming formulation for the extended problem under the all-units discount price structure and the incremental discount price structure is provided. Then, using variable redefinitions, tighter formulations are presented in order to obtain tight lower bounds for reasonable size problems. More significantly, as the problem is NP-hard, we present an effective polynomial time heuristic procedure, for the incremental discount version of the problem, that is capable of solving reasonably large size problems. Computational results for the heuristic procedure are reported in the paper.
---
paper_title: Dynamic Lot Sizing with Batch Ordering and Truckload Discounts
paper_content:
This paper studies two important variants of the dynamic economic lot-sizing problem that are applicable to a wide range of real-world situations. In the first model, production in each time period is restricted to a multiple of a constant batch size, where backlogging is allowed and all cost parameters are time varying. Several properties of the optimal solution are discussed. Based on these properties, an efficient dynamic programming algorithm is developed. The efficiency of the dynamic program is further improved through the use of Monge matrices. Using the results developed for the first model, an O(n³ log n) algorithm is developed to solve the second model, which has a general form of product acquisition cost structure, including a fixed charge for each acquisition, a variable unit production cost, and a freight cost with a truckload discount. This algorithm can also be used to solve a more general problem with concave cost functions.
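For background only, here is a minimal sketch of the classic uncapacitated dynamic lot-sizing recursion that models of this kind generalize; it is not the paper's batch-ordering/Monge-matrix algorithm, and the function name and data layout are illustrative assumptions:

def dynamic_lot_sizing(demand, setup, hold):
    """Classic uncapacitated dynamic lot-sizing (Wagner-Whitin style) DP.

    demand[t], setup[t], hold[t] are per-period data for t = 0..T-1;
    hold[t] is the cost of carrying one unit through the end of period t.
    Returns the minimum total setup plus holding cost.
    """
    T = len(demand)
    best = [0.0] + [float("inf")] * T    # best[t] = cheapest way to cover periods 0..t-1
    for t in range(1, T + 1):
        for j in range(t):               # last setup placed in period j covers periods j..t-1
            cost = best[j] + setup[j]
            carried = 0.0                # holding cost of serving periods j..t-1 from period j
            h_cum = 0.0                  # cumulative holding rate from period j up to period k
            for k in range(j, t):
                carried += h_cum * demand[k]
                h_cum += hold[k]
            best[t] = min(best[t], cost + carried)
    return best[T]

# Example usage with hypothetical data: 4 periods, identical setup and holding costs.
print(dynamic_lot_sizing([20, 50, 10, 40], [100, 100, 100, 100], [1, 1, 1, 1]))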
---
paper_title: An Efficient Algorithm for Multi-Item Scheduling
paper_content:
A number of resource-allocation problems, including that of multi-item scheduling, may be solved approximately as large linear programs, as in Manne [Management Sci. 4, 115-135 1958]. Dzielinski and Gomory [Management Sci. 11, 874-890 1965] applied the Dantzig-Wolfe decomposition principle to this problem. Here, the problem is attacked directly, using a column generation technique and Dantzig and Van Slyke's generalized upper-bounding method [J. Comp. and Syst. Sci. 1, 213-226 1967]. For problems involving I items and T time periods, one need deal with a basis matrix of dimension only T by T. A lower bound on the optimal cost may be developed, and intermediate solutions all have Manne's integer property loc. cit.. Computational experiments, including an option for pricing out subproblem solutions until none is useful, show a number of iterations to optimality of from one-half to one-ninth the number required by the decomposition principle with work per iteration remaining approximately the same. Extensions of the basic model are also described. These form the core of an automated production-scheduling and inventory-control system, currently being used by a major U. S. manufacturer. Computational experience with this extended model is presented.
---
paper_title: Hybrid heuristics for the capacitated lot sizing and loading problem with setup times and overtime decisions
paper_content:
The capacitated lot sizing and loading problem (CLSLP) deals with the issue of determining the lot sizes of product families/end items and loading them on parallel facilities to satisfy dynamic demand over a given planning horizon. The capacity restrictions in the CLSLP are imposed by constraints specific to the production environment considered. When a lot size is positive in a specific period, it is loaded on a facility without exceeding the sum of the regular and overtime capacity limits. Each family may have a different process time on each facility and furthermore, it may be technologically feasible to load a family only on a subset of existing facilities. So, in the most general case, the loading problem may involve unrelated parallel facilities of different classes. Once loaded on a facility, a family may consume capacity during setup time. Inventory holding and overtime costs are minimized in the objective function. Setup costs can be included if setups incur costs other than lost production capacity. The CLSLP is relevant in many industrial applications and may be generalized to multi-stage production planning and loading models. The CLSLP is a synthesis of three different planning and loading problems, i.e., the capacitated lot sizing problem (CLSP) with overtime decisions and setup times, minimizing total tardiness on unrelated parallel processors, and, the class scheduling problem, each of which is NP in the feasibility and optimality problems. Consequently, we develop hybrid heuristics involving powerful search techniques such as simulated annealing (SA), tabu search (TS) and genetic algorithms (GA) to deal with the CLSLP. Results are compared with optimal solutions for 108 randomly generated small test problems. The procedures developed here are also compared against each other in 36 larger size problems.
---
paper_title: An Algorithm for Single-Item Capacitated Economic Lot Sizing with Piecewise Linear Production Costs and General Holding Costs
paper_content:
We consider the Capacitated Economic Lot Size problem with piecewise linear production costs and general holding costs, which is an NP-hard problem but solvable in pseudo-polynomial time. A straightforward dynamic programming approach to this problem results in an $O(n^2 \bar{c} \bar{d})$ algorithm, where $n$ is the number of periods, and $\bar{d}$ and $\bar{c}$ are the average demand and the average production capacity over the $n$ periods, respectively. However, we present a dynamic programming procedure with complexity $O(n^2 \bar{q} \bar{d})$, where $\bar{q}$ is the average number of pieces of the production cost functions. In particular, this means that problems in which the production functions consist of a fixed set-up cost plus a linear variable cost are solved in $O(n^2 \bar{d})$ time. Hence, the running time of our algorithm is only linearly dependent on the magnitude of the data. This result also holds if extensions such as backlogging and start-up costs are considered. Moreover, computational experiments indicate that the algorithm is capable of solving quite large problem instances within a reasonable amount of time. For example, the average time needed to solve test instances with 96 periods, 8 pieces in every production function and average demand of 100 units, is approximately 40 seconds on a SUN SPARC 5 workstation.
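As a hedged sketch of the straightforward recursion such a dynamic program starts from (assumed notation: $f_t(I)$ is the minimum cost of meeting demand through period $t$ while ending with inventory $I$, $p_t$ the piecewise linear production cost, $h_t$ the unit holding cost, $c_t$ the capacity, $d_t$ the demand):

\[
f_t(I)=\min_{\substack{0\le x\le c_t \\ x\le I+d_t}}\big\{\,f_{t-1}(I+d_t-x)+p_t(x)+h_t\,I\,\big\},\qquad f_0(0)=0 .
\]

Enumerating all inventory states and production levels yields the $O(n^2 \bar{c} \bar{d})$ bound quoted above; the stated improvement to $O(n^2 \bar{q} \bar{d})$ suggests the refinement makes the effort scale with the number of pieces of $p_t$ rather than with the capacity.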
---
paper_title: Hybrid heuristics for the multi-stage capacitated lot sizing and loading problem
paper_content:
The multi-stage capacitated lot sizing and loading problem (MCLSLP) deals with the issue of determining the lot sizes of product items in serially-arranged manufacturing stages and loading them on parallel facilities in each stage to satisfy dynamic demand over a given planning horizon. It is assumed that regular time capacity decisions have already been made in the tactical level and allocated to the stages, but it is still an important decision problem whether to augment regular time capacity by overtime capacity. Each item may be processed on a technologically feasible subset of existing facilities with different process and setup times on each facility. Since the solution of the MCLSLP requires the design of a powerful algorithm, simulated annealing (SA) and genetic algorithms (GA) are integrated to enhance their individual performances. Furthermore, these global optimisation methods are incorporated into a Lagrangean relaxation scheme, hence creating a hybrid solution methodology. Numerical results obtained using these methods confirm the mutual benefits of integrating different solution techniques.
---
paper_title: Lotsizing and Scheduling on Parallel Machines with Sequence-Dependent Setup Costs
paper_content:
Industrial lotsizing and scheduling pose very difficult analytical problems. We propose an unconventional model that deals with sequence-dependent setup costs in a multiple-machine environment. The sequence-splitting model splits an entire schedule into subsequences, leading to tractable subproblems. An optimization approach based on a column generation/branch and bound methodology is developed, and heuristically adapted to test problems including five real-world problem instances gathered from industry.
---
paper_title: Lot sizing problems with strong set-up interactions
paper_content:
We address the problem of coordinated replenishment of products when the products can be produced only in fixed proportion to each other. Such problems commonly arise in the manufacture of sheet/plate metal parts or die-cast parts. The problem is a variant of the well-known Joint Replenishment Problem. We call this problem the Strong Interaction Problem (SIP). After giving a mathematical formulation of the problem, we show that the general problem is NP-hard. An important variant of the problem, in which products are unique to a family, is shown to be polynomially solvable. We present several lower bounds, an exact algorithm and a heuristic for the problem. Computational testing on randomly generated problems suggests that our exact algorithm performs very well when compared with a commercially available integer programming solver. The heuristic method also gives good solutions.
---
paper_title: Inventory replenishment model: lot sizing versus just-in-time delivery
paper_content:
Motivated by a practical industrial problem where a manufacturer stipulates a minimum order from each buyer but where a local dealer promises the buyer a just-in-time delivery with a slightly higher unit cost, this paper uses a dynamic lot-sizing model with a stepwise cargo cost function and a minimum order amount constraint to help the buyer select the supplier with minimum total cost.
---
paper_title: An economic lot-sizing problem with perishable inventory and economies of scale costs: Approximation solutions and worst case analysis
paper_content:
The costs of many economic activities such as production, purchasing, distribution, and inventory exhibit economies of scale under which the average unit cost decreases as the total volume of the activity increases. In this paper, we consider an economic lot-sizing problem with general economies of scale cost functions. Our model is applicable to both nonperishable and perishable products. For perishable products, the deterioration rate and inventory carrying cost in each period depend on the age of the inventory. Realizing that the problem is NP-hard, we analyze the effectiveness of easily implementable policies. We show that the cost of the best Consecutive-Cover-Ordering (CCO) policy, which can be found in polynomial time, is guaranteed to be no more than (4√2 + 5)/7 ≈ 1.52 times the optimal cost. In addition, if the ordering cost function does not change from period to period, the cost of the best CCO policy is no more than 1.5 times the optimal cost. © 2005 Wiley Periodicals, Inc. Naval Research Logistics 52: 536-548, 2005.
---
paper_title: Warehouse space capacity and delivery time window considerations in dynamic lot-sizing for a simple supply chain
paper_content:
Abstract This paper studies a single item, two-echelon dynamic lot-sizing model with delivery time windows, early shipment penalties, and warehouse space capacity constraints. The two-echelon system consists of a warehouse and a distribution center. The underlying problem is motivated by third party logistics and vendor managed inventory applications in the computer industry where delivery time windows are typically specified by the distribution center under the terms of a supply contract. The capacity of the warehouse is limited. This constraint should be considered explicitly because the finished products are expensive items (such as computer equipment and peripherals), and they have to be stored in the warehouse in an appropriate climate before they are shipped to the distribution center. Studying the optimality properties of the problem, the paper provides a polynomial time algorithm for computing its solution. The optimal solution includes: (i) the replenishment plan specifying “when, and in what quantities, to replenish the stock at the third-party warehouse,” and (ii) the dispatch plan specifying “when, and in what quantities, to release an outbound shipment to the distribution center, and in which order to satisfy the demands.” The algorithm is based on dynamic programming and requires O(T³) computational complexity.
---
paper_title: Lot sizing for a product subject to obsolescence or perishability
paper_content:
Abstract This paper presents a stochastic dynamic programming model for determining the optimal ordering policy for a perishable or potentially obsolete product so as to satisfy known time-varying demand over a specified planning horizon. We have considered random life time perishability where, at the end of each discrete period, the total remaining inventory either becomes worthless or remains usable for at least the next period. Two approximate solution methods are shown. The optimal and heuristic methods are compared on a large set of test problems and their performance as a function of various problem parameters is analyzed.
---
paper_title: Bounded Production and Inventory Models with Piecewise Concave Costs
paper_content:
An n period single-product single-facility model with known requirements and separable piecewise concave production and storage costs is considered. It is shown using network flow concepts that for arbitrary bounds on production and inventory in each period there is an optimal schedule such that if, for any two periods, production does not equal zero or its upper or lower bound, then the inventory level in some intermediate period equals zero or its upper or lower bound. An algorithm for searching such schedules for an optimal one is given where the bounds on production are -∞, 0 or ∞. A more efficient algorithm assumes further that inventory bounds satisfy “exact requirements.”
---
paper_title: Minimum Concave-Cost Solution of Leontief Substitution Models of Multi-Facility Inventory Systems
paper_content:
The paper shows that a broad class of problems can be formulated as minimizing a concave function over the solution set of a Leontief substitution system. The class includes deterministic single- and multi-facility economic lot size, lot-size smoothing, warehousing, product-assortment, batch-queuing, capacity-expansion, investment consumption, and reservoir-control problems with concave cost functions. Because in such problems an optimum occurs at an extreme point of the solution set, we can utilize the characterization of the extreme points given in a companion paper to obtain most existing qualitative characterizations of optimal policies for inventory models with concave costs in a unified manner. Dynamic programming recursions for searching the extreme points to find an optimal point are given in a number of cases. We only give algorithms whose computational effort increases algebraically (instead of exponentially) with the size of the problem.
---
paper_title: A new characterization for the dynamic lot size problem with bounded inventory
paper_content:
In this paper, we address the dynamic lot size problem with storage capacity. As in the unconstrained dynamic lot size problem, this problem admits a reduction of the state space. New properties to obtain optimal policies are introduced. Based on these properties a new dynamic programming algorithm is devised. Superiority of the new algorithm to the existing procedure is demonstrated. Furthermore, the new algorithm runs in O(T) expected time when demands vary between zero and the storage capacity. Computational results are reported for randomly generated problems.
---
paper_title: Transshipment through crossdocks with inventory and time windows
paper_content:
The supply chain between manufacturers and retailers always includes transshipments through a network of locations. A major challenge in making demand meet supply has been to coordinate transshipment activities across the chain aimed at reducing costs and increasing service levels in the face of a range of factors, including demand fluctuations, short lead times, warehouse limitations and transportation and inventory costs. The success in implementing push-pull strategies, when firms change from one strategy to another in managing the chain and where time lines are crucial, is dependent on adaptive transshipment scheduling. Yet again, in transshipment through crossdocks, where just-in-time objectives prevail, precise scheduling between suppliers, crossdocks and customers is required to avoid inventory backups or delays.
---
paper_title: Warehouse space capacity and delivery time window considerations in dynamic lot-sizing for a simple supply chain
paper_content:
Abstract This paper studies a single item, two-echelon dynamic lot-sizing model with delivery time windows, early shipment penalties, and warehouse space capacity constraints. The two-echelon system consists of a warehouse and a distribution center. The underlying problem is motivated by third party logistics and vendor managed inventory applications in the computer industry where delivery time windows are typically specified by the distribution center under the terms of a supply contract. The capacity of the warehouse is limited. This constraint should be considered explicitly because the finished products are expensive items (such as computer equipment and peripherals), and they have to be stored in the warehouse in an appropriate climate before they are shipped to the distribution center. Studying the optimality properties of the problem, the paper provides a polynomial time algorithm for computing its solution. The optimal solution includes: (i) the replenishment plan specifying “when, and in what quantities, to replenish the stock at the third-party warehouse,” and (ii) the dispatch plan specifying “when, and in what quantities, to release an outbound shipment to the distribution center, and in which order to satisfy the demands.” The algorithm is based on dynamic programming and requires O(T³) computational complexity.
---
paper_title: Production Smoothing of Economic Lot Sizes with Non-Decreasing Requirements
paper_content:
This paper considers the problem of finding a production schedule, in terms of how much to produce in each period, that minimizes the total cost of supplying known market requirements for a single product. The costs include a concave production cost, a concave inventory cost, and a piecewise concave cost of changing the production level from one period to the next. Assuming that there is no backlogging of requirements and that the market requirements are monotone increasing, i.e., not decreasing from period to period, the form of the minimum cost production schedule is obtained. This form is then exploited in a dynamic programming algorithm to provide an efficient means of exactly determining the minimum cost schedule. An interesting calculation reducing theorem is also developed to further enhance the efficiency of the dynamic programming algorithm.
---
paper_title: Mathematical Programming Models and Formulations for Deterministic Production Planning Problems.
paper_content:
We study in this lecture the literature on mixed integer programming models and formulations for a specific problem class, namely deterministic production planning problems. The objective is to present the classical optimization approaches used, and the known models, for dealing with such management problems. We describe first production planning models in the general context of manufacturing planning and control systems, and explain in which sense most optimization solution approaches are based on the decomposition of the problem into single-item subproblems. Then we study in detail the reformulations for the core or simplest subproblem in production planning, the single-item uncapacitated lot-sizing problem, and some of its variants. Such reformulations are either obtained by adding variables - to obtain so called extended reformulations - or by adding constraints to the initial formulation. This typically allows one to obtain a linear description of the convex hull of the feasible solutions of the subproblem. Such tight reformulations for the subproblems play an important role in solving the original planning problem to optimality. We then review two important classes of extensions for the production planning models, capacitated models and multi-stage or multi-level models. For each, we describe the classical modeling approaches used. Finally, we conclude by giving our personal view on some new directions to be investigated in modeling production planning problems. These include better models for capacity utilization and setup times, new models to represent the product structure - or recipes - in process industries, and the study of continuous time planning and scheduling models as opposed to the discrete time models studied in this review.
---
paper_title: A Backlogging Model and a Multi-Echelon Model of a Dynamic Economic Lot Size Production System---A Network Approach
paper_content:
Two dynamic economic lot size production systems are analyzed in this paper, the first being a single product model with backlogging and the second a multi-echelon model. In each model the objective is to find a production schedule that minimizes the total production and inventory costs. ::: ::: A key conceptual difficulty is that the mathematically perplexing problem of minimizing a concave function is being considered. It is shown that both models are naturally represented via single source networks. The network formulations reveal the underlying structure of the models, and facilitate development of efficient dynamic programming algorithms for calculating the optimal production schedules.
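As an assumed-notation sketch of the per-period balance in the single-source network behind such models (x_t production, I_t ending inventory, B_t backlog, d_t demand):

\[
x_t + I_{t-1} - B_{t-1} \;=\; d_t + I_t - B_t,\qquad I_0=B_0=I_T=B_T=0 .
\]

With concave costs on every arc, an extreme flow, i.e. one whose positive arcs form no cycle, is optimal; between consecutive periods of zero net inventory, all demand is then served from a single production period, which is the structure the dynamic programming algorithms exploit.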
---
paper_title: Dynamic lot-sizing model with demand time windows and speculative cost structure
paper_content:
We consider a deterministic lot-sizing problem with demand time windows, where speculative motive is allowed. Utilizing an untraditional decomposition principle, we provide an optimal algorithm that runs in O(nT^3) time, where n is the number of demands and T is the length of the planning horizon.
---
paper_title: A model for parallel machine replacement with capacity expansion
paper_content:
Abstract This paper considers an environment where several identical machines are used to meet product/service demands. It is assumed that the product/service demand is growing over time and therefore new machines will be purchased for capacity expansion. Also, older machines are assumed to be more expensive to operate than newer machines; and at some time, it may be economical to replace older machines by newer ones. There may be economies of scale in purchase of new machines. This paper develops an integrated model to find an optimal purchase schedule of new machines to meet both the capacity expansion and the machines replacement requirements. An optimal algorithm and an effective heuristic are developed. Our computational results show that the heuristic is close to optimal.
---
paper_title: Operations Research and Capacity Expansion Problems: A Survey
paper_content:
Planning for the expansion of production capacity is of vital importance in many applications within the private and public sectors. Examples can be found in heavy process industries, communication networks, electrical power services, and water resource systems. In all of these applications, the expansion of production capacity requires the commitment of substantial capital resources over long periods of time. Capacity expansion planning consists primarily of determining future expansion times, sizes, and locations, as well as the types of production facilities. Since the late 1950s, operations research methodology has been used to develop various models and solution approaches suitable for different applications. In this paper, we attempt to unify the existing literature on capacity expansion problems, emphasizing modeling approaches, algorithmic solutions, and relevant applications. The paper includes an extensive list of references covering a broad spectrum of capacity expansion problems.
---
paper_title: A Lagrangean-based heuristic for multi-plant, multi-item, multi-period capacitated lot-sizing problems with inter-plant transfers
paper_content:
This paper addresses scheduling of lot sizes in a multi-plant, multi-item, multi-period, capacitated environment with inter-plant transfers. A real-world problem in a company manufacturing steel rolled products provided motivation to this research. A Lagrangean-based approach, embedded with a lot shifting-splitting-merging routine, has been used for solving the multi-plant, capacitated lot-sizing problem. A "good" solution procedure developed by Sambasivan (Ph.D. Dissertation, University of Alabama, Tuscaloosa, 1994) has been used for solving the relaxed problem. About 120 randomly generated instances of the problem have been solved and it has been found that Lagrangean-based approach works quite "efficiently" for this problem.
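A hedged sketch of the kind of relaxation Lagrangean-based heuristics for capacitated lot-sizing commonly use (assumed symbols; the paper's exact choice of relaxed constraints may differ): dualizing the plant capacity constraints $\sum_i a_i x_{ijt} \le C_{jt}$ with multipliers $\lambda_{jt}\ge 0$ gives

\[
L(\lambda)=\min_{x\in X}\;\Big\{ c(x)+\sum_{j,t}\lambda_{jt}\Big(\sum_i a_i x_{ijt}-C_{jt}\Big)\Big\}\;\le\; z^{*},
\]

which separates into single-item uncapacitated subproblems solvable by dynamic programming. The multipliers are adjusted iteratively (for example by subgradient steps), and the relaxed solution is repaired into a capacity-feasible plan, plausibly the role of the lot shifting-splitting-merging routine mentioned above.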
---
paper_title: Integrated Production, Distribution, and Inventory Planning at Libbey-Owens-Ford
paper_content:
FLAGPOL, a large-scale linear-programming model of the production, distribution, and inventory operations in the flat glass business of Libbey-Owens-Ford deals with four plants, over 200 products, and over 40 demand centers in a 12-month planning horizon. Annual savings from a variety of sources are estimated at over $2,000,000.
---
paper_title: Using Lagrangean Techniques to Solve Hierarchical Production Planning Problems
paper_content:
This paper proposes and tests a procedure for decomposing a large scale production planning problem modeled as a mixed-integer linear program. We interpret this decomposition in the context of Hax and Meal's hierarchical framework for production planning. The procedure decomposes the production planning problem into two subproblems which correspond to the aggregate planning subproblem and a disaggregation subproblem in the Hax-Meal framework. The linking mechanism for these two subproblems is an inventory consistency relationship which is priced out by a set of Lagrange multipliers. The best values for the multipliers are found by an iterative procedure which may be interpreted as a feedback mechanism in the Hax-Meal framework. At each iteration, the procedure finds both a lower bound on the optimal value to the production planning problem and a feasible solution from which an upper bound is obtained. Our computational tests show that the best feasible solution found from this procedure is very close to optimal. For thirty-six test problems the percentage deviation from optimality never exceeds 4.4%, and the average percentage deviation is 2.2%. In addition, these best feasible solutions dominate the corresponding solutions obtained by a hierarchical procedure.
---
paper_title: Dynamic Capacity Expansion Problem with Deferred Expansion and Age-Dependent Shortage Cost
paper_content:
Deferring capacity expansion may be a cost effective decision when there is anticipation of cheaper capacity in the near future and/or the current demand is too low to justify an immediate expansion. This paper studies a finite-horizon capacity expansion problem (CEP) with deferred capacity expansion. The operating cost and the cost of holding unused capacity in each period depend on the time when the capacity is acquired, and the shortage cost depends on the time when the shortage occurred. Our model is a generalization of the Wagner-Whitin formulation of the CEP and an extension (with deferred expansion) of two other polynomially solvable CEPs in the literature. We explore structural properties of the problem and develop an efficient dynamic programming algorithm to solve the problem in polynomial time.
---
paper_title: Aggregate production planning — A survey of models and methodologies
paper_content:
Abstract Aggregate production planning (APP) involves the simultaneous determination of company's production, inventory and employment levels over a finite time horizon. Its objective is to minimize the total relevant costs while meeting non-constant, time varying demand, assuming fixed sales and production capacity. Despite numerous and varied solution techniques few have been implemented in industry. One problem with their implementation could be that little literature exists, outside of cursory textbook presentations and state-of-the-art summaries in 1967 and 1972 by Silver, which specifically summarizes the existing APP techniques into a simple classification scheme. In this paper we present a classification scheme that categorizes the literature on APP since early 1950, summarizing the various existing techniques into a framework depending upon their ability to either produce an exact optimal or near-optimal solution. The research literature that we compiled consisted of 140 journal articles and 14 books. The articles came from 17 different journals. It is intended to facilitate practitioners and researchers in finding the source work for existing methodologies, and to assist them in determining the suitability for implementation to their particular manufacturing problem and also to identify those areas that might require additional investigation and research.
---
paper_title: Coordination of production and distribution planning
paper_content:
Abstract This paper is a computational study to investigate the value of coordinating production and distribution planning. The particular scenario we consider concerns a plant that produces a number of products over time and maintains an inventory of finished goods at the plant. The products are distributed by a fleet of trucks to a number of retail outlets at which the demand for each product is known for every period of a planning horizon. We compare two approaches to managing this operation, one in which the production scheduling and vehicle routing problems are solved separately, and another in which they are coordinated within a single model. The two approaches are applied to 132 distinct test cases with different values of the basic model parameters, which include the length of the planning horizon, the number of products and retail outlets, and the cost of setups, inventory holding and vehicle travel. The reduction in total operating cost from coordination ranged from 3% to 20%. These results indicate the conditions under which companies should consider the organizational changes necessary to support coordination of production and distribution.
---
paper_title: A deterministic model for planning production quantities in a multi-plant, multi-warehouse environment with extensible capacities
paper_content:
Abstract We have developed a deterministic model for planning production and transportation quantities in a multi-plant and multi-warehouse environment with extensible capacities. The model determines a production mix that maximizes total profit over a finite planning horizon. When production cannot meet demand due to lack of adequate resources, the model allows shortfalls to be met through subcontracting or the use of inventory. However, it does not allow subcontracting when adequate resources are available. When solved, the model produces the quantity of each product to be: (1) produced at each plant, (2) transported from each plant to each warehouse, (3) subcontracted at each warehouse, and (4) kept in inventory at each warehouse. Furthermore, it identifies the warehouses that need extensions at any period and the corresponding amounts of extensions needed. We also develop a procedure for reducing the size of the zero–one MILP problem that can be obtained during any application of the model. Numerical examples given to illustrate the model and to compare the full and the reduced versions of the model show that the model works well and that both the full and the reduced versions produce exactly the same results. Finally, we discuss the assumptions underlying the model and highlight the approaches that can be taken to eliminate or, at least, prevent errors that are associated with violations of the assumptions.
---
paper_title: A Dynamic Model for Inventory Lot Sizing and Outbound Shipment Scheduling at a Third-Party Warehouse
paper_content:
This paper presents a model for computing the parameters of an integrated inventory replenishment and outbound dispatch scheduling policy under dynamic demand considerations. The optimal policy parameters specify (i) how often and in what quantities to replenish the stock at an upstream supply chain member (e.g., a warehouse), and (ii) how often to release an outbound shipment to a downstream supply-chain member (e.g., a distribution center). The problem is represented using a two-echelon dynamic lot-sizing model with pre-shipping and late-shipping considerations, where outbound cargo capacity constraints are considered via a stepwise cargo cost function. Although the paper is motivated by a third-party warehousing application, the underlying model is applicable in the general context of coordinating inventory and outbound transportation decisions. The problem is challenging due to the stepwise cargo cost structure modeled. The paper presents several structural properties of the problem and develops a polynomial time algorithm for computing the optimal solution.
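For illustration (assumed symbols, not necessarily the paper's data), a stepwise cargo cost with per-vehicle capacity $W$ and a fixed cost $R$ per vehicle dispatched charges

\[
c(q)=R\,\lceil q/W\rceil \quad\text{for } q>0,\qquad c(0)=0,
\]

so a partially filled last vehicle still incurs the full cost $R$. It is this stair-shaped structure that makes the dispatch side of the problem hard and motivates the structural properties derived in the paper.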
---
paper_title: The Dynamic Lot-Size Model with Stochastic Lead Times
paper_content:
Optimal solutions for the dynamic lot-sizing problem with deterministic demands but stochastic lead times are "lumpy." If lead time distributions are arbitrary except that they are independent of order size and do not allow orders to cross in time, then each order in an optimal solution will exactly satisfy a consecutive sequence of demands, a natural extension of the classic results by Wagner and Whitin. If, on the other hand, orders can cross in time, then optimal solutions are still "lumpy" in the sense that each order will satisfy a set, not necessarily consecutive, of the demands. An example shows how this characterization can be used to find a solution to a problem where interdependence of lead times is critical. This characterization of optimal solutions facilitates dynamic programming approaches to this problem.
---
paper_title: Plant co-ordination in pharmaceutics supply networks
paper_content:
The production of active ingredients in the chemical-pharmaceutical industry involves numerous production stages with cumulative lead times of up to two years. Mainly because of rigorous purity requirements and the need of extensive cleaning of the equipment units, production is carried out in campaigns, i.e. multiple batches of the same product type are produced successively before changing to another product type. Each campaign requires a specific configuration of equipment units according to the recipes of the particular chemical process. In the chemical-pharmaceutical industry, production stages are often assigned to different locations, even different countries. Hence the co-ordination of plant operations within the resulting multi-national supply network is of major importance. A key issue is the co-ordination of campaign schedules at different production stages in the various plants. In practice, it is almost impossible to determine exact optimal solutions to the corresponding complex supply network problem with respect to overall logistics costs. In order to reduce the required computational effort, we introduce several aggregation schemes and a novel MILP model formulation which is based on a continuous representation of time. Moreover, we propose an iterative near-optimal solution procedure which can be successfully applied to even exceptionally large real life problem instances. The applicability of the approach suggested is shown using a case study from industry.
---
paper_title: Operational production planning in a chemical manufacturing environment
paper_content:
Abstract This paper develops a heuristic for operational production planning in a chemical processing environment, characterized by a single bottleneck machine, fixed batch sizes, sequence-dependent setup times, as well as production and storage capacity constraints. The algorithm aims at pre-producing future production requirements in order to reduce the total relevant cost, consisting of inventory holding costs and opportunity costs due to setups between production runs. The procedure takes care of lot-sizing, but also determines pretty good production sequences per period according to an approximate traveling salesman algorithm. A numerical example is given in which production lot-sizes and production sequences are determined for 15 products over a 20 week planning horizon under tight capacity conditions.
---
paper_title: A model for lot sizing and sequencing in process industries
paper_content:
Abstract Scheduling in process industries is exceedingly challenging, due to the high capital intensity, relatively long and sequence-dependent setup times, and extremely limited capacity resources that are found in these industries. As a result, it is important to simultaneously consider lot sizing and sequencing factors in the development of a production schedule. This paper presents a mixed integer linear programming model for scheduling production in process industries that embodies the economic trade-offs encompassed in three avenues of research: capacitated lot sizing, flowshop scheduling and sequencing with sequence-dependent setup times. The model is used to schedule production Tor a problem representative of those found in the food processing industry. The corresponding schedule is then compared with approaches that consider lot sizing and sequencing as independent decisions and it is shown that decomposing the scheduling problem into smaller subproblems can result in the generation of infeasible...
---
paper_title: A tabu search heuristic for scheduling the production processes at an oil refinery
paper_content:
In this paper we present a tabu search heuristic which can be used for scheduling the production at an oil refinery. The scheduling problem is to decide which production modes to use at the different processing units at each point in time. The problem is a type of lot-sizing problem where costs of changeovers, inventories and production are considered. In the suggested tabu search heuristic we explore the use of variable neighbourhood, dynamic penalty and different tabu lists. Computational results are presented for different versions of the heuristic and the results are compared to the best-known lower bound for a set of scheduling scenarios.
---
paper_title: A Coordinated Production Planning Model with Capacity Expansion and Inventory Management
paper_content:
Motivated by a problem faced by a large manufacturer of a consumer product, we explore the interaction between production planning and capacity acquisition decisions in environments with demand growth. We study a firm producing multiple items in a multiperiod environment where demand for items is known but varies over time with a long-term growth and possible short-term fluctuations. The production equipment is characterized by significant changeover time between the production of different items. While demand growth is gradual, capacity additions are discrete. Therefore, periods immediately following a machine purchase are characterized by excess machine capacity. We develop a mathematical programming model and an effective solution approach to determine the optimal capacity acquisition, production and inventory decisions over time. Through a computational study, we show the effectiveness of the solution approach in terms of solution quality and investigate the impact of product variety, cost of capital, and other important parameters on the capacity and inventory decisions. The computational results bring out some key insights--increasing product variety may not result in excessive inventory and even a substantial increase in set-up times or holding costs may not increase the total cost over the horizon in a significant manner due to the ability to acquire additional capacity. We also provide solutions and insights to the real problem that motivated this work.
---
paper_title: A Lagrangean-based heuristic for multi-plant, multi-item, multi-period capacitated lot-sizing problems with inter-plant transfers
paper_content:
This paper addresses scheduling of lot sizes in a multi-plant, multi-item, multi-period, capacitated environment with inter-plant transfers. A real-world problem in a company manufacturing steel rolled products provided motivation to this research. A Lagrangean-based approach, embedded with a lot shifting-splitting-merging routine, has been used for solving the multi-plant, capacitated lot-sizing problem. A "good" solution procedure developed by Sambasivan (Ph.D. Dissertation, University of Alabama, Tuscaloosa, 1994) has been used for solving the relaxed problem. About 120 randomly generated instances of the problem have been solved and it has been found that Lagrangean-based approach works quite "efficiently" for this problem.
---
paper_title: Multi-Item, Multi-Period Production Planning with Uncertain Demand
paper_content:
We provide a formulation and solution algorithm for the finite-horizon capacitated production planning problem with random demand for multiple products. Using Lagrangian relaxation, we develop a subgradient optimization algorithm to solve this formulation. We also provide some computational results that indicate this approach works well for rolling-horizon planning compared with the rolling-horizon performance of the corresponding optimal finite-horizon solution. The advantage of our approach is that realistic problem instances can be solved quickly while optimal solutions to such instances are computationally intractable.
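A minimal, assumption-laden sketch of the projected subgradient step such Lagrangian schemes use to update the multipliers on relaxed capacity constraints; the function name, step-size rule, and data layout are illustrative and not taken from the paper:

import numpy as np

def subgradient_update(lmbda, usage, capacity, lower_bound, best_upper, k):
    """One projected subgradient step for multipliers on relaxed capacity constraints.

    lmbda    : current multipliers (NumPy array, one per relaxed constraint)
    usage    : capacity consumed by the current relaxed solution (same shape)
    capacity : available capacity (same shape)
    Returns the updated non-negative multipliers.
    """
    g = usage - capacity                      # subgradient of the Lagrangian dual at lmbda
    sq_norm = float(np.dot(g, g))
    if sq_norm == 0.0:
        return lmbda                          # relaxed solution already satisfies capacity
    # Polyak-style step length, scaled by a decaying factor 2 / (k + 1)
    step = (2.0 / (k + 1)) * (best_upper - lower_bound) / sq_norm
    return np.maximum(0.0, lmbda + step * g)  # project onto the non-negative orthant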
---
paper_title: Requirements Planning with Pricing and Order Selection Flexibility
paper_content:
Past requirements-planning research has typically assumed that the firm's demands are determined prior to production planning. In contrast, we explore a single-stage planning model that implicitly decides, through pricing decisions, the demand levels the firm should satisfy in order to maximize contribution to profit. We briefly discuss solution methods and properties for these problems when production capacities are unlimited. The key result of this work is a polynomial-time solution approach to the problem under time-invariant finite production capacities and piecewise-linear and concave revenue functions in price.
---
paper_title: Solving Planning and Design Problems in the Process Industry Using Mixed Integer and Global Optimization
paper_content:
This contribution gives an overview on the state-of-the-art and recent advances in mixed integer optimization to solve planning and design problems in the process industry. In some case studies specific aspects are stressed and the typical difficulties of real world problems are addressed. Mixed integer linear optimization is widely used to solve supply chain planning problems. Some of the complicating features such as origin tracing and shelf life constraints are discussed in more detail. If properly done the planning models can also be used to do product and customer portfolio analysis. We also stress the importance of multi-criteria optimization and correct modeling for optimization under uncertainty. Stochastic programming for continuous LP problems is now part of most optimization packages, and there is encouraging progress in the field of stochastic MILP and robust MILP. Process and network design problems often lead to nonconvex mixed integer nonlinear programming models. If the time to compute the solution is not bounded, there are already commercial solvers available which can compute the global optima of such problems within hours. If time is more restricted, then tailored solution techniques are required.
---
paper_title: The multiscenario lot size problem with concave costs
paper_content:
Abstract The dynamic single-facility single-item lot size problem is addressed. The finite planning horizon is divided into several time periods. Although the total demand is assumed to be a fixed value, the distribution of this demand among the different periods is unknown. Therefore, for each period the demand can be chosen from a discrete set of values. For this reason, all the combinations of the demand vector yield a set of different scenarios. Moreover, we assume that the production/reorder and holding cost vectors can vary from one scenario to another. For each scenario, we consider as the objective function the sum of the production/reorder and the holding costs. The problem consists of determining all the Pareto-optimal or non-dominated production plans with respect to all scenarios. We propose a solution method based on a multiobjective branch and bound approach. Depending on whether shortages are considered or not, different upper bound sets are provided. Computational results on several randomly generated problems are reported.
---
paper_title: An integrated production-inventory-marketing model for deteriorating items
paper_content:
Generally, inventory control policies for deteriorating items are very sensitive to different marketing policies especially in chemical, food and pharmaceutical industries. Realizing the importance of such inventory policies in practice, an integrated production-inventory-marketing model is developed for determining the economic production quantity (EPQ) and economic order quantity (EOQ) for raw materials in a multi-stage production system. This model considers the effect of different marketing policies such as the price per unit product and the advertisement frequency on the demand of a perishable item. A search method is employed to determine the values of EPQ and EOQ which would result in the maximum total net profit.
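For reference, the classical formulas behind the EOQ and EPQ quantities mentioned above (standard textbook expressions, not the paper's extended model, which additionally lets demand depend on price and advertisement frequency): with demand rate $D$, fixed ordering/set-up cost $K$, unit holding cost rate $h$, and production rate $P > D$,

\[
Q^{*}_{\mathrm{EOQ}}=\sqrt{\frac{2KD}{h}},\qquad
Q^{*}_{\mathrm{EPQ}}=\sqrt{\frac{2KD}{h\,(1-D/P)}} .
\]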
---
paper_title: MIP modelling of changeovers in production planning and scheduling problems
paper_content:
The goal here is to survey some recent and not so recent work that can be used to improve problem formulations either by a priori reformulation, or by the addition of valid inequalities. The main topic examined is the handling of changeovers, both sequence-independent and -dependent, in production planning and machine sequencing, with, in the background, the question of how to model time. We first present results for lot-sizing problems, in particular the interval submodular inequalities of Constantino that provide insight into the structure of single item problems with capacities and start-ups, and a unit flow formulation of Karmarkar and Schrage that is effective in modelling changeovers. Then we present various extensions and an application to machine sequencing with the unit flow formulation. We terminate with brief sections on the use of dynamic programming and of time-indexed formulations, which provide two alternative approaches for the treatment of time.
---
paper_title: Mathematical Programming Models and Formulations for Deterministic Production Planning Problems.
paper_content:
We study in this lecture the literature on mixed integer programming models and formulations for a specific problem class, namely deterministic production planning problems. The objective is to present the classical optimization approaches used, and the known models, for dealing with such management problems. We describe first production planning models in the general context of manufacturing planning and control systems, and explain in which sense most optimization solution approaches are based on the decomposition of the problem into single-item subproblems. Then we study in detail the reformulations for the core or simplest subproblem in production planning, the single-item uncapacitated lot-sizing problem, and some of its variants. Such reformulations are either obtained by adding variables - to obtain so called extended reformulations - or by adding constraints to the initial formulation. This typically allows one to obtain a linear description of the convex hull of the feasible solutions of the subproblem. Such tight reformulations for the subproblems play an important role in solving the original planning problem to optimality. We then review two important classes of extensions for the production planning models, capacitated models and multi-stage or multi-level models. For each, we describe the classical modeling approaches used. Finally, we conclude by giving our personal view on some new directions to be investigated in modeling production planning problems. These include better models for capacity utilization and setup times, new models to represent the product structure - or recipes - in process industries, and the study of continuous time planning and scheduling models as opposed to the discrete time models studied in this review.
---
paper_title: Planning and scheduling in the process industry
paper_content:
Since there has been tremendous progress in planning and scheduling in the process industry during the last 20 years, it might be worthwhile to give an overview of the current state-of-the-art of planning and scheduling problems in the chemical process industry. This is the purpose of the current review which has the following structure: we start with some conceptional thoughts and some comments on special features of planning and scheduling problems in the process industry. In Section 2 the focus is on planning problems while in Section 3 different types of scheduling problems are discussed. Section 4 presents some solution approaches especially those applied to a benchmark problem which has received considerable interest during the last years. Section 5 allows a short view into the future of planning and scheduling. In the appendix we describe the Westenberger-Kallrath problem which has already been used extensively as a benchmark problem for planning and scheduling in the process industry.
---
paper_title: An Empirical Investigation of Costs in Batching Decisions
paper_content:
This paper examines the use of costs and cost functions to model lot-sizing decisions in batch manufacturing. The cost functions used to model a wide variety of manufacturing systems are typically derived from average cost models of unconstrained inventory problems. The use of setups and average inventories as the basis for modeling the economics of a typical batch manufacturing cell is shown to be inadequate. An alternative physical model that focuses on lead times provides a model that more closely represents the underlying value of such a cell.
---
paper_title: Aclips: a Capacity and Lead Time Integrated Procedure for Scheduling
paper_content:
We propose a general hierarchical procedure to address real-life job shop scheduling problems. The shop typically produces a variety of products, each with its own arrival stream, its own route through the shop and a given customer due date. The procedure first determines the manufacturing lot sizes for each product. The objective is to minimize the expected lead time, and therefore we model the production environment as a queueing network. Given these lead times, release dates are set dynamically. This in turn creates a time window for every manufacturing order in which the various operations have to be sequenced. The sequencing logic is based on an Extended Shifting Bottleneck Procedure. These three major decisions are next incorporated into a four-phase, hierarchical, operational implementation scheme. A small numerical example is used to illustrate the methodology. The final objective however is to develop a procedure that is useful for large, real-life shops. We therefore report on a real-life application.
---
paper_title: A comparison of two lot sizing-sequencing heuristics for the process industry
paper_content:
Abstract Two heuristics for operational production planning in a chemical processing environment are compared, characterized by a single bottleneck machine, fixed batch sizes, sequence-dependent setup-times, as well as production and storage constraints. Performance of both heuristics is measured by means of simulation experiments in which the planning horizon is partially frozen and rolled a number of times, as would be the case in practical applications. Furthermore, demand uncertainty is simulated as well as the variability among setup times. The performance measure used is the total cost for executing a particular production plan over its entire planning horizon.
---
paper_title: On formulations of the stochastic uncapacitated lot-sizing problem
paper_content:
We consider two formulations of a stochastic uncapacitated lot-sizing problem. We show that by adding (ℓ,S) inequalities to the one with the smaller number of variables, both formulations give the same LP bound. Then we show that for two-period problems, adding another class of inequalities gives the convex hull of integral solutions.
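For the deterministic single-item problem, one common statement of the (ℓ,S) inequalities reads: for every period ℓ and every subset S ⊆ {1, ..., ℓ},

$$ \sum_{t \in S} x_t \;\le\; \sum_{t \in S} d_{t\ell}\, y_t \;+\; s_\ell, \qquad d_{t\ell} = \sum_{u=t}^{\ell} d_u, $$

where x_t is production, y_t the setup variable and s_ℓ the end-of-period-ℓ stock; together with the trivial constraints these inequalities are known to describe the convex hull of the deterministic uncapacitated problem. The stochastic version studied in the cited paper is defined analogously on a scenario tree, so the deterministic form above is given only for orientation.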
---
paper_title: Campaign planning for multi-stage batch processes in the chemical industry
paper_content:
Inspired by a case study from industry, the production of special chemical products is considered. In this industrial environment, multi-purpose equipment is operated in batch mode to carry out the diverse processing tasks. Often, extensive set-up and cleaning of the equipment are required when production switches between different types of products. Hence, processes are scheduled in campaign mode, i.e. a number of batches of the same type are processed in sequence. The production of chem ical products usually involves various stages with significant cumulative lead times. Typically, these production stages are assigned to different plants. A hierarchical modelling approach is presented which co-ordinates the various plant operations within the entire supply network. In the first stage, the length of the campaigns, their timing, the corresponding material flows, and equipment requirements have to be determined. At this stage, an aggregation scheme based on feasibility constraints is employed in order to reflect the limited availability of the various types of production equipment. The second stage consists of an assignment model, which allocates the available equipment units between the production campaigns determined in the first stage of the solution procedure. In the third stage, resource conflicts are resolved, which may occur if clean-out operations and minimal campaign lengths have to be considered. The proposed hierarchical approach allows a more compact model formulation compared to ot her approaches known from the literature. As a result, a very efficient and flexible solution approach is obtained. In particular, commercially available standard solvers can be used to solve a wide range of campaign planning problems arising in the chemical industry.
---
| Title: Modeling Industrial Lot Sizing Problems: A Review
Section 1: Introduction
Description 1: Introduce the topic of lot sizing models, their classification, relevance, and objectives of the review.
Section 2: The single-item uncapacitated lot sizing problem
Description 2: Discuss the simplest form of dynamic lot sizing with the key variables and constraints associated with this model.
Section 3: Capacitated Multi-Item Lot Sizing Problem (CLSP)
Description 3: Elaborate on the complexities introduced with capacitated multi-item lot sizing models, and different lot sizing problems like the CSLP, DLSP, etc.
Section 4: Further Extensions of Lot Sizing Models
Description 4: Provide an overview of further extensions to lot sizing models, covering expanded topics in setups, production, inventory, demand, and integration with other planning stages.
Section 5: Extension on the setups
Description 5: Cover extended considerations in lot sizing models related to setups, such as coordinated replenishment and carry-over setups.
Section 6: Extensions on the Production
Description 6: Discuss additional production-related extensions including batch production, sequence-dependent setup costs, and integration with scheduling.
Section 7: Extensions on the inventory
Description 7: Explore how inventory constraints and perishability are treated in advanced lot sizing models.
Section 8: Extension on the demand
Description 8: Address how demand variability, backlogging, and speculative cost structures are incorporated into lot sizing models.
Section 9: Time Horizon
Description 9: Explain the implications of using a rolling horizon approach in lot sizing and how it affects planning and outcomes.
Section 10: Tactical and strategic models
Description 10: Discuss the integration of lot sizing models with higher-level tactical and strategic production planning decisions.
Section 11: Conclusions and New Research Directions
Description 11: Summarize the insights gained from the review and suggest new areas where further research could be directed. |
Nonlinear vibroacoustic wave modulations for structural damage detection: an overview | 9 | ---
paper_title: Nonlinear Mesoscopic Elasticity: Evidence for a New Class of Materials
paper_content:
A squash ball almost doesn't bounce; a Superball bounces first left then right, seeming to have a mind of its own. Remarkable and complex elastic behavior isn't confined to sports equipment and toys. Indeed, it can be found in some surprising places. When the elastic behavior of a rock is probed, for instance, it shows extreme nonlinearity, hysteresis and discrete memory (the Flintstones could have had a computer that used a sandstone for random-access memory). Rocks are an example of a class of unusual elastic materials that includes sand, soil, cement, concrete, ceramics and, it turns out, damaged materials. Many members of this class are the blue-collar materials of daily life: They are in the bridges we cross on the way to work, the roofs over our heads and the ground beneath our cities—such as the Los Angeles basin (home to many earthquakes). The elastic behavior of these materials is of more than academic interest.
---
paper_title: Fatigue damage assessment by nonlinear ultrasonic materials characterization
paper_content:
The ultimate strength of most structural materials is mainly limited by the presence of microscopic imperfections serving as nuclei of the fracture process. Since these nuclei are considerably shorter than the acoustic wavelength at the frequencies normally used in ultrasonic nondestructive evaluation (NDE), linear acoustic characteristics are not sufficiently sensitive to this kind of microscopic degradation of the material's integrity. On the other hand, even very small imperfections can produce very significant excess nonlinearity which can be orders of magnitude higher than the intrinsic nonlinearity of the intact material. The excess nonlinearity is produced mainly by the strong local nonlinearity of microcracks whose opening is smaller than the particle displacement. Parametric modulation via crack-closure significantly increases the stress-dependence of fatigued materials. A special experimental technique was introduced to measure the second-order acousto-elastic coefficient in a great variety of materials including plastics, metals, composites and adhesives. Experimental results are presented to illustrate that the nonlinear acoustic parameters are earlier and more sensitive indicators of fatigue damage than their linear counterparts.
---
paper_title: Nonlinear Interaction of Acoustical Waves Due to Cracks and Its Possible Usage for Cracks Detection
paper_content:
An interaction of a CW acoustic wave and a powerful acoustic pulse due to nonlinear properties of a crack-type discontinuity in a solid is considered. Characteristics of nonstationary variations of the reflected wave amplitude and the phase of the transmitted wave, which are induced by the powerful pulse, are determined. The effects should allow one to distinguish cracks from other scatterers and can be used as a base of a new method of crack detection and positioning. Demonstrative signal estimates based on two simplified crack models are presented.
---
paper_title: Handbook of Nondestructive Evaluation
paper_content:
Chapter 1: Introduction to Nondestructive Testing Chapter 2: Dicontinuities - Origins and Classification Chapter 3: Visual Testing Chapter 4: Penetrant Testing Chapter 5: Magnetic Particle Testing Chapter 6: Radiographic Testing New - Digital Radiography Chapter 7: Ultrasonic Testing New - Phased Array Ultrasonics New - Guided Wave Ultrasonics Chapter 8: Eddy current Testing Chapter 9: Thermal Infrared Testing Chapter 10: Acoustic Emission Testing
---
paper_title: A Baseline and Vision of Ultrasonic Guided Wave Inspection Potential
paper_content:
Ultrasonic guided wave inspection is expanding rapidly to many different areas of manufacturing and in-service inspection. The purpose of this paper is to provide a vision of ultrasonic guided wave inspection potential as we move forward into the new millennium. An increased understanding of the basic physics and wave mechanics associated with guided wave inspection has led to an increase in practical nondestructive evaluation and inspection problems. Some fundamental concepts and a number of different applications that are currently being considered will be presented in the paper along with a brief description of the sensor and software technology that will make ultrasonic guided wave inspection commonplace in the next century.
---
paper_title: Structural Health Monitoring Using Guided Ultrasonic Waves
paper_content:
Maintenance of air, land and sea structures is an important engineering activity in a wide range of industries including transportation and Civil Engineering. Effective maintenance minimises not only the cost of ownership of structures but also improves safety and the perception of safety. Inspection for material/structural damage, such as fatigue cracks and corrosion in metallics or delamination in composites, is an essential part of maintenance.
---
paper_title: Dynamic nonlinear elasticity in geomaterials
paper_content:
This invention relates to a lens for presbyopia free from distortional aberration for use in correcting an old-age eyesight. In the lens for presbyopia with a front lens surface having a smaller radius of curvature than a rear lens face, a lens surface has a refractive power successively corrected as the lens surface extends radially outwardly away from a geometric center of the lens so that lateral magnifications for all principal rays always equal a lateral magnification for a paraxial range. This construction is entirely free from distortional aberration, and secures a greatly enlarged range of distinct vision.
---
paper_title: Nonlinear Response of a Weakly Damaged Metal Sample: A Dissipative Modulation Mechanism of Vibro-Acoustic Interaction
paper_content:
The nonlinear vibro-acoustic response of solid samples containing quite a small amount of defects can be anomalously high in magnitude compared to the case of undamaged intact solids. Functional dependencies of the nonlinear effects exhibit rather interesting behavior. In this paper, experimental results on nonlinearity-induced cross-modulation of a high-frequency (HF) f = 15–30 kHz signal by a low-frequency (LF) F = 20–60 Hz vibration in an aluminum plate with a small single crack are reported. Comparison with a reference sample (the identical plate without a crack) has proven that the presence of such a small defect can be easily detected due to its nonlinear manifestations. It is demonstrated that under proper choice of the sounding signal parameters, the effect level can be so pronounced that the amplitude of the modulation side-lobes originated due to the nonlinearity exceeds the amplitude of the fundamental harmonic of the HF signal. Main functional features of the observed phenomena are analyzed, and a new physical explanation is suggested based on a dissipative mechanism of vibro-acoustic interaction. Results of the numerical simulation of the effect are also presented.
---
paper_title: Investigation of Nonlinear Vibro-Acoustic Wave Modulation Mechanisms in Composite Laminates
paper_content:
This paper investigates the effect of low-frequency vibration and the related temperature field on nonlinear vibro-acoustic wave modulations. Experimental modal analysis was used to find natural frequencies and mode shapes of a composite laminate plate with seeded delamination. Temperature distribution was analyzed with a thermographic camera in the vicinity of damage for the identified vibration modes. The frequencies of these vibration modes were then used for low-frequency excitation in nonlinear acoustic tests. The correlation between the thermal field and the observed wave modulations was analyzed.
---
paper_title: Luxemburg-Gorky Effect Retooled for Elastic Waves: A Mechanism and Experimental Evidence
paper_content:
A new mechanism is proposed for the linear and amplitude-dependent dissipation due to elastic-wave–crack interaction. We have observed one of its strong manifestations in a direct elastic-wave analog of the Luxemburg-Gorky effect consisting of the cross modulation of radio waves at the dissipative nonlinearity of the ionosphere plasma. The counterpart acoustic mechanism implies, first, a drastic enhancement of the thermoelastic coupling at high-compliance microdefects, and, second, the high stress-sensitivity of the defects leads to a strong stress dependence of the resultant dissipation.
---
paper_title: Observation of the “Luxemburg–Gorky effect” for elastic waves
paper_content:
An experimental observation of a new nonlinear-modulation effect for longitudinal elastic waves is reported. The phenomenon is a direct elastic wave analogy with the so-called Luxemburg–Gorky (L–G) effect known over 60 years for radio waves propagating in the ionosphere. The effect consists of the appearance of modulation of a weaker initially non-modulated wave propagating in a nonlinear medium in the presence of an amplitude-modulated stronger wave that produces perturbations in the medium properties on the scale of its modulation frequency. The reported transfer of modulation from one elastic wave to another was observed in a resonator cut of a glass rod containing a few small cracks. Presence of such a small damage drastically enhances the material nonlinearity compared to elastic atomic nonlinearity of homogeneous solids, so that the pronounced L–G type cross-modulation could be observed at strain magnitude in the stronger wave down to 10^-7 and smaller. Main features of the effect are pointed out and physical mechanism of the observed phenomena is discussed. 2002 Elsevier Science B.V. All rights reserved.
---
paper_title: Thermoelastic Mechanism for Logarithmic Slow Dynamics and Memory in Elastic Wave Interactions with Individual Cracks
paper_content:
Logarithmic-in-time slow dynamics has been found for individual cracks in a solid. Furthermore, this phenomenon is observed during both the crack acoustic conditioning and the subsequent relaxation. A thermoelastic mechanism is suggested which relates the log-time behavior to the essentially 2D character of the heating and cooling of the crack perimeter and inner contacts. Nonlinear perturbation of the contacts by a stronger (pump) wave causes either softening or hardening of the sample, and induces either additional absorption or transparency for a weaker (probe) acoustic wave depending on frequency of the latter. DOI: 10.1103/PhysRevLett.90.075501
---
paper_title: Nonlinear acoustics for fatigue crack detection – experimental investigations of vibro-acoustic wave modulations
paper_content:
Vibro-acoustic nonlinear wave modulations are investigated experimentally in a cracked aluminum plate. The focus is on the effect of low-frequency vibration excitation on modulation intensity and associated nonlinear wave interaction mechanisms. The study reveals that energy dissipation – not opening–closing crack action – is the major mechanism behind nonlinear modulations. The consequence is that relatively weak strain fields can be used for crack detection in metallic structures. A clear link between modulations and thermo-elastic coupling is also demonstrated, providing experimental evidence for the recently proposed non-classical, nonlinear vibro-acoustic wave interaction mechanism.
---
paper_title: Instability, chaos, and "memory" in acoustic-wave-crack interaction.
paper_content:
A new class of nonlinear acoustic phenomena has been observed for acoustic wave interactions with cracked defects in solids. Parametric modulation of crack stiffness results in fractional acoustic subharmonics, wave instability, and generation of chaotic noiselike acoustic excitations. Acoustic-wave impact on a crack is shown to exhibit amplitude hysteresis and storage for parametric and nonlinear acoustic effects. The measured storage time amounts to several hours and is believed to be due to a long-term relaxation of thermally induced microstrain within a crack area.
---
paper_title: CAN: an example of nonclassical acoustic nonlinearity in solids.
paper_content:
A new class of nonlinear acoustic phenomena has been observed for acoustic wave interaction with simulated and realistic nonbonded contact interfaces (cracked defects) in solids. "Nonclassical" effects are due to substantially asymmetric stiffness characteristics of the interface for normal stress that results in specific contact acoustic nonlinearity (CAN). The asymmetry in the contact restoring forces causes the stiffness parametric modulation and instability of oscillations, which results in acoustic wave fractional subharmonic generation. The CAN subharmonics and higher harmonics reveal threshold dynamic behaviour, evident hysteresis, and instability effects.
---
paper_title: Dynamic nonlinear elasticity in geomaterials
paper_content:
This invention relates to a lens for presbyopia free from distortional aberration for use in correcting an old-age eyesight. In the lens for presbyopia with a front lens surface having a smaller radius of curvature than a rear lens face, a lens surface has a refractive power successively corrected as the lens surface extends radially outwardly away from a geometric center of the lens so that lateral magnifications for all principal rays always equal a lateral magnification for a paraxial range. This construction is entirely free from distortional aberration, and secures a greatly enlarged range of distinct vision.
---
paper_title: Nonlinear Elastic Wave Spectroscopy (NEWS) Techniques to Discern Material Damage, Part I: Nonlinear Wave Modulation Spectroscopy (NWMS)
paper_content:
Abstract. The level of nonlinearity in the elastic response of materials containing structural damage is far greater than in materials with no structural damage. This is the basis for nonlinear wave diagnostics of damage, methods which are remarkably sensitive to the detection and progression of damage in materials. Nonlinear wave modulation spectroscopy (NWMS) is one exemplary method in this class of dynamic nondestructive evaluation techniques. The method focuses on the application of harmonics and sum and difference frequency to discern damage in materials. It consists of exciting a sample with continuous waves of two separate frequencies simultaneously, and inspecting the harmonics of the two waves, and their sum and difference frequencies (sidebands). Undamaged materials are essentially linear in their response to the two waves, while the same material, when damaged, becomes highly nonlinear, manifested by harmonics and sideband generation. We illustrate the method by experiments on uncracked and cracked Plexiglas and sandstone samples, and by applying it to intact and damaged engine components.
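The sideband mechanism described above is straightforward to reproduce numerically. The sketch below is a minimal illustration, not a reconstruction of the cited experiments: the sampling rate, pump/probe frequencies, amplitudes and the quadratic coefficient beta are all assumed values, and a memoryless quadratic nonlinearity stands in for the damaged material.

```python
# Minimal NWMS-style illustration: a quadratic nonlinearity mixing a
# low-frequency pump with a high-frequency probe produces sidebands at
# f_hf +/- f_lf, the signature inspected in nonlinear wave modulation tests.
import numpy as np

fs = 1_000_000                         # sampling rate [Hz] (assumed)
t = np.arange(0, 0.1, 1 / fs)
f_lf, f_hf = 500.0, 60_000.0           # pump and probe frequencies (assumed)
pump = 0.5 * np.sin(2 * np.pi * f_lf * t)
probe = 0.1 * np.sin(2 * np.pi * f_hf * t)

beta = 2.0                             # illustrative nonlinearity strength
x = pump + probe
y = x + beta * x**2                    # "damaged" response: quadratic distortion

spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1 / fs)

for f in (f_hf - f_lf, f_hf, f_hf + f_lf):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f / 1e3:8.1f} kHz  amplitude {spectrum[k]:.3e}")
```

With beta set to zero the sideband amplitudes collapse to the numerical noise floor, which is exactly the intact/damaged contrast that NWMS exploits.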
---
paper_title: Nonlinear modulation technique for NDE with air-coupled ultrasound
paper_content:
Abstract The present study is aimed at expanding flexibility and application area of nonlinear acoustic modulation (NAM-) technique by combining the benefits of noncontact ultrasound excitation (remote locating and imaging of defects) with sensitivity of nonlinear methods in a new air-coupled NAM-version. A pair of focused air-coupled transducers was used to generate and receive (high-frequency) longitudinal or flexural waves in plate-like samples. Low-frequency (LF-) vibrations were excited with a shaker or a loudspeaker. Temporal and spectral analysis of the output signal revealed an extremely efficient nonlinear amplitude modulation and multiple frequency side-bands for sound transmission and flexural wave propagation through cracked defects. On the contrary, a negligible modulation was observed for large and medium scale inclusions and material inhomogeneities (linear defects). A new subharmonic mode of the NAM was observed at high excitation levels. It was also shown for the first time that nonlinear vibrations of cracks resulted in radiation of a very high-order harmonics (well above 100) of the driving excitation in air that enabled imaging of cracks remotely by registration their highly nonlinear “acoustic emission” with air-coupled transducers.
---
paper_title: Scattering of Sound by Sound
paper_content:
As a preliminary to an investigation of the scattering of sound by turbulence, the scattering of one sound beam by another is studied using the Lighthill formulation. The scattered sound is predicted to be composed in part of components with frequency equal to the sum or difference of those in the primary beams. These components of the scattered field have been isolated in measurements using a selective receiver and their angular distributions are compared with those predicted. This work was supported in part by the NACA.
---
paper_title: Scattering of Sound by Sound
paper_content:
Earlier studies [J. Acoust. Soc. Am. 29, 199 (1957)] of the mutual nonlinear interaction of two plane waves of sound with each other are extended to encompass arbitrary directions of travel of one wave with respect to the other; an exact solution to the first‐order scattering process is obtained.
---
paper_title: Nonlinear acoustics in Nizhni Novgorod (A review)
paper_content:
This article reviews the development of nonlinear acoustics in Nizhni Novgorod from the days when the idea of parametric transmission and reception was conceived until the present time.
---
paper_title: Dynamic nonlinear elasticity in geomaterials
paper_content:
This invention relates to a lens for presbyopia free from distortional aberration for use in correcting an old-age eyesight. In the lens for presbyopia with a front lens surface having a smaller radius of curvature than a rear lens face, a lens surface has a refractive power successively corrected as the lens surface extends radially outwardly away from a geometric center of the lens so that lateral magnifications for all principal rays always equal a lateral magnification for a paraxial range. This construction is entirely free from distortional aberration, and secures a greatly enlarged range of distinct vision.
---
paper_title: Vibro-acoustic modulation nondestructive evaluation technique
paper_content:
Nonlinear acoustic technique has been recently introduced as a new tool for nondestructive inspection and evaluation of fatigued, defective, and fractured materials. Various defects such as cracks, debonding, fatigue, etc. lead to anomalously high levels of nonlinearity as compared with flawless structures. One of the acoustic manifestations of such nonlinearity is the modulation of ultrasound by low frequency vibration. Two methods employing the nonlinear interaction of ultrasound and vibration were developed, namely vibro-modulation (VM) and impact-modulation (IM) methods. The VM method employs forced harmonic vibration of the structure tested, while the IM method uses impact excitation of the structure's natural modes of vibration. The feasibility tests were carried out for different objects and demonstrated high sensitivity of the methods for detection of cracks in steel pipes and pins, bonding quality in titanium and thermoplastic plates used for aerospace applications, cracks in a combustion engine, adhesion flaws in bonded composite structures, and cracks and corrosion in reinforced concrete. A crack model that describes the modulation of sound by vibration is discussed. The developed nonlinear technique demonstrated certain advantages as compared with the conventional linear acoustic technique, specifically discrimination capabilities, sensitivity, and applicability to highly inhomogeneous structures.
---
paper_title: Impact damage detection in composite laminates using nonlinear acoustics
paper_content:
The paper demonstrates the application of nonlinear acoustics for impact damage detection in composite laminates. A composite plate is monitored for damage resulting from a low-velocity impact. The plate is instrumented with bonded low-profile piezoceramic transducers. A high-frequency acoustic wave is introduced to one transducer and picked up by a different transducer. A low-frequency flexural modal excitation is introduced to the plate at the same time using an electromagnetic shaker. The damage induced by impact is exhibited in a power spectrum of the acoustic response by a pattern of sidebands around the main acoustic harmonic. The results show that the amplitude of sidebands is related to the severity of damage. The study investigates also the effect of boundary conditions on the results.
---
paper_title: Modelling of nonlinear crack–wave interactions for damage detection based on ultrasound—A review
paper_content:
Abstract The past decades have been marked by a significant increase in research interest in nonlinearities in micro-cracked and cracked solids. As a result, a number of different nonlinear acoustic methods have been developed for damage detection. A general consensus is that – under favourable conditions – nonlinear effects exhibited by cracks are stronger than crack-induced linear phenomena. However, there is still limited understanding of physical mechanisms related to various nonlinearities. This problem remains essential for implementation of nonlinear acoustics for damage-detection applications. This paper reviews modelling approaches used for nonlinear crack–wave interactions. Various models of classical and nonclassical crack-induced elastic, thermo-elastic and dissipative nonlinearities have been discussed.
---
paper_title: Nonlinear Elastic Wave Spectroscopy (NEWS) Techniques to Discern Material Damage, Part I: Nonlinear Wave Modulation Spectroscopy (NWMS)
paper_content:
Abstract. The level of nonlinearity in the elastic response of materials containing structural damage is far greater than in materials with no structural damage. This is the basis for nonlinear wave diagnostics of damage, methods which are remarkably sensitive to the detection and progression of damage in materials. Nonlinear wave modulation spectroscopy (NWMS) is one exemplary method in this class of dynamic nondestructive evaluation techniques. The method focuses on the application of harmonics and sum and difference frequency to discern damage in materials. It consists of exciting a sample with continuous waves of two separate frequencies simultaneously, and inspecting the harmonics of the two waves, and their sum and difference frequencies (sidebands). Undamaged materials are essentially linear in their response to the two waves, while the same material, when damaged, becomes highly nonlinear, manifested by harmonics and sideband generation. We illustrate the method by experiments on uncracked and cracked Plexiglas and sandstone samples, and by applying it to intact and damaged engine components.
---
paper_title: Nonlinear modulation technique for NDE with air-coupled ultrasound
paper_content:
Abstract The present study is aimed at expanding flexibility and application area of nonlinear acoustic modulation (NAM-) technique by combining the benefits of noncontact ultrasound excitation (remote locating and imaging of defects) with sensitivity of nonlinear methods in a new air-coupled NAM-version. A pair of focused air-coupled transducers was used to generate and receive (high-frequency) longitudinal or flexural waves in plate-like samples. Low-frequency (LF-) vibrations were excited with a shaker or a loudspeaker. Temporal and spectral analysis of the output signal revealed an extremely efficient nonlinear amplitude modulation and multiple frequency side-bands for sound transmission and flexural wave propagation through cracked defects. On the contrary, a negligible modulation was observed for large and medium scale inclusions and material inhomogeneities (linear defects). A new subharmonic mode of the NAM was observed at high excitation levels. It was also shown for the first time that nonlinear vibrations of cracks resulted in radiation of a very high-order harmonics (well above 100) of the driving excitation in air that enabled imaging of cracks remotely by registration their highly nonlinear “acoustic emission” with air-coupled transducers.
---
paper_title: Nonlinear acoustic interaction on contact interfaces and its use for nondestructive testing
paper_content:
Recent theoretical and experimental studies demonstrated that weakly or incompletely bonded interfaces exhibit highly nonlinear behavior. One of the acoustic manifestations of such nonlinearity is the modulation of a probing high-frequency ultrasonic wave by low-frequency vibration. The vibration varies the contact area modulating the phase and amplitude of the higher frequency probing wave passing through the interface. In the frequency domain, the result of this modulation manifests itself as side-band spectral components with respect to the frequency of the probing wave. This modulation effect has been observed experimentally for various materials (metals, composites, concrete, sandstone, glass) with various types of contact-type defects (interfaces): cracks, debondings, delaminations, and microstructural material damages. Study of this phenomenon revealed correlation between the developed modulation criterion and the quantitative characteristics of the interfaces, such as its size, loading condition, and bonding strength. These findings have been used for the development of an innovative nondestructive evaluation technique, namely the Vibro-Acoustic Modulation Technique. Two modifications of this technique have been developed: Vibro-Modulation (VM) and Impact-Modulation (IM), employing CW and impact-induced vibrations, respectively. The examples of applications of these methods include crack detection in steel pipes, aircraft and auto parts, bonded composite plates etc. These methods also proved their effectiveness in the detection of cracks in concrete.
---
paper_title: Experimental Study of Impact-Damage Detection in Composite Laminates using a Cross-Modulation Vibro-Acoustic Technique
paper_content:
The paper demonstrates the application of cross-modulation vibro-acoustic technique for impact-damage detection in composite laminates. A composite plate is monitored for damage resulting from a low-velocity impact. The plate is excited simultaneously with two harmonic signals: a slow amplitude-modulated vibration pumping wave and a constant amplitude-probing wave. The frequency of both the excitation signals coincides with the resonances of the plate. An electromagnetic shaker is used to introduce the pumping wave to the plate. Two surface-bonded, low-profile piezoceramic transducers are used for probing-wave excitation and measurement. The wave modulation is transferred from the pumping wave to the probing wave in the presence of impact damage. This effect is exhibited in a power spectrum of the probing wave by a pattern of sidebands around the carrier harmonic. The results show that the amplitude of the sidebands is related to the severity of damage. The study also investigates the effect of bound...
---
paper_title: Peridynamic model for dynamic fracture in unidirectional fiber-reinforced composites
paper_content:
Abstract We propose a computational method for a homogenized peridynamics description of fiber-reinforced composites and we use it to simulate dynamic brittle fracture and damage in these materials. With this model we analyze the dynamic effects induced by different types of dynamic loading on the fracture and damage behavior of unidirectional fiber-reinforced composites. In contrast to the results expected from quasi-static loading, the simulations show that dynamic conditions can lead to co-existence of and transitions between fracture modes; matrix shattering can happen before a splitting crack propagates. We observe matrix–fiber splitting fracture, matrix cracking, and crack migration in the matrix, including crack branching in the matrix similar to what is observed in recent dynamic experiments. The new model works for arbitrary fiber orientation relative to a uniform discretization grid and also works with random discretizations. The peridynamic composite model captures significant differences in the crack propagation behavior when dynamic loadings of different intensities are applied. An interesting result is branching of a splitting crack into two matrix cracks in transversely loaded samples. These cracks branch as in an isotropic material but here they migrate over the “fiber bonds”, without breaking them. This behavior has been observed in recent experiments. The strong influence that elastic waves have on the matrix damage and crack propagation paths is discussed. No special criteria for splitting mode fracture (Mode II), crack curving, or crack arrest are needed, and yet we obtain all these modes of material failure as a direct result of the peridynamic simulations.
---
paper_title: Luxemburg-Gorky Effect Retooled for Elastic Waves: A Mechanism and Experimental Evidence
paper_content:
A new mechanism is proposed for the linear and amplitude-dependent dissipation due to elastic-wave–crack interaction. We have observed one of its strong manifestations in a direct elastic-wave analog of the Luxemburg-Gorky effect consisting of the cross modulation of radio waves at the dissipative nonlinearity of the ionosphere plasma. The counterpart acoustic mechanism implies, first, a drastic enhancement of the thermoelastic coupling at high-compliance microdefects, and, second, the high stress-sensitivity of the defects leads to a strong stress dependence of the resultant dissipation.
---
paper_title: Observation of the “Luxemburg–Gorky effect” for elastic waves
paper_content:
An experimental observation of a new nonlinear-modulation effect for longitudinal elastic waves is reported. The phenomenon is a direct elastic wave analogy with the so-called Luxemburg–Gorky (L–G) effect known over 60 years for radio waves propagating in the ionosphere. The effect consists of the appearance of modulation of a weaker initially non-modulated wave propagating in a nonlinear medium in the presence of an amplitude-modulated stronger wave that produces perturbations in the medium properties on the scale of its modulation frequency. The reported transfer of modulation from one elastic wave to another was observed in a resonator cut of a glass rod containing a few small cracks. Presence of such a small damage drastically enhances the material nonlinearity compared to elastic atomic nonlinearity of homogeneous solids, so that the pronounced L–G type cross-modulation could be observed at strain magnitude in the stronger wave down to 10^-7 and smaller. Main features of the effect are pointed out and physical mechanism of the observed phenomena is discussed. 2002 Elsevier Science B.V. All rights reserved.
---
paper_title: Nonlinear acoustics for fatigue crack detection – experimental investigations of vibro-acoustic wave modulations
paper_content:
Vibro-acoustic nonlinear wave modulations are investigated experimentally in a cracked aluminum plate. The focus is on the effect of low-frequency vibration excitation on modulation intensity and associated nonlinear wave interaction mechanisms. The study reveals that energy dissipation – not opening–closing crack action – is the major mechanism behind nonlinear modulations. The consequence is that relatively weak strain fields can be used for crack detection in metallic structures. A clear link between modulations and thermo-elastic coupling is also demonstrated, providing experimental evidence for the recently proposed non-classical, nonlinear vibro-acoustic wave interaction mechanism.
---
paper_title: Non-local modeling and simulation of wave propagation and crack growth
paper_content:
The paper presents the results of numerical analyses carried out for 2D models of aluminum plates undergoing crack propagation. Different types of numerical analyses were performed to show the area of applicability of a non-local discrete formulation of mechanical continua for NDE and QNDE. The authors’ interests dealt with: Lamb wave propagation, higher harmonics generation, the phenomenon of clapping, and wave generation and its propagation for growing crack. In the first case a 2D model allowed to find the displacements for the vertical cross-section along the direction of wave propagation and therefore assess its disturbance when introducing a notch. In-plane components of the displacements were considered for the latter 3 cases, in turn. The above cases and elaborated non-local numerical models were discussed in terms of detection of propagating crack. Apart from theoretical aspects of the presented work, the obtained results seem to be of great importance for improvement of experiments in terms of sensor distribution and required accuracy.
---
paper_title: Theoretical investigation of nonlinear ultrasonic wave modulation spectroscopy at crack interface
paper_content:
Abstract This paper studies theoretical results of a nonlinear ultrasonic method based on interaction of two elastic waves of different frequencies. A virtual Nonlinear Wave Modulation Spectroscopy experiment is performed in the vicinity of a crack described by a model combining classical and hysteretic nonlinearity. Quasistatic response to two frequency excitation was computed and harmonic and intermodulation components were studied. The influence of driving signal parameters and nonlinear parameters on the response is thoroughly discussed. A general way of hysteretic response description based on scaling properties is explained. In case of the combined nonlinear model, an analysis of nonlinear spectral components is performed in complex plane. Based on the complex interaction of classical and hysteretic parts, a method of their separation is proposed.
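A frequently quoted phenomenological form for such a combined classical-plus-hysteretic description expresses the strain-dependent modulus as

$$ K(\varepsilon, \dot{\varepsilon}) \;=\; K_0 \Big[ 1 - \beta \varepsilon - \delta \varepsilon^{2} - \alpha \big( \Delta\varepsilon + \varepsilon(t)\,\mathrm{sign}(\dot{\varepsilon}) \big) \Big], $$

where β and δ are the classical quadratic and cubic nonlinearity parameters, α scales the hysteretic (nonclassical) contribution and Δε is the local strain amplitude over the preceding period. This is the generic form used in the nonlinear mesoscopic elasticity literature; the specific crack model of the cited paper may differ in detail. The classical terms generate harmonics and sum/difference components, while the hysteretic term produces the amplitude-dependent effects that the separation method described above exploits.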
---
paper_title: Impact damage detection in composite chiral sandwich panels using nonlinear vibro-acoustic modulations
paper_content:
This paper reports an application of nonlinear acoustics to impact damage detection in a composite chiral sandwich panel. The panel is built from a chiral honeycomb and two composite skins. High-frequency ultrasonic excitation and low-frequency modal excitation were used to observe nonlinear modulations in ultrasonic waves due to structural damage. Low-profile, surface-bonded piezoceramic transducers were used for ultrasonic excitation. Non-contact laser vibrometry was applied for ultrasonic sensing. The work presented focuses on the analysis of the modulation intensities and damage-related nonlinearities. The paper demonstrates that the method can be used for impact damage detection in composite chiral sandwich panels.
---
paper_title: Vibro‐Acoustic Modulation NDE Technique. Part 2: Experimental Study
paper_content:
The crack detection capability of the Vibro‐Acoustic Modulation NDE technique is experimentally investigated over a broad range of ultrasonic frequencies. The generation of low frequency modes of the samples is performed using harmonic excitation applied by a shaker or by tapping. For both excitation methods the effect of different strain levels on the sideband activity is analyzed. The implementation of practical and reliable supports for testing the specimens is also considered in this study, as well as examples of results on industrial components.
---
paper_title: Impact damage detection in laminated composites by non-linear vibro-acoustic wave modulations
paper_content:
Abstract The paper presents an application of nonlinear acoustics for impact damage detection in composite laminates. Two composite plates were analysed. A low-velocity impact was used to damage one of the plates. Ultrasonic C-scan was applied to reveal the extent of barely visible impact damage. Finite element modelling was used to find vibration mode shapes of the plates and to estimate the local defect resonance frequency in the damaged plate. A delamination divergence study was performed to establish excitation parameters for nonlinear acoustics tests used for damage detection. Both composite plates were instrumented with surface-bonded, low-profile piezoceramic transducers that were used for the high-frequency ultrasonic excitation. Both an arbitrary frequency and a frequency corresponding to the local defect resonance were investigated. The low-frequency modal excitation was applied using an electromagnetic shaker. Scanning laser vibrometry was applied to acquire the vibro-acoustic responses from the plates. The study not only demonstrates that nonlinear vibro-acoustic modulations can successfully reveal the barely visible impact damage in composite plates, but also that the entire procedure can be enhanced when the ultrasonic excitation frequency corresponds to the resonant frequency of damage.
---
paper_title: Non-linear techniques for ultrasonic micro-damage diagnostics: A Simulation Approach
paper_content:
In the field of ultrasonic micro-damage diagnostics, nonlinear techniques seem to possess an enormous potential for applications. In fact, whenever mesoscopic features (e.g. mechanical inhomogeneities which are very small with respect to the acoustic wavelength, but much larger than interatomic spacings) are present, acoustic nonlinearity may be up to four orders of magnitude higher than in a perfect monocrystal. Correspondingly, nonlinear parameters are much more sensitive to the presence of micro-inhomogeneities and, in particular, of microdamage. Reliable techniques, such as Single Mode Nonlinear Resonance Acoustic Spectroscopy (SIMONRAS) and Nonlinear Wave Modulation Spectroscopy (NWMS), have been developed in order to quantify the acoustic nonlinearity of laboratory samples. A comparison with theoretical predictions is, however, extremely difficult due to the complexity of the phenomena involved (e.g. hysteretic behavior and end point memory). For this purpose an extension of the Local Interaction Si...
---
paper_title: N-SCAN: new vibromodulation system for detection and monitoring of cracks and other contact-type defects
paper_content:
In recent years, innovative vibro-modulation technique has been introduced for detection of contact-type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating the ultrasonic wave passing through it. The modulation manifests itself as additional side-band spectral components with the combination frequencies in the spectrum of the received signal. The presence of these components allows for detection and differentiation of the contact-type defects from other structural and material inhomogeneities. Vibro-modulation technique has been implemented in the N-SCAN damage detection system. The system consists of a digital synthesizer, high and low frequency amplifiers, a magnetostrictive shaker, ultrasonic transducers and a PC-based data acquisition/processing station with N-SCAN software. The ability of the system to detect contact-type defects was experimentally verified using specimens of simple and complex geometries made of steel, aluminum, composites and other structural materials. N-SCAN proved to be very effective for nondestructive testing of full-scale structures ranging from 24 foot-long gun barrels to stainless steel pipes used in nuclear power plants. Among advantages of the system are applicability for the wide range of structural materials and for structures with complex geometries, real time data processing, convenient interface for system operation, simplicity of interpretation of results, no need for sensor scanning along the structure, and onsite inspection of large structures at a fraction of the time required by conventional techniques. This paper describes the basic principles of the nonlinear vibro-modulation NDE technique, some theoretical background for the nonlinear interaction and justification of the signal processing algorithm. It also presents examples of practical implementation and application of the technique.
---
paper_title: Dynamic nonlinear elasticity in geomaterials
paper_content:
This invention relates to a lens for presbyopia free from distortional aberration for use in correcting an old-age eyesight. In the lens for presbyopia with a front lens surface having a smaller radius of curvature than a rear lens face, a lens surface has a refractive power successively corrected as the lens surface extends radially outwardly away from a geometric center of the lens so that lateral magnifications for all principal rays always equal a lateral magnification for a paraxial range. This construction is entirely free from distortional aberration, and secures a greatly enlarged range of distinct vision.
---
paper_title: Peridynamic Theory and Its Applications
paper_content:
Introduction.- Peridynamic Theory.- Peridynamics for Local Interactions.- Peridynamics for Isotropic Materials.- Peridynamics for Laminated Composite Materials.- Damage Prediction.- Numerical Solution Methods.- Benchmark Problems.- Nonimpact Problems.- Impact Problems.- Coupling of the Peridynamic Theory and Finite Element Methods.- Peridynamic Thermal Diffusion.- Fully Coupled Peridynamic Thermomechanics.
---
paper_title: Time reversal and non-linear elastic wave spectroscopy (TR NEWS) techniques
paper_content:
Non-linear elastic wave spectroscopy (NEWS) has been shown to exhibit a high degree of sensitivity to both distributed and isolated nonlinear scatterers in solids. In the case of an isolated non-linear scatterer such as a crack, by combining the elastic energy localization of time reversal (TR) with NEWS, it is shown that one can isolate non-linear scatterers in solids. The experiments reviewed here present two distinct methods of combining TR and NEWS for this purpose. The techniques each have their own advantages and disadvantages, with respect to each other and other non-linear methods, which are discussed. 2008 Published by Elsevier Ltd.
---
paper_title: Nonlinear modulation technique for NDE with air-coupled ultrasound
paper_content:
Abstract The present study is aimed at expanding flexibility and application area of nonlinear acoustic modulation (NAM-) technique by combining the benefits of noncontact ultrasound excitation (remote locating and imaging of defects) with sensitivity of nonlinear methods in a new air-coupled NAM-version. A pair of focused air-coupled transducers was used to generate and receive (high-frequency) longitudinal or flexural waves in plate-like samples. Low-frequency (LF-) vibrations were excited with a shaker or a loudspeaker. Temporal and spectral analysis of the output signal revealed an extremely efficient nonlinear amplitude modulation and multiple frequency side-bands for sound transmission and flexural wave propagation through cracked defects. On the contrary, a negligible modulation was observed for large and medium scale inclusions and material inhomogeneities (linear defects). A new subharmonic mode of the NAM was observed at high excitation levels. It was also shown for the first time that nonlinear vibrations of cracks resulted in radiation of a very high-order harmonics (well above 100) of the driving excitation in air that enabled imaging of cracks remotely by registration their highly nonlinear “acoustic emission” with air-coupled transducers.
---
paper_title: Combined Photoacoustic–Acoustic Technique for Crack Imaging
paper_content:
Nonlinear imaging of a crack by combination of a common photoacoustic imaging technique with additional acoustic loading has been performed. Acoustic signals at two different fundamental frequencies were launched in the sample, one photoacoustically through heating of the sample surface by the intensity-modulated scanning laser beam and another by a piezoelectrical transducer. The acoustic signal at mixed frequencies, generated due to system nonlinearity, has been detected by an accelerometer. Different physical mechanisms of the nonlinearity contributing to the contrast in linear and nonlinear photoacoustic imaging of the crack are discussed.
---
paper_title: Nonlinear acoustic interaction on contact interfaces and its use for nondestructive testing
paper_content:
Recent theoretical and experimental studies demonstrated that weakly or incompletely bonded interfaces exhibit highly nonlinear behavior. One of the acoustic manifestations of such nonlinearity is the modulation of a probing high-frequency ultrasonic wave by low-frequency vibration. The vibration varies the contact area modulating the phase and amplitude of the higher frequency probing wave passing through the interface. In the frequency domain, the result of this modulation manifests itself as side-band spectral components with respect to the frequency of the probing wave. This modulation effect has been observed experimentally for various materials (metals, composites, concrete, sandstone, glass) with various types of contact-type defects (interfaces): cracks, debondings, delaminations, and microstructural material damages. Study of this phenomenon revealed correlation between the developed modulation criterion and the quantitative characteristics of the interfaces, such as its size, loading condition, and bonding strength. These findings have been used for the development of an innovative nondestructive evaluation technique, namely the Vibro-Acoustic Modulation Technique. Two modifications of this technique have been developed: Vibro-Modulation (VM) and Impact-Modulation (IM), employing CW and impact-induced vibrations, respectively. The examples of applications of these methods include crack detection in steel pipes, aircraft and auto parts, bonded composite plates etc. These methods also proved their effectiveness in the detection of cracks in concrete.
---
paper_title: Global crack detection using bispectral analysis
paper_content:
This paper describes a global non-destructive testing technique for detecting fatigue cracking in engineering components. The technique measures the mixing of two ultrasonic sinusoidal waves which are excited by a small piezoceramic disc bonded to the test structure. This input signal excites very high-order modes of vibration of the test structure within the ultrasonic frequency range. The response of the structure is measured by a second piezoceramic disc and the received waveform is analysed using the bispectrum signal processing technique. Frequency mixing occurs as a result of nonlinearities within the test structure and fatigue cracking is shown to produce a strong mixing effect. The bispectrum is shown to be particularly suitable for this application due to its known insensitivity to noise. Experimental results on steel beams are used to show that fatigue cracks, corresponding to a reduction in the beam section of 8%, can be detected. It is also shown that the bispectrum can be used to quantify the extent of the cracking. A simple nonlinear spring model is used to interpret the results and demonstrate the robustness of the bispectrum for this application.
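A direct estimate of the bispectrum at a single bifrequency is enough to see why quadratic frequency mixing produces such a clear signature. The helper below is a hypothetical sketch (segment count, window and test frequencies are assumptions, not the processing parameters of the cited study):

```python
# Direct bispectrum estimate at one bifrequency (f1, f2): average
# X(f1) * X(f2) * conj(X(f1 + f2)) over windowed signal segments.
import numpy as np

def bispectrum_point(x, fs, f1, f2, nseg=64):
    seg_len = len(x) // nseg
    acc = 0.0 + 0.0j
    for i in range(nseg):
        seg = x[i * seg_len:(i + 1) * seg_len] * np.hanning(seg_len)
        X = np.fft.rfft(seg)
        freqs = np.fft.rfftfreq(seg_len, 1 / fs)
        k1 = np.argmin(np.abs(freqs - f1))
        k2 = np.argmin(np.abs(freqs - f2))
        k12 = np.argmin(np.abs(freqs - (f1 + f2)))
        acc += X[k1] * X[k2] * np.conj(X[k12])
    return acc / nseg

# Two pure tones give (nearly) zero; quadratic mixing gives a large value.
fs = 100_000
t = np.arange(0, 1.0, 1 / fs)
linear = np.sin(2 * np.pi * 3_000 * t) + np.sin(2 * np.pi * 8_000 * t)
mixed = linear + 0.3 * linear**2       # stand-in for crack-induced mixing
print(abs(bispectrum_point(linear, fs, 3_000, 8_000)))
print(abs(bispectrum_point(mixed, fs, 3_000, 8_000)))
```

Only components that are phase-coupled across f1, f2 and f1 + f2 add coherently in the average, which is also why the statistic suppresses additive, uncoupled noise.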
---
paper_title: Non-contact nonlinear ultrasound scan of a CFRP plate with manufactured damages
paper_content:
A Carbon Fibre Reinforced Plastic plate was manufactured to have internal damages of different types. A nonlinear ultrasound technique was used to scan the plate. Non-contact transmitters of the authors' own design were used as transducers, and a contact sensor was used to measure the wave in the composite. Scanning was made perpendicularly with the sensor being on the same side as the transducer. The technique can be adapted in accuracy and speed.
---
paper_title: Impact damage detection in light composite sandwich panels using piezo-based nonlinear vibro-acoustic modulations
paper_content:
The nonlinear vibro-acoustic modulation technique is used for impact damage detection in light composite sandwich panels. The method utilizes piezo-based low-frequency vibration and high-frequency ultrasonic excitations. The work presented focuses on the analysis of modulation intensity. The results show that the method can be used for impact damage detection, reliably separating damage-related vibro-acoustic modulations from other intrinsic nonlinear modulations.
---
paper_title: Crack detection using nonlinear acoustics and piezoceramic transducers—instantaneous amplitude and frequency analysis
paper_content:
This paper investigates the nonlinear vibro-acoustic modulation technique for damage detection in metallic structures. Surface-bonded, low-profile piezoceramic actuators are used to introduce a high-frequency ultrasonic wave and low-frequency modal vibration into an aluminium specimen. The response of the vibro-acoustic interaction is monitored by a third low-profile piezoceramic transducer. In contrast to previous applications analysing the response in the frequency domain, current investigations focus on the instantaneous characteristics of the response using the Hilbert–Huang transform. The study shows that both modulations, i.e. amplitude and frequency, are present in the acoustical responses when the aluminium plate is cracked. The intensity of amplitude modulation correlates far better with crack lengths than the intensity of frequency modulations.
---
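The instantaneous amplitude and frequency mentioned above can be obtained from the analytic signal; the sketch below covers only this Hilbert-envelope step (not the empirical mode decomposition part of the full Hilbert–Huang transform), and the function name and the expectation that the response is first band-pass filtered around the probing frequency are assumptions made for illustration.

import numpy as np
from scipy.signal import hilbert

def instantaneous_amp_freq(x, fs):
    # x  : response signal, ideally band-pass filtered around the probing frequency
    # fs : sampling frequency [Hz]
    analytic = hilbert(x)                            # analytic signal x + j*H{x}
    amplitude = np.abs(analytic)                     # instantaneous amplitude (envelope)
    phase = np.unwrap(np.angle(analytic))            # unwrapped instantaneous phase
    frequency = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency [Hz]
    return amplitude, frequency

Amplitude modulation then appears as an oscillation of the envelope at the low-frequency vibration rate, while frequency modulation appears as the analogous oscillation of the instantaneous frequency.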
paper_title: Vibro-acoustic modulation–based damage identification in a composite skin–stiffener structure
paper_content:
The vibro-acoustic modulation method is applied to a composite skin-stiffener structure to investigate the possibility of utilising this method for damage identification in terms of detection, localisation and damage quantification. The research comprises a theoretical part and an experimental part. An impact load is applied to the skin-stiffener structure, resulting in a delamination underneath the stiffener. The structure is interrogated with a low-frequency pump excitation and a high-frequency carrier excitation. The analysis of the response in a frequency band around the carrier frequency is employed to assess the damage identification capabilities and to gain a better understanding of the modulations occurring and the underlying physical phenomena. Though vibro-acoustic modulation is shown to be a sensitive method for damage identification, the complexity of the damage, combined with a high modal density, complicates the understanding of the relation between the physical phenomena and the modulations occurring.
---
paper_title: Nonlinear acoustics for fatigue crack detection – experimental investigations of vibro-acoustic wave modulations
paper_content:
Vibro-acoustic nonlinear wave modulations are investigated experimentally in a cracked aluminum plate. The focus is on the effect of low-frequency vibration excitation on modulation intensity and associated nonlinear wave interaction mechanisms. The study reveals that energy dissipation – not opening–closing crack action – is the major mechanism behind nonlinear modulations. The consequence is that relatively weak strain fields can be used for crack detection in metallic structures. A clear link between modulations and thermo-elastic coupling is also demonstrated, providing experimental evidence for the recently proposed non-classical, nonlinear vibro-acoustic wave interaction mechanism.
---
paper_title: Vibro-Acoustic Modulation Utilizing a Swept Probing Signal for Robust Crack Detection
paper_content:
One practical issue that must be addressed prior to the implementation of a vibration-based structural health monitoring system is the influence that variations in the structure’s environmental and boundary conditions can have on the vibration response of the structure. This issue is especially prominent in the structural health monitoring of aircraft, which operate in a wide variety of different environmental conditions and possess complex structural components connected through various boundary conditions. However, many types of damage introduce nonlinear stiffness and damping restoring forces, which may be used to detect damage even in the midst of these varying conditions. Vibro-acoustic modulation is a nondestructive evaluation technique that is highly sensitive to the presence of nonlinearities. One factor that complicates the use of vibro-acoustic modulation as a structural health monitoring technique is that the amount of measured modulation has been shown to be dependent on the frequency of the...
---
paper_title: Nonlinear Elastic Wave NDE II. Nonlinear Wave Modulation Spectroscopy and Nonlinear Time Reversed Acoustics
paper_content:
This paper presents the second part of the review of Nonlinear Elastic Wave Spectroscopy (NEWS) in NDE, and describes two different methods of nonlinear NDE that provide not only damage detection but location as well. Nonlinear Wave Modulation Spectroscopy is based on the application of an ultrasonic probe signal modulated by a low frequency vibration. Damage location can be obtained by application of Impulse Modulation Techniques that exploit the modulation of a short pulse reflected from a damage feature (e.g. crack) by low frequency vibration. Nonlinear Time Reversed Acoustic methods provide the means to focus acoustic energy to any point in a solid. In combination, we are applying the focusing properties of TRA and the nonlinear properties of cracks to locate them.
---
paper_title: Noncontact fatigue crack visualization using nonlinear ultrasonic modulation
paper_content:
This paper presents a complete noncontact fatigue crack visualization technique based on nonlinear ultrasonic wave modulation and investigates the main source of nonlinear modulation generation. Two distinctive frequency input signals are created by two air-coupled transducers and the corresponding ultrasonic responses are scanned using a 3D laser Doppler vibrometer. The effectiveness of the proposed technique is tested using aluminum plates with different stages of fatigue crack formation such as micro and macro-cracks. Furthermore, the main source of nonlinear modulation is discussed based on the visualization results and the microscopic images.
---
paper_title: Impact damage detection in laminated composites by non-linear vibro-acoustic wave modulations
paper_content:
The paper presents an application of nonlinear acoustics for impact damage detection in composite laminates. Two composite plates were analysed. A low-velocity impact was used to damage one of the plates. Ultrasonic C-scan was applied to reveal the extent of barely visible impact damage. Finite element modelling was used to find vibration mode shapes of the plates and to estimate the local defect resonance frequency in the damaged plate. A delamination divergence study was performed to establish excitation parameters for nonlinear acoustics tests used for damage detection. Both composite plates were instrumented with surface-bonded, low-profile piezoceramic transducers that were used for the high-frequency ultrasonic excitation. Both an arbitrary frequency and a frequency corresponding to the local defect resonance were investigated. The low-frequency modal excitation was applied using an electromagnetic shaker. Scanning laser vibrometry was applied to acquire the vibro-acoustic responses from the plates. The study not only demonstrates that nonlinear vibro-acoustic modulations can successfully reveal the barely visible impact damage in composite plates, but also that the entire procedure can be enhanced when the ultrasonic excitation frequency corresponds to the resonant frequency of damage.
---
paper_title: N-SCAN: new vibromodulation system for detection and monitoring of cracks and other contact-type defects
paper_content:
In recent years, an innovative vibro-modulation technique has been introduced for the detection of contact-type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating the ultrasonic wave passing through it. The modulation manifests itself as additional side-band spectral components at the combination frequencies in the spectrum of the received signal. The presence of these components allows for detection and differentiation of contact-type defects from other structural and material inhomogeneities. The vibro-modulation technique has been implemented in the N-SCAN damage detection system. The system consists of a digital synthesizer, high- and low-frequency amplifiers, a magnetostrictive shaker, ultrasonic transducers and a PC-based data acquisition/processing station with N-SCAN software. The ability of the system to detect contact-type defects was experimentally verified using specimens of simple and complex geometries made of steel, aluminum, composites and other structural materials. N-SCAN proved to be very effective for nondestructive testing of full-scale structures ranging from 24-foot-long gun barrels to stainless steel pipes used in nuclear power plants. Among the advantages of the system are applicability to a wide range of structural materials and to structures with complex geometries, real-time data processing, a convenient interface for system operation, simplicity of interpretation of results, no need for sensor scanning along the structure, and on-site inspection of large structures in a fraction of the time required by conventional techniques. This paper describes the basic principles of the nonlinear vibro-modulation NDE technique, some theoretical background for the nonlinear interaction and a justification of the signal processing algorithm. It also presents examples of practical implementation and application of the technique.
---
paper_title: Structural Health Monitoring Using Guided Ultrasonic Waves
paper_content:
Maintenance of air, land and sea structures is an important engineering activity in a wide range of industries including transportation and Civil Engineering. Effective maintenance not only minimises the cost of ownership of structures but also improves safety and the perception of safety. Inspection for material/structural damage, such as fatigue cracks and corrosion in metallics or delamination in composites, is an essential part of maintenance.
---
paper_title: Vibro-acoustic modulation nondestructive evaluation technique
paper_content:
The nonlinear acoustic technique has recently been introduced as a new tool for nondestructive inspection and evaluation of fatigued, defective, and fractured materials. Various defects such as cracks, debonding, fatigue, etc. lead to anomalously high levels of nonlinearity compared with flawless structures. One of the acoustic manifestations of such nonlinearity is the modulation of ultrasound by low-frequency vibration. Two methods employing the nonlinear interaction of ultrasound and vibration were developed, namely the vibro-modulation (VM) and impact-modulation (IM) methods. The VM method employs forced harmonic vibration of the structure tested, while the IM method uses impact excitation of the structure's natural modes of vibration. Feasibility tests were carried out for different objects and demonstrated the high sensitivity of the methods for the detection of cracks in steel pipes and pins, bonding quality in titanium and thermoplastic plates used for aerospace applications, cracks in a combustion engine, adhesion flaws in bonded composite structures, and cracks and corrosion in reinforced concrete. A model of the crack that describes the modulation of sound by vibration is discussed. The developed nonlinear technique demonstrated certain advantages compared with the conventional linear acoustic technique, specifically discrimination capabilities, sensitivity, and applicability to highly inhomogeneous structures.
---
paper_title: Elastic-wave modulation approach to crack detection: Comparison of conventional modulation and higher-order interactions
paper_content:
Comparison of recent theoretical estimates with experiments has indicated that the ultimate sensitivity of the conventional modulation technique of crack detection is mainly determined by the background modulation produced by the quadratic component of the atomic nonlinearity of the matrix material. A much smaller level of masking nonlinear effects is typical of higher-order interactions due to cubic and higher-order components in the power-series expansion of the background nonlinearity of the solid. In contrast, the level of formally higher-order components originating from the nonlinearity of crack-like defects can be comparable with that of the first-order components. Such strongly increased efficiency of higher-order interactions is due to the fact that crack-like defects often demonstrate non-analytic (non-power-law) nonlinearity even for moderate acoustic amplitudes. Besides the increased level, the higher-order components arising from the non-analytic nonlinearity of cracks can demonstrate significantly different functional behavior compared to manifestations of the atomic nonlinearity. This difference can also help to discriminate the contributions of the defects and the background atomic nonlinearity. Here, we focus on the main differences between the modulation components arising from cubic terms in the power-series expansion of the atomic nonlinearity and similar components generated by the clapping Hertzian nonlinearity of inner contacts in cracks. We also examine experimental examples of higher-order modulation interactions in damaged samples. These examples clearly indicate the non-analytic character of the defects’ nonlinearity and demonstrate that the use of higher-order modulation effects can significantly improve the ultimate sensitivity and reliability of the modulation approach to the detection of crack-like defects.
---
paper_title: Nonlinear Elastic Wave Spectroscopy (NEWS) Techniques to Discern Material Damage, Part I: Nonlinear Wave Modulation Spectroscopy (NWMS)
paper_content:
The level of nonlinearity in the elastic response of materials containing structural damage is far greater than in materials with no structural damage. This is the basis for nonlinear wave diagnostics of damage, methods which are remarkably sensitive to the detection and progression of damage in materials. Nonlinear wave modulation spectroscopy (NWMS) is one exemplary method in this class of dynamic nondestructive evaluation techniques. The method focuses on the application of harmonics and sum and difference frequency to discern damage in materials. It consists of exciting a sample with continuous waves of two separate frequencies simultaneously, and inspecting the harmonics of the two waves, and their sum and difference frequencies (sidebands). Undamaged materials are essentially linear in their response to the two waves, while the same material, when damaged, becomes highly nonlinear, manifested by harmonics and sideband generation. We illustrate the method by experiments on uncracked and cracked Plexiglas and sandstone samples, and by applying it to intact and damaged engine components.
---
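To make the harmonic and sideband generation described above concrete, a minimal worked example (assuming, purely for exposition, a classical quadratic nonlinearity, whereas damaged materials are typically non-classical and hysteretic) considers the response y(t) = x(t) + beta*x(t)^2 to a two-tone excitation x(t) = A1*cos(2*pi*f1*t) + A2*cos(2*pi*f2*t). Expanding the quadratic term with product-to-sum identities gives

x(t)^2 = (A1^2 + A2^2)/2
       + (A1^2/2)*cos(2*pi*(2*f1)*t) + (A2^2/2)*cos(2*pi*(2*f2)*t)      [harmonics at 2*f1 and 2*f2]
       + A1*A2*cos(2*pi*(f2 + f1)*t) + A1*A2*cos(2*pi*(f2 - f1)*t)      [sidebands at f2 +/- f1]

so even the simplest nonlinearity places energy at the harmonics and at the sum and difference frequencies; an essentially linear, undamaged sample does not generate these components.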
paper_title: Nonlinear acoustic interaction on contact interfaces and its use for nondestructive testing
paper_content:
Recent theoretical and experimental studies have demonstrated that weakly or incompletely bonded interfaces exhibit highly nonlinear behavior. One of the acoustic manifestations of such nonlinearity is the modulation of a probing high-frequency ultrasonic wave by low-frequency vibration. The vibration varies the contact area, modulating the phase and amplitude of the higher-frequency probing wave passing through the interface. In the frequency domain, this modulation manifests itself as side-band spectral components with respect to the frequency of the probing wave. The modulation effect has been observed experimentally for various materials (metals, composites, concrete, sandstone, glass) with various types of contact-type defects (interfaces): cracks, debondings, delaminations, and microstructural material damage. Study of this phenomenon revealed a correlation between the developed modulation criterion and quantitative characteristics of the interface, such as its size, loading condition, and bonding strength. These findings have been used for the development of an innovative nondestructive evaluation technique, namely the Vibro-Acoustic Modulation Technique. Two modifications of this technique have been developed: Vibro-Modulation (VM) and Impact-Modulation (IM), employing CW and impact-induced vibrations, respectively. Examples of applications of these methods include crack detection in steel pipes, aircraft and auto parts, bonded composite plates, etc. These methods have also proved effective in the detection of cracks in concrete.
---
paper_title: Nonlinear ultrasonic wave modulation for online fatigue crack detection
paper_content:
This study presents a fatigue crack detection technique using nonlinear ultrasonic wave modulation. Ultrasonic waves at two distinctive driving frequencies are generated and corresponding ultrasonic responses are measured using permanently installed lead zirconate titanate (PZT) transducers with a potential for continuous monitoring. Here, the input signal at the lower driving frequency is often referred to as a ‘pumping’ signal, and the higher frequency input is referred to as a ‘probing’ signal. The presence of a system nonlinearity, such as a crack formation, can provide a mechanism for nonlinear wave modulation, and create spectral sidebands around the frequency of the probing signal. A signal processing technique combining linear response subtraction (LRS) and synchronous demodulation (SD) is developed specifically to extract the crack-induced spectral sidebands. The proposed crack detection method is successfully applied to identify actual fatigue cracks grown in metallic plate and complex fitting-lug specimens. Finally, the effect of pumping and probing frequencies on the amplitude of the first spectral sideband is investigated using the first sideband spectrogram (FSS) obtained by sweeping both pumping and probing signals over specified frequency ranges.
---
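The linear response subtraction and synchronous demodulation steps mentioned above can be pictured with the following Python sketch. Here linear response subtraction is approximated by subtracting a reference record acquired with the probing excitation alone, which is an assumption made for illustration; the function name, filter order and low-pass cutoff are likewise illustrative and do not reproduce the exact procedure of the cited paper.

import numpy as np
from scipy.signal import butter, filtfilt

def sideband_demodulation(resp_both, resp_probe_only, fs, f_probe, lp_cutoff=2000.0):
    # resp_both       : response with pumping and probing excitations applied together
    # resp_probe_only : reference response with the probing excitation alone
    # fs              : sampling frequency [Hz]
    # f_probe         : probing (carrier) frequency [Hz]
    # lp_cutoff       : low-pass cutoff [Hz] for the demodulated baseband signal
    residual = resp_both - resp_probe_only               # remove the (mostly linear) carrier response
    t = np.arange(len(residual)) / fs
    i_mix = residual * np.cos(2 * np.pi * f_probe * t)   # in-phase mixing
    q_mix = residual * np.sin(2 * np.pi * f_probe * t)   # quadrature mixing
    b, a = butter(4, lp_cutoff / (fs / 2.0))             # low-pass keeps components near the carrier
    baseband = filtfilt(b, a, i_mix) + 1j * filtfilt(b, a, q_mix)
    return np.abs(baseband)   # oscillation at the pumping frequency indicates crack-induced modulation

A spectrum of the returned envelope then concentrates the crack-related information at the pumping frequency and its harmonics, which is convenient when sweeping both driving frequencies, as done for the first sideband spectrogram.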
paper_title: Nonlinear acoustics for fatigue crack detection – experimental investigations of vibro-acoustic wave modulations
paper_content:
Vibro-acoustic nonlinear wave modulations are investigated experimentally in a cracked aluminum plate. The focus is on the effect of low-frequency vibration excitation on modulation intensity and associated nonlinear wave interaction mechanisms. The study reveals that energy dissipation – not opening–closing crack action – is the major mechanism behind nonlinear modulations. The consequence is that relatively weak strain fields can be used for crack detection in metallic structures. A clear link between modulations and thermo-elastic coupling is also demonstrated, providing experimental evidence for the recently proposed non-classical, nonlinear vibro-acoustic wave interaction mechanism.
---
paper_title: Vibro-Acoustic Modulation Utilizing a Swept Probing Signal for Robust Crack Detection
paper_content:
One practical issue that must be addressed prior to the implementation of a vibration-based structural health monitoring system is the influence that variations in the structure’s environmental and boundary conditions can have on the vibration response of the structure. This issue is especially prominent in the structural health monitoring of aircraft, which operate in a wide variety of different environmental conditions and possess complex structural components connected through various boundary conditions. However, many types of damage introduce nonlinear stiffness and damping restoring forces, which may be used to detect damage even in the midst of these varying conditions. Vibro-acoustic modulation is a nondestructive evaluation technique that is highly sensitive to the presence of nonlinearities. One factor that complicates the use of vibro-acoustic modulation as a structural health monitoring technique is that the amount of measured modulation has been shown to be dependent on the frequency of the...
---
paper_title: Nonlinear Elastic Wave NDE II. Nonlinear Wave Modulation Spectroscopy and Nonlinear Time Reversed Acoustics
paper_content:
This paper presents the second part of the review of Nonlinear Elastic Wave Spectroscopy (NEWS) in NDE, and describes two different methods of nonlinear NDE that provide not only damage detection but location as well. Nonlinear Wave Modulation Spectroscopy is based on the application of an ultrasonic probe signal modulated by a low frequency vibration. Damage location can be obtained by application of Impulse Modulation Techniques that exploit the modulation of a short pulse reflected from a damage feature (e.g. crack) by low frequency vibration. Nonlinear Time Reversed Acoustic methods provide the means to focus acoustic energy to any point in a solid. In combination, we are applying the focusing properties of TRA and the nonlinear properties of cracks to locate them.
---
paper_title: Nonlinear elastic wave spectroscopy identification of impact damage on a sandwich plate
paper_content:
The fragility of composite materials under impact loading limits their application in aircraft structures. There is a need to develop reliable monitoring devices capable of localizing and assessing impact damage. To this aim, a new technique, non-linear elastic wave spectroscopy (NEWS), developed for geophysics applications, is investigated and presented in this paper. The NEWS-based damage detection technique consists in the analysis of the spectrum of a signal acquired from the structure under investigation excited by a bi-harmonic signal. In pristine conditions, the signal spectrum presents two peaks at the excitation frequencies. In the presence of damage, the material starts to behave non-linearly and sidebands and harmonics of the excitation frequencies are generated. In this study, experimental findings on a sandwich plate are presented in order to assess the capability of the NEWS technique to detect impact damage.
---
paper_title: Vibro-acoustic modulation technique for micro-crack detection in pipeline
paper_content:
Because of the poor sensitivity of traditional ultrasonic techniques to micro-cracks, a nonlinear ultrasonic technique, the vibro-acoustic modulation technique, is used for micro-crack detection in pipes. The influence of the frequency and amplitude of vibration on the modulation index (MI) is experimentally investigated. The experimental results prove that the vibro-acoustic modulation technique can be used for crack detection in pipes. The modulation index is influenced by the frequency and amplitude of vibration; it is therefore important to choose appropriate parameters in order to obtain higher sensitivity in crack detection.
---
paper_title: Nonlinear acoustics with low-profile piezoceramic excitation for crack detection in metallic structures
paper_content:
Structural damage detection is one of the major maintenance activities in a wide range of industries. A variety of different methods have been developed for detection of fatigue cracks in metallic structures over the last few decades. This includes techniques based on stress/acoustic waves propagating in monitored structures. Classical ultrasonic techniques used in nondestructive testing and evaluation are based on linear amplitude and/or phase variations of reflected, transmitted or scattered waves. In recent years a range of different techniques utilizing nonlinear phenomena in vibration and acoustic signals have been developed. It appears that these techniques are more sensitive to damage alterations than other techniques used for damage detection based on linear behaviour. The paper explores the use of low-profile piezoceramic actuators with low-frequency excitation in nonlinear acoustics. The method is used to detect a fatigue crack in an aluminium plate. The results are compared with modal/vibration excitation performed with an electromagnetic shaker. The study shows that piezoelectric excitation with surface-bonded low-profile piezoceramic transducers is suitable for crack detection based on nonlinear acoustics.
---
paper_title: Comparison between a type of vibro-acoustic modulation and damping measurement as NDT techniques
paper_content:
The sensitivities of the conventional damping test and a particular implementation of the emerging vibro-acoustic modulation NDT technique have been compared on three types of cracked specimens: (1) a set of mild steel beams cracked in the laboratory, (2) a perspex beam also cracked in the laboratory and (3) an industrial component made of a nickel-based alloy. The latter was forged and cracked in the forging process. The comparison showed very similar performances on the specimens used. Both techniques work best for lightly damped specimens and in setups such that the influence of the support can be minimised. Their sensitivity is severely affected when these two conditions are not satisfied, which significantly lowers their appeal for many practical situations.
---
paper_title: N‐Scan®: New Vibro‐Modulation System for Crack Detection, Monitoring and Characterization
paper_content:
In recent years, an innovative vibro‐modulation technique has been introduced for the detection of contact‐type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating a passing ultrasonic wave. The modulation manifests itself as additional side‐band spectral components with the combination frequencies in the spectrum of the received signal. The presence of these components allows for the detection and differentiation of the contact‐type defects from other structural and material inhomogeneities. The vibro‐modulation technique has been implemented in the N‐SCAN® damage detection system, providing a cost-effective solution to complex NDT problems. N‐SCAN® proved to be very effective for damage detection and characterization in structures and structural components of simple and complex geometries made of steel, aluminum, ...
---
paper_title: A study of the vibro-acoustic modulation technique for the detection of cracks in metals
paper_content:
One implementation of the vibro-modulation technique involves monitoring the amplitude modulation of an ultrasonic vibration field transmitted through a cracked specimen undergoing an additional low frequency structural vibration. If the specimen is undamaged and appropriately supported, the two vibration fields do not interact. This phenomenon could be used as the basis for a nondestructive testing technique. In this paper, the sensitivity of the technique is investigated systematically on a set of mild steel beams with cracks of different sizes and shapes. A damage index was measured for each crack. The correlation obtained between the crack size and the strength of the modulation is fairly poor. The technique proved extremely sensitive to the initial state of opening and closing of the crack and to the setup due to the modulating effects of contacts between the specimens and the supports. A simple model is proposed which explains the main features observed and approximately predicts the level of sideband obtained experimentally.
---
paper_title: Vibro‐Acoustic Modulation NDE Technique. Part 2: Experimental Study
paper_content:
The crack detection capability of the Vibro‐Acoustic Modulation NDE technique is experimentally investigated over a broad range of ultrasonic frequencies. The generation of low frequency modes of the samples is performed using harmonic excitation applied by a shaker or by tapping. For both excitation methods the effect of different strain levels on the sideband activity is analyzed. The implementation of practical and reliable supports for testing the specimens is also considered in this study, as well as examples of results on industrial components.
---
paper_title: N-SCAN: new vibromodulation system for detection and monitoring of cracks and other contact-type defects
paper_content:
In recent years, an innovative vibro-modulation technique has been introduced for the detection of contact-type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating the ultrasonic wave passing through it. The modulation manifests itself as additional side-band spectral components at the combination frequencies in the spectrum of the received signal. The presence of these components allows for detection and differentiation of contact-type defects from other structural and material inhomogeneities. The vibro-modulation technique has been implemented in the N-SCAN damage detection system. The system consists of a digital synthesizer, high- and low-frequency amplifiers, a magnetostrictive shaker, ultrasonic transducers and a PC-based data acquisition/processing station with N-SCAN software. The ability of the system to detect contact-type defects was experimentally verified using specimens of simple and complex geometries made of steel, aluminum, composites and other structural materials. N-SCAN proved to be very effective for nondestructive testing of full-scale structures ranging from 24-foot-long gun barrels to stainless steel pipes used in nuclear power plants. Among the advantages of the system are applicability to a wide range of structural materials and to structures with complex geometries, real-time data processing, a convenient interface for system operation, simplicity of interpretation of results, no need for sensor scanning along the structure, and on-site inspection of large structures in a fraction of the time required by conventional techniques. This paper describes the basic principles of the nonlinear vibro-modulation NDE technique, some theoretical background for the nonlinear interaction and a justification of the signal processing algorithm. It also presents examples of practical implementation and application of the technique.
---
paper_title: Impact damage detection in composite laminates using nonlinear acoustics
paper_content:
The paper demonstrates the application of nonlinear acoustics for impact damage detection in composite laminates. A composite plate is monitored for damage resulting from a low-velocity impact. The plate is instrumented with bonded low-profile piezoceramic transducers. A high-frequency acoustic wave is introduced through one transducer and picked up by a different transducer. A low-frequency flexural modal excitation is introduced to the plate at the same time using an electromagnetic shaker. The damage induced by impact is exhibited in a power spectrum of the acoustic response by a pattern of sidebands around the main acoustic harmonic. The results show that the amplitude of the sidebands is related to the severity of damage. The study also investigates the effect of boundary conditions on the results.
---
paper_title: Damage detection in an aircraft foam sandwich panel using nonlinear elastic wave spectroscopy
paper_content:
A novel damage detection technique, based on the nonlinear elastic wave spectroscopy (NEWS) approach, is presented in this paper. This technique detects the presence of structural changes by monitoring the harmonics and sidebands generated by the interaction between a low-frequency and a high-frequency harmonic excitation signal, due to the nonlinear material behaviour caused by the presence of damage. The proposed methodology was tested on a sandwich plate after being impacted by a foreign object under low-velocity impact conditions. The high-frequency signal was modulated in amplitude and the changes of the structural response, in terms of harmonic and sideband amplitudes, were recorded. The spectra and time-frequency representation (TFR) were evaluated using wavelet transformations (WT). The experimental harmonic and sideband amplitudes were in close agreement with the theoretical nonlinear behaviour of damaged materials. More specifically, the 3rd harmonics of the low-frequency signal component showed a quadratic dependence on the low-frequency response amplitude, as predicted by the theory for a hysteretic nonlinear material. In contrast, the experimental second sidebands of the high-frequency signal were bounded by curves representing the second sideband behaviour for a purely hysteretic material (lower bound) and a classical nonlinear material (upper bound). In particular, for small low-frequency response amplitudes, the experimental second sidebands tended to be closer to the classical nonlinear material curve, while for the largest amplitudes investigated the response resembled hysteretic material behaviour. The results showed that the proposed methodology was capable of successfully detecting the presence of impact damage and can be used as a first assessment of the presence of damage in aircraft structures where the presence of damage should be quickly estimated.
---
paper_title: Detecting barely visible impact damage detection on aircraft composites structures
paper_content:
Composites have many advantages as aircraft structural materials and for this reason their use is becoming increasingly widespread. The fragility of composite materials under impact loading limits their application in aircraft structures. In particular, low-velocity impacts can cause a significant amount of delamination, even though the only external indication of damage may be a very small surface indentation. This type of damage is often referred to as barely visible impact damage (BVID), and it can cause significant degradation of structural properties. If the damaged laminate is subjected to high compressive loading, buckling failure may occur. Therefore, there is a need to develop improved and more efficient means of detecting such damage. In this work a new NDT approach is presented, based on monitoring the nonlinear elastic behaviour of damaged material. Two methods were investigated: single-mode nonlinear resonance ultrasound (NRUS) and nonlinear wave modulation spectroscopy (NWMS). The developed methods were tested on different composite plates with unknown mechanical properties and damage size and magnitude. The presence of the nonlinearities introduced by the damage was clearly identified using both techniques. The results showed that the proposed methodology appears to be highly sensitive to the presence of damage, with very promising future applications.
---
paper_title: Crack detection technique for operating wind turbine blades using Vibro-Acoustic Modulation
paper_content:
This article presents a new technique for identifying cracks in wind turbine blades undergoing operational loads using the Vibro-Acoustic Modulation technique. Vibro-Acoustic Modulation utilizes a low-frequency pumping excitation signal in conjunction with a high-frequency probing excitation signal to create the modulation that is used to identify cracks. Wind turbines provide the ideal conditions in which Vibro-Acoustic Modulation can be utilized because wind turbines experience large low-frequency structural vibrations during operation which can serve as the low-frequency pumping excitation signal. In this article, the theory for the vibro-acoustic technique is described, and the proposed crack detection technique is demonstrated with Vibro-Acoustic Modulation experiments performed on a small Whisper 100 wind turbine in operation. The experimental results are also compared with two other conventional vibro-acoustic techniques in order to validate the new technique. Finally, a computational study is demo...
---
paper_title: Nonlinear elastic wave spectroscopy identification of impact damage on a sandwich plate
paper_content:
The fragility of composite materials under impact loading limits their application in aircraft structures. There is a need to develop reliable monitoring devices capable of localizing and assessing impact damage. To this aim, a new technique, non-linear elastic wave spectroscopy (NEWS), developed for geophysics applications, is investigated and presented in this paper. The NEWS-based damage detection technique consists in the analysis of the spectrum of a signal acquired from the structure under investigation excited by a bi-harmonic signal. In pristine conditions, the signal spectrum presents two peaks at the excitation frequencies. In the presence of damage, the material starts to behave non-linearly and sidebands and harmonics of the excitation frequencies are generated. In this study, experimental findings on a sandwich plate are presented in order to assess the capability of the NEWS technique to detect impact damage.
---
paper_title: N‐Scan®: New Vibro‐Modulation System for Crack Detection, Monitoring and Characterization
paper_content:
In recent years, an innovative vibro‐modulation technique has been introduced for the detection of contact‐type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating a passing ultrasonic wave. The modulation manifests itself as additional side‐band spectral components with the combination frequencies in the spectrum of the received signal. The presence of these components allows for the detection and differentiation of the contact‐type defects from other structural and material inhomogeneities. The vibro‐modulation technique has been implemented in the N‐SCAN® damage detection system, providing a cost-effective solution to complex NDT problems. N‐SCAN® proved to be very effective for damage detection and characterization in structures and structural components of simple and complex geometries made of steel, aluminum, ...
---
paper_title: Delamination detection in composites using wave modulation spectroscopy with a novel active nonlinear acousto-ultrasonic piezoelectric sensor
paper_content:
A novel structural health monitoring (SHM) methodology, based on nonlinear wave modulation spectroscopy, is presented for the detection of delamination cracks in composites. The basic element is a novel active nonlinear acousto-ultrasonic piezoelectric sensor enabling low cost and a wide-frequency operational bandwidth. The active sensor configuration involves two piezoceramic wafer actuators, each one excited with a low- and high-frequency signal respectively, and a piezoceramic sensor, all permanently bonded on the tested structure. Experiments are conducted on two sets of composite strips containing delamination cracks of different sizes. Measured results first illustrate the efficiency of the nonlinear ultrasonics methodology in detecting delamination cracks, as well as the potential and benefits of the new active sensor. The sensitivity of the active sensor response to the crack size and the applied high-frequency carrier signals at the actuators varies at various frequency and voltage levels indicating t...
---
paper_title: Impact Damage Detection in Composite Chiral Sandwich Panels
paper_content:
This paper demonstrates impact damage detection in a composite sandwich panel. The panel is built from a chiral honeycomb and two composite skins. Chiral structures are a subset of auxetic solids exhibiting a counterintuitive deformation mechanism and rotational but not reflective symmetry. Damage detection is performed using nonlinear acoustics, involving the combined vibro-acoustic interaction of a high-frequency ultrasonic wave and low-frequency vibration excitation. High- and low-frequency excitations are introduced to the panel using a low-profile piezoceramic transducer and an electromagnetic shaker, respectively. Vibro-acoustic modulated responses are measured using laser vibrometry. The methods used for impact damage detection clearly reveal de-bonding in the composite panel. The high-frequency weak ultrasonic wave is also modulated by the low-frequency strong vibration wave when nonlinear acoustics is used for damage detection. As a result, frequency sidebands can be observed around the main acoustic harmonic in the spectrum of the ultrasonic signal.
---
paper_title: Impact damage detection in composite chiral sandwich panels using nonlinear vibro-acoustic modulations
paper_content:
This paper reports an application of nonlinear acoustics to impact damage detection in a composite chiral sandwich panel. The panel is built from a chiral honeycomb and two composite skins. High-frequency ultrasonic excitation and low-frequency modal excitation were used to observe nonlinear modulations in ultrasonic waves due to structural damage. Low-profile, surface-bonded piezoceramic transducers were used for ultrasonic excitation. Non-contact laser vibrometry was applied for ultrasonic sensing. The work presented focuses on the analysis of the modulation intensities and damage-related nonlinearities. The paper demonstrates that the method can be used for impact damage detection in composite chiral sandwich panels.
---
paper_title: Nonlinear wave structural health monitoring method using an active nonlinear piezoceramic sensor for matrix cracking detection in composites
paper_content:
A structural health monitoring methodology, based on nonlinear wave modulation spectroscopy, is presented and aims to detect matrix cracks in composites. Experiments were conducted on cross-ply carbon/epoxy strips containing matrix cracks, induced via three-point bending. Damage in all tested specimens was categorized according to the acoustic emission hits recorded during loading. The nonlinear ultrasonics methodology is applied via an active nonlinear acousto-ultrasonic piezoelectric sensor, enabling low-cost and wide-frequency operational bandwidth. This active sensor configuration involves two piezoceramic wafer actuators, one excited with a low- and the other with a high-frequency signal, and a piezoceramic sensor, all permanently bonded on the tested structure. The sensitivity of the nonlinear active sensor response at specific high carrier frequencies is depicted and damage indices are proposed. The experimental results illustrate the effectiveness of the nonlinear ultrasonic wave mixing method in ...
---
paper_title: Impact damage detection in laminated composites by non-linear vibro-acoustic wave modulations
paper_content:
The paper presents an application of nonlinear acoustics for impact damage detection in composite laminates. Two composite plates were analysed. A low-velocity impact was used to damage one of the plates. Ultrasonic C-scan was applied to reveal the extent of barely visible impact damage. Finite element modelling was used to find vibration mode shapes of the plates and to estimate the local defect resonance frequency in the damaged plate. A delamination divergence study was performed to establish excitation parameters for nonlinear acoustics tests used for damage detection. Both composite plates were instrumented with surface-bonded, low-profile piezoceramic transducers that were used for the high-frequency ultrasonic excitation. Both an arbitrary frequency and a frequency corresponding to the local defect resonance were investigated. The low-frequency modal excitation was applied using an electromagnetic shaker. Scanning laser vibrometry was applied to acquire the vibro-acoustic responses from the plates. The study not only demonstrates that nonlinear vibro-acoustic modulations can successfully reveal the barely visible impact damage in composite plates, but also that the entire procedure can be enhanced when the ultrasonic excitation frequency corresponds to the resonant frequency of damage.
---
paper_title: Nonlinear Elastic Wave Spectroscopy (NEWS) Techniques to Discern Material Damage, Part I: Nonlinear Wave Modulation Spectroscopy (NWMS)
paper_content:
The level of nonlinearity in the elastic response of materials containing structural damage is far greater than in materials with no structural damage. This is the basis for nonlinear wave diagnostics of damage, methods which are remarkably sensitive to the detection and progression of damage in materials. Nonlinear wave modulation spectroscopy (NWMS) is one exemplary method in this class of dynamic nondestructive evaluation techniques. The method focuses on the application of harmonics and sum and difference frequency to discern damage in materials. It consists of exciting a sample with continuous waves of two separate frequencies simultaneously, and inspecting the harmonics of the two waves, and their sum and difference frequencies (sidebands). Undamaged materials are essentially linear in their response to the two waves, while the same material, when damaged, becomes highly nonlinear, manifested by harmonics and sideband generation. We illustrate the method by experiments on uncracked and cracked Plexiglas and sandstone samples, and by applying it to intact and damaged engine components.
---
paper_title: Acoustic techniques for concrete evaluation: Improvements, comparisons and consistency
paper_content:
In civil engineering, the testing of structures allows a large number of chemical, mechanical and micro–macro structural parameters to be determined. Within the scope of ultrasonic Non-Destructive Testing (NDT), which is often used to evaluate concrete, many different techniques can be employed to generate, propagate, and receive the ultrasound waves, and to process the corresponding signals. The SENSO project, developed by nine partners, provided an opportunity to test a large number of principles related to the material’s characteristics and properties. Concerning the acoustic techniques, the first approach was based on linear propagation with bulk waves and surface waves, or with backscattered waves. The second method involved nonlinear analysis using wave modulation. One aim of the SENSO project was to compare these different techniques, in order to select the one with the most promising performance in the laboratory, before transferring the measurement techniques to the field. The objectives were to evaluate the porosity, degree of saturation, modulus of elasticity and compressive strength of concrete. In the present paper, the measurement techniques and measured parameters are presented, together with the improvements developed within the context of the SENSO project. The methodology used to classify and select the ultrasonic techniques is described. It takes into account the sensitivity of the method to changes in the material and the associated measurement uncertainties. Finally, the most significant results for evaluating the porosity, mechanical characteristics and degree of saturation of various concretes are developed. The consistency of the results of the measurements is analysed. The advantages and drawbacks of each technique are discussed.
---
paper_title: Observation of the “Luxemburg–Gorky effect” for elastic waves
paper_content:
An experimental observation of a new nonlinear-modulation effect for longitudinal elastic waves is reported. The phenomenon is a direct elastic-wave analogy with the so-called Luxemburg–Gorky (L–G) effect known for over 60 years for radio waves propagating in the ionosphere. The effect consists of the appearance of modulation of a weaker, initially non-modulated wave propagating in a nonlinear medium in the presence of an amplitude-modulated stronger wave that produces perturbations in the medium properties on the scale of its modulation frequency. The reported transfer of modulation from one elastic wave to another was observed in a resonator cut from a glass rod containing a few small cracks. The presence of such small damage drastically enhances the material nonlinearity compared to the elastic atomic nonlinearity of homogeneous solids, so that pronounced L–G type cross-modulation could be observed at strain magnitudes in the stronger wave down to 10^-7 and smaller. The main features of the effect are pointed out and the physical mechanism of the observed phenomena is discussed.
---
paper_title: Crack detection in glass plates using nonlinear acoustics with low-profile piezoceramic transducers
paper_content:
The paper demonstrates the application of nonlinear acoustics for crack detection in a glass plate. FE analysis is performed to establish structural resonances of the glass plate. Simulation analysis and experimental tests are used to select ultrasonic frequencies. Finally, nonlinear acoustic tests are performed to detect cracks. A high-frequency ultrasonic signal is introduced to the glass plate. At the same time the plate is modally excited using selected resonance frequencies. Surface-bonded, low-profile piezoceramic transducers are used for low- and high-frequency excitation. The experiments lead to vibro-acoustic wave modulations. The presence of modulation indicates damage and the intensity of modulation describes its severity.
---
paper_title: Nonlinear Elastic Wave NDE II. Nonlinear Wave Modulation Spectroscopy and Nonlinear Time Reversed Acoustics
paper_content:
This paper presents the second part of the review of Nonlinear Elastic Wave Spectroscopy (NEWS) in NDE, and describes two different methods of nonlinear NDE that provide not only damage detection but location as well. Nonlinear Wave Modulation Spectroscopy is based on the application of an ultrasonic probe signal modulated by a low frequency vibration. Damage location can be obtained by application of Impulse Modulation Techniques that exploit the modulation of a short pulse reflected from a damage feature (e.g. crack) by low frequency vibration. Nonlinear Time Reversed Acoustic methods provide the means to focus acoustic energy to any point in a solid. In combination, we are applying the focusing properties of TRA and the nonlinear properties of cracks to locate them.
---
paper_title: Micro-damage diagnostics using nonlinear elastic wave spectroscopy (NEWS)
paper_content:
Nonlinear elastic wave spectroscopy (NEWS) represents a class of powerful tools which explore the dynamic nonlinear stress–strain features in the compliant bond system of a micro-inhomogeneous material and link them to micro-scale damage. Hysteresis and nonlinearity in the constitutive relation (at the micro-strain level) result in acoustic and ultrasonic wave distortion, which gives rise to changes in the resonance frequencies as a function of drive amplitude, generation of accompanying harmonics, nonlinear attenuation, and multiplication of waves of different frequencies. The sensitivity of nonlinear methods to the detection of damage features (cracks, flaws, etc.) is far greater than can be obtained with linear acoustical methods (measures of wavespeed and wave dissipation). We illustrate two recently developed NEWS methods, and compare the results for both techniques on roofing tiles used in building construction.
---
paper_title: N‐Scan®: New Vibro‐Modulation System for Crack Detection, Monitoring and Characterization
paper_content:
In recent years, an innovative vibro‐modulation technique has been introduced for the detection of contact‐type interfaces such as cracks, debondings, and delaminations. The technique utilizes the effect of nonlinear interaction of ultrasound and vibrations at the interface of the defect. Vibration varies the contact area of the interface, modulating a passing ultrasonic wave. The modulation manifests itself as additional side‐band spectral components with the combination frequencies in the spectrum of the received signal. The presence of these components allows for the detection and differentiation of the contact‐type defects from other structural and material inhomogeneities. The vibro‐modulation technique has been implemented in the N‐SCAN® damage detection system, providing a cost-effective solution to complex NDT problems. N‐SCAN® proved to be very effective for damage detection and characterization in structures and structural components of simple and complex geometries made of steel, aluminum, ...
---
paper_title: Potential of Nonlinear Ultrasonic Indicators for Nondestructive Testing of Concrete
paper_content:
In the context of a growing need for safety and reliability in Civil Engineering, acoustic methods of nondestructive testing provide answers to a real industrial need. Linear indicators (wave speed and attenuation) exhibit limited sensitivity, unlike nonlinear ones, which usually have a far greater dynamic range. This paper illustrates the potential of these indicators and evaluates their suitability for in situ applications. Concrete, a structurally heterogeneous and volumetrically mechanically damaged material, is an example of a class of materials that exhibit strong multiple scattering as well as significant elastic nonlinear response. In the context of stress monitoring in pre-stressed structures, we show that intense scattering can be applied to robustly determine velocity changes at progressively increasing applied stress using coda wave interferometry and thereby extract nonlinear coefficients. In a second part, we demonstrate the high sensitivity of nonlinear parameters to thermal damage compared with linear ones. Then, the influence of water content and porosity on these indicators is quantified, allowing the effect of damage to be decoupled from environmental or structural parameters.
---
paper_title: Dynamic nonlinear elasticity in geomaterials
paper_content:
This invention relates to a lens for presbyopia free from distortional aberration for use in correcting an old-age eyesight. In the lens for presbyopia with a front lens surface having a smaller radius of curvature than a rear lens face, a lens surface has a refractive power successively corrected as the lens surface extends radially outwardly away from a geometric center of the lens so that lateral magnifications for all principal rays always equal a lateral magnification for a paraxial range. This construction is entirely free from distortional aberration, and secures a greatly enlarged range of distinct vision.
---
paper_title: Impact damage detection in light composite sandwich panels using piezo-based nonlinear vibro-acoustic modulations
paper_content:
The nonlinear vibro-acoustic modulation technique is used for impact damage detection in light composite sandwich panels. The method utilizes piezo-based low-frequency vibration and high-frequency ultrasonic excitations. The work presented focuses on the analysis of modulation intensity. The results show that the method can be used for impact damage detection, reliably separating damage-related vibro-acoustic modulations from other intrinsic nonlinear modulations.
---
paper_title: Sensor validation for smart structures
paper_content:
Structures with a large number of embedded sensors are becoming more common, and this refined spatial information can be used to advantage in damage location and model validation. These sensors could be accelerometers, strain gauges, piezoceramic patches, PVDF film sensors, or optical fibre sensors. This approach requires that the sensors are functioning correctly, which on a smart structure operating in the field should be continuously and automatically monitored. This paper considers possible approaches to sensor validation, based on the assumption that a model of the structure is available. The aim is to make use of the natural data redundancy, since there will be more sensors than modes in the data. The validation approaches considered are based on hypothesis testing using a number of techniques, such as modal filtering. The methods are demonstrated on simple examples that exercise their strengths and weaknesses.
---
paper_title: Sensor validation for structural systems with multiplicative sensor faults
paper_content:
Structures with a large number of sensors and actuators are becoming more common, with applications that vary from active control to damage location. This large amount of spatial information should be used to the advantage of the analyst, which requires, for correct functioning, the continuous in-operation monitoring of the sensors. Errors introduced by faulty sensors can cause a loss of performance and erroneous conclusions. Most of the currently available techniques seem to work well under the assumption of additive fault conditions, while they seem to fail under multiplicative faults. In this work, we propose a new residual generation and evaluation technique for sensor validation that works for both additive and multiplicative faults. A correlation index is proposed which isolates the faulty sensors. The approach is experimentally validated on a subframe structure.
---
paper_title: Handbook of Nondestructive Evaluation
paper_content:
Chapter 1: Introduction to Nondestructive Testing
Chapter 2: Discontinuities - Origins and Classification
Chapter 3: Visual Testing
Chapter 4: Penetrant Testing
Chapter 5: Magnetic Particle Testing
Chapter 6: Radiographic Testing (New: Digital Radiography)
Chapter 7: Ultrasonic Testing (New: Phased Array Ultrasonics, Guided Wave Ultrasonics)
Chapter 8: Eddy Current Testing
Chapter 9: Thermal Infrared Testing
Chapter 10: Acoustic Emission Testing
---
| Title: Nonlinear vibroacoustic wave modulations for structural damage detection: an overview
Section 1: Introduction
Description 1: Introduce the importance of structural integrity, the role of nondestructive testing (NDT), the limitations of existing techniques, and the promising role of nonlinear vibroacoustic wave modulation (VAM) for structural damage detection.
Section 2: Background
Description 2: Provide historical context and the theoretical foundations of classical and nonclassical nonlinear acoustics that underscore the nonlinear VAM technique.
Section 3: Nonlinear Vibroacoustic Wave Modulations
Description 3: Explain the different experimental arrangements and the basic principles of the nonlinear VAM technique including the use of pump and probe waves.
Section 4: Modeling
Description 4: Discuss the theoretical and numerical modeling approaches used to understand and simulate nonlinear acoustics, focusing on VAM.
Section 5: Sensors and Actuators
Description 5: Describe the various types of excitation and sensing methods used in VAM experiments, detailing the configurations and specific actuators applied.
Section 6: Signal Processing
Description 6: Detail the signal processing techniques employed to analyze the response signals in VAM, along with commonly used damage indicators and their computation.
Section 7: Application Examples
Description 7: Present examples of VAM applications across various materials, geometries, and types of damage, showcasing the technique's versatility and effectiveness.
Section 8: Selected Application Case
Description 8: Provide a detailed case study demonstrating the VAM technique applied to composite sandwich panels, discussing experimental setup, results, and implications.
Section 9: Summary and Final Conclusions
Description 9: Summarize the findings, discuss the current state of the VAM technique, its potential, challenges, and future research directions. |
A review of learning vector quantization classifiers | 12 | ---
paper_title: A novel kernel prototype-based learning algorithm
paper_content:
We propose a novel kernel prototype-based learning algorithm, called the kernel generalized learning vector quantization (KGLVQ) algorithm, which can significantly improve the classification performance of the original generalized learning vector quantization algorithm in complex pattern classification tasks. In addition, the KGLVQ can also serve as a good general kernel learning framework for further investigation.
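In kernelized LVQ variants such as the one described here, prototypes live in the kernel-induced feature space and are represented as linear combinations of mapped training points, so distances can be evaluated with kernel values alone. The snippet below is a generic sketch of that distance computation and not the exact KGLVQ update rule; the Gaussian kernel and the coefficient vector gamma are assumptions chosen for illustration.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def feature_space_distance(x, X_train, gamma, sigma=1.0):
    # squared distance ||phi(x) - w||^2 for a prototype w = sum_m gamma[m] * phi(X_train[m]),
    # computed purely through kernel evaluations (the "kernel trick")
    k_xx = rbf_kernel(x[None, :], x[None, :], sigma)[0, 0]
    k_xm = rbf_kernel(x[None, :], X_train, sigma)[0]
    K_mm = rbf_kernel(X_train, X_train, sigma)
    return k_xx - 2.0 * gamma @ k_xm + gamma @ K_mm @ gamma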
---
paper_title: Classification of respiratory sounds based on wavelet packet decomposition and learning vector quantization
paper_content:
In this paper, a wavelet packet-based method is used for detection of abnormal respiratory sounds. The sound signal is divided into segments, and a feature vector for classification is formed using the results of the search for the best wavelet packet decomposition. The segments are classified as containing crackles, wheezes or normal lung sounds, using Learning Vector Quantization. The method is tested using a small set of real patient data which was also analysed by an expert observer. The preliminary results are promising, although not yet good enough for clinical use.
---
paper_title: Evaluation of learning vector quantization to classify cotton trash
paper_content:
The cotton industry needs a method to identify the type of trash (nonlint material (NLM)) in cotton samples; learning vector quantization (LVQ) is evaluated as that method. LVQ is a classification technique that defines reference vectors (group prototypes) in an N-dimensional feature space (R^N). Normalized trash object features extracted from images of compressed cotton samples define R^N. An unknown NLM object is given the label of the closest reference vector (as defined by Euclidean distance). Different normalized feature spaces and NLM classifications are evaluated and accuracies reported for correctly identifying the NLM type. LVQ is used to partition cotton trash into: (1) bark (B), leaf (L), pepper (P), or stick (S); (2) bark and nonbark (N); or (3) bark, combined leaf and pepper (LP), or stick. Percentage accuracies for correctly identifying 139 pieces of test trash placed on laboratory-prepared samples for the three scenarios are (B:95, L:87, P:100, S:88), (B:100, N:97), and (B:95, LP:99, S:88), respectively. Also, LVQ results are compared to previous work using backpropagating neural networks. © 1997 Society of Photo-Optical Instrumentation Engineers.
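The decision rule used here, assigning an unknown object the label of its closest reference vector, is the core of every LVQ classifier and is cheap to compute. A minimal sketch with Euclidean distance follows; the prototype positions and class names are made up for illustration and do not correspond to the trained codebooks of the study.

import numpy as np

def nearest_prototype_label(x, prototypes, labels):
    # return the label of the reference vector closest to x (Euclidean distance)
    distances = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(distances))]

# toy codebook: two prototypes per class in a 2-D normalized feature space
prototypes = np.array([[0.2, 0.8], [0.3, 0.7],   # class "bark"
                       [0.8, 0.2], [0.7, 0.3]])  # class "stick"
labels = np.array(["bark", "bark", "stick", "stick"])
print(nearest_prototype_label(np.array([0.25, 0.75]), prototypes, labels))  # -> "bark"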
---
paper_title: Kernel robust soft learning vector quantization
paper_content:
Prototype-based classification schemes offer very intuitive and flexible classifiers with the benefit of easy interpretability of the results and scalability of the model complexity. Recent prototype-based models such as robust soft learning vector quantization (RSLVQ) have the benefit of a solid mathematical foundation of the learning rule and decision boundaries in terms of probabilistic models and corresponding likelihood optimization. In its original form, they can be used for standard Euclidean vectors only. In this contribution, we extend RSLVQ towards a kernelized version which can be used for any positive semidefinite data matrix. We demonstrate the superior performance of the technique, kernel RSLVQ, in a variety of benchmarks where results competitive or even superior to state-of-the-art support vector machines are obtained.
---
paper_title: A supervised growing neural gas algorithm for cluster analysis
paper_content:
In this paper, a prototype-based supervised clustering algorithm is proposed. The proposed algorithm, called the Supervised Growing Neural Gas algorithm (SGNG), incorporates several techniques from some unsupervised GNG algorithms such as the adaptive learning rates and the cluster repulsion mechanisms of the Robust Growing Neural Gas algorithm, and the Type Two Learning Vector Quantization (LVQ2) technique. Furthermore, a new prototype insertion mechanism and a clustering validity index are proposed. These techniques are designed to utilize class labels of the training data to guide the clustering. The SGNG algorithm is capable of clustering adjacent regions of data objects labeled with different classes, formulating topological relationships among prototypes and automatically determining the optimal number of clusters using the proposed validity index. To evaluate the effectiveness of the SGNG algorithm, two experiments are conducted. The first experiment uses two synthetic data sets to graphically illustrate the potential with respect to growing ability, ability to cluster adjacent regions of different classes, and ability to determine the optimal number of prototypes. The second experiment evaluates the effectiveness using the UCI benchmark data sets. The results from the second experiment show that the SGNG algorithm performs better than other supervised clustering algorithms for both cluster impurities and total running times.
---
paper_title: A new generalized LVQ algorithm via harmonic to minimum distance measure transition
paper_content:
We present a novel generalized learning vector quantization (LVQ) framework called the harmonic to minimum generalized LVQ algorithm (H2M-GLVQ). By incorporating a distance measure transition from the harmonic average distance to the minimum distance, the H2M-GLVQ cost function gradually changes from a soft model to a hard model. Our proposed method can, at the early training stage, effectively tackle the initialization sensitivity problem associated with the original generalized LVQ algorithm, while convergence of the algorithm is ensured by the hard model in the later training stage. Experimental results have shown the superior performance of the H2M-GLVQ algorithm over generalized LVQ and one of its variants on some artificial multi-modal datasets.
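One convenient way to picture a smooth transition from a harmonic-average distance to a minimum distance, in the spirit of what this abstract describes, is a negative-power mean of the prototype distances whose exponent is annealed during training; this is offered purely as an illustrative construction under that assumption, not as the paper's exact formulation:

D_p(\mathbf{x}) = \Big( \tfrac{1}{K} \sum_{k=1}^{K} d_k(\mathbf{x})^{-p} \Big)^{-1/p}

For p = 1 this is the harmonic mean of the distances d_k to the K prototypes, and as p grows it approaches min_k d_k, so increasing p moves the cost function from a soft assignment towards the hard, winner-based assignment.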
---
paper_title: Entropy-constrained learning vector quantization algorithms and their application in image compression
paper_content:
This paper presents entropy-constrained learning vector quantization (ECLVQ) algorithms and their application in image compression. The development of these algorithms relies on reformulation, which is a powerful new methodology that essentially establishes a link between learning vector quantization and clustering algorithms developed using alternating optimization. ECLVQ algorithms are developed in this paper by reformulating entropy-constrained fuzzy clustering (ECFC) algorithms, which were developed by minimizing an objective function incorporating the partition entropy and the average distortion between the feature vectors and their prototypes. The proposed algorithms allow the gradual transition from a maximally fuzzy partition to a nearly crisp partition of the feature vectors during the learning process. This paper presents two alternative implementations of the proposed algorithms, which differ in terms of the strategy employed for updating the prototypes during learning. The proposed algorithms are tested and evaluated on the design of codebooks used for image data compression.
---
paper_title: An Axiomatic Approach to Soft Learning Vector Quantization and Clustering
paper_content:
This paper presents an axiomatic approach to soft learning vector quantization (LVQ) and clustering based on reformulation. The reformulation of the fuzzy c-means (FCM) algorithm provides the basis for reformulating entropy-constrained fuzzy clustering (ECFC) algorithms. According to the proposed approach, the development of specific algorithms reduces to the selection of a generator function. Linear generator functions lead to the FCM and fuzzy learning vector quantization algorithms while exponential generator functions lead to ECFC and entropy-constrained learning vector quantization algorithms. The reformulation of LVQ and clustering algorithms also provides the basis for developing uncertainty measures that can identify feature vectors equidistant from all prototypes. These measures are employed by a procedure developed to make soft LVQ and clustering algorithms capable of identifying outliers in the data set. This procedure is evaluated by testing the algorithms generated by linear and exponential generator functions on speech data.
---
paper_title: Soft Learning Vector Quantization
paper_content:
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function based on a likelihood ratio and derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" LVQ algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit of the new method is that model assumptions are made explicit, so that the method can be adapted more easily to different kinds of problems.
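The likelihood-ratio objective described here can be stated compactly; the Gaussian mixture form below is the standard choice and is given for orientation rather than as the paper's exact notation:

E = \sum_{i=1}^{N} \log \frac{p(\mathbf{x}_i, y_i \mid W)}{p(\mathbf{x}_i \mid W)}, \qquad p(\mathbf{x}, y \mid W) = \sum_{j:\, c(\mathbf{w}_j) = y} P(j)\, p(\mathbf{x} \mid j), \qquad p(\mathbf{x} \mid W) = \sum_{j} P(j)\, p(\mathbf{x} \mid j)

where each component p(x | j) is typically an isotropic Gaussian centered at prototype w_j, and gradient ascent on E yields the soft LVQ update rules.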
---
paper_title: Generalized clustering networks and Kohonen's self-organizing scheme
paper_content:
The relationship between the sequential hard c-means (SHCM) and learning vector quantization (LVQ) clustering algorithms is discussed. The impact and interaction of these two families of methods with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, are considered. A generalization of LVQ that updates all nodes for a given input vector is proposed. The network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of distance match to the winner node; the lesser the degree of match with the winner, the greater the impact on nonwinner nodes. Numerical results indicate that the terminal prototypes generated by this modification of LVQ are generally insensitive to initialization and independent of any choice of learning coefficient. The IRIS data of E. Anderson (1939) are used to illustrate the proposed method. Results are compared with the standard LVQ approach.
---
paper_title: Relevance LVQ versus SVM
paper_content:
The support vector machine (SVM) constitutes one of the most successful current learning algorithms, with excellent classification accuracy in large real-life problems and a strong theoretical background. However, an SVM solution is given by a non-intuitive classification in terms of extreme values of the training set, and the size of an SVM classifier scales with the number of training data. Generalized relevance learning vector quantization (GRLVQ) has recently been introduced as a simple though powerful expansion of basic LVQ. Unlike SVM, it provides a very intuitive classification in terms of prototypical vectors, the number of which is independent of the size of the training set. Here, we discuss GRLVQ in comparison to the SVM and point out its beneficial theoretical properties, which are similar to those of SVM while providing sparse and intuitive solutions. In addition, the competitive performance of GRLVQ is demonstrated in one experiment from computational biology.
---
paper_title: Identification of ECG beats from cross-spectrum information aided learning vector quantization
paper_content:
This work describes the development of a computerized medical diagnostic tool for heart beat categorization. The main objective is to achieve an accurate, timely detection of cardiac arrhythmia for providing appropriate medical attention to a patient. The proposed scheme employs a feature extractor coupled with an Artificial Neural Network (ANN) classifier. The feature extractor is based on a cross-correlation approach, utilizing the cross-spectral density information in the frequency domain. The ANN classifier uses a Learning Vector Quantization (LVQ) scheme which classifies the ECG beats into three categories: normal beats, Premature Ventricular Contraction (PVC) beats and other beats. To demonstrate the generalization capability of the scheme, this classifier is developed utilizing a small training dataset and then tested with a large testing dataset. Our proposed scheme was employed for 40 benchmark ECG files of the MIT/BIH database. The system could produce classification accuracy as high as 95.24% and could outperform several competing algorithms.
---
paper_title: Forecasting time-series by Kohonen classification
paper_content:
In this paper, we propose a generic non-linear approach for time series forecasting. The main feature of this approach is the use of a simple statistical forecast in small regions of an adequately chosen and quantized input space. The partition of the space is achieved by the Kohonen algorithm. The method is then applied to a widely known time series from the Santa Fe competition, and the results are compared with the best ones published for this series.
---
paper_title: Generalized relevance learning vector quantization
paper_content:
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions. This proposes a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the iris data from the UCI repository. Afterwards, the method is compared to several well known algorithms which determine the intrinsic data dimension on real world satellite image data.
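The adaptive metric behind GRLVQ replaces the plain squared Euclidean distance by a relevance-weighted one, with one non-negative factor per input dimension that is normalized to sum to one and adapted alongside the prototypes. A rough sketch of that distance and the usual normalization step (names chosen for illustration):

import numpy as np

def relevance_distance(x, w, lam):
    # d_lambda(x, w) = sum_i lam[i] * (x[i] - w[i])**2, with lam >= 0 and sum(lam) == 1
    return float(np.sum(lam * (x - w) ** 2))

def normalize_relevances(lam):
    # keep the relevance factors non-negative and summing to one after a gradient step
    lam = np.clip(lam, 0.0, None)
    return lam / lam.sum()

Dimensions whose relevance shrinks towards zero contribute nothing to the distance and can be pruned, which is the feature-selection effect mentioned in the abstract.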
---
paper_title: Intelligent switching control of a pneumatic muscle robot arm using learning vector quantization neural network
paper_content:
Pneumatic cylinders are one of the low-cost actuation sources used in industrial and prosthetic applications, since they have a high power/weight ratio, high-tension force and long durability. However, problems with the control, oscillatory motion and compliance of pneumatic systems have prevented their widespread use in advanced robotics. To overcome these shortcomings, a number of newer pneumatic actuators have been developed, such as the McKibben Muscle, Rubber Actuator and Pneumatic Artificial Muscle (PAM) Manipulators. In this paper, the solution for position control of a robot arm with slow motion driven by two pneumatic artificial muscles is presented. However, some limitations still exist, such as a deterioration of the performance of transient response due to the changes in the external load. To overcome this problem, a switching algorithm of the control parameter using a learning vector quantization neural network (LVQNN) is proposed in this paper. The LVQNN estimates the external load of the pneumatic artificial muscle manipulator. The effectiveness of the proposed control algorithm is demonstrated through experiments with different external working loads.
---
paper_title: Suppressed fuzzy-soft learning vector quantization for MRI segmentation
paper_content:
Objective: A self-organizing map (SOM) is a competitive artificial neural network with unsupervised learning. To increase the SOM learning effect, a fuzzy-soft learning vector quantization (FSLVQ) algorithm has been proposed in the literature, using fuzzy functions to approximate lateral neural interaction of the SOM. However, the computational performance of FSLVQ is still not good enough, especially for large data sets. In this paper, we propose a suppressed FSLVQ (S-FSLVQ) using suppression with a parameter learning schema. We then apply the S-FSLVQ to MRI segmentation and compare it with several existing methods. Methods and materials: The proposed S-FSLVQ algorithm and some existing methods, such as FSLVQ, generalized LVQ, revised generalized LVQ and alternative LVQ, are compared using numerical data and MRI images. The numerical data are generated by a mixture of normal distributions. The MRI data sets are from a 2-year-old female patient who was diagnosed with retinoblastoma of her left eye, a congenital malignant neoplasm of the retina with frequent metastasis beyond the lacrimal cribrosa. To evaluate the performance of these algorithms, two criteria for accuracy and computational efficiency are used. Results: Comparing S-FSLVQ with FSLVQ, generalized LVQ, revised generalized LVQ and alternative LVQ, the numerical results indicate that the S-FSLVQ algorithm is better than the other algorithms in accuracy and computational efficiency. Moreover, the proposed S-FSLVQ can reduce the computation time and increase accuracy compared to existing methods in segmenting these ophthalmological MRIs. Conclusions: The proposed S-FSLVQ is a good competitive learning algorithm that is very suitable for segmenting the ophthalmological MRI data sets. Therefore, the S-FSLVQ algorithm is highly recommended for use in MRI segmentation as an aid for supportive diagnoses.
---
paper_title: Fuzzy algorithms for learning vector quantization
paper_content:
This paper presents the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms are derived by minimizing the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of a competitive learning vector quantization (LVQ) network, which represent the prototypes. This formulation leads to competitive algorithms, which allow each input vector to attract all prototypes. The strength of attraction between each input and the prototypes is determined by a set of membership functions, which can be selected on the basis of specific criteria. A gradient-descent-based learning rule is derived for a general class of admissible membership functions which satisfy certain properties. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible membership functions with different properties. The proposed algorithms are tested and evaluated using the IRIS data set. The efficiency of the proposed algorithms is also illustrated by their use in codebook design required for image compression based on vector quantization.
---
paper_title: Feature-based classification of time-series data
paper_content:
In this paper we propose the use of statistical features for time-series classification. The classification is performed with a multi-layer perceptron (MLP) neural network. The proposed method is examined in the context of Control Chart Pattern data, which are time series used in Statistical Process Control. Experimental results verify the efficiency of the feature-based classification method, compared to previous methods which classify time series based on the values of each time point. Moreover, the results show the robustness of the proposed method against noise and time-series length.
---
paper_title: Prediction of laser butt joint welding parameters using back propagation and learning vector quantization networks
paper_content:
Laser welding parameters include not only the laser power, focused spot size, welding speed, focused position, etc., but also the welding gap and the alignment of the laser beam with the center of the welding gap, these latter two parameters being critical for a butt joint. These parameters are controllable in the actual operation of laser welding, but are interconnected and extremely non-linear; such problems limit the industrial applicability of laser welding for butt joints. The neural network technique is a useful tool for predicting the operation parameters of a non-linear model. Back propagation (BP) and learning vector quantization (LVQ) networks are presented in this paper to predict the laser welding parameters for butt joints. The input parameters of the network include workpiece thickness and welding gap, whilst the output parameters include optimal focused position, acceptable welding parameters of laser power and welding speed, and welding quality, including weld width, undercut and distortion for the associated power and speed used. The results of this research show a comprehensive and usable prediction of the laser welding parameters for butt joints using BP and LVQ networks. As a result, the industrial applicability of laser welding for butt joints can be expanded widely.
---
paper_title: Generalized Learning Vector Quantization
paper_content:
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.
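The GLVQ cost is built from the relative distance difference mu(x) = (d_plus - d_minus) / (d_plus + d_minus), where d_plus is the squared distance to the closest prototype with the correct label and d_minus the squared distance to the closest prototype with a wrong label; a monotonic function of mu is minimized by stochastic gradient descent. The sketch below shows the structure of a single update step, using the identity in place of the paper's squashing function and an arbitrary learning rate, so it is an illustration of the rule rather than a faithful reimplementation.

import numpy as np

def glvq_step(x, y, prototypes, labels, eta=0.05):
    # one stochastic GLVQ update with squared Euclidean distance
    d = np.sum((prototypes - x) ** 2, axis=1)
    correct = labels == y
    j_plus = np.flatnonzero(correct)[np.argmin(d[correct])]     # closest correct prototype
    j_minus = np.flatnonzero(~correct)[np.argmin(d[~correct])]  # closest wrong prototype
    d_plus, d_minus = d[j_plus], d[j_minus]
    denom = (d_plus + d_minus) ** 2
    # gradient of mu = (d_plus - d_minus) / (d_plus + d_minus) w.r.t. both prototypes
    prototypes[j_plus] += eta * (4.0 * d_minus / denom) * (x - prototypes[j_plus])
    prototypes[j_minus] -= eta * (4.0 * d_plus / denom) * (x - prototypes[j_minus])
    return prototypes

The closest correct prototype is pulled towards the sample and the closest wrong-class prototype is pushed away, with step sizes that are largest for samples near the decision border.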
---
paper_title: Initialization insensitive LVQ algorithm based on cost-function adaptation
paper_content:
A learning vector quantization (LVQ) algorithm called the harmonic to minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over the GLVQ and one of its variants on several datasets.
---
paper_title: Prototype-based classification of dissimilarity data
paper_content:
Unlike many black-box algorithms in machine learning, prototype-based models offer an intuitive interface to given data sets, since prototypes can directly be inspected by experts in the field. Most techniques rely on Euclidean vectors such that their suitability for complex scenarios is limited. Recently, several unsupervised approaches have successfully been extended to general, possibly non-Euclidean data characterized by pairwise dissimilarities. In this paper, we shortly review a general approach to extend unsupervised prototype-based techniques to dissimilarities, and we transfer this approach to supervised prototype-based classification for general dissimilarity data. In particular, a new supervised prototype-based classification technique for dissimilarity data is proposed.
---
paper_title: Soft Nearest Prototype Classification
paper_content:
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost-function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy data sets and for an optical letter classification task. Results show 1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy; 2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improved the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
---
paper_title: Supervised Neural Gas with General Similarity Measure
paper_content:
Prototype-based classification offers intuitive and sparse models with excellent generalization ability. However, these models usually crucially depend on the underlying Euclidean metric; moreover, online variants likely suffer from the problem of local optima. We here propose a generalization of learning vector quantization with three additional features: (I) it directly integrates neighborhood cooperation, hence is less affected by local optima; (II) the method can be combined with any differentiable similarity measure whereby metric parameters such as relevance factors of the input dimensions can automatically be adapted according to the given data; (III) it obeys a gradient dynamics hence shows very robust behavior, and the chosen objective is related to margin optimization.
---
paper_title: Distance Learning in Discriminative Vector Quantization
paper_content:
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
---
paper_title: Detection of seizure activity in EEG by an artificial neural network: a preliminary study
paper_content:
Abstract Neural networks, inspired by the organizational principles of the human brain, have recently been used in various fields of application such as pattern recognition, identification, classification, speech, vision, signal processing, and control systems. In this study, a two-layered neural network has been trained for the recognition of temporal patterns of the electroencephalogram (EEG). This network is called a Learning Vector Quantization (LVQ) neural network since it learns the characteristics of the signal presented to it as a vector. The first layer is a competitive layer which learns to classify the input vectors. The second, linear, layer transforms the output of the competitive layer to target classes defined by the user. We have tested and evaluated the LVQ network. The network successfully detects epileptiform discharges (EDs) when trained using EEG records scored by a neurologist. Epochs of EEG containing EDs from one subject have been used for training the network, and EEGs of other subjects have been used for testing the network. The results demonstrate that the LVQ detector can generalize the learning to previously “unseen” records of subjects. This study shows that the LVQ network offers a practical solution for ED detection which is easily adjusted to an individual neurologist's style and is as sensitive and specific as an expert visual analysis.
---
paper_title: Two soft relatives of learning vector quantization
paper_content:
Learning vector quantization often requires extensive experimentation with the learning rate distribution and update neighborhood used during iteration towards good prototypes. A single winner prototype controls the updates. This paper discusses two soft relatives of LVQ: the soft competition scheme (SCS) of Yair et al. and fuzzy LVQ (FLVQ). These algorithms both extend the update neighborhood to all nodes in the network. SCS is a sequential, deterministic method with learning rates that are partially based on posterior probabilities. FLVQ is a batch algorithm whose learning rates are derived from fuzzy memberships. We show that SCS learning rates can be interpreted in terms of statistical decision theory, and derive several relationships between SCS and FLVQ. Limit analysis shows that the learning rates of these two algorithms have opposite tendencies. Numerical examples illustrate the difficulty of choosing good algorithmic parameters for SCS. Finally, we elaborate the relationship between FLVQ, Fuzzy c-Means, Hard c-Means, a batch version of LVQ, and SCS.
---
paper_title: A Methodology for Constructing Fuzzy Algorithms for Learning Vector Quantization
paper_content:
This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). The design of specific FALVQ algorithms according to existing approaches reduces to the selection of the membership function assigned to the weight vectors of an LVQ competitive neural network, which represent the prototypes. The development of a broad variety of FALVQ algorithms can be accomplished by selecting the form of the interference function that determines the effect of the nonwinning prototypes on the attraction between the winning prototype and the input of the network. The proposed methodology provides the basis for extending the existing FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms. This paper also introduces two quantitative measures which establish a relationship between the formulation that led to FALVQ algorithms and the competition between the prototypes during the learning process. The proposed algorithms and competition measures are tested and evaluated using the IRIS data set. The significance of the proposed competition measure is illustrated using FALVQ algorithms to perform segmentation of magnetic resonance images of the brain.
---
paper_title: Learning Vector Quantization Neural Networks for LED Wafer Defect Inspection
paper_content:
Automatic visual inspection of defects plays an important role in industrial manufacturing with the benefits of low-cost and high accuracy. In light-emitting diode (LED) manufacturing, each die on the LED wafer must be inspected to determine whether it has defects or not. Therefore, detection of defective regions is a significant issue to discuss. In this paper, a new approach for inspection of LED wafer defects using the learning vector quantization (LVQ) neural network is presented. In the wafer image, each die image and the region of interest (ROI) in them to handle can be acquired. Then, by analyzing the properties of every ROI, we can extract specific geometric features and texture features. Using these features, the LVQ neural network is presented to classify these dies as either acceptable or not. The experimental results confirm the usefulness of the approach for LED wafer defect inspection.
---
paper_title: Fault diagnosis of stamping process based on empirical mode decomposition and learning vector quantization
paper_content:
Sheet metal stamping process is widely used in industry due to its high accuracy and productivity. However, monitoring the process is a difficult task since the monitoring signals are typically non-stationary transient signals. In this paper, empirical mode decomposition (EMD) is applied to extract the main features of the strain signals. First, the signal is decomposed by EMD into intrinsic mode functions (IMF). Then the signal energy and the Hilbert marginal spectrum, which reflects the working condition and the fault pattern of the process, are computed. Finally, to identify the faulty conditions of process, the learning vector quantization (LVQ) network is used as a classifier with the Hilbert marginal spectrum as the input vectors. The performance of this method is tested by 107 experiments derived from different conditions in the sheet metal stamping process. The artificially created defects can be detected with a success rate of 96.3%. The method seems to be useful to monitor a sheet metal stamping process in practice.
---
paper_title: An introduction to neural computing
paper_content:
This article contains a brief survey of the motivations, fundamentals, and applications of artificial neural networks, as well as some detailed analytical expressions for their theory.
---
paper_title: Improved versions of learning vector quantization
paper_content:
The author introduces a variant of (supervised) learning vector quantization (LVQ) and discusses practical problems associated with the application of the algorithms. The LVQ algorithms work explicitly in the input domain of the primary observation vectors, and their purpose is to approximate the theoretical Bayes decision borders using piecewise linear decision surfaces. This is done by purported optimal placement of the class codebook vectors in signal space. As the classification decision is based on the nearest-neighbor selection among the codebook vectors, its computation is very fast. It has turned out that the differences between the presented algorithms in regard to the remaining discretization error are not significant, and thus the choice of the algorithm may be based on secondary arguments, such as stability in learning, in which respect the variant introduced (LVQ2.1) seems to be superior to the others. A comparative study of several methods applied to speech recognition is included
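LVQ2.1, the variant singled out here, updates a pair of codebook vectors only when the input falls into a window around the midplane between the closest correct and the closest incorrect vector. A compact sketch of that rule follows; the learning rate and relative window width are arbitrary example values.

import numpy as np

def lvq21_step(x, y, prototypes, labels, alpha=0.03, window=0.3):
    # attract the nearest correct prototype and repel the nearest incorrect one,
    # but only if x lies inside the window around the decision border
    d = np.linalg.norm(prototypes - x, axis=1)
    correct = labels == y
    j_c = np.flatnonzero(correct)[np.argmin(d[correct])]
    j_w = np.flatnonzero(~correct)[np.argmin(d[~correct])]
    s = (1.0 - window) / (1.0 + window)
    if min(d[j_c] / d[j_w], d[j_w] / d[j_c]) > s:   # window condition
        prototypes[j_c] += alpha * (x - prototypes[j_c])
        prototypes[j_w] -= alpha * (x - prototypes[j_w])
    return prototypes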
---
paper_title: Margin Analysis of the LVQ Algorithm
paper_content:
Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Neighbour (NN) classifiers. In this paper we discuss theoretical and algorithmic aspects of such algorithms. On the theory side, we present margin-based generalization bounds that suggest that these kinds of classifiers can be more accurate than the 1-NN rule. Furthermore, we derive a training algorithm that selects a good set of prototypes using large margin principles. We also show that the 20-year-old Learning Vector Quantization (LVQ) algorithm emerges naturally from our framework.
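The margin notion used in this line of work is the hypothesis margin of a nearest-prototype classifier: how far the prototypes may be moved without changing the classification of a sample. For a sample x with nearest same-class prototype w_+ and nearest different-class prototype w_-, it is commonly written as

\theta(\mathbf{x}) = \tfrac{1}{2} \left( \lVert \mathbf{x} - \mathbf{w}_- \rVert - \lVert \mathbf{x} - \mathbf{w}_+ \rVert \right)

and the generalization bounds are stated in terms of the distribution of these margins over the training set; this compact form is given here for orientation, and the cited paper should be consulted for the precise definitions.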
---
paper_title: Generalized Learning Vector Quantization
paper_content:
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.
---
paper_title: On the Generalization Ability of GRLVQ Networks
paper_content:
We derive a generalization bound for prototype-based classifiers with adaptive metric. The bound depends on the margin of the classifier and is independent of the dimensionality of the data. It holds for classifiers based on the Euclidean metric extended by adaptive relevance terms. In particular, the result holds for relevance learning vector quantization (RLVQ) [4] and generalized relevance learning vector quantization (GRLVQ) [19].
---
paper_title: Soft Learning Vector Quantization
paper_content:
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function based on a likelihood ratio and derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" LVQ algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit of the new method is that model assumptions are made explicit, so that the method can be adapted more easily to different kinds of problems.
---
paper_title: Soft Nearest Prototype Classification
paper_content:
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost-function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy data sets and for an optical letter classification task. Results show 1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy; 2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improved the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
---
paper_title: Generalized relevance learning vector quantization
paper_content:
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions. This proposes a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the iris data from the UCI repository. Afterwards, the method is compared to several well known algorithms which determine the intrinsic data dimension on real world satellite image data.
---
paper_title: Generalized Relevance LVQ for Time Series
paper_content:
An application of the recently proposed generalized relevance learning vector quantization (GRLVQ) to the analysis and modeling of time series data is presented. We use GRLVQ for two tasks: first, for obtaining a phase space embedding of a scalar time series, and second, for short-term and long-term data prediction. The proposed embedding method is tested with a signal from the well-known Lorenz system. Afterwards, it is applied to daily lysimeter observations of water runoff. A one-step prediction of the runoff dynamics is obtained from the classification of high-dimensional subseries data vectors, from which a promising technique for long-term forecasts is derived.
---
paper_title: Adaptive Relevance Matrices in Learning Vector Quantization
paper_content:
We propose a new matrix learning scheme to extend relevance learning vector quantization (RLVQ), an efficient prototype-based classification algorithm, toward a general adaptive metric. By introducing a full matrix of relevance factors in the distance measure, correlations between different features and their importance for the classification scheme can be taken into account and automated, and general metric adaptation takes place during training. In comparison to the weighted Euclidean metric used in RLVQ and its variations, a full matrix is more powerful to represent the internal structure of the data appropriately. Large margin generalization bounds can be transferred to this case, leading to bounds that are independent of the input dimensionality. This also holds for local metrics attached to each prototype, which corresponds to piecewise quadratic decision boundaries. The algorithm is tested in comparison to alternative learning vector quantization schemes using an artificial data set, a benchmark multiclass problem from the UCI repository, and a problem from bioinformatics, the recognition of splice sites for C. elegans.
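The full-matrix metric introduced here generalizes the diagonal relevance vector of RLVQ and GRLVQ. A minimal sketch of the distance computation, with the relevance matrix parameterized as Lambda = Omega^T Omega so that positive semi-definiteness holds by construction (the variable names are illustrative):

import numpy as np

def matrix_distance(x, w, omega):
    # d_Lambda(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega,
    # i.e. the squared Euclidean norm of the difference after the linear map Omega
    diff = omega @ (x - w)
    return float(diff @ diff)

Because the distance equals the squared norm of Omega (x - w), learning Omega can be read as learning a discriminative linear transformation of the input space; local versions attach one matrix to each prototype, giving the piecewise quadratic decision boundaries noted in the abstract.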
---
paper_title: Distance Learning in Discriminative Vector Quantization
paper_content:
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
---
paper_title: A novel kernel prototype-based learning algorithm
paper_content:
We propose a novel kernel prototype-based learning algorithm, called the kernel generalized learning vector quantization (KGLVQ) algorithm, which can significantly improve the classification performance of the original generalized learning vector quantization algorithm in complex pattern classification tasks. In addition, the KGLVQ can also serve as a good general kernel learning framework for further investigation.
---
paper_title: Input space versus feature space in kernel-based methods
paper_content:
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
---
paper_title: Efficient Kernelized prototype based classification.
paper_content:
Prototype based classifiers are effective algorithms in modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended to kernels to improve the discrimination power by means of the kernel concept, prototype based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype based classifiers are currently too complex to be applied for larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) employing a sparsity and approximation technique to reduce the learning complexity. We provide generalization error bounds and experimental results on real world data, showing that the extended approach is comparable to SVM on different public data.
---
paper_title: Relational extensions of learning vector quantization
paper_content:
Prototype-based models offer an intuitive interface to given data sets by means of an inspection of the model prototypes. Supervised classification can be achieved by popular techniques such as learning vector quantization (LVQ) and extensions derived from cost functions such as generalized LVQ (GLVQ) and robust soft LVQ (RSLVQ). These methods, however, are restricted to Euclidean vectors and they cannot be used if data are characterized by a general dissimilarity matrix. In this approach, we propose relational extensions of GLVQ and RSLVQ which can directly be applied to general possibly non-Euclidean data sets characterized by a symmetric dissimilarity matrix.
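In such relational extensions, prototypes are represented only implicitly as convex combinations of data points, w_j = sum_l alpha_jl x_l with coefficients summing to one, and distances are computed from the pairwise dissimilarity matrix D alone. The sketch below states the standard identity used for this purpose; it holds exactly when D contains squared Euclidean distances of some (possibly unknown) embedding and serves as the working definition otherwise.

import numpy as np

def relational_distance(i, alpha_j, D):
    # squared distance between data point i and the implicit prototype
    # w_j = sum_l alpha_j[l] * x_l:  d(x_i, w_j) = (D @ alpha_j)[i] - 0.5 * alpha_j^T D alpha_j
    return float((D @ alpha_j)[i] - 0.5 * alpha_j @ D @ alpha_j)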
---
paper_title: The Dissimilarity Representation for Pattern Recognition: Foundations and Applications
paper_content:
# Spaces # Characterization of Dissimilarities # Learning Approaches # Dissimilarity Measures # Visualization # Further Data Exploration # One-Class Classifiers # Classification # Combining # Representation Review and Recommendations # Conclusions and Open Problems
---
paper_title: Prototype-based classification of dissimilarity data
paper_content:
Unlike many black-box algorithms in machine learning, prototype-based models offer an intuitive interface to given data sets, since prototypes can directly be inspected by experts in the field. Most techniques rely on Euclidean vectors such that their suitability for complex scenarios is limited. Recently, several unsupervised approaches have successfully been extended to general, possibly non-Euclidean data characterized by pairwise dissimilarities. In this paper, we shortly review a general approach to extend unsupervised prototype-based techniques to dissimilarities, and we transfer this approach to supervised prototype-based classification for general dissimilarity data. In particular, a new supervised prototype-based classification technique for dissimilarity data is proposed.
---
paper_title: Improved versions of learning vector quantization
paper_content:
The author introduces a variant of (supervised) learning vector quantization (LVQ) and discusses practical problems associated with the application of the algorithms. The LVQ algorithms work explicitly in the input domain of the primary observation vectors, and their purpose is to approximate the theoretical Bayes decision borders using piecewise linear decision surfaces. This is done by purported optimal placement of the class codebook vectors in signal space. As the classification decision is based on the nearest-neighbor selection among the codebook vectors, its computation is very fast. It has turned out that the differences between the presented algorithms in regard to the remaining discretization error are not significant, and thus the choice of the algorithm may be based on secondary arguments, such as stability in learning, in which respect the variant introduced (LVQ2.1) seems to be superior to the others. A comparative study of several methods applied to speech recognition is included
---
paper_title: Vector quantization using information theoretic concepts
paper_content:
The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in the recent years. Very efficient algorithms like the Kohonen self-organizing map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before mentioned can be derived. Unlike SOM and LBG this algorithm has a clear physical interpretation and relies on minimization of a well defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements, hence, the algorithm can be seen as a density matching method.
---
paper_title: Feature Extraction by Non-Parametric Mutual Information Maximization
paper_content:
We present a method for learning discriminative feature transforms using as criterion the mutual information between class labels and transformed features. Instead of a commonly used mutual information measure based on Kullback-Leibler divergence, we use a quadratic divergence measure, which allows us to make an efficient non-parametric implementation and requires no prior assumptions about class densities. In addition to linear transforms, we also discuss nonlinear transforms that are implemented as radial basis function networks. Extensions to reduce the computational complexity are also presented, and a comparison to greedy feature selection is made.
---
paper_title: A supervised growing neural gas algorithm for cluster analysis
paper_content:
In this paper, a prototype-based supervised clustering algorithm is proposed. The proposed algorithm, called the Supervised Growing Neural Gas algorithm (SGNG), incorporates several techniques from some unsupervised GNG algorithms such as the adaptive learning rates and the cluster repulsion mechanisms of the Robust Growing Neural Gas algorithm, and the Type Two Learning Vector Quantization (LVQ2) technique. Furthermore, a new prototype insertion mechanism and a clustering validity index are proposed. These techniques are designed to utilize class labels of the training data to guide the clustering. The SGNG algorithm is capable of clustering adjacent regions of data objects labeled with different classes, formulating topological relationships among prototypes and automatically determining the optimal number of clusters using the proposed validity index. To evaluate the effectiveness of the SGNG algorithm, two experiments are conducted. The first experiment uses two synthetic data sets to graphically illustrate the potential with respect to growing ability, ability to cluster adjacent regions of different classes, and ability to determine the optimal number of prototypes. The second experiment evaluates the effectiveness using the UCI benchmark data sets. The results from the second experiment show that the SGNG algorithm performs better than other supervised clustering algorithms for both cluster impurities and total running times.
---
paper_title: A new generalized LVQ algorithm via harmonic to minimum distance measure transition
paper_content:
We present a novel generalized learning vector quantization (LVQ) framework called the harmonic to minimum generalized LVQ algorithm (H2M-GLVQ). By incorporating a distance measure transition from the harmonic average distance to the minimum distance, the H2M-GLVQ cost function gradually changes from a soft model to a hard model. Our proposed method can, at the early training stage, effectively tackle the initialization sensitivity problem associated with the original generalized LVQ algorithm, while convergence of the algorithm is ensured by the hard model in the later training stage. Experimental results have shown the superior performance of the H2M-GLVQ algorithm over generalized LVQ and one of its variants on some artificial multi-modal datasets.
---
paper_title: Divergence-Based Vector Quantization
paper_content:
Supervised and unsupervised vector quantization methods for classification and clustering traditionally use dissimilarities, frequently taken as Euclidean distances. In this article, we investigate the applicability of divergences instead, focusing on online learning. We deduce the mathematical fundamentals for its utilization in gradient-based online vector quantization algorithms. It bears on the generalized derivatives of the divergences known as Fréchet derivatives in functional analysis, which reduces in finite-dimensional problems to partial derivatives in a natural way. We demonstrate the application of this methodology for widely applied supervised and unsupervised online vector quantization schemes, including self-organizing maps, neural gas, and learning vector quantization. Additionally, principles for hyperparameter optimization and relevance learning for parameterized divergences in the case of supervised vector quantization are given to achieve improved classification accuracy.
---
paper_title: Initialization insensitive LVQ algorithm based on cost-function adaptation
paper_content:
A learning vector quantization (LVQ) algorithm called the harmonic to minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over the GLVQ and one of its variants on several datasets.
---
paper_title: Fuzzy classification using information theoretic learning vector quantization
paper_content:
In this article we extend the recently published unsupervised information theoretic vector quantization approach based on the Cauchy-Schwarz divergence to supervised learning and classification. In particular, we first generalize the unsupervised method to more general metrics instead of the Euclidean one, as was used in the original algorithm. Thereafter, we extend the model to a supervised learning method resulting in a fuzzy classification algorithm. Thereby, we allow fuzzy labels for both data and prototypes. Finally, we transfer the idea of relevance learning for metric adaptation, known from learning vector quantization, to the new approach. We show the abilities and the power of the method for exemplary and real-world medical applications.
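For reference, the Cauchy-Schwarz divergence between two densities p and q, on which this information-theoretic vector quantization approach is built, can be written as

D_{CS}(p, q) = -\log \frac{\left( \int p(\mathbf{x})\, q(\mathbf{x})\, d\mathbf{x} \right)^2}{\int p(\mathbf{x})^2\, d\mathbf{x} \; \int q(\mathbf{x})^2\, d\mathbf{x}}

It is non-negative and equals zero exactly when p = q, and when p and q are Parzen window estimates with Gaussian kernels, all three integrals have closed-form expressions, which is what makes gradient-based prototype adaptation tractable.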
---
paper_title: Supervised Neural Gas with General Similarity Measure
paper_content:
Prototype based classification offers intuitive and sparse models with excellent generalization ability. However, these models usually crucially depend on the underlying Euclidean metric; moreover, online variants likely suffer from the problem of local optima. We here propose a generalization of learning vector quantization with three additional features: (I) it directly integrates neighborhood cooperation, hence is less affected by local optima; (II) the method can be combined with any differentiable similarity measure, whereby metric parameters such as relevance factors of the input dimensions can automatically be adapted according to the given data; (III) it obeys a gradient dynamics and hence shows very robust behavior, and the chosen objective is related to margin optimization.
---
paper_title: A Growing Neural Gas Network Learns Topologies
paper_content:
An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.
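For readers unfamiliar with the mechanics of the model, the following Python sketch outlines the basic GNG loop (parameter names and default values are illustrative, not those of the paper, and pruning of isolated units is omitted for brevity):

    import numpy as np

    def gng(data, max_units=50, n_iter=10000, eps_b=0.05, eps_n=0.006,
            age_max=50, lam=100, alpha=0.5, d=0.995, seed=None):
        """Basic Growing Neural Gas sketch: returns unit positions and edges."""
        rng = np.random.default_rng(seed)
        units = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
        error = [0.0, 0.0]
        edges = {}                                   # (i, j) with i < j -> age
        for t in range(1, n_iter + 1):
            x = data[rng.integers(len(data))]
            dists = [float(np.sum((x - w) ** 2)) for w in units]
            s1, s2 = (int(i) for i in np.argsort(dists)[:2])
            error[s1] += dists[s1]                   # accumulate squared error
            units[s1] += eps_b * (x - units[s1])     # move the winner
            for (i, j) in list(edges):               # age winner's edges, move neighbours
                if s1 in (i, j):
                    edges[(i, j)] += 1
                    other = j if i == s1 else i
                    units[other] += eps_n * (x - units[other])
            edges[(min(s1, s2), max(s1, s2))] = 0    # (re)create winner-runner-up edge
            edges = {e: a for e, a in edges.items() if a <= age_max}
            if t % lam == 0 and len(units) < max_units:
                q = int(np.argmax(error))            # unit with largest accumulated error
                nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
                if nbrs:
                    f = max(nbrs, key=lambda n: error[n])
                    r = len(units)
                    units.append(0.5 * (units[q] + units[f]))
                    error[q] *= alpha; error[f] *= alpha
                    error.append(error[q])
                    edges.pop((min(q, f), max(q, f)), None)
                    edges[(min(q, r), max(q, r))] = 0
                    edges[(min(f, r), max(f, r))] = 0
            error = [e * d for e in error]           # decay all errors
        return np.array(units), edges

    # illustrative usage: units, edges = gng(np.random.rand(1000, 2), seed=0)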
---
paper_title: 'Neural-gas' network for vector quantization and its application to time-series prediction.
paper_content:
A neural network algorithm based on a soft-max adaptation rule is presented. This algorithm exhibits good performance in reaching the optimum minimization of a cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by the data point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with the results achieved by backpropagation and radial basis function networks.
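In the notation commonly used for this algorithm, the rank-based ("soft-max") adaptation rule summarised above can be written as

    \Delta w_j = \varepsilon(t) \, h_\lambda(k_j(x, W)) \, (x - w_j),        with   h_\lambda(k) = \exp(-k / \lambda(t)),

where k_j(x, W) is the rank of prototype w_j when all prototypes are ordered by distance to the input x (k = 0 for the closest one), and both the learning rate \varepsilon(t) and the neighbourhood range \lambda(t) are decayed over time so that the update gradually concentrates on the winning prototype.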
---
paper_title: Mutual Information in Learning Feature Transformations
paper_content:
We present feature transformations useful for exploratory data analysis or for pattern recognition. Transformations are learned from example data sets by maximizing the mutual information between transformed data and their class labels. We make use of Renyi’s quadratic entropy, and we extend the work of Principe et al. to mutual information between continuous multidimensional variables and discrete-valued class labels.
---
paper_title: Generalized relevance learning vector quantization
paper_content:
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions. This proposes a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the iris data from the UCI repository. Afterwards, the method is compared to several well known algorithms which determine the intrinsic data dimension on real world satellite image data.
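As a compact sketch of the weighting scheme described above (notation simplified; see the paper for the exact formulation), GRLVQ replaces the squared Euclidean distance by a relevance-weighted distance and plugs it into the GLVQ cost:

    d_\lambda(x, w) = \sum_i \lambda_i (x_i - w_i)^2,    with   \lambda_i \ge 0,  \sum_i \lambda_i = 1,
    E = \sum_x f( (d_\lambda^{+}(x) - d_\lambda^{-}(x)) / (d_\lambda^{+}(x) + d_\lambda^{-}(x)) ),

where d_\lambda^{+} is the distance to the closest prototype with the correct label and d_\lambda^{-} the distance to the closest prototype with a wrong label; stochastic gradient descent on E adapts the prototypes and the relevance factors \lambda_i simultaneously.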
---
paper_title: Distance Learning in Discriminative Vector Quantization
paper_content:
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
---
paper_title: A novel kernel prototype-based learning algorithm
paper_content:
We propose a novel kernel prototype-based learning algorithm, called the kernel generalized learning vector quantization (KGLVQ) algorithm, which can significantly improve the classification performance of the original generalized learning vector quantization algorithm in complex pattern classification tasks. In addition, KGLVQ can also serve as a good general kernel learning framework for further investigation.
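The kernel trick behind this family of methods can be sketched as follows (a generic formulation, assuming prototypes are represented as linear combinations of mapped training points rather than reproducing the paper's exact notation): with a kernel k and an implicit feature map \phi, a prototype w = \sum_j \alpha_j \phi(x_j) admits the squared feature-space distance

    \|\phi(x) - w\|^2 = k(x, x) - 2 \sum_j \alpha_j k(x, x_j) + \sum_{j,l} \alpha_j \alpha_l k(x_j, x_l),

so GLVQ-style updates can be carried out entirely on the coefficient vectors \alpha without ever computing \phi explicitly.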
---
paper_title: Kernel robust soft learning vector quantization
paper_content:
Prototype-based classification schemes offer very intuitive and flexible classifiers with the benefit of easy interpretability of the results and scalability of the model complexity. Recent prototype-based models such as robust soft learning vector quantization (RSLVQ) have the benefit of a solid mathematical foundation of the learning rule and decision boundaries in terms of probabilistic models and corresponding likelihood optimization. In their original form, however, these models can be used for standard Euclidean vectors only. In this contribution, we extend RSLVQ towards a kernelized version which can be used for any positive semidefinite data matrix. We demonstrate the superior performance of the technique, kernel RSLVQ, in a variety of benchmarks where results competitive or even superior to state-of-the-art support vector machines are obtained.
---
paper_title: Efficient Kernelized prototype based classification.
paper_content:
Prototype based classifiers are effective algorithms in modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended to kernels to improve the discrimination power by means of the kernel concept, prototype based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype based classifiers are currently too complex to be applied for larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) employing a sparsity and approximation technique to reduce the learning complexity. We provide generalization error bounds and experimental results on real world data, showing that the extended approach is comparable to SVM on different public data.
---
paper_title: Relational extensions of learning vector quantization
paper_content:
Prototype-based models offer an intuitive interface to given data sets by means of an inspection of the model prototypes. Supervised classification can be achieved by popular techniques such as learning vector quantization (LVQ) and extensions derived from cost functions such as generalized LVQ (GLVQ) and robust soft LVQ (RSLVQ). These methods, however, are restricted to Euclidean vectors and they cannot be used if data are characterized by a general dissimilarity matrix. In this approach, we propose relational extensions of GLVQ and RSLVQ which can directly be applied to general possibly non-Euclidean data sets characterized by a symmetric dissimilarity matrix.
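The relational trick underlying these extensions can be sketched as follows (the standard formulation for this family of methods; the paper should be consulted for details): prototypes are expressed implicitly as convex combinations of data points, w_j = \sum_i \alpha_{ji} x_i with \sum_i \alpha_{ji} = 1, and given only the symmetric dissimilarity matrix D, the dissimilarity between data point x_i and prototype w_j can be computed as

    d(x_i, w_j) = [D \alpha_j]_i - (1/2) \, \alpha_j^{T} D \alpha_j,

which allows GLVQ- and RSLVQ-style updates to operate on the coefficient vectors \alpha_j without requiring an explicit vectorial embedding.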
---
paper_title: Prototype-based classification of dissimilarity data
paper_content:
Unlike many black-box algorithms in machine learning, prototype-based models offer an intuitive interface to given data sets, since prototypes can directly be inspected by experts in the field. Most techniques rely on Euclidean vectors such that their suitability for complex scenarios is limited. Recently, several unsupervised approaches have successfully been extended to general, possibly non-Euclidean data characterized by pairwise dissimilarities. In this paper, we shortly review a general approach to extend unsupervised prototype-based techniques to dissimilarities, and we transfer this approach to supervised prototype-based classification for general dissimilarity data. In particular, a new supervised prototype-based classification technique for dissimilarity data is proposed.
---
paper_title: Soft Learning Vector Quantization
paper_content:
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function based on a likelihood ratio and derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" LVQ algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit of the new method is that model assumptions are made explicit, so that the method can be adapted more easily to different kinds of problems.
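The likelihood-ratio objective behind this approach can be sketched as follows (a simplified rendering with isotropic Gaussians of common width \sigma; the paper derives the general case): with labelled prototypes w_j and mixture components p(x | w_j) \propto \exp(-\|x - w_j\|^2 / 2\sigma^2),

    L = \sum_i \log [ p(x_i, y_i | W) / p(x_i | W) ],   where   p(x, y | W) = \sum_{j: c(w_j) = y} P(j) \, p(x | w_j)   and   p(x | W) = \sum_j P(j) \, p(x | w_j).

Gradient ascent on L yields soft, probability-weighted prototype updates, and in the limit of small \sigma the update concentrates on the closest correct and closest incorrect prototypes, which is why LVQ 2.1-like behaviour is recovered.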
---
paper_title: Soft Nearest Prototype Classification
paper_content:
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost-function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy data sets and for an optical letter classification task. Results show 1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy; 2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improved the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
---
paper_title: Multiple Comparison Procedures
paper_content:
PROCEDURES BASED ON CLASSICAL APPROACHES FOR FIXED-EFFECTS LINEAR MODELS WITH NORMAL HOMOSCEDASTIC INDEPENDENT ERRORS. Some Theory of Multiple Comparison Procedures for Fixed-effects Linear Models. Single-step Procedures for Pairwise and More General Comparisons Among All Treatments. Stepwise Procedures for Pairwise and More General Comparisons Among All Treatments. Procedures for Some Other Nonhierarchical Finite Families of Comparisons. Designing Experiments for Multiple Comparisons. PROCEDURES FOR OTHER MODELS AND PROBLEMS, AND PROCEDURES BASED ON ALTERNATIVE APPROACHES. Procedures for One-way Layouts with Unequal Variances. Procedures for Some Mixed-effects Models. Distribution-free and Robust Procedures. Some Miscellaneous Multiple Comparison Problems. Optimal Procedures Using Decision-theoretic, Bayesian, and Other Approaches. Appendixes. Tables. References. Index.
---
paper_title: Initialization insensitive LVQ algorithm based on cost-function adaptation
paper_content:
A learning vector quantization (LVQ) algorithm called the harmonic to minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over the GLVQ algorithm and one of its variants on several datasets.
---
paper_title: On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach
paper_content:
An important component of many data mining projects is finding a good classification algorithm, a process that requires very careful thought about experimental design. If not done very carefully, comparative studies of classification and other types of algorithms can easily result in statistically invalid conclusions. This is especially true when one is using data mining techniques to analyze very large databases, which inevitably contain some statistically unlikely data. This paper describes several phenomena that can, if ignored, invalidate an experimental comparison. These phenomena and the conclusions that follow apply not only to classification, but to computational experiments in almost any aspect of data mining. The paper also discusses why comparative analysis is more important in evaluating some types of algorithms than for others, and provides some suggestions about how to avoid the pitfalls suffered by many experimental studies.
---
paper_title: A probabilistic active support vector learning algorithm
paper_content:
The paper describes a probabilistic active learning strategy for support vector machine (SVM) design in large data applications. The learning strategy is motivated by the statistical query model. While most existing methods of active SVM learning query for points based on their proximity to the current separating hyperplane, the proposed method queries for a set of points according to a distribution as determined by the current separating hyperplane and a newly defined concept of an adaptive confidence factor. This enables the algorithm to have more robust and efficient learning capabilities. The confidence factor is estimated from local information using the k nearest neighbor principle. The effectiveness of the method is demonstrated on real-life data sets both in terms of generalization performance, query complexity, and training time.
---
paper_title: Margin based Active Learning for LVQ Networks
paper_content:
In this article, we extend a local prototype-based learning model by active learning, which gives the learner the capability to select training samples during the model adaptation procedure. The proposed active learning strategy aims at an improved generalization ability of the final model. This is achieved by usage of an adaptive query strategy which is more adequate for supervised learning than a simple random approach. Besides an improved generalization ability, the method also improves the speed of the learning procedure, which is especially beneficial for large data sets with multiple similar items. The algorithm is based on the idea of selecting a query on the borderline of the actual classification. This can be done by considering margins in an extension of learning vector quantization based on an appropriate cost function. The proposed active learning approach is analyzed for two kinds of learning vector quantizers, the supervised relevance neural gas and the supervised nearest prototype classifier, but is applicable to a broader set of prototype-based learning approaches as well. The performance of the query algorithm is demonstrated on synthetic and real-life data taken from clinical proteomic studies. From the proteomic studies, high-dimensional mass spectrometry measurements were calculated which are believed to contain features discriminating the different classes. Using the proposed active learning strategies, the generalization ability of the models could be maintained or improved, accompanied by a significantly improved learning speed. Both of these characteristics are important for the generation of predictive clinical models and were used in an initial biomarker discovery study.
---
paper_title: Efficient Approximations of Kernel Robust Soft LVQ
paper_content:
Robust soft learning vector quantization (RSLVQ) constitutes a probabilistic extension of learning vector quantization (LVQ) based on a labeled Gaussian mixture model of the data. Training optimizes the likelihood ratio of the model and recovers a variant similar to LVQ2.1 in the limit of small bandwidth. Recently, RSLVQ has been extended to a kernel version, thus opening the way towards more general data structures characterized in terms of a Gram matrix only. While leading to state of the art results, this extension has the drawback that models are no longer sparse, and quadratic training complexity is encountered. In this contribution, we investigate two approximation schemes which lead to sparse models: k-approximations of the prototypes and the Nystrom approximation of the Gram matrix. We investigate the behavior of these approximations in a couple of benchmarks.
---
paper_title: How to Visualize Large Data Sets
paper_content:
We address novel developments in the context of dimensionality reduction for data visualization. We consider nonlinear non-parametric techniques such as t-distributed stochastic neighbor embedding and discuss the difficulties which are encountered if large data sets are dealt with, in contrast to parametric approaches such as the self-organizing map. We focus on the following topics, which arise in this context: (i) how can dimensionality reduction be realized efficiently in at most linear time, (ii) how can nonparametric approaches be extended to provide an explicit mapping, (iii) how can techniques be extended to incorporate auxiliary information as provided by class labeling?
---
paper_title: Online Visualization of Prototypes and Receptive Fields Produced by LVQ Algorithms
paper_content:
A new approach is proposed to visualize online the training of learning vector quantization algorithms. The prototypes and data samples associated to each receptive field are projected onto a two-dimensional map by using a non-linear transformation of the input space. The mapping finds a set of projection vectors by minimizing a cost function, which preserves the local topology of the input space. The proposed visualization is tested on two datasets: image segmentation and pipeline. The usefulness of the method is demonstrated by studying the behavior of Generalized LVQ, Supervised Neural Gas and Harmonic to Minimum LVQ algorithms on high-dimensional datasets.
---
paper_title: Neural net algorithms that learn in polynomial time from examples and queries
paper_content:
An algorithm which trains networks using examples and queries is proposed. In a query, the algorithm supplies a y and is told t(y) by an oracle. Queries appear to be available in practice for most problems of interest, e.g. by appeal to a human expert. The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth two, threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k >
---
paper_title: Using the Nyström Method to Speed Up Kernel Machines
paper_content:
A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n^3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nystrom method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m^2 n). We report experiments on the USPS and abalone data sets and show that we can set m ≪ n without any significant decrease in the accuracy of the solution.
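A minimal numpy sketch of the approximation described above is given below (function and parameter names are illustrative, and the RBF kernel and its bandwidth are arbitrary choices for the example, not those used in the paper):

    import numpy as np

    def nystrom_gram_eig(X, kernel, m, seed=None):
        # Approximate the eigendecomposition of the n x n Gram matrix from
        # m << n landmark columns; cost is O(m^2 n) instead of O(n^3).
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        idx = rng.choice(n, size=m, replace=False)     # landmark points
        K_nm = kernel(X, X[idx])                       # n x m kernel block
        K_mm = K_nm[idx]                               # m x m kernel among landmarks
        eigval, eigvec = np.linalg.eigh(K_mm)
        keep = eigval > 1e-10
        lam = (n / m) * eigval[keep]                                   # approx. eigenvalues of K
        U = np.sqrt(m / n) * (K_nm @ eigvec[:, keep]) / eigval[keep]   # approx. eigenvectors
        return lam, U                                  # K is approximated by U diag(lam) U^T

    # illustrative usage with an RBF kernel
    rbf = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None]) ** 2).sum(-1))
    X = np.random.default_rng(0).normal(size=(500, 3))
    lam, U = nystrom_gram_eig(X, rbf, m=50)
    K_approx = U @ np.diag(lam) @ U.T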
---
| Title: A Review of Learning Vector Quantization Classifiers
Section 1: Introduction
Description 1: Introduce the scope and objectives of the paper, highlighting the significance and advantages of Learning Vector Quantization (LVQ) classifiers.
Section 2: Taxonomy of LVQ Classifiers
Description 2: Present a taxonomy of LVQ classifiers, describing the different families and their characteristics.
Section 3: Main LVQ Learning Rules
Description 3: Describe the main learning rules of LVQ algorithms, explaining the evolutionary improvements from the original LVQ to its modern variants.
Section 4: Margin Maximization
Description 4: Discuss LVQ methods based on margin maximization, detailing their cost functions and learning rules.
Section 5: Likelihood Ratio Maximization
Description 5: Explore LVQ methods based on likelihood ratio maximization, explaining their probabilistic approaches and cost functions.
Section 6: Distance Learning
Description 6: Describe distance learning approaches in LVQ, focusing on adaptive and generalized metrics.
Section 7: Kernelization
Description 7: Explain kernel approaches in LVQ, detailing how traditional LVQ algorithms are extended using kernel functions.
Section 8: Dis-/similarities
Description 8: Discuss methods based on dis-/similarities, focusing on how LVQ can be applied to non-Euclidean spaces using relational data.
Section 9: LVQ 2.1
Description 9: Provide a detailed description of LVQ 2.1, outlining its updating rules and improvements over earlier versions.
Section 10: Results
Description 10: Present experimental results comparing various LVQ algorithms on different datasets, summarizing the performance and sensitivity of each method.
Section 11: Open Problems
Description 11: Identify and discuss open problems and challenges in the field of LVQ classifiers, proposing areas for future research.
Section 12: Conclusions
Description 12: Summarize the key findings and contributions of the paper, highlighting the strengths and limitations of different LVQ methods. |
A Survey on optimization approaches to text document clustering | 16 | ---
paper_title: Combination of Fuzzy C-means and Harmony Search Algorithms for Clustering of Text Document
paper_content:
Document clustering, an important tool for document organization and browsing, has become an active research field in the machine learning community. Fuzzy c-means, an unsupervised clustering algorithm, has been widely used for categorization problems. However, as an optimization algorithm, it easily leads to locally optimal clusters. To overcome this shortcoming, this paper introduces a hybrid approach which combines the fuzzy c-means and harmony search algorithms for clustering of text documents. First, we utilize the harmony search algorithm to find near globally optimal clusters. Then, we combine fuzzy c-means with the harmony search algorithm to achieve better clustering. Experimental results on two commonly used data sets demonstrate the effectiveness and utility of this new approach.
---
paper_title: Evaluation of text document clustering approach based on particle swarm optimization
paper_content:
Clustering, an extremely important technique in data mining, is an automatic learning technique aimed at grouping a set of objects into subsets or clusters. The goal is to create clusters that are coherent internally, but substantially different from each other. Text document clustering refers to the clustering of related text documents into groups based upon their content. It is a fundamental operation used in unsupervised document organization, text data mining, automatic topic extraction, and information retrieval. Fast and high-quality document clustering algorithms play an important role in effectively navigating, summarizing, and organizing information. The documents to be clustered can be web news articles, abstracts of research papers, etc. This paper proposes two techniques for efficient document clustering that apply a soft computing approach in the form of intelligent hybrid PSO algorithms. The proposed approach involves the partitioning algorithms Fuzzy C-Means and K-Means, each hybridized with Particle Swarm Optimization (PSO). The performance of these hybrid algorithms has been evaluated against traditional partitioning techniques (K-Means and Fuzzy C-Means).
---
paper_title: Data clustering using particle swarm optimization
paper_content:
This paper proposes two new approaches to using PSO to cluster data. It is shown how PSO can be used to find the centroids of a user specified number of clusters. The algorithm is then extended to use K-means clustering to seed the initial swarm. This second algorithm basically uses PSO to refine the clusters formed by K-means. The new PSO algorithms are evaluated on six data sets, and compared to the performance of K-means clustering. Results show that both PSO clustering techniques have much potential.
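To make the mechanics concrete, the standard PSO update used in this kind of clustering can be summarised as follows (the generic textbook form; symbols are the usual ones, not necessarily those of the paper):

    v_i <- \omega v_i + c_1 r_1 \odot (p_i - z_i) + c_2 r_2 \odot (g - z_i),        z_i <- z_i + v_i,

where z_i is the position of particle i, p_i its personal best, g the global best of the swarm, \omega the inertia weight, c_1 and c_2 acceleration constants, and r_1, r_2 vectors of uniform random numbers. For clustering, each particle's position encodes a complete set of K candidate cluster centroids, and the fitness used to update p_i and g is typically a quantization-error measure of the partition induced by those centroids.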
---
paper_title: FCM: The fuzzy c-means clustering algorithm
paper_content:
This paper transmits a FORTRAN-IV coding of the fuzzy c-means (FCM) clustering program. The FCM program is applicable to a wide variety of geostatistical data analysis problems. This program generates fuzzy partitions and prototypes for any set of numerical data. These partitions are useful for corroborating known substructures or suggesting substructure in unexplored data. The clustering criterion used to aggregate subsets is a generalized least-squares objective function. Features of this program include a choice of three norms (Euclidean, Diagonal, or Mahalanobis), an adjustable weighting factor that essentially controls sensitivity to noise, acceptance of variable numbers of clusters, and outputs that include several measures of cluster validity.
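The least-squares objective referred to above, and the alternating updates that minimise it, can be written compactly as follows (Euclidean case shown; the program also offers the Diagonal and Mahalanobis norms):

    J_m = \sum_{i=1}^{N} \sum_{k=1}^{C} u_{ik}^{m} \, \|x_i - c_k\|^2,    subject to   \sum_{k=1}^{C} u_{ik} = 1,
    u_{ik} = 1 / \sum_{j=1}^{C} ( \|x_i - c_k\| / \|x_i - c_j\| )^{2/(m-1)},        c_k = \sum_i u_{ik}^{m} x_i / \sum_i u_{ik}^{m},

where m > 1 is the fuzzifier (the "adjustable weighting factor" controlling sensitivity to noise) and the u_{ik} form the fuzzy partition matrix.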
---
paper_title: Data clustering: a review
paper_content:
Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.
---
paper_title: BIRCH: an efficient data clustering method for very large databases
paper_content:
Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs.This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle "noise" (data points that are not part of the underlying pattern) effectively.We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparisons of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.
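A key ingredient of BIRCH worth spelling out is the clustering feature (CF) that the CF-tree stores at each node: for a subcluster of N points it is the triple

    CF = (N, LS, SS),    where LS is the linear sum of the points and SS their squared sum,

from which the centroid, radius and diameter of the subcluster can be computed, and which is additive (CF_1 + CF_2 summarises the union of two subclusters), so incoming points can be absorbed incrementally without revisiting the raw data.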
---
paper_title: Hybrid PSO and GA models for Document Clustering
paper_content:
This paper presents hybrid Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches for the document clustering problem. To obtain an optimal solution using a Genetic Algorithm, operations such as selection, reproduction, and mutation are used to generate the next generations. In this case, it is possible to obtain only a local solution because chromosomes or individuals with close similarity can converge. In standard PSO, the non-oscillatory route can quickly cause a particle to stagnate, and the swarm may prematurely converge on suboptimal solutions that are not even guaranteed to be locally optimal. This work proposes hybrid models that enhance the search process by applying GA operations on stagnated particles and chromosomes. GA is combined with PSO to improve the diversity and the convergence toward the preferred solution for the document clustering problem. The efficiency of the approach is verified and tested using a set of document corpora. Our results indicate that the approaches are a feasible alternative for solving document clustering problems.
---
paper_title: Principles of soft computing
paper_content:
The CD contains the following content: 1. PowerPoint presentations (presentations are given for Chapters 1-17 and 19; MATLAB soft computing tool presentations are also included for easy reference to the basic commands); 2. Source codes in C for soft computing techniques (source codes are given for all the problems solved in Chapter 18; the programs are provided as *.txt files); 3. MATLAB source code programs (MATLAB source codes are given for the problems solved in Chapter 19; the program files are named according to their problem numbers in their respective chapters); 4. Copyright page. Install the required software before running the programs given.
---
paper_title: An improved GA and a novel PSO-GA-based hybrid algorithm
paper_content:
Inspired by the natural features of the variable size of the population, we present a variable population-size genetic algorithm (VPGA) by introducing a "dying probability" for the individuals and a "war/disease process" for the population. Based on the VPGA and the particle swarm optimization (PSO) algorithms, a novel PSO-GA-based hybrid algorithm (PGHA) is also proposed in this paper. Simulation results show that both VPGA and PGHA are effective for the optimization problems.
---
paper_title: The Bees Algorithm - A Novel Tool for Complex Optimisation Problems
paper_content:
This chapter presents a new population-based search algorithm called the Bees Algorithm (BA). This algorithm mimics the food foraging behavior of swarms of honeybees. In its basic version, the algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization. This chapter focuses on the latter. Following a description of the algorithm, the chapter presents the results obtained for a number of benchmark problems demonstrating the efficiency and robustness of the new algorithm. The results show that the algorithm can reliably handle complex multi-modal optimization problems without being trapped at local solutions. One of the drawbacks of the algorithm is the number of tunable parameters used. However, it is possible to set the parameter values by conducting a small number of trials.
---
paper_title: Ants for Document Clustering
paper_content:
The usage of computers for mass storage has become mandatory nowadays due to the World Wide Web (WWW). This has posed many challenges to Information Retrieval (IR) systems. Clustering of the available documents improves the efficiency of an IR system. The problem of clustering has become a combinatorial optimization problem in IR systems due to the exponential growth of information on the WWW. In this paper, a hybrid algorithm that combines basic Ant Colony Optimization with Tabu search is proposed. The feasibility of the proposed algorithm is tested over a few standard benchmark datasets. The experimental results reveal that the proposed algorithm yields clusters of promising quality compared to those produced by the K-means algorithm.
---
paper_title: Modern Information Retrieval
paper_content:
From the Publisher: This is a rigorous and complete textbook for a first course on information retrieval from the computer science (as opposed to a user-centred) perspective. The advent of the Internet and the enormous increase in the volume of electronically stored information have led to substantial work on IR from the computer science perspective - this book provides an up-to-date, student-oriented treatment of the subject.
---
paper_title: A vector space model for automatic indexing
paper_content:
In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.
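Since the surveyed clustering work builds on this vector-space representation, a minimal Python sketch of the weighting and matching steps is included below (tf-idf weighting and cosine similarity are used here as the common modern choice; the indexing-value computation in the cited paper itself differs):

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """docs: list of token lists. Returns one {term: weight} tf-idf vector per document."""
        n = len(docs)
        df = Counter(term for doc in docs for term in set(doc))   # document frequencies
        vectors = []
        for doc in docs:
            tf = Counter(doc)
            vec = {t: (f / len(doc)) * math.log(n / df[t]) for t, f in tf.items()}
            vectors.append(vec)
        return vectors

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # illustrative usage on three tiny tokenised documents
    docs = [["text", "document", "clustering"],
            ["document", "retrieval"],
            ["image", "retrieval"]]
    vecs = tfidf_vectors(docs)
    print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))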
---
paper_title: Fuzzy logic, neural networks, and soft computing
paper_content:
In retrospect, the year 1990 may well be viewed as the beginning of a new trend in the design of household appliances, consumer electronics, cameras, and other types of widely used consumer products. The trend in question relates to a marked increase in what might be called the Machine Intelligence Quotient (MIQ) of such products compared to what it was before 1990. Today, we have microwave ovens and washing machines that can figure out on their own what settings to use to perform their tasks optimally; cameras that come close to professional photographers in picture-taking ability; and many other products that manifest an impressive capability to reason, make intelligent decisions, and learn from experience
---
paper_title: Hybrid PSO and GA models for Document Clustering
paper_content:
This paper presents hybrid Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches for the document clustering problem. To obtain an optimal solution using a Genetic Algorithm, operations such as selection, reproduction, and mutation are used to generate the next generations. In this case, it is possible to obtain only a local solution because chromosomes or individuals with close similarity can converge. In standard PSO, the non-oscillatory route can quickly cause a particle to stagnate, and the swarm may prematurely converge on suboptimal solutions that are not even guaranteed to be locally optimal. This work proposes hybrid models that enhance the search process by applying GA operations on stagnated particles and chromosomes. GA is combined with PSO to improve the diversity and the convergence toward the preferred solution for the document clustering problem. The efficiency of the approach is verified and tested using a set of document corpora. Our results indicate that the approaches are a feasible alternative for solving document clustering problems.
---
paper_title: A Novel Document Clustering Algorithm Based On Ant Colony Optimization Algorithm
paper_content:
Document clustering based on the ant colony optimization algorithm has lately attracted the attention of many scholars throughout the globe. The aim of document clustering is to place similar content in one group, and non-similar content in separate groups. In this article, by changing the behavior model of ant movement, we attempt to improve the standard ants clustering algorithm, in which ant movement is completely random. On the one hand, we improve the algorithm's efficiency by making ant movements purposeful, and on the other hand, by changing the rules of ant movement, we provide conditions so that a carrier ant moves to a location with high similarity to the carried component, and a non-carrier ant moves to a location where a component is surrounded by dissimilar components. We tested our proposed algorithm on a set of documents extracted from the Reuters-21578 collection. Results show that the proposed algorithm presents a better average performance compared to the standard ants clustering algorithm and the K-means algorithm.
---
paper_title: Text clustering on latent semantic indexing with particle swarm optimization ( PSO ) algorithm
paper_content:
Most web users use various search engines to obtain specific information. A key factor in the success of web search engines is their ability to rapidly find good quality results for queries based on specific terms. This paper aims at retrieving more relevant documents from a huge corpus based on the required information. We propose a particle swarm optimization algorithm based on latent semantic indexing (PSO+LSI) for text clustering. The PSO family of bio-inspired algorithms has recently been successfully applied to a number of real-world clustering problems. We use an adaptive inertia weight (AIW) that provides proper exploration and exploitation in the search space. PSO can be merged with LSI to achieve better clustering accuracy and efficiency. This framework provides more relevant documents to the user and reduces the number of irrelevant documents. For all numbers of dimensions, PSO+LSI is faster than PSO+K-means using the vector space model (VSM). It takes 22.3 s for the PSO+LSI method with 1000 terms to obtain its best performance on 150 dimensions.
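A minimal sketch of the LSI step used in this kind of pipeline is shown below (plain truncated SVD with numpy; the matrix contents and the choice k=2 are illustrative only, not the settings of the paper):

    import numpy as np

    def lsi_project(term_doc, k):
        """Project documents into a k-dimensional latent semantic space (sketch).

        term_doc: terms x documents matrix (e.g. tf-idf weighted).
        Returns one k-dimensional row vector per document.
        """
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        return Vt[:k].T * s[:k]            # documents x k

    # illustrative usage: reduce a small weighted matrix to 2 latent dimensions
    A = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    doc_vecs = lsi_project(A, k=2)
    # doc_vecs can then be fed to k-means or a PSO-based clustering routine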
---
| Title: A Survey on Optimization Approaches to Text Document Clustering
Section 1: INTRODUCTION
Description 1: Provide an overview of the importance of document clustering, challenges faced, and different clustering methodologies.
Section 2: SOFT COMPUTING TECHNIQUES
Description 2: Explain what soft computing is and its relevance to text document clustering, including a brief discussion of various soft computing techniques.
Section 3: Genetic Algorithm (GA)
Description 3: Discuss the Genetic Algorithm in text document clustering, detailing its operators like crossover, mutation, and selection.
Section 4: Bees Algorithm (BA)
Description 4: Provide an overview of the Bees Algorithm, its inspiration from natural foraging behavior, and its application to document clustering.
Section 5: Particle Swarm Optimization (PSO)
Description 5: Explain the Particle Swarm Optimization technique, its principles, and its application in clustering, including defining pbest and gbest.
Section 6: Ant Colony Optimization (ACO)
Description 6: Describe the Ant Colony Optimization algorithm, its inspiration from natural ant behavior, and its usage in document clustering.
Section 7: DOCUMENT CLUSTERING
Description 7: Discuss the process of document clustering and various steps involved, such as preprocessing, encoding, and evaluation.
Section 8: Preprocessing
Description 8: Explain the preprocessing steps involved in document clustering, including the removal of stop words and stemming.
Section 9: Text Document Encoding
Description 9: Describe how text documents are encoded into document term matrices (DTM) and the importance of tf-idf weighting.
Section 10: Dimension reduction techniques
Description 10: Discuss the need for dimension reduction in document clustering and describe techniques like feature selection and feature extraction.
Section 11: Latent Semantic Indexing (LSI)
Description 11: Provide an overview of Latent Semantic Indexing (LSI), its use of Singular Value Decomposition (SVD), and its benefits in text processing.
Section 12: Similarity Measurement
Description 12: Describe different methods for computing similarity or dissimilarity between documents, with a focus on distance measures.
Section 13: Evaluation of Text Clustering
Description 13: Outline the metrics used to evaluate the quality of clustering, such as F-measure, Purity, and Entropy.
Section 14: Datasets
Description 14: Introduce different datasets commonly used for evaluating document clustering algorithms, like 20NewsGroup and Hamshahri.
Section 15: RELATED WORKS
Description 15: Summarize previous research efforts in text document clustering, especially those involving soft computing techniques.
Section 16: CONCLUSION
Description 16: Conclude with a summary of the survey's findings and suggest areas for future research. |
Towards a Comprehensive Conceptual Framework of Active Travel Behavior: a Review and Synthesis of Published Frameworks | 8 | ---
paper_title: Bikeway Networks: A Review of Effects on Cycling
paper_content:
Research linking bikeway infrastructure and cycling levels has increased significantly over the last 20 years — with the strongest growth since 2010. The research has evolved from the study of lanes and paths, to include analyses of the role of intersection treatments, and finally to studies that attempt to measure the whole bike network. Most studies suggest a positive relationship between bikeway networks or aspects of the network and cycling levels. Stated and revealed-preference studies suggest a hierarchy of cyclist and non-cyclist preferences may exist, favoring separate paths and/or lanes over cycling on roadways with traffic — particularly with high volumes of fast-moving motorized traffic. Revealed- and stated-route-choice studies indicate that intersections have negative effects on the cycling experience, but that certain features can offset this. The research correlating link and node characteristics to cycling implies that networks of such facilities would have positive effects, though very few empirical studies link complex measures of the network to cycling levels. In spite of an increase in studies and general agreement among findings, several important research gaps remain, including empirical studies using comprehensive network measures and studies of specific facility designs and new types of facilities (including intersection treatments). Improved research methods are necessary, including better sampling, longitudinal studies, greater geographic diversity, and incorporating more control variables, including policies.
---
paper_title: Estimation of the determinants of bicycle mode share for the journey to work using census data
paper_content:
A model is presented that relates the proportion of bicycle journeys to work for English and Welsh electoral wards to relevant socio-economic, transport and physical variables. A number of previous studies have exploited existing disaggregate data sets. This study uses UK 2001 census data, is based on a logistic regression model and provides complementary evidence based on aggregate data for the determinants of cycle choice. It suggests a saturation level for bicycle use of 43%. Smaller proportions cycle in wards with more females and higher car ownership. The physical condition of the highway, rainfall and temperature each have an effect on the proportion that cycles to work, but the most significant physical variable is hilliness. The proportion of bicycle route that is off-road is shown to be significant, although it displays a low elasticity (+0.049) and this contrasts with more significant changes usually forecast by models constructed from stated preference based data. Forecasting shows the trend in car ownership has a significant effect on cycle use and offsets the positive effect of the provision of off-road routes for cycle traffic but only in districts that are moderately hilly or hilly. The provision of infrastructure alone appears insufficient to engender higher levels of cycling.
---
paper_title: Evaluating the travel, physical activity and carbon impacts of a ‘natural experiment’ in the provision of new walking and cycling infrastructure: methods for the core module of the iConnect study
paper_content:
INTRODUCTION: Improving infrastructure to support walking and cycling is often regarded as fundamental to encouraging their widespread uptake. However, there is little evidence that specific provision of this kind has led to a significant increase in walking or cycling in practice, let alone wider impacts such as changes in overall physical activity or carbon emissions. Connect2 is a major new project that aims to promote walking and cycling in the UK by improving local pedestrian and cycle routes. It therefore provides a useful opportunity to contribute new evidence in this field by means of a natural experimental study. METHODS AND ANALYSIS: iConnect is an independent study that aims to integrate the perspectives of public health and transport research on the measurement and evaluation of the travel, physical activity and carbon impacts of the Connect2 programme. In this paper, the authors report the study design and methods for the iConnect core module. This comprised a cohort study of residents living within 5 km of three case study Connect2 projects in Cardiff, Kenilworth and Southampton, supported by a programme of qualitative interviews with key informants about the projects. Participants were asked to complete postal questionnaires, repeated before and after the opening of the new infrastructure, which collected data on demographic and socioeconomic characteristics, travel, car fuel purchasing and physical activity, and potential psychosocial and environmental correlates and mediators of those behaviours. In the absence of suitable no-intervention control groups, the study design drew on heterogeneity in exposure both within and between case study samples to provide for a counterfactual. ETHICS AND DISSEMINATION: The study was approved by the University of Southampton Research Ethics Committee. The findings will be disseminated through academic presentations, peer-reviewed publications and the study website (http://www.iconnect.ac.uk) and by means of a national seminar at the end of the study.
---
paper_title: Health impact assessment of active transportation: A systematic review.
paper_content:
OBJECTIVE: Walking and cycling for transportation (i.e. active transportation, AT) provide substantial health benefits from increased physical activity (PA). However, risks of injury from exposure to motorized traffic and its emissions (i.e. air pollution) exist. The objective was to systematically review studies conducting health impact assessment (HIA) of a mode shift to AT on grounds of associated health benefits and risks. METHODS: Systematic database searches of MEDLINE, Web of Science and Transportation Research International Documentation were performed by two independent researchers, augmented by bibliographic review, internet searches and expert consultation to identify peer-reviewed studies from inception to December 2014. RESULTS: Thirty studies were included, originating predominantly from Europe, but also the United States, Australia and New Zealand. They mostly comprised the HIA approaches of comparative risk assessment and cost-benefit analysis. Estimated health benefit-risk or benefit-cost ratios of a mode shift to AT ranged between -2 and 360 (median=9). Effects of increased PA contributed the most to estimated health benefits, which strongly outweighed the detrimental effects of traffic incidents and air pollution exposure on health. CONCLUSION: Despite different HIA methodologies being applied with distinctive assumptions on key parameters, AT can provide substantial net health benefits, irrespective of geographical context.
---
paper_title: Similarities in Attitudes and Norms and the Effect on Bicycle Commuting: Evidence from the Bicycle Cities Davis and Delft
paper_content:
Owing to its beneficial effects, many governments encourage bicycle use for commuting. In search of effective strategies, they often study best practices from elsewhere. However, in order to assess the likely success of transferring measures from one city or country to another, an accurate comparison of the bicycling context is needed. This article explores the similarities and differences in attitudes and beliefs about the decision to commute by bicycle to work in two bicycling-oriented cities: Delft, the Netherlands, and Davis, California, in the U.S. Because bicycling conditions are good in both cities, it is possible to explore the role that attitudes play in the decision to cycle to work. Analyses indicate that beliefs about safety and the importance attached to environmental benefits differ between the cities. Social norms about cycling are important in both cities, but residents in Davis are more often confronted with negative reactions to cycling. Similarities are found in beliefs towards the health benefits of cycling. Strategies successful in one city in encouraging cycling by targeting or leveraging health therefore offer promise for the other city. This exploration provides an important starting point for large-sample comparative studies of attitudes towards bicycle commuting.
---
paper_title: Associations of health, physical activity and weight status with motorised travel and transport carbon dioxide emissions: a cross-sectional, observational study
paper_content:
Motorised travel and associated carbon dioxide (CO2) emissions generate substantial health costs; in the case of motorised travel, this may include contributing to rising obesity levels. Obesity has in turn been hypothesised to increase motorised travel and/or CO2 emissions, both because heavier people may use motorised travel more and because heavier people may choose larger and less fuel-efficient cars. These hypothesised associations have not been examined empirically, however, nor has previous research examined associations with other health characteristics. Our aim was therefore to examine how and why weight status, health, and physical activity are associated with transport CO2 emissions. 3463 adults completed questionnaires in the baseline iConnect survey at three study sites in the UK, reporting their health, weight, height and past-week physical activity. Seven-day recall instruments were used to assess travel behaviour and, together with data on car characteristics, were used to estimate CO2 emissions. We used path analysis to examine the extent to which active travel, motorised travel and car engine size explained associations between health characteristics and CO2 emissions. CO2 emissions were higher in overweight or obese participants (multivariable standardized probit coefficients 0.16, 95% CI 0.08 to 0.25 for overweight vs. normal weight; 0.16, 95% CI 0.04 to 0.28 for obese vs. normal weight). Lower active travel and, particularly for obesity, larger car engine size explained 19-31% of this effect, but most of the effect was directly explained by greater distance travelled by motor vehicles. Walking for recreation and leisure-time physical activity were associated with higher motorised travel distance and therefore higher CO2 emissions, while active travel was associated with lower CO2 emissions. Poor health and illness were not independently associated with CO2 emissions. Establishing the direction of causality between weight status and travel behaviour requires longitudinal data, but the association with engine size suggests that there may be at least some causal effect of obesity on CO2 emissions. More generally, transport CO2 emissions are associated in different ways with different health-related characteristics. These include associations between health goods and environmental harms (recreational physical activity and high emissions), indicating that environment-health ‘co-benefits’ cannot be assumed. Instead, attention should also be paid to identifying and mitigating potential areas of tension, for example by promoting low-carbon recreational physical activity.
---
paper_title: The impact of transportation infrastructure on bicycling injuries and crashes: a review of the literature
paper_content:
Background: Bicycling has the potential to improve fitness, diminish obesity, and reduce noise, air pollution, and greenhouse gases associated with travel. However, bicyclists incur a higher risk of injuries requiring hospitalization than motor vehicle occupants. Therefore, understanding ways of making bicycling safer and increasing rates of bicycling are important to improving population health. There is a growing body of research examining transportation infrastructure and the risk of injury to bicyclists. Methods: We reviewed studies of the impact of transportation infrastructure on bicyclist safety. The results were tabulated within two categories of infrastructure, namely that at intersections (e.g. roundabouts, traffic lights) or between intersections on "straightaways" (e.g. bike lanes or paths). To assess safety, studies examining the following outcomes were included: injuries; injury severity; and crashes (collisions and/or falls). Results: The literature to date on transportation infrastructure and cyclist safety is limited by the incomplete range of facilities studied and difficulties in controlling for exposure to risk. However, evidence from the 23 papers reviewed (eight that examined intersections and 15 that examined straightaways) suggests that infrastructure influences injury and crash risk. Intersection studies focused mainly on roundabouts. They found that multi-lane roundabouts can significantly increase risk to bicyclists unless a separated cycle track is included in the design. Studies of straightaways grouped facilities into few categories, such that facilities with potentially different risks may have been classified within a single category. Results to date suggest that sidewalks and multi-use trails pose the highest risk, major roads are more hazardous than minor roads, and the presence of bicycle facilities (e.g. on-road bike routes, on-road marked bike lanes, and off-road bike paths) was associated with the lowest risk. Conclusion: Evidence is beginning to accumulate that purpose-built bicycle-specific facilities reduce crashes and injuries among cyclists, providing the basis for initial transportation engineering guidelines for cyclist safety. Street lighting, paved surfaces, and low-angled grades are additional factors that appear to improve cyclist safety. Future research examining a greater variety of infrastructure would allow development of more detailed guidelines.
---
paper_title: Estimating Bicycling and Walking for Planning and Project Development: A Guidebook
paper_content:
This guidebook contains methods and tools for practitioners to estimate bicycling and walking demand as part of regional-, corridor-, or project-level analyses. The methods are sensitive to key planning factors, including bicycle and pedestrian infrastructure, land use and urban design, topography, and sociodemographic characteristics. The planning tools presented in this guidebook include some entirely new methods as well as some existing methods found to have useful properties for particular applications. The tools take advantage of existing data and the capabilities present in GIS methods to create realistic measures of accessibility which are a critical determinant of bicycle, pedestrian, and even transit mode choice. The publication includes a CD-ROM (CRP-CD-148) containing a GIS Walk Accessibility Model, spreadsheets, and the contractor’s final report, which documents the research and tools that operationalize the methods described in the guidebook. The CD-ROM is also available for download from TRB’s website as an ISO image. The guidebook should be of value to transportation practitioners either directly interested in forecasting bicycling or walking activity levels or accounting for the impact of bicycle or pedestrian activity in support of broader transportation and land use planning issues.
---
paper_title: Physical Activity through Sustainable Transport Approaches (PASTA): protocol for a multi-centre, longitudinal study
paper_content:
Physical inactivity is one of the leading risk factors for non-communicable diseases, yet many are not sufficiently active. The Physical Activity through Sustainable Transport Approaches (PASTA) study aims to better understand active mobility (walking and cycling for transport solely or in combination with public transport) as an innovative approach to integrate physical activity into individuals’ everyday lives. The PASTA study will collect data of multiple cities in a longitudinal cohort design to study correlates of active mobility, its effect on overall physical activity, crash risk and exposure to traffic-related air pollution. A set of online questionnaires incorporating gold standard approaches from the physical activity and transport fields have been developed, piloted and are now being deployed in a longitudinal study in seven European cities (Antwerp, Barcelona, London, Oerebro, Rome, Vienna, Zurich). In total, 14000 adults are being recruited (2000 in each city). A first questionnaire collects baseline information; follow-up questionnaires sent every 13 days collect prospective data on travel behaviour, levels of physical activity and traffic safety incidents. Self-reported data will be validated with objective data in subsamples using conventional and novel methods. Accelerometers, GPS and tracking apps record routes and activity. Air pollution and physical activity are measured to study their combined effects on health biomarkers. Exposure-adjusted crash risks will be calculated for active modes, and crash location audits are performed to study the role of the built environment. Ethics committees in all seven cities have given independent approval for the study. The PASTA study collects a wealth of subjective and objective data on active mobility and physical activity. This will allow the investigation of numerous correlates of active mobility and physical activity using a data set that advances previous efforts in its richness, geographical coverage and comprehensiveness. Results will inform new health impact assessment models and support efforts to promote and facilitate active mobility in cities.
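To make the exposure-adjusted crash risk mentioned above concrete, the sketch below computes a crash rate per million kilometres cycled with an exact Poisson confidence interval. It is not code from the PASTA study; the counts and distances are hypothetical and scipy is assumed to be available.

```python
from scipy.stats import chi2

def crash_rate_per_million_km(crashes, person_km, alpha=0.05):
    """Exposure-adjusted crash rate with an exact (Garwood) Poisson confidence interval.

    crashes   -- observed number of crashes (a count)
    person_km -- total distance cycled by the cohort, in kilometres
    """
    exposure = person_km / 1e6  # exposure expressed in millions of km
    rate = crashes / exposure
    # Exact Poisson limits for the expected count, then scaled by exposure.
    lower = chi2.ppf(alpha / 2, 2 * crashes) / 2 if crashes > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (crashes + 1)) / 2
    return rate, lower / exposure, upper / exposure

# Hypothetical illustration: 42 crashes observed over 3.5 million km cycled.
print(crash_rate_per_million_km(42, 3_500_000))
```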
---
paper_title: Pedestrians in Regional Travel Demand Forecasting Models: State of the Practice
paper_content:
It has been nearly 25 years since non-motorized modes and non-motorized-specific built environment measures were first included in the regional travel demand models of metropolitan planning organizations (MPOs). Such modeling practices have evolved considerably as data collection and analysis methods improve, decision-makers demand more policy-responsive travel forecasting tools, and walking and cycling grow in popularity. As MPOs look to enhance their models’ representations of pedestrian travel, the need to understand current and emerging practice is great. This paper presents a comprehensive review of the practice of representing walking in MPO travel models. Based on a review of model documentation, it was determined that – as of mid-2012 – 63% (30) of the 48 largest MPOs include non-motorized travel in their regional models, while 47% (14) of those also distinguish between walk and bicycle modes. The modeling frameworks, model structures, and variables used for pedestrian and non-motorized regional modeling are also described and discussed. A survey of lead MPO modelers revealed barriers to modeling non-motorized travel, including insufficient travel survey records, but also innovations being implemented, including smaller zones and non-motorized network assignment. Finally, best practices in representing pedestrians in regional travel demand forecasting models are presented and possible future advances are discussed.
---
paper_title: AN ECOLOGICAL APPROACH TO CREATING ACTIVE LIVING COMMUNITIES
paper_content:
The thesis of this article is that multilevel interventions based on ecological models and targeting individuals, social environments, physical environments, and policies must be implemented to achieve population change in physical activity. A model is proposed that identifies potential environmental and policy influences on four domains of active living: recreation, transport, occupation, and household. Multilevel research and interventions require multiple disciplines to combine concepts and methods to create new transdisciplinary approaches. The contributions being made by a broad range of disciplines are summarized. Research to date supports a conclusion that there are multiple levels of influence on physical activity, and the active living domains are associated with different environmental variables. Continued research is needed to provide detailed findings that can inform improved designs of communities, transportation systems, and recreation facilities. Collaborations with policy resear...
---
paper_title: Evaluating the travel, physical activity and carbon impacts of a ‘natural experiment’ in the provision of new walking and cycling infrastructure: methods for the core module of the iConnect study
paper_content:
INTRODUCTION: Improving infrastructure to support walking and cycling is often regarded as fundamental to encouraging their widespread uptake. However, there is little evidence that specific provision of this kind has led to a significant increase in walking or cycling in practice, let alone wider impacts such as changes in overall physical activity or carbon emissions. Connect2 is a major new project that aims to promote walking and cycling in the UK by improving local pedestrian and cycle routes. It therefore provides a useful opportunity to contribute new evidence in this field by means of a natural experimental study. METHODS AND ANALYSIS: iConnect is an independent study that aims to integrate the perspectives of public health and transport research on the measurement and evaluation of the travel, physical activity and carbon impacts of the Connect2 programme. In this paper, the authors report the study design and methods for the iConnect core module. This comprised a cohort study of residents living within 5 km of three case study Connect2 projects in Cardiff, Kenilworth and Southampton, supported by a programme of qualitative interviews with key informants about the projects. Participants were asked to complete postal questionnaires, repeated before and after the opening of the new infrastructure, which collected data on demographic and socioeconomic characteristics, travel, car fuel purchasing and physical activity, and potential psychosocial and environmental correlates and mediators of those behaviours. In the absence of suitable no-intervention control groups, the study design drew on heterogeneity in exposure both within and between case study samples to provide for a counterfactual. ETHICS AND DISSEMINATION: The study was approved by the University of Southampton Research Ethics Committee. The findings will be disseminated through academic presentations, peer-reviewed publications and the study website (http://www.iconnect.ac.uk) and by means of a national seminar at the end of the study.
---
paper_title: The theory of planned behavior
paper_content:
Research dealing with various aspects of the theory of planned behavior (Ajzen, 1985, 1987) is reviewed, and some unresolved issues are discussed. In broad terms, the theory is found to be well supported by empirical evidence. Intentions to perform behaviors of different kinds can be predicted with high accuracy from attitudes toward the behavior, subjective norms, and perceived behavioral control; and these intentions, together with perceptions of behavioral control, account for considerable variance in actual behavior. Attitudes, subjective norms, and perceived behavioral control are shown to be related to appropriate sets of salient behavioral, normative, and control beliefs about the behavior, but the exact nature of these relations is still uncertain. Expectancy-value formulations are found to be only partly successful in dealing with these relations. Optimal rescaling of expectancy and value measures is offered as a means of dealing with measurement limitations. Finally, inclusion of past behavior in the prediction equation is shown to provide a means of testing the theory's sufficiency, another issue that remains unresolved. The limited available evidence concerning this question shows that the theory is predicting behavior quite well in comparison to the ceiling imposed by behavioral reliability. © 1991 Academic Press, Inc.
---
paper_title: Correlates of physical activity: why are some people physically active and others not?
paper_content:
Physical inactivity is an important contributor to non-communicable diseases in countries of high income, and increasingly so in those of low and middle income. Understanding why people are physically active or inactive contributes to evidence-based planning of public health interventions, because effective programmes will target factors known to cause inactivity. Research into correlates (factors associated with activity) or determinants (those with a causal relationship) has burgeoned in the past two decades, but has mostly focused on individual-level factors in high-income countries. It has shown that age, sex, health status, self-efficacy, and motivation are associated with physical activity. Ecological models take a broad view of health behaviour causation, with the social and physical environment included as contributors to physical inactivity, particularly those outside the health sector, such as urban planning, transportation systems, and parks and trails. New areas of determinants research have identified genetic factors contributing to the propensity to be physically active, and evolutionary factors and obesity that might predispose to inactivity, and have explored the longitudinal tracking of physical activity throughout life. An understanding of correlates and determinants, especially in countries of low and middle income, could reduce the effect of future epidemics of inactivity and contribute to effective global prevention of non-communicable diseases.
---
paper_title: The Model of Children’s Active Travel (M‐CAT): A conceptual framework for examining factors influencing children’s active travel
paper_content:
Background: The current decline in children's participation in physical activity has attracted the attention of those concerned with children's health and wellbeing. A sustainable approach to ensuring children engage in adequate amounts of physical activity is to support their involvement in incidental activity such as active travel (AT), which includes walking or riding a bicycle to or from local destinations, such as school or a park. Understanding how we can embed physical activity into children's everyday occupational roles is a way in which occupational therapists can contribute to this important health promotion agenda. Aims: To present a simple, coherent and comprehensive framework as a means of examining factors influencing children's AT. Methods: Based on current literature, this conceptual framework incorporates the observable environment, parents' perceptions and decisions regarding their children's AT, as well as children's own perceptions and decisions regarding AT within their family contexts across time. Conclusion: The Model of Children's Active Travel (M‐CAT) highlights the complex and dynamic nature of factors impacting the decision‐making process of parents and children in relation to children's AT. The M‐CAT offers a way forward for researchers to examine variables influencing active travel in a systematic manner. Future testing of the M‐CAT will consolidate understanding of the factors underlying the decision‐making process which occurs within families in the context of their communities.
---
paper_title: Environmental determinants of active travel in youth: A review and framework for future research
paper_content:
Background: Many youth fail to meet the recommended guidelines for physical activity. Walking and cycling, forms of active travel, have the potential to contribute significantly towards overall physical activity levels. Recent research examining the associations between physical activity and the environment has shown that environmental factors play a role in determining behaviour in children and adolescents. However, links between the environment and active travel have received less attention.
---
paper_title: Estimation of the determinants of bicycle mode share for the journey to work using census data
paper_content:
A model is presented that relates the proportion of bicycle journeys to work for English and Welsh electoral wards to relevant socio-economic, transport and physical variables. A number of previous studies have exploited existing disaggregate data sets. This study uses UK 2001 census data, is based on a logistic regression model and provides complementary evidence based on aggregate data for the determinants of cycle choice. It suggests a saturation level for bicycle use of 43%. Smaller proportions cycle in wards with more females and higher car ownership. The physical condition of the highway, rainfall and temperature each have an effect on the proportion that cycles to work, but the most significant physical variable is hilliness. The proportion of bicycle route that is off-road is shown to be significant, although it displays a low elasticity (+0.049) and this contrasts with more significant changes usually forecast by models constructed from stated preference based data. Forecasting shows the trend in car ownership has a significant effect on cycle use and offsets the positive effect of the provision of off-road routes for cycle traffic but only in districts that are moderately hilly or hilly. The provision of infrastructure alone appears insufficient to engender higher levels of cycling.
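The aggregate modelling approach described above can be sketched as follows. This is not the authors' model; it is a minimal illustration, using simulated ward-level data, of regressing the log-odds of the cycle-to-work share on ward characteristics and then deriving an elasticity-style summary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wards = 200

# Hypothetical ward-level predictors (standardised for simplicity).
hilliness = rng.normal(size=n_wards)
car_ownership = rng.normal(size=n_wards)
offroad_route_share = rng.uniform(0, 1, size=n_wards)

# Hypothetical observed proportion cycling to work, kept inside (0, 1) so the logit exists.
true_logit = -2.5 - 0.8 * hilliness - 0.5 * car_ownership + 0.3 * offroad_route_share
p_obs = np.clip(1 / (1 + np.exp(-true_logit)) + rng.normal(0, 0.01, n_wards), 1e-3, 1 - 1e-3)

# Fit a logistic-form model by regressing the empirical log-odds on the predictors.
X = np.column_stack([np.ones(n_wards), hilliness, car_ownership, offroad_route_share])
coef, *_ = np.linalg.lstsq(X, np.log(p_obs / (1 - p_obs)), rcond=None)
print(dict(zip(["intercept", "hilliness", "car_ownership", "offroad_share"], coef.round(3))))

# Elasticity of the predicted share with respect to the off-road route share, at the mean ward:
# for a logit model, elasticity = beta * x * (1 - p).
x_mean = X.mean(axis=0)
p_hat = 1 / (1 + np.exp(-x_mean @ coef))
print("approx. elasticity:", round(coef[3] * x_mean[3] * (1 - p_hat), 3))
```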
---
paper_title: ATTITUDES TO CYCLING: A QUALITATIVE STUDY AND CONCEPTUAL FRAMEWORK
paper_content:
It is known that attitudinal factors influence the willingness of individuals to use bicycles. A qualitative study was carried out to assess attitudes of cyclists and non-cyclists to cycling, with emphasis on exploring strategies that would encourage people to cycle. Thirteen focus groups were held, including two extended creativity sessions, in five towns. Projective techniques were used to de-layer responses and explore rationalisations and misperceptions. Stated preference exercises were also undertaken to verify and rank responses to cycle promotion strategies, including car restraint policies. A parallel study drew on experience from public health promotion and theories of motivation and behaviour change to devise a conceptual framework for interpreting and structuring the results from qualitative studies. Conclusions are drawn on the ways in which cycling should be promoted in the UK. (A)
---
paper_title: Economic interventions to promote physical activity: application of the SLOTH model.
paper_content:
Physical inactivity is responsible for major health and economic costs in the United States. Despite widespread recognition of the scope and importance of the problem of physical inactivity, only modest progress has been made in improving overall physical activity in the U.S. population. This paper applies a combined economic and public health perspective to better understand physical activity behavior and to guide a search for promising new economically oriented interventions to increase physical activity at the population level. This perspective is operationalized as the SLOTH model-a time-budget model incorporating Sleep, Leisure, Occupation, Transportation, and Home-based activities. Key economic forces that may influence individuals' choices about utilization of time and physical activity are identified. Potential interventions are proposed in response to each of the important forces and are evaluated on four criteria: (1) economic efficiency, (2) equity, (3) effectiveness, and (4) feasibility. The SLOTH model provides guidance regarding interventions that might increase physical activity in each of the four nonsleep domains. Economic intervention strategies are proposed and compared to economic and public health criteria. The results provide a starting point for selecting and evaluating potentially effective and feasible economic interventions that might be implemented as part of a larger effort to address the health crisis of inactive lifestyles and obesity.
---
paper_title: AN ECOLOGICAL APPROACH TO CREATING ACTIVE LIVING COMMUNITIES
paper_content:
The thesis of this article is that multilevel interventions based on ecological models and targeting individuals, social environments, physical environments, and policies must be implemented to achieve population change in physical activity. A model is proposed that identifies potential environmental and policy influences on four domains of active living: recreation, transport, occupation, and household. Multilevel research and interventions require multiple disciplines to combine concepts and methods to create new transdisciplinary approaches. The contributions being made by a broad range of disciplines are summarized. Research to date supports a conclusion that there are multiple levels of influence on physical activity, and the active living domains are associated with different environmental variables. Continued research is needed to provide detailed findings that can inform improved designs of communities, transportation systems, and recreation facilities. Collaborations with policy resear...
---
paper_title: Evaluating the travel, physical activity and carbon impacts of a ‘natural experiment’ in the provision of new walking and cycling infrastructure: methods for the core module of the iConnect study
paper_content:
INTRODUCTION: Improving infrastructure to support walking and cycling is often regarded as fundamental to encouraging their widespread uptake. However, there is little evidence that specific provision of this kind has led to a significant increase in walking or cycling in practice, let alone wider impacts such as changes in overall physical activity or carbon emissions. Connect2 is a major new project that aims to promote walking and cycling in the UK by improving local pedestrian and cycle routes. It therefore provides a useful opportunity to contribute new evidence in this field by means of a natural experimental study. METHODS AND ANALYSIS: iConnect is an independent study that aims to integrate the perspectives of public health and transport research on the measurement and evaluation of the travel, physical activity and carbon impacts of the Connect2 programme. In this paper, the authors report the study design and methods for the iConnect core module. This comprised a cohort study of residents living within 5 km of three case study Connect2 projects in Cardiff, Kenilworth and Southampton, supported by a programme of qualitative interviews with key informants about the projects. Participants were asked to complete postal questionnaires, repeated before and after the opening of the new infrastructure, which collected data on demographic and socioeconomic characteristics, travel, car fuel purchasing and physical activity, and potential psychosocial and environmental correlates and mediators of those behaviours. In the absence of suitable no-intervention control groups, the study design drew on heterogeneity in exposure both within and between case study samples to provide for a counterfactual. ETHICS AND DISSEMINATION: The study was approved by the University of Southampton Research Ethics Committee. The findings will be disseminated through academic presentations, peer-reviewed publications and the study website (http://www.iconnect.ac.uk) and by means of a national seminar at the end of the study.
---
paper_title: Theory of routine mode choice decisions: An operational framework to increase sustainable transportation
paper_content:
A growing number of communities in the United States are seeking to improve the sustainability of their transportation systems by shifting routine automobile travel to walking and bicycling. In order to identify strategies that may be most effective at increasing pedestrian and bicycle transportation in a specific local context, practitioners need a greater understanding of the underlying thought process that people use to select travel modes. Previous research from the travel behavior and psychology fields provides the foundation for a five-step, operational Theory of Routine Mode Choice Decisions. Walking and bicycling could be promoted through each of the five steps: awareness and availability (e.g., offer individual marketing programs), basic safety and security (e.g., make pedestrian and bicycle facility improvements and increase education and enforcement efforts), convenience and cost (e.g., institute higher-density, mixed land uses, and limited, more expensive automobile parking), enjoyment (e.g., plant street trees and increase awareness of non-motorized transportation benefits), and habit (e.g., target information about sustainable transportation options to people making key life changes). The components of the theory are supported by in-depth interview responses from the San Francisco Bay Area.
---
paper_title: The theory of planned behavior
paper_content:
Research dealing with various aspects of the theory of planned behavior (Ajzen, 1985, 1987) is reviewed, and some unresolved issues are discussed. In broad terms, the theory is found to be well supported by empirical evidence. Intentions to perform behaviors of different kinds can be predicted with high accuracy from attitudes toward the behavior, subjective norms, and perceived behavioral control; and these intentions, together with perceptions of behavioral control, account for considerable variance in actual behavior. Attitudes, subjective norms, and perceived behavioral control are shown to be related to appropriate sets of salient behavioral, normative, and control beliefs about the behavior, but the exact nature of these relations is still uncertain. Expectancy-value formulations are found to be only partly successful in dealing with these relations. Optimal rescaling of expectancy and value measures is offered as a means of dealing with measurement limitations. Finally, inclusion of past behavior in the prediction equation is shown to provide a means of testing the theory's sufficiency, another issue that remains unresolved. The limited available evidence concerning this question shows that the theory is predicting behavior quite well in comparison to the ceiling imposed by behavioral reliability. © 1991 Academic Press, Inc.
---
paper_title: Evidence-based Policy: In Search of a Method:
paper_content:
Evaluation research is tortured by time constraints. The policy cycle revolves quicker than the research cycle, with the result that ‘real time’ evaluations often have little influence on policy making. As a result, the quest for evidence-based policy has turned increasingly to systematic reviews of the results of previous inquiries in the relevant policy domain. However, this shifting of the temporal frame for evaluation is in itself no guarantee of success. Evidence, whether new or old, never speaks for itself. Accordingly, there is debate about the best strategy of marshalling bygone research results into the policy process. This article joins the imbroglio by examining the logic of the two main strategies of systematic review: ‘meta-analysis’ and ‘narrative review’. Whilst they are often presented as diametrically opposed perspectives, this article argues that they share common limitations in their understanding of how to provide a template for impending policy decisions. This review provides the back...
---
paper_title: Choice of Travel Mode in the Theory of Planned Behavior: The Roles of Past Behavior, Habit, and Reasoned Action
paper_content:
Relying on the theory of planned behavior (Ajzen, 1991), a longitudinal study investigated the effects of an intervention (the introduction of a prepaid bus ticket) on increased bus use among college students. In this context, the logic of the proposition that past behavior is the best predictor of later behavior was also examined. The intervention was found to influence attitudes toward bus use, subjective norms, and perceptions of behavioral control and, consistent with the theory, to affect intentions and behavior in the desired direction. Furthermore, the theory afforded accurate prediction of intention and behavior both before and after the intervention. In contrast, a measure of past behavior improved prediction of travel mode prior to the intervention, but lost its predictive utility for behavior following the intervention. In a test of the proposition that the effect of past on later behavior is due to habit formation, an independent measure of habit failed to mediate the effects of past on later behavi...
---
paper_title: The relative influence of urban form on a child's travel mode to school
paper_content:
Walking and bicycling to school has decreased in recent years, while private vehicle travel has increased. Policies and programs focusing on urban form improvements such as Safe Routes to School were created to address this mode shift and possible related children's health issues, despite minimal research showing the influence of urban form on children's travel and health. This research examined: (1) the influence of objectively measured urban form on travel mode to school and; (2) the magnitude of influence urban form and non-urban form factors have on children's travel behavior. The results of the analysis support the hypothesis that urban form is important but not the sole factor that influences school travel mode choice. Other factors may be equally important such as perceptions of neighborhood safety and traffic safety, household transportation options, and social/cultural norms. Odds ratios indicate that the magnitude of influence of these latter factors is greater than that of urban form; however, model improvement tests found that urban form contributed significantly to model fit. This research provides evidence that urban form is an influential factor in non-motorized travel behavior and therefore is a possible intervention to target through programs such as Safe Routes to School.
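A minimal sketch of the kind of analysis described above (not the authors' code): nested logistic regression models for walking to school with and without urban-form variables, odds ratios from the full model, and a likelihood-ratio test of model improvement. The data and variable names are hypothetical; statsmodels and scipy are assumed to be available.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 500

# Hypothetical child-level data.
distance_km = rng.exponential(1.5, n)           # urban form: distance to school
intersection_density = rng.normal(60, 15, n)    # urban form: street connectivity
parent_safety_concern = rng.integers(0, 2, n)   # non-urban-form factor
household_cars = rng.integers(0, 3, n)          # non-urban-form factor

logit = (0.5 - 1.0 * distance_km + 0.02 * intersection_density
         - 0.8 * parent_safety_concern - 0.4 * household_cars)
walks = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_reduced = sm.add_constant(np.column_stack([parent_safety_concern, household_cars]))
X_full = sm.add_constant(np.column_stack(
    [parent_safety_concern, household_cars, distance_km, intersection_density]))

m_reduced = sm.Logit(walks, X_reduced).fit(disp=False)
m_full = sm.Logit(walks, X_full).fit(disp=False)

print("odds ratios (full model):", np.exp(m_full.params).round(2))

# Likelihood-ratio test: does adding urban form significantly improve model fit?
lr_stat = 2 * (m_full.llf - m_reduced.llf)
p_value = chi2.sf(lr_stat, df=X_full.shape[1] - X_reduced.shape[1])
print(f"LR statistic = {lr_stat:.1f}, p = {p_value:.4g}")
```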
---
paper_title: To Walk or Not to Walk?: The Hierarchy of Walking Needs.
paper_content:
The multitude of quality of life problems associated with declining walking rates has impelled researchers from various disciplines to identify factors related to this behavior change. Currently, this body of research is in need of a transdisciplinary, multilevel theoretical model that can help explain how individual, group, regional, and physical-environmental factors all affect physical activity behaviors. To address this gap, this article offers a social-ecological model of walking that presents a dynamic, causal model of the decision-making process. Within the model, a hierarchy of walking needs operates and organizes five levels of needs hierarchically and presents them as antecedents within the walking decision-making process. This model can (a) serve as a framework by which to understand the relative significance of the cornucopia of variables identified by existing research, (b) offer hypotheses for how these factors affect people's decision to walk, and (c) help to guide future research and practice.
---
paper_title: An intervention to promote walking amongst the general population based on an 'extended' theory of planned behaviour: A waiting list randomised controlled trial
paper_content:
Theory of planned behaviour (TPB) studies have identified perceived behavioural control (PBC) as the key determinant of walking intentions. The present study investigated whether an intervention designed to alter PBC and create walking plans increased TPB measures concerning walking more, planning and objectively measured walking. One hundred and thirty UK adults participated in a waiting-list randomised controlled trial. The intervention consisted of strategies to boost PBC, plus volitional strategies to enact walking intentions. All TPB constructs were measured, along with self-reported measures of action planning and walking, and an objective pedometer measure of time spent walking. The intervention increased PBC, attitudes, intentions and objectively measured walking from 20 to 32 min a day. The effects of the intervention on intentions and behaviour were mediated by PBC, although the effects on PBC were not mediated by control beliefs. At 6 weeks follow-up, participants maintained their increases in walking. The findings of this study partially support the proposed causal nature of the extended TPB as a framework for developing and evaluating health behaviour change interventions. This is the first study using the TPB to develop, design and evaluate the components of an intervention which increased objectively measured behaviour, with effects mediated by TPB variables.
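The mediation claim above (intervention → perceived behavioural control → walking) can be illustrated with a simple product-of-coefficients sketch. This is not the study's analysis; the data and effect sizes below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 130  # same order of magnitude as the trial; all values are hypothetical

treatment = rng.integers(0, 2, n)                        # 1 = intervention group
pbc = 3.0 + 0.6 * treatment + rng.normal(0, 1, n)        # perceived behavioural control
walking = 20 + 5 * treatment + 8 * pbc + rng.normal(0, 10, n)  # minutes/day walked

def ols(y, X):
    """Ordinary least squares; returns [intercept, slope(s)]."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(pbc, treatment)[1]                                # treatment -> mediator path
b = ols(walking, np.column_stack([treatment, pbc]))[2]    # mediator -> outcome, adjusted for treatment
c_total = ols(walking, treatment)[1]                      # total effect of treatment
indirect = a * b                                          # product-of-coefficients indirect effect

print(f"total effect: {c_total:.1f} min/day, indirect effect via PBC: {indirect:.1f} min/day")
```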
---
paper_title: Developing a framework for assessment of the environmental determinants of walking and cycling.
paper_content:
The focus for interventions and research on physical activity has moved away from vigorous activity to moderate-intensity activities, such as walking. In addition, a social ecological approach to physical activity research and practice is recommended. This approach considers the influence of the environment and policies on physical activity. Although there is limited empirical published evidence related to the features of the physical environment that influence physical activity, urban planning and transport agencies have developed policies and strategies that have the potential to influence whether people walk or cycle in their neighbourhood. This paper presents the development of a framework of the potential environmental influences on walking and cycling based on published evidence and policy literature, interviews with experts and a Delphi study. The framework includes four features: functional, safety, aesthetic and destination; as well as the hypothesised factors that contribute to each of these features of the environment. In addition, the Delphi experts determined the perceived relative importance of these factors. Based on these factors, a data collection tool will be developed and the frameworks will be tested through the collection of environmental information on neighbourhoods, where data on the walking and cycling patterns have been collected previously. Identifying the environmental factors that influence walking and cycling will allow the inclusion of a public health perspective as well as those of urban planning and transport in the design of built environments.
---
paper_title: An Applied Ecological Framework for Evaluating Infrastructure to Promote Walking and Cycling: The iConnect Study
paper_content:
Improving infrastructure for walking and cycling is increasingly recommended as a means to promote physical activity, prevent obesity, and reduce traffic congestion and carbon emissions. However, limited evidence from intervention studies exists to support this approach. Drawing on classic epidemiological methods, psychological and ecological models of behavior change, and the principles of realistic evaluation, we have developed an applied ecological framework by which current theories about the behavioral effects of environmental change may be tested in heterogeneous and complex intervention settings. Our framework guides study design and analysis by specifying the most important data to be collected and relations to be tested to confirm or refute specific hypotheses and thereby refine the underlying theories.
---
paper_title: Change in active travel and changes in recreational and total physical activity in adults: longitudinal findings from the iConnect study
paper_content:
To better understand the health benefits of promoting active travel, it is important to understand the relationship between a change in active travel and changes in recreational and total physical activity. These analyses, carried out in April 2012, use longitudinal data from 1628 adult respondents (mean age 54 years; 47% male) in the UK-based iConnect study. Travel and recreational physical activity were measured using detailed seven-day recall instruments. Adjusted linear regression models were fitted with change in active travel defined as ‘decreased’ (<−15 min/week), ‘maintained’ (±15 min/week) or ‘increased’ (>15 min/week) as the primary exposure variable and changes in (a) recreational and (b) total physical activity (min/week) as the primary outcome variables. Active travel increased in 32% (n=529), was maintained in 33% (n=534) and decreased in 35% (n=565) of respondents. Recreational physical activity decreased in all groups but this decrease was not greater in those whose active travel increased. Conversely, changes in active travel were associated with commensurate changes in total physical activity. Compared with those whose active travel remained unchanged, total physical activity decreased by 176.9 min/week in those whose active travel had decreased (adjusted regression coefficient −154.9, 95% CI −195.3 to −114.5) and was 112.2 min/week greater among those whose active travel had increased (adjusted regression coefficient 135.1, 95% CI 94.3 to 175.9). An increase in active travel was associated with a commensurate increase in total physical activity and not a decrease in recreational physical activity.
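A rough sketch of the analytic approach this abstract describes (not the iConnect code): categorise change in active travel into decreased / maintained / increased and regress change in total physical activity on those categories, with ‘maintained’ as the reference group and simple covariate adjustment. All data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1628  # matches the reported sample size; the data themselves are simulated

change_active_travel = rng.normal(0, 60, n)   # min/week change in active travel
age = rng.normal(54, 12, n)
male = rng.integers(0, 2, n)

# Categorise the exposure as described: decreased / maintained / increased.
decreased = (change_active_travel < -15).astype(float)
increased = (change_active_travel > 15).astype(float)
# 'maintained' (within +/-15 min/week) is the implicit reference category.

# Hypothetical outcome: change in total physical activity (min/week).
change_total_pa = change_active_travel + rng.normal(0, 80, n)

# Adjusted linear regression via least squares.
X = np.column_stack([np.ones(n), decreased, increased, age, male])
beta, *_ = np.linalg.lstsq(X, change_total_pa, rcond=None)
print({"decreased_vs_maintained": round(beta[1], 1),
       "increased_vs_maintained": round(beta[2], 1)})
```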
---
paper_title: ACTIVE TRAVEL BEHAVIOR
paper_content:
Physical inactivity has become a dominant feature of most Americans' lives over the past quarter century. This has spurred an entire research domain straddling several different disciplines. Although model development within the field of travel behavior as a whole continues today with more momentum than ever, the focus on active mode choice has largely been overlooked and left to a small fragment of transportation and public health researchers. Research regarding active mode choice has been primarily conducted outside the field of travel behavior and has utilized research methods designed for other purposes. This leads to results which address behavioral causality in a superficial way while also neglecting the role of residential self-selection. This paper provides an overview of existing travel behavior analysis regarding active mode choice, presents potential threats to validity in this type of research, and critiques existing intervention methodologies. Additionally, a conceptual model of activ...
---
paper_title: Motivation and Personality
paper_content:
Perspectives on Sexuality Sex Research - an Overview Part 1. Biological Perspectives: Sexual Anatomy 1. Sexual Physiology 2. Human Reproduction 3. Birth Control 4. Abortion Part 2. Developmental Perspectives: Childhood Sexuality 5. Adolescent Sexuality 6. Adult Sexuality 7. Gender Roles Part 3. Psychological Perspectives: Loving and Being Loved 8. Intimacy and Communication Skills 9. Enhancing your Sexual Relationships 10. Sexual Orientation 11. Sexual Behaviour 12. Sexual Variations 13. Coercive Sex - the Varieties of Sexual Assault Part 4. Sexual Health Perspectives: Sexually Transmitted Diseases and Sexual Infections 14. HIV Infection and AIDS 15. Sexual Dysfunctions and Sex Therapy 16. Sexual Disorders and Sexual Health Part 5 Cultural Perspectives: Sex and the Law 17. Religious and Ethical Perspectives and Sexuality
---
paper_title: MINDSPACE: influencing behaviour for public policy
paper_content:
New insights from science and behaviour change could lead to significantly improved outcomes, and at a lower cost, than the way many conventional policy tools are used. MINDSPACE explores how behaviour change theory can help meet current policy challenges, such as how to reduce crime, tackle obesity and ensure environmental sustainability. Today's policy makers are in the business of influencing behaviour - they need to understand the effects their policies may be having. The aim of MINDSPACE is to help them do this, and in doing so get better outcomes for the public and society.
---
paper_title: Environmental correlates of walking and cycling: Findings from the transportation, urban design, and planning literatures
paper_content:
Research in transportation, urban design, and planning has examined associations between physical environment variables and individuals’ walking and cycling for transport. Constructs, methods, and findings from these fields can be applied by physical activity and health researchers to improve understanding of environmental influences on physical activity. In this review, neighborhood environment characteristics proposed to be relevant to walking/cycling for transport are defined, including population density, connectivity, and land use mix. Neighborhood comparison and correlational studies with nonmotorized transport outcomes are considered, with evidence suggesting that residents from communities with higher density, greater connectivity, and more land use mix report higher rates of walking/cycling for utilitarian purposes than low-density, poorly connected, and single land use neighborhoods. Environmental variables appear to add to variance accounted for beyond sociodemographic predictors of walking/cycling for transport. Implications of the transportation literature for physical activity and related research are outlined. Future research directions are detailed for physical activity research to further examine the impact of neighborhood and other physical environment factors on physical activity and the potential interactive effects of psychosocial and environmental variables. The transportation, urban design, and planning literatures provide a valuable starting point for multidisciplinary research on environmental contributions to physical activity levels in the population.
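One of the constructs named above, land use mix, is often operationalised with an entropy index; the function below is a small, generic sketch of that index (an assumed operationalisation for illustration, not something specified in this paper).

```python
import math

def land_use_mix_entropy(area_by_use):
    """Entropy-based land-use mix index in [0, 1].

    area_by_use -- mapping of land-use category -> area (any consistent unit).
    1.0 means land is split evenly across categories; 0.0 means a single use.
    """
    total = sum(area_by_use.values())
    shares = [a / total for a in area_by_use.values() if a > 0]
    if len(shares) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(area_by_use))

# Hypothetical neighbourhood: mostly residential with some retail and offices.
print(round(land_use_mix_entropy({"residential": 70, "retail": 20, "office": 10}), 2))
```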
---
paper_title: The Transtheoretical Model of Health Behavior Change
paper_content:
The transtheoretical model posits that health behavior change involves progress through six stages of change: precontemplation, contemplation, preparation, action, maintenance, and termination. Ten processes of change have been identified for producing progress along with decisional balance, self-efficacy, and temptations. Basic research has generated a rule of thumb for at-risk populations: 40% in precontemplation, 40% in contemplation, and 20% in preparation. Across 12 health behaviors, consistent patterns have been found between the pros and cons of changing and the stages of change. Applied research has demonstrated dramatic improvements in recruitment, retention, and progress using stage-matched interventions and proactive recruitment procedures. The most promising outcomes to date have been found with computer-based individualized and interactive interventions. The most promising enhancement to the computer-based programs are personalized counselors. One of the most striking results to date for stag...
---
paper_title: Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment.
paper_content:
Decision field theory provides for a mathematical foundation leading to a dynamic, stochastic theory of decision behavior in an uncertain environment. This theory is used to explain (a) violations of stochastic dominance, (b) violations of strong stochastic transitivity, (c) violations of independence between alternatives, (d) serial position effects on preference, (e) speed-accuracy trade-off effects in decision making, (f) the inverse relation between choice probability and decision time, (g) changes in the direction of preference under time pressure, (h) slower decision times for avoidance as compared with approach conflicts, and (i) preference reversals between choice and selling price measures of preference. The proposed theory is compared with 4 other theories of decision making under uncertainty.
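Because decision field theory is a sequential-sampling model, a tiny simulation conveys its core mechanics. The sketch below is a simplified two-alternative illustration, not the full theory as formalised in the paper: a preference state accumulates noisy valence differences until a threshold is crossed, yielding both a choice and a decision time.

```python
import numpy as np

def simulate_dft_choice(mean_valence_diff, noise_sd=1.0, threshold=10.0,
                        decay=0.0, max_steps=10_000, rng=None):
    """Simplified two-alternative, decision-field-theory-style accumulator.

    Returns (choice, decision_time): choice is 1 if option A is selected,
    0 if option B; decision_time is the number of deliberation steps taken.
    """
    rng = rng or np.random.default_rng()
    p = 0.0  # preference state for A relative to B
    for t in range(1, max_steps + 1):
        # Preference decays slightly and accumulates a noisy valence difference.
        p = (1 - decay) * p + mean_valence_diff + rng.normal(0, noise_sd)
        if abs(p) >= threshold:
            return int(p > 0), t
    return int(p > 0), max_steps

rng = np.random.default_rng(4)
results = [simulate_dft_choice(0.2, rng=rng) for _ in range(2000)]
choices, times = zip(*results)
print("P(choose A):", round(float(np.mean(choices)), 3),
      "mean decision time:", round(float(np.mean(times)), 1))
```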
---
paper_title: Environmental determinants of active travel in youth: A review and framework for future research
paper_content:
Background: Many youth fail to meet the recommended guidelines for physical activity. Walking and cycling, forms of active travel, have the potential to contribute significantly towards overall physical activity levels. Recent research examining the associations between physical activity and the environment has shown that environmental factors play a role in determining behaviour in children and adolescents. However, links between the environment and active travel have received less attention.
---
paper_title: The theory of planned behavior
paper_content:
Research dealing with various aspects of the theory of planned behavior (Ajzen, 1985, 1987) is reviewed, and some unresolved issues are discussed. In broad terms, the theory is found to be well supported by empirical evidence. Intentions to perform behaviors of different kinds can be predicted with high accuracy from attitudes toward the behavior, subjective norms, and perceived behavioral control; and these intentions, together with perceptions of behavioral control, account for considerable variance in actual behavior. Attitudes, subjective norms, and perceived behavioral control are shown to be related to appropriate sets of salient behavioral, normative, and control beliefs about the behavior, but the exact nature of these relations is still uncertain. Expectancy-value formulations are found to be only partly successful in dealing with these relations. Optimal rescaling of expectancy and value measures is offered as a means of dealing with measurement limitations. Finally, inclusion of past behavior in the prediction equation is shown to provide a means of testing the theory's sufficiency, another issue that remains unresolved. The limited available evidence concerning this question shows that the theory is predicting behavior quite well in comparison to the ceiling imposed by behavioral reliability. © 1991 Academic Press, Inc.
---
paper_title: Choice of Travel Mode in the Theory of Planned Behavior: The Roles of Past Behavior, Habit, and Reasoned Action
paper_content:
Relying on the theory of planned behavior (Ajzen, 1991), a longitudinal study investigated the effects of an intervention (the introduction of a prepaid bus ticket) on increased bus use among college students. In this context, the logic of the proposition that past behavior is the best predictor of later behavior was also examined. The intervention was found to influence attitudes toward bus use, subjective norms, and perceptions of behavioral control and, consistent with the theory, to affect intentions and behavior in the desired direction. Furthermore, the theory afforded accurate prediction of intention and behavior both before and after the intervention. In contrast, a measure of past behavior improved prediction of travel mode prior to the intervention, but lost its predictive utility for behavior following the intervention. In a test of the proposition that the effect of past on later behavior is due to habit formation, an independent measure of habit failed to mediate the effects of past on later behavi...
---
paper_title: To Walk or Not to Walk?: The Hierarchy of Walking Needs.
paper_content:
The multitude of quality of life problems associated with declining walking rates has impelled researchers from various disciplines to identify factors related to this behavior change. Currently, this body of research is in need of a transdisciplinary, multilevel theoretical model that can help explain how individual, group, regional, and physical-environmental factors all affect physical activity behaviors. To address this gap, this article offers a social-ecological model of walking that presents a dynamic, causal model of the decision-making process. Within the model, a hierarchy of walking needs operates and organizes five levels of needs hierarchically and presents them as antecedents within the walking decision-making process. This model can (a) serve as a framework by which to understand the relative significance of the cornucopia of variables identified by existing research, (b) offer hypotheses for how these factors affect people's decision to walk, and (c) help to guide future research and practice.
---
paper_title: Similarities in Attitudes and Norms and the Effect on Bicycle Commuting: Evidence from the Bicycle Cities Davis and Delft
paper_content:
Owing to its beneficial effects, many governments encourage bicycle use for commuting. In search of effective strategies, they often study best practices from elsewhere. However, in order to assess the likely success of transferring measures from one city or country to another, an accurate comparison of the bicycling context is needed. This article explores the similarities and differences in attitudes and beliefs about the decision to commute by bicycle to work in two bicycling-oriented cities: Delft, the Netherlands, and Davis, California, in the U.S. Because bicycling conditions are good in both cities, it is possible to explore the role that attitudes play in the decision to cycle to work. Analyses indicate that beliefs about safety and the importance attached to environmental benefits differ between the cities. Social norms about cycling are important in both cities, but residents in Davis are more often confronted with negative reactions to cycling. Similarities are found in beliefs towards the health benefits of cycling. Strategies successful in one city in encouraging cycling by targeting or leveraging health therefore offer promise for the other city. This exploration provides an important starting point for large-sample comparative studies of attitudes towards bicycle commuting.
---
paper_title: Introduction: Habitual travel choice
paper_content:
In this introduction to the special issue on habitual travel choice, we provide a brief account of the role of habit in travel behaviour, discuss more generally what habitual choice is, and briefly review the issues addressed in the solicited papers. These issues include how habitual travel behaviour should be measured, how to model the learning process that makes travel choice habitual, and how to break and replace car-use habits.
---
paper_title: ACTIVE TRAVEL BEHAVIOR
paper_content:
Physical inactivity has become a dominant feature of most Americans' lives over the past quarter century. This has spurred an entire research domain straddling several different disciplines. Although model development within the field of travel behavior as a whole continues today with more momentum than ever, the focus on active mode choice has largely been overlooked and left to a small fragment of transportation and public health researchers. Research regarding active mode choice has been primarily conducted outside the field of travel behavior and has utilized research methods designed for other purposes. This leads to results which address behavioral causality in a superficial way while also neglecting the role of residential self-selection. This paper provides an overview of existing travel behavior analysis regarding active mode choice, presents potential threats to validity in this type of research, and critiques existing intervention methodologies. Additionally, a conceptual model of activ...
---
paper_title: Attitudes and the environment as determinants of active travel in adults: What do and don't we know?
paper_content:
Background: Walking and cycling for transport, or ‘active travel’, has the potential to contribute to overall physical activity levels. However, a wide range of factors are hypothesized to be associated with adults' active travel behavior. This paper describes current knowledge of the psychological and environmental determinants of active travel in adults, and considers ways in which the 2 domains can be better integrated. Methods: Quantitative studies were reviewed which examined psychological and environmental influences on active travel in an adult population. Studies were classified according to whether they examined psychological, environmental or both types of factor. Results: Fourteen studies were identified which examined psychological correlates of active travel behavior in adults, and 36 which examined environmental correlates. Seven studies were identified which considered both domains, of which only 2 explored the interactions between personal, social and environmental factors. The majority of...
---
paper_title: The Model of Children’s Active Travel (M‐CAT): A conceptual framework for examining factors influencing children’s active travel
paper_content:
Background: The current decline in children's participation in physical activity has attracted the attention of those concerned with children's health and wellbeing. A sustainable approach to ensuring children engage in adequate amounts of physical activity is to support their involvement in incidental activity such as active travel (AT), which includes walking or riding a bicycle to or from local destinations, such as school or a park. Understanding how we can embed physical activity into children's everyday occupational roles is a way in which occupational therapists can contribute to this important health promotion agenda. Aims: To present a simple, coherent and comprehensive framework as a means of examining factors influencing children's AT. Methods: Based on current literature, this conceptual framework incorporates the observable environment, parents' perceptions and decisions regarding their children's AT, as well as children's own perceptions and decisions regarding AT within their family contexts across time. Conclusion: The Model of Children's Active Travel (M‐CAT) highlights the complex and dynamic nature of factors impacting the decision‐making process of parents and children in relation to children's AT. The M‐CAT offers a way forward for researchers to examine variables influencing active travel in a systematic manner. Future testing of the M‐CAT will consolidate understanding of the factors underlying the decision‐making process which occurs within families in the context of their communities.
---
paper_title: ATTITUDES TO CYCLING: A QUALITATIVE STUDY AND CONCEPTUAL FRAMEWORK
paper_content:
It is known that attitudinal factors influence the willingness of individuals to use bicycles. A qualitative study was carried out to assess attitudes of cyclists and non-cyclists to cycling, with emphasis on exploring strategies that would encourage people to cycle. Thirteen focus groups were held, including two extended creativity sessions, in five towns. Projective techniques were used to de-layer responses and explore rationalisations and misperceptions. Stated preference exercises were also undertaken to verify and rank responses to cycle promotion strategies, including car restraint policies. A parallel study drew on experience from public health promotion and theories of motivation and behaviour change to devise a conceptual framework for interpreting and structuring the results from qualitative studies. Conclusions are drawn on the ways in which cycling should be promoted in the UK. (A)
---
paper_title: Developing a framework for assessment of the environmental determinants of walking and cycling.
paper_content:
The focus for interventions and research on physical activity has moved away from vigorous activity to moderate-intensity activities, such as walking. In addition, a social ecological approach to physical activity research and practice is recommended. This approach considers the influence of the environment and policies on physical activity. Although there is limited empirical published evidence related to the features of the physical environment that influence physical activity, urban planning and transport agencies have developed policies and strategies that have the potential to influence whether people walk or cycle in their neighbourhood. This paper presents the development of a framework of the potential environmental influences on walking and cycling based on published evidence and policy literature, interviews with experts and a Delphi study. The framework includes four features: functional, safety, aesthetic and destination; as well as the hypothesised factors that contribute to each of these features of the environment. In addition, the Delphi experts determined the perceived relative importance of these factors. Based on these factors, a data collection tool will be developed and the frameworks will be tested through the collection of environmental information on neighbourhoods, where data on the walking and cycling patterns have been collected previously. Identifying the environmental factors that influence walking and cycling will allow the inclusion of a public health perspective as well as those of urban planning and transport in the design of built environments.
---
paper_title: An Applied Ecological Framework for Evaluating Infrastructure to Promote Walking and Cycling: The iConnect Study
paper_content:
Improving infrastructure for walking and cycling is increasingly recommended as a means to promote physical activity, prevent obesity, and reduce traffic congestion and carbon emissions. However, limited evidence from intervention studies exists to support this approach. Drawing on classic epidemiological methods, psychological and ecological models of behavior change, and the principles of realistic evaluation, we have developed an applied ecological framework by which current theories about the behavioral effects of environmental change may be tested in heterogeneous and complex intervention settings. Our framework guides study design and analysis by specifying the most important data to be collected and relations to be tested to confirm or refute specific hypotheses and thereby refine the underlying theories.
---
paper_title: Change in active travel and changes in recreational and total physical activity in adults: longitudinal findings from the iConnect study
paper_content:
To better understand the health benefits of promoting active travel, it is important to understand the relationship between a change in active travel and changes in recreational and total physical activity. These analyses, carried out in April 2012, use longitudinal data from 1628 adult respondents (mean age 54 years; 47% male) in the UK-based iConnect study. Travel and recreational physical activity were measured using detailed seven-day recall instruments. Adjusted linear regression models were fitted with change in active travel defined as ‘decreased’ (<−15 min/week), ‘maintained’ (±15 min/week) or ‘increased’ (>15 min/week) as the primary exposure variable and changes in (a) recreational and (b) total physical activity (min/week) as the primary outcome variables. Active travel increased in 32% (n=529), was maintained in 33% (n=534) and decreased in 35% (n=565) of respondents. Recreational physical activity decreased in all groups but this decrease was not greater in those whose active travel increased. Conversely, changes in active travel were associated with commensurate changes in total physical activity. Compared with those whose active travel remained unchanged, total physical activity decreased by 176.9 min/week in those whose active travel had decreased (adjusted regression coefficient −154.9, 95% CI −195.3 to −114.5) and was 112.2 min/week greater among those whose active travel had increased (adjusted regression coefficient 135.1, 95% CI 94.3 to 175.9). An increase in active travel was associated with a commensurate increase in total physical activity and not a decrease in recreational physical activity.
---
paper_title: The Transtheoretical Model of Health Behavior Change
paper_content:
The transtheoretical model posits that health behavior change involves progress through six stages of change: precontemplation, contemplation, preparation, action, maintenance, and termination. Ten processes of change have been identified for producing progress along with decisional balance, self-efficacy, and temptations. Basic research has generated a rule of thumb for at-risk populations: 40% in precontemplation, 40% in contemplation, and 20% in preparation. Across 12 health behaviors, consistent patterns have been found between the pros and cons of changing and the stages of change. Applied research has demonstrated dramatic improvements in recruitment, retention, and progress using stage-matched interventions and proactive recruitment procedures. The most promising outcomes to date have been found with computer-based individualized and interactive interventions. The most promising enhancement to the computer-based programs are personalized counselors. One of the most striking results to date for stag...
---
paper_title: Behaviour theory and soft transport policy measures
paper_content:
The aim is to propose a theoretical grounding of soft transport policy measures that aim at promoting voluntary reduction of car use. A general conceptual framework is first presented to clarify how hard and soft transport policy measures impact on car-use reduction. Two different behavioural theories that have been used to account for car use and car-use reduction are then integrated in a self-regulation theory that identifies four stages of the process of voluntarily changing car use: setting a car-use reduction goal, forming a plan for achieving the goal, initiating and executing the plan, and evaluating the outcome of the plan execution. A number of techniques are described that facilitate the different stages of the process of voluntary car-use reduction and which should be used in personalized travel planning programs.
---
paper_title: The Transtheoretical Model of Health Behavior Change
paper_content:
The transtheoretical model posits that health behavior change involves progress through six stages of change: precontemplation, contemplation, preparation, action, maintenance, and termination. Ten processes of change have been identified for producing progress along with decisional balance, self-efficacy, and temptations. Basic research has generated a rule of thumb for at-risk populations: 40% in precontemplation, 40% in contemplation, and 20% in preparation. Across 12 health behaviors, consistent patterns have been found between the pros and cons of changing and the stages of change. Applied research has demonstrated dramatic improvements in recruitment, retention, and progress using stage-matched interventions and proactive recruitment procedures. The most promising outcomes to date have been found with computer-based individualized and interactive interventions. The most promising enhancement to the computer-based programs is personalized counselors. One of the most striking results to date for stag...
---
paper_title: Behaviour theory and soft transport policy measures
paper_content:
The aim is to propose a theoretical grounding of soft transport policy measures that aim at promoting voluntary reduction of car use. A general conceptual framework is first presented to clarify how hard and soft transport policy measures impact on car-use reduction. Two different behavioural theories that have been used to account for car use and car-use reduction are then integrated in a self-regulation theory that identifies four stages of the process of voluntarily changing car use: setting a car-use reduction goal, forming a plan for achieving the goal, initiating and executing the plan, and evaluating the outcome of the plan execution. A number of techniques are described that facilitate the different stages of the process of voluntary car-use reduction and which should be used in personalized travel planning programs.
---
| Title: Towards a Comprehensive Conceptual Framework of Active Travel Behavior: a Review and Synthesis of Published Frameworks
Section 1: Introduction
Description 1: This section introduces the paper, discusses the importance of active travel, and outlines the goals and structure of the research.
Section 2: Literature Review
Description 2: This section details the systematic approach taken to identify and review existing conceptual frameworks for active travel in the scientific literature.
Section 3: Synthesis of Reviewed Frameworks and Development of the PASTA Conceptual Framework for Active Travel
Description 3: This section explains the process of synthesizing the reviewed frameworks and developing the comprehensive PASTA framework for active travel behavior.
Section 4: Overview of the Identified Frameworks
Description 4: This section provides a detailed overview of the frameworks identified from the literature search and their scope.
Section 5: Review of Selected Frameworks
Description 5: This section reviews specific selected frameworks that contribute distinct features towards a comprehensive framework of active travel behavior.
Section 6: Synthesis of Results and Development of the PASTA Framework of Active Travel Behavior
Description 6: This section discusses the key features of the identified frameworks and theories, and describes how the authors developed the PASTA conceptual framework of active travel behavior.
Section 7: Practical Applications of the PASTA Framework in Active Travel Research
Description 7: This section illustrates the practical applications of the PASTA framework in active travel research, including its use in guiding survey contents, study design, and identifying causal pathways.
Section 8: Conclusions
Description 8: This section concludes the paper by discussing the value of the PASTA framework, its contributions to active travel research, and the importance of systematic development and use of conceptual frameworks in this field. |
A survey on low-cost RFID authentication protocols | 5 | ---
paper_title: Anonymous mutual authentication protocol for RFID tag without back-end database
paper_content:
RFID, as an emerging technology, has huge potential in today's social and business developments. Security and privacy are among the most important issues in the design of practical RFID protocols. In this paper, we focus on RFID authentication protocols. RFID mutual authentication is used to ensure that only an authorized RFID reader can access the data of an RFID tag, while the RFID tag is assured that it releases its data only to an authenticated RFID reader. This paper proposes an anonymous mutual authentication protocol for RFID tags and readers. The RFID tag is anonymous to the RFID reader, so that privacy is preserved. In addition, mutual authentication does not need to rely on a back-end database.
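To make the reader-tag mutual authentication idea above concrete, here is a minimal challenge-response sketch. It is only an illustration of the general pattern, not the protocol proposed in the paper: it assumes a pre-shared secret key, fresh nonces on both sides, and a generic hash function, and it ignores the anonymity mechanism the abstract describes.

```python
# Minimal, illustrative sketch of hash-based mutual authentication between a
# reader and a tag that share a secret key (NOT the protocol of the paper).
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for the tag's lightweight hash function."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

class Tag:
    def __init__(self, key: bytes):
        self.key = key

    def respond(self, reader_nonce: bytes):
        # Tag proves knowledge of the key and issues its own challenge.
        self.tag_nonce = secrets.token_bytes(8)
        return self.tag_nonce, h(self.key, reader_nonce, self.tag_nonce)

    def verify_reader(self, reader_nonce: bytes, reader_proof: bytes) -> bool:
        # Tag checks the reader's proof, computed over the swapped nonce order.
        return reader_proof == h(self.key, self.tag_nonce, reader_nonce)

class Reader:
    def __init__(self, key: bytes):
        self.key = key

    def start(self) -> bytes:
        self.nonce = secrets.token_bytes(8)
        return self.nonce

    def verify_tag_and_reply(self, tag_nonce: bytes, tag_proof: bytes):
        ok = tag_proof == h(self.key, self.nonce, tag_nonce)
        # Reader's proof uses a different argument order so it cannot be replayed.
        return ok, h(self.key, tag_nonce, self.nonce)

if __name__ == "__main__":
    shared = secrets.token_bytes(16)
    tag, reader = Tag(shared), Reader(shared)
    n_r = reader.start()
    n_t, tag_proof = tag.respond(n_r)
    tag_ok, reader_proof = reader.verify_tag_and_reply(n_t, tag_proof)
    print("reader accepts tag:", tag_ok)
    print("tag accepts reader:", tag.verify_reader(n_r, reader_proof))
```

Each side proves knowledge of the shared key over both nonces; swapping the argument order in the two proofs is one simple way to keep a proof from being replayed in the opposite direction.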
---
paper_title: Anonymous mutual authentication protocol for RFID tag without back-end database
paper_content:
RFID, as an emerging technology, has huge potential in today's social and business developments. Security and privacy are among the most important issues in the design of practical RFID protocols. In this paper, we focus on RFID authentication protocols. RFID mutual authentication is used to ensure that only an authorized RFID reader can access the data of an RFID tag, while the RFID tag is assured that it releases its data only to an authenticated RFID reader. This paper proposes an anonymous mutual authentication protocol for RFID tags and readers. The RFID tag is anonymous to the RFID reader, so that privacy is preserved. In addition, mutual authentication does not need to rely on a back-end database.
---
paper_title: Anonymous mutual authentication protocol for RFID tag without back-end database
paper_content:
RFID, as an emerging technology, has huge potential in today's social and business developments. Security and privacy are among the most important issues in the design of practical RFID protocols. In this paper, we focus on RFID authentication protocols. RFID mutual authentication is used to ensure that only an authorized RFID reader can access the data of an RFID tag, while the RFID tag is assured that it releases its data only to an authenticated RFID reader. This paper proposes an anonymous mutual authentication protocol for RFID tags and readers. The RFID tag is anonymous to the RFID reader, so that privacy is preserved. In addition, mutual authentication does not need to rely on a back-end database.
---
paper_title: Mutual authentication protocol for RFID tags based on synchronized secret information with monitor
paper_content:
RFID, as an anti-counterfeiting technology, is opening up a huge range of potential industrial, medical, business and social applications. RFID-based identification is an example of an emerging technology which requires authentication. In this paper, we will propose a new mutual authentication protocol for RFID tags. The RFID reader and tag will carry out the authentication based on their synchronized secret information. The synchronized secret information will be monitored by a component of the database server. Our protocol also supports the low-cost non-volatile memory of RFID tags. This is desirable since non-volatile memory is an expensive unit in RFID tags.
---
paper_title: A study on low-cost RFID system management with mutual authentication scheme in ubiquitous
paper_content:
The RFID system is a core technology used in building a ubiquitous environment, and is considered an alternative to bar-code identification. The RFID system has become very popular, with various strengths such as fast recognition speed and non-touch detection. However, some problems remain, as the low-cost tag responds to any query, leading to information exposure and privacy encroachment. Various approaches have been used to increase the security of the system, but the low-cost tag, which has about 5K-10K gates, can only allocate 250-3K gates to security. Therefore, the current study provides a reciprocal authentication solution that can be used with low-cost RFID systems, by splitting 64 bit keys and minimizing calculations. Existing systems divided a 96 bit key into 4 parts; the proposed system reduces the key to 32 bits and reduces the number of communications from 7 down to 5. To increase security, one additional random number is added to the two existing numbers. The previous system only provided XOR calculations, whereas in the proposed system an additional hash function is added. The added procedure does not increase effectiveness in terms of the XOR calculation, but provides more security to the RFID system, for better use over remote distances.
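The key-splitting and XOR-masking style of exchange described above can be illustrated very roughly as follows. The field sizes, message layout and helper names are illustrative assumptions only, not the scheme as specified in the paper.

```python
# Rough illustration of masking a tag identifier with XOR of a key and random
# numbers, then binding the exchange with a hash (illustrative only; sizes and
# message layout are assumptions, not the paper's exact scheme).
import hashlib
import secrets

KEY_BITS = 32

def h32(*values: int) -> int:
    """Truncated hash over 32-bit values, standing in for the added hash step."""
    data = b"".join(v.to_bytes(4, "big") for v in values)
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def tag_reply(tag_id: int, key: int, reader_rand: int):
    """Tag masks its ID with XOR of the key and two random numbers."""
    tag_rand = secrets.randbits(KEY_BITS)
    masked_id = tag_id ^ key ^ reader_rand ^ tag_rand
    check = h32(tag_id, key, reader_rand, tag_rand)   # hash binds all values
    return tag_rand, masked_id, check

def server_verify(db: dict, key: int, reader_rand: int, tag_rand: int,
                  masked_id: int, check: int):
    """Back end unmasks the ID and verifies the hash against its database."""
    tag_id = masked_id ^ key ^ reader_rand ^ tag_rand
    if tag_id in db and check == h32(tag_id, key, reader_rand, tag_rand):
        return db[tag_id]
    return None

if __name__ == "__main__":
    key = secrets.randbits(KEY_BITS)
    tag_id = secrets.randbits(KEY_BITS)
    db = {tag_id: "pallet 42"}
    r1 = secrets.randbits(KEY_BITS)           # reader's random challenge
    r2, masked, chk = tag_reply(tag_id, key, r1)
    print(server_verify(db, key, r1, r2, masked, chk))
```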
---
paper_title: Enabling ubiquitous sensing with RFID
paper_content:
Radio frequency identification has attracted considerable press attention in recent years, and for good reasons: RFID not only replaces traditional barcode technology, it also provides additional features and removes boundaries that limited the use of previous alternatives. Printed bar codes are typically read by a laser-based optical scanner that requires a direct line-of-sight to detect and extract information. With RFID, however, a scanner can read the encoded information even when the tag is concealed for either aesthetic or security reasons. In the future, RFID tags will likely be used as environmental sensors on an unprecedented scale.
---
paper_title: Practical Minimalist Cryptography for RFID Privacy
paper_content:
The fear of unauthorized, hidden readouts has dominated the radio frequency identification (RFID) privacy debate. Virtually all proposed privacy mechanisms so far require consumers to actively and explicitly protect read access to their tagged items-either by jamming rogue readers or by encrypting or pseudonymizing their tags. While this approach might work well for activists and highly concerned individuals, it is unlikely (and rather undesirable) that the average consumer should be outfitted with RFID jamming devices before stepping outside, or that anyone would bother pseudonymizing every can of soda they buy with a personal PIN code. Juels' "minimalist cryptography" offers a simple, yet effective, identification and tracking protection based on simple ID rotation, but it requires that the corresponding mappings (i.e., from pseudonyms to real IDs) are electronically exchanged whenever a product changes hands (e.g., for buying a pack of chewing gum at a kiosk)-a rather impractical requirement. Our work extends Juels' concept in order to alleviate the need for passing ID mapping tables. Using carefully assembled sets of IDs based on the cryptographic principle of secret shares, we can create RFID tags that yield virtually no information to casual "hit-and-run" attackers, but only reveal their true ID after continuous and undisturbed reading from up-close-something that can hardly go unnoticed by an item's owner. This paper introduces the underlying mechanism of our extension to Juels' proposal, called "Shamir Tag," analyzes its tracking resistance and identification performance, and discusses deployment aspects.
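The idea of an identifier that is only revealed after continuous, undisturbed reading can be approximated with plain XOR secret sharing; the actual Shamir Tag construction is more elaborate, so the snippet below only conveys the flavor.

```python
# Toy illustration of "reveal the ID only after many undisturbed reads" using
# XOR secret sharing (the paper's Shamir Tag construction is more elaborate).
import secrets
from functools import reduce

def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def make_shares(tag_id: bytes, n: int):
    """Split the ID into n XOR shares; all n are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(tag_id)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, tag_id))
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    tag_id = b"EPC-0001"
    shares = make_shares(tag_id, n=5)
    # A hit-and-run reader that only sees a few shares learns nothing useful:
    print(reconstruct(shares[:3]))      # looks like random bytes
    # Only continuous, undisturbed reading of all shares reveals the ID:
    print(reconstruct(shares))          # b'EPC-0001'
```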
---
paper_title: Password authentication with insecure communication
paper_content:
A method of user password authentication is described which is secure even if an intruder can read the system's data, and can tamper with or eavesdrop on the communication between the user and the system. The method assumes a secure one-way encryption function and can be implemented with a microcomputer in the user's terminal.
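A minimal sketch of this hash-chain (one-time password) idea, in the spirit of Lamport's scheme rather than as production code:

```python
# Minimal sketch of Lamport-style one-time passwords based on an iterated
# one-way hash function (illustrative, not production code).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def h_iter(x: bytes, times: int) -> bytes:
    for _ in range(times):
        x = h(x)
    return x

class Server:
    def __init__(self, initial_verifier: bytes, n: int):
        # The server only stores h^n(secret); the secret never leaves the user.
        self.verifier = initial_verifier
        self.remaining = n

    def login(self, one_time_password: bytes) -> bool:
        if self.remaining == 0 or h(one_time_password) != self.verifier:
            return False
        # Accept, then move the verifier one step down the chain.
        self.verifier = one_time_password
        self.remaining -= 1
        return True

if __name__ == "__main__":
    secret, n = b"user secret", 1000
    server = Server(h_iter(secret, n), n)
    # The i-th login (i = 1, 2, ...) sends h^(n-i)(secret).
    print(server.login(h_iter(secret, n - 1)))   # True
    print(server.login(h_iter(secret, n - 1)))   # False: replay is rejected
    print(server.login(h_iter(secret, n - 2)))   # True
```

An eavesdropper who captures one password only learns a value whose preimage under the hash is still secret, so it cannot produce the next valid login.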
---
paper_title: Hash-based enhancement of location privacy for radio-frequency identification devices using varying identifiers
paper_content:
Radio-frequency identification devices (RFID) may emerge as one of the most pervasive computing technologies in history. On the one hand, with tags affixed to consumer items as well as letters, packets or vehicles, costs in the supply chain can be greatly reduced and new applications introduced. On the other hand, unique means of identification in each tag, such as serial numbers, enable effortless traceability of persons and goods. But data protection and privacy are worthwhile civil liberties. We introduce a simple scheme relying on one-way hash functions that greatly enhances location privacy by changing traceable identifiers on every read, getting by with only a single, unreliable message exchange. Thereby the scheme is safe from many threats like eavesdropping, message interception, spoofing, and replay attacks.
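A compact sketch of the varying-identifier idea: the tag answers every read with a fresh hashed pseudonym and then refreshes its secret, while the back end keeps enough state to follow the same chain. The message flow and helper names are simplified assumptions, not the exact scheme of the paper (in particular, real schemes must also cope with reads the back end never sees).

```python
# Illustrative sketch of a hash-based varying identifier: the tag emits a
# different pseudonym on every read and refreshes its secret so it cannot be
# traced (simplified; not the exact message flow of the paper).
import hashlib

def h(label: bytes, x: bytes) -> bytes:
    return hashlib.sha256(label + x).digest()

class Tag:
    def __init__(self, secret: bytes):
        self.secret = secret

    def answer_query(self) -> bytes:
        pseudonym = h(b"id", self.secret)      # what an eavesdropper sees
        self.secret = h(b"next", self.secret)  # refresh for unlinkability
        return pseudonym

class Backend:
    def __init__(self, tags: dict):
        # tags maps a real identity to the tag's current secret.
        self.tags = dict(tags)

    def resolve(self, pseudonym: bytes):
        for identity, secret in self.tags.items():
            if h(b"id", secret) == pseudonym:
                self.tags[identity] = h(b"next", secret)  # stay synchronized
                return identity
        return None

if __name__ == "__main__":
    backend = Backend({"passport-7": b"\x01" * 16})
    tag = Tag(b"\x01" * 16)
    p1, p2 = tag.answer_query(), tag.answer_query()
    print(p1 != p2)                   # True: successive reads look unrelated
    print(backend.resolve(p1))        # passport-7
    print(backend.resolve(p2))        # passport-7 (back end followed the chain)
```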
---
paper_title: Password authentication with insecure communication
paper_content:
A method of user password authentication is described which is secure even if an intruder can read the system's data, and can tamper with or eavesdrop on the communication between the user and the system. The method assumes a secure one-way encryption function and can be implemented with a microcomputer in the user's terminal.
---
paper_title: Hash-based enhancement of location privacy for radio-frequency identification devices using varying identifiers
paper_content:
Radio-frequency identification devices (RFID) may emerge as one of the most pervasive computing technologies in history. On the one hand, with tags affixed to consumer items as well as letters, packets or vehicles, costs in the supply chain can be greatly reduced and new applications introduced. On the other hand, unique means of identification in each tag, such as serial numbers, enable effortless traceability of persons and goods. But data protection and privacy are worthwhile civil liberties. We introduce a simple scheme relying on one-way hash functions that greatly enhances location privacy by changing traceable identifiers on every read, getting by with only a single, unreliable message exchange. Thereby the scheme is safe from many threats like eavesdropping, message interception, spoofing, and replay attacks.
---
paper_title: Practical Minimalist Cryptography for RFID Privacy
paper_content:
The fear of unauthorized, hidden readouts has dominated the radio frequency identification (RFID) privacy debate. Virtually all proposed privacy mechanisms so far require consumers to actively and explicitly protect read access to their tagged items-either by jamming rogue readers or by encrypting or pseudonymizing their tags. While this approach might work well for activists and highly concerned individuals, it is unlikely (and rather undesirable) that the average consumer should be outfitted with RFID jamming devices before stepping outside, or that anyone would bother pseudonymizing every can of soda they buy with a personal PIN code. Juels' "minimalist cryptography" offers a simple, yet effective, identification and tracking protection based on simple ID rotation, but it requires that the corresponding mappings (i.e., from pseudonyms to real IDs) are electronically exchanged whenever a product changes hands (e.g., for buying a pack of chewing gum at a kiosk)-a rather impractical requirement. Our work extends Juels' concept in order to alleviate the need for passing ID mapping tables. Using carefully assembled sets of IDs based on the cryptographic principle of secret shares, we can create RFID tags that yield virtually no information to casual "hit-and-run" attackers, but only reveal their true ID after continuous and undisturbed reading from up-close-something that can hardly go unnoticed by an item's owner. This paper introduces the underlying mechanism of our extension to Juels' proposal, called "Shamir Tag," analyzes its tracking resistance and identification performance, and discusses deployment aspects.
---
paper_title: Password authentication with insecure communication
paper_content:
A method of user password authentication is described which is secure even if an intruder can read the system's data, and can tamper with or eavesdrop on the communication between the user and the system. The method assumes a secure one-way encryption function and can be implemented with a microcomputer in the user's terminal.
---
paper_title: A study on low-cost RFID system management with mutual authentication scheme in ubiquitous
paper_content:
The RFID system is a core technology used in building a ubiquitous environment, and is considered an alternative to bar-code identification. The RFID system has become very popular, with various strengths such as fast recognition speed and non-touch detection. However, some problems remain, as the low-cost tag responds to any query, leading to information exposure and privacy encroachment. Various approaches have been used to increase the security of the system, but the low-cost tag, which has about 5K-10K gates, can only allocate 250-3K gates to security. Therefore, the current study provides a reciprocal authentication solution that can be used with low-cost RFID systems, by splitting 64 bit keys and minimizing calculations. Existing systems divided a 96 bit key into 4 parts; the proposed system reduces the key to 32 bits and reduces the number of communications from 7 down to 5. To increase security, one additional random number is added to the two existing numbers. The previous system only provided XOR calculations, whereas in the proposed system an additional hash function is added. The added procedure does not increase effectiveness in terms of the XOR calculation, but provides more security to the RFID system, for better use over remote distances.
---
paper_title: Enabling ubiquitous sensing with RFID
paper_content:
Radio frequency identification has attracted considerable press attention in recent years, and for good reasons: RFID not only replaces traditional barcode technology, it also provides additional features and removes boundaries that limited the use of previous alternatives. Printed bar codes are typically read by a laser-based optical scanner that requires a direct line-of-sight to detect and extract information. With RFID, however, a scanner can read the encoded information even when the tag is concealed for either aesthetic or security reasons. In the future, RFID tags will likely be used as environmental sensors on an unprecedented scale.
---
paper_title: A study on low-cost RFID system management with mutual authentication scheme in ubiquitous
paper_content:
The RFID system is a core technology used in building a ubiquitous environment, and is considered an alternative to bar-code identification. The RFID system has become very popular, with various strengths such as fast recognition speed and non-touch detection. However, some problems remain, as the low-cost tag responds to any query, leading to information exposure and privacy encroachment. Various approaches have been used to increase the security of the system, but the low-cost tag, which has about 5K-10K gates, can only allocate 250-3K gates to security. Therefore, the current study provides a reciprocal authentication solution that can be used with low-cost RFID systems, by splitting 64 bit keys and minimizing calculations. Existing systems divided a 96 bit key into 4 parts; the proposed system reduces the key to 32 bits and reduces the number of communications from 7 down to 5. To increase security, one additional random number is added to the two existing numbers. The previous system only provided XOR calculations, whereas in the proposed system an additional hash function is added. The added procedure does not increase effectiveness in terms of the XOR calculation, but provides more security to the RFID system, for better use over remote distances.
---
paper_title: Practical Minimalist Cryptography for RFID Privacy
paper_content:
The fear of unauthorized, hidden readouts has dominated the radio frequency identification (RFID) privacy debate. Virtually all proposed privacy mechanisms so far require consumers to actively and explicitly protect read access to their tagged items-either by jamming rogue readers or by encrypting or pseudonymizing their tags. While this approach might work well for activists and highly concerned individuals, it is unlikely (and rather undesirable) that the average consumer should be outfitted with RFID jamming devices before stepping outside, or that anyone would bother pseudonymizing every can of soda they buy with a personal PIN code. Juels' "minimalist cryptography" offers a simple, yet effective, identification and tracking protection based on simple ID rotation, but it requires that the corresponding mappings (i.e., from pseudonyms to real IDs) are electronically exchanged whenever a product changes hands (e.g., for buying a pack of chewing gum at a kiosk)-a rather impractical requirement. Our work extends Juels' concept in order to alleviate the need for passing ID mapping tables. Using carefully assembled sets of IDs based on the cryptographic principle of secret shares, we can create RFID tags that yield virtually no information to casual "hit-and-run" attackers, but only reveal their true ID after continuous and undisturbed reading from up-close-something that can hardly go unnoticed by an item's owner. This paper introduces the underlying mechanism of our extension to Juels' proposal, called "Shamir Tag," analyzes its tracking resistance and identification performance, and discusses deployment aspects.
---
paper_title: Password authentication with insecure communication
paper_content:
A method of user password authentication is described which is secure even if an intruder can read the system's data, and can tamper with or eavesdrop on the communication between the user and the system. The method assumes a secure one-way encryption function and can be implemented with a microcomputer in the user's terminal.
---
paper_title: Hash-based enhancement of location privacy for radio-frequency identification devices using varying identifiers
paper_content:
Radio-frequency identification devices (RFID) may emerge as one of the most pervasive computing technologies in history. On the one hand, with tags affixed to consumer items as well as letters, packets or vehicles, costs in the supply chain can be greatly reduced and new applications introduced. On the other hand, unique means of identification in each tag, such as serial numbers, enable effortless traceability of persons and goods. But data protection and privacy are worthwhile civil liberties. We introduce a simple scheme relying on one-way hash functions that greatly enhances location privacy by changing traceable identifiers on every read, getting by with only a single, unreliable message exchange. Thereby the scheme is safe from many threats like eavesdropping, message interception, spoofing, and replay attacks.
---
| Title: A Survey on Low-Cost RFID Authentication Protocols
Section 1: INTRODUCTION
Description 1: In this section, provide a comprehensive overview of RFID technology, focusing on its components, applications, and the specific interest in passive tags. Discuss the importance of low-cost and security concerns in RFID systems.
Section 2: RFID ATTACKS
Description 2: Analyze various security threats and attacks on RFID systems, categorizing them by their points of attack: the air interface, readers, and systems. Discuss examples and implications of each type of attack.
Section 3: PROTOCOLS IN LOW-COST RFID ENVIRONMENT
Description 3: Present an overview of several protocols designed to enhance the security and privacy of low-cost RFID systems. Discuss their constraints and the specific security mechanisms they use.
Section 4: SURVEY OF THE LOW-COST RFID AUTHENTICATION PROTOCOLS
Description 4: Conduct a comparative study of various low-cost RFID authentication protocols, evaluating them in terms of data protection, tracking prevention, and forward security. Provide detailed explanations of different methods such as One-time Pad based on XOR, External Re-Encryption Scheme, Hash Chain-based Scheme, Blocker Tag, Extended Hash-lock Scheme, Hash-based Varying Identifier, Improved Hash-based Varying Identifier, Mutual Authentication, and Ultra Lightweight methods.
Section 5: CONCLUSIONS
Description 5: Summarize the findings of the comparative analysis. Discuss the effectiveness and limitations of each protocol regarding data protection, tracking prevention, and forward security. Offer insights into future research directions to improve RFID authentication protocols. |
AN OVERVIEW OF INDUSTRIAL PROCESS VALIDATION OF TABLETS | 12 | ---
paper_title: Pharmaceutics : the science of dosage form design
paper_content:
Design of dosage forms. PART ONE SCIENTIFIC PRINCIPLES OF FORMULATION SCIENCE Solutions and their properties. Rheology and flow. Surface and interfacial phenomena. Solubility and dissolution rate. Disperse systems. Kinetics and product stability. PART TWO PARTICLE SCIENCE AND POWDER TECHNOLOGY Solid-state properties of powders. Particle size analysis. Particle size reduction. Particle size separation. Mixing. Powder flow. PART THREE BIOPHARMACEUTICS PRINCIPLES OF DRUG DELIVERY Introduction to Biopharmaceutics. Factors influencing bioavailability. Assessment of bioavailability. Dosage Regimens. Sustained and extended release. PART FOUR DOSAGE FORM DESIGN AND MANUFACTURE Preformulation. Solutions, Suspensions and Emulsions. Filtration. Powders and granules. Granulation. Drying. Tablets and compaction. Tablet coating. Capsules and encapsulation. Pulmonary drug delivery. Nasal delivery. Rectal and vaginal delivery. Parenteral products. Transdermal drug delivery. Delivery of proteins/biotech products. Packs and packaging. Materials of fabrication and corrosion. Heat transfer and the properties of steam. PART FIVE PHARMACEUTICAL MICROBIOLOGY AND STERILIZATION Fundamentals of microbiology. The action of physical and chemical agents on micro-organisms. Microbiological contamination and preservation. Principles of sterilization. Sterilization practice. Design and operation of clean rooms.
---
paper_title: Pharmaceutical process scale-up
paper_content:
Dimensional Analysis and Scale-Up in Theory and Industrial Application Marko Zlokarnik Engineering Approaches for Pharmaceutical Process Scale-up, Validation, Optimization, and Control in the PAT Era Fernando Muzzio Understanding Scale Up and Quality Risks on the interface between Primary and Secondary Development Frans L. Muller, Kathryn A. Gray, and Graham E. Robinson Scale-up and Process Validation Steven Ostrove Parenteral Drug Scale-Up Igor Gorsky Non-Parenteral Liquids and Semisolids Lawrence H. Block Scale-Up Considerations for Biotechnology-Derived Products Marco A. Cacciuttolo, John Chon, and Greg Zarbis-Papastoitsis Powder Handling James K. Prescott Batch Size Increase in Dry Blending and Mixing Albert W. Alexander and Fernando J. Muzzio Scale Up Of Continuous Blending Aditya U. Vanarase, Yijie Gao, Atul Dubey, Marianthi G. Ierapetritou, and Fernando J. Muzzio Scale-Up in the Field of Granulation and Drying Hans Leuenberger and Gabriele Betz Batch Size Increase in Fluid Bed Granulation Dilip M. Parikh Roller Compaction Scale-Up Ronald Miller Scale-Up of Extrusion and Spheronization Raman Iyer, Harpreet K. Sandhu and Navnit Shah Scale-Up of Compaction and the Tableting Process Matthew P. Mullarney and Jeffrey Moriarty Dimensional Analysis of the Tableting Process Michael Levin and Marko Zlokarnik Practical Considerations in the Scale-Up of Powder-Filled Hard Shell Capsule Formulations Larry Augsburger Scale-Up of the Film-Coating Stuart Porter Virtual scale-up of manufacturing solid dosage forms Hans Leuenberger, Michael N. Leuenberger, and Maxim Puchkov Appendix A: Relevant FDA Guidance for Industry Appendix B: Relevant EU Directives, Regulations, and Guidelines Appendix C: Relevant ICH Documents - International Conference On Harmonisation Of Technical Requirements For Registration Of Pharmaceuticals For Human Use Internet link addresses
---
paper_title: Pharmaceutical process validation: An overview
paper_content:
Drugs are critical elements in health care. They must be manufactured to the highest quality levels. End-product testing by itself does not guarantee the quality of the product. Quality assurance techniques must be used. In the pharmaceutical industry, process validation performs this task, ensuring that the process does what it purports to do. It is also a regulatory requirement. This paper presents an introduction and general overview of this process, with special reference to the requirements stipulated by the US Food and Drug Administration (FDA).
---
paper_title: Optimization and validation of manufacturing processes
paper_content:
Validation in the definition adopted by a Joint FIP Committee means that every essential operation in the development, manufacture and control of pharmaceutical products is reliable, reproducible and capable of providing the desired product quality if stipulated production instructions and control procedures are followed. A pre-requisite for, and an integral part of, process validation is that people, premises and equipment should be qualified. In other words, validation activities rely upon the check of technical and physical parameters when measuring devices are being calibrated and equipment is being qualified, as well as on chemical, physical and biological parameters when processes are being validated. As we know, validation starts with planning a new plant, a new machine or equipment as well as with the development of a new or changed product. This way of development from the laboratory trial and the clinical trial until the production phase has to guarantee the manufacture of products conforming to...
---
paper_title: Pharmaceutical process validation: An overview
paper_content:
Drugs are critical elements in health care. They must be manufactured to the highest quality levels. End-product testing by itself does not guarantee the quality of the product. Quality assurance techniques must be used. In the pharmaceutical industry, process validation performs this task, ensuring that the process does what it purports to do. It is also a regulatory requirement. This paper presents an introduction and general overview of this process, with special reference to the requirements stipulated by the US Food and Drug Administration (FDA).
---
| Title: AN OVERVIEW OF INDUSTRIAL PROCESS VALIDATION OF TABLETS
Section 1: INTRODUCTION
Description 1: Introduce the purpose and importance of tablet manufacturing and the role of process validation in ensuring product quality and consistency.
Section 2: OBJECTIVES OF PROCESS VALIDATION
Description 2: Outline the goals of validating the manufacturing process and ensuring minimal variation in product quality.
Section 3: REASON FOR PROCESS VALIDATION
Description 3: Explain the various reasons for performing process validation, including changes in products, processes, and equipment.
Section 4: TYPES OF PROCESS VALIDATION
Description 4: Describe the four types of process validation: Prospective, Concurrent, Retrospective, and Revalidation.
Section 5: THE REGULATORY BASIS FOR PROCESS VALIDATION
Description 5: Discuss the regulatory history and requirements for process validation as per current good manufacturing practice (cGMP) regulations.
Section 6: STRATEGY FOR INDUSTRIAL PROCESS VALIDATION OF SOLID DOSAGE FORMS
Description 6: Provide strategies for validating solid dosage forms including raw materials, manufacturing conditions, and control variables.
Section 7: GUIDELINES FOR PROCESS VALIDATION OF SOLID DOSAGE FORMS
Description 7: Enumerate the factors to consider when developing and validating solid dosage forms with a broad set of guidelines.
Section 8: PROTOCOL FOR PROCESS VALIDATION
Description 8: Provide detailed steps involved in the protocol for process validation including tables and criteria.
Section 9: STEPS FOR VALIDATION AND ACCEPTANCE CRITERIA
Description 9: Outline the industry steps for validation of tablets, especially focusing on the wet granulation process.
Section 10: INDUSTRIAL PROCESS EVALUATION AND SELECTION FOR TABLETS
Description 10: Discuss the different unit operations needed for tablet manufacturing and the parameters to validate for each operation.
Section 11: ANNUAL PRODUCT REVIEW
Description 11: Explain the importance and process of an Annual Product Quality Review to ensure continuous manufacturing consistency and quality improvement.
Section 12: CONCLUSION
Description 12: Summarize the significance of process validation in ensuring the efficiency and robustness of tablet manufacturing, highlighting regulatory and quality requirements. |
Algorithms for the minimum sum coloring problem: a review | 9 | ---
paper_title: A one-to-one correspondence between potential solutions of the cluster deletion problem and the minimum sum coloring problem, and its application to P 4 -sparse graphs
paper_content:
In this note we show a one-to-one correspondence between potentially optimal solutions to the cluster deletion problem in a graph G and potentially optimal solutions for the minimum sum coloring problem in the complement graph of G. We apply this correspondence to polynomially solve the cluster deletion problem in a subclass of P4-sparse graphs that strictly includes P4-reducible graphs. We obtain a polynomial time algorithm for the cluster deletion for the family of P4-reducible graphs. We obtain a one-to-one correspondence between potentially optimal solutions to the cluster deletion (resp. minimum sum coloring) problem in a graph G (resp. its complement graph). We apply such correspondence in order to solve the cluster deletion problem on almost all P4-sparse graphs.
---
paper_title: On a graph partition problem with application to VLSI layout
paper_content:
We discuss a graph partition problem with application to VLSI layout. It is not difficult to show that the general partition problem is NP-complete, so we restrict our attention to some special classes of graphs. A graph is called a circle graph if the nodes of the graph represent the chords of a circle and there is an edge between the two nodes if the corresponding chords intersect. Supowit in [17] had posed the partition problem of circle graphs as an open problem. However, it is also not too difficult to show that the partition problem on circle graphs remains NP-complete. We give an integer linear programming formulation for the general graph partition problem.
---
paper_title: Approximation Algorithms for the Chromatic Sum
paper_content:
The chromatic sum of a graph G is the smallest total among all proper colorings of G using natural numbers. It was shown that computing the chromatic sum is NP-hard. In this article we prove that a simple greedy algorithm applied to sparse graphs gives a "good" approximation of the chromatic sum. For all graphs the existence of a polynomial time algorithm that approximates the chromatic sum with a linear function error implies P = NP.
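The kind of simple greedy algorithm analyzed above can be sketched in a few lines; the vertex order used here (non-increasing degree) is just one common choice, not necessarily the one studied in the paper.

```python
# Greedy first-fit coloring: each vertex receives the smallest color not used
# by its already-colored neighbors; the sum of assigned colors upper-bounds
# the chromatic sum. (A simple sketch; the vertex order is one common choice.)
def greedy_sum_coloring(adj):
    """adj: dict mapping vertex -> set of neighbor vertices."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return color, sum(color.values())

if __name__ == "__main__":
    # A 4-cycle a-b-c-d: the optimal sum is 1+1+2+2 = 6.
    adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
    coloring, total = greedy_sum_coloring(adj)
    print(coloring, total)
```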
---
paper_title: A tabu search approach for the sum coloring problem
paper_content:
The sum coloring problem has many applications in scheduling. It is a variant of the vertex coloring problem where the objective is to minimize the sum of colors used in coloring the vertices. In this paper, we use tabu search to solve the sum coloring problem. Experiments are performed on instances extracted from the second DIMACS challenge. Results show significant improvements on some chromatic sum bounds.
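A bare-bones version of such a tabu search might look as follows: a move recolors a single vertex, the objective adds a penalty for conflicting edges to the sum of colors, and reversing a recent move is forbidden for a few iterations. All parameter values are illustrative choices, not those of the paper.

```python
# Bare-bones tabu search for minimum sum coloring (illustrative parameters).
import random

def cost(adj, color, penalty=100):
    conflicts = sum(1 for v in adj for u in adj[v] if u > v and color[u] == color[v])
    return sum(color.values()) + penalty * conflicts

def tabu_sum_coloring(adj, max_colors, iters=5000, tenure=7, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randint(1, max_colors) for v in adj}
    best, best_cost = dict(color), cost(adj, color)
    tabu = {}  # (vertex, color) -> iteration until which that move stays tabu
    for it in range(iters):
        v = rng.choice(list(adj))
        candidates = []
        for c in range(1, max_colors + 1):
            if c == color[v]:
                continue
            old = color[v]
            color[v] = c
            value = cost(adj, color)
            color[v] = old
            allowed = tabu.get((v, c), -1) < it or value < best_cost  # aspiration
            if allowed:
                candidates.append((value, c))
        if not candidates:
            continue
        value, c = min(candidates)
        tabu[(v, color[v])] = it + tenure  # forbid moving v back for a while
        color[v] = c
        if value < best_cost:
            best, best_cost = dict(color), value
    return best, best_cost

if __name__ == "__main__":
    adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
    sol, val = tabu_sum_coloring(adj, max_colors=3)
    print(sol, val)   # typically a conflict-free coloring with sum 6
```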
---
paper_title: Sum List Coloring Graphs
paper_content:
Let G=(V,E) be a graph with n vertices and e edges. The sum choice number of G is the smallest integer p such that there exist list sizes (f(v):v ∈ V) whose sum is p for which G has a proper coloring no matter which color lists of size f(v) are assigned to the vertices v. The sum choice number is bounded above by n+e. If the sum choice number of G equals n+e, then G is sum choice greedy. Complete graphs Kn are sum choice greedy as are trees. Based on a simple, but powerful, lemma we show that a graph each of whose blocks is sum choice greedy is also sum choice greedy. We also determine the sum choice number of K2,n, and we show that every tree on n vertices can be obtained from Kn by consecutively deleting single edges where all intermediate graphs are sc-greedy.
---
paper_title: Finding a Maximum Planar Subset of a Set of Nets in a Channel
paper_content:
An algorithm is presented that, given N two-pin nets in a channel, finds, in O(N^2) time, the largest subset that can be routed all on one layer.
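One common formulation of the channel version, where net i joins top pin a_i to bottom pin b_i and two nets can share a layer exactly when they do not cross, leads to a simple O(N^2) dynamic program; the sketch below conveys the idea but is not necessarily the exact algorithm of the paper.

```python
# O(N^2) dynamic program for a maximum non-crossing (single-layer routable)
# subset of two-pin nets, each given as (top_position, bottom_position).
def max_planar_subset(nets):
    nets = sorted(nets)                    # sort by top position
    n = len(nets)
    best = [1] * n                         # best[i]: largest subset ending at net i
    choice = [-1] * n
    for i in range(n):
        for j in range(i):
            crosses = (nets[j][0] - nets[i][0]) * (nets[j][1] - nets[i][1]) < 0
            if not crosses and best[j] + 1 > best[i]:
                best[i], choice[i] = best[j] + 1, j
    i = max(range(n), key=lambda k: best[k])
    subset = []
    while i != -1:
        subset.append(nets[i])
        i = choice[i]
    return list(reversed(subset))

if __name__ == "__main__":
    nets = [(1, 3), (2, 1), (3, 4), (4, 2), (5, 5)]
    print(max_planar_subset(nets))         # [(1, 3), (3, 4), (5, 5)]
```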
---
paper_title: Special issue on computational methods for graph coloring and its generalizations
paper_content:
---
paper_title: Lower Bounds for the Minimal Sum Coloring Problem
paper_content:
In this paper we present our study of the minimum sum coloring problem (MSCP). We propose a general lower bound for MSCP based on extraction of specific partial graphs. Also, we propose a lower bound using some decomposition into cliques. The experimental results show that our approach improves the results for most literature instances.
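The clique-based bound mentioned in the abstract follows from a simple counting argument: vertices of a clique must receive pairwise distinct colors, so the cheapest possible assignment inside a clique of size q uses colors 1, ..., q. In LaTeX form (a standard statement of this bound):

```latex
% Clique-decomposition lower bound for MSCP: if V is partitioned into cliques
% Q_1, ..., Q_p, the colors inside each Q_i are pairwise distinct, so the best
% one can do is to use 1, 2, ..., |Q_i| within Q_i.
\sum_{v \in V} c(v) \;\ge\; \sum_{i=1}^{p} \sum_{j=1}^{|Q_i|} j
  \;=\; \sum_{i=1}^{p} \frac{|Q_i|\,(|Q_i|+1)}{2}
```

Any partition of the vertices into cliques yields a valid lower bound, and partitions with larger cliques yield stronger bounds, which is why the choice of decomposition matters.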
---
paper_title: The chromatic sum of a graph: history and recent developments
paper_content:
The chromatic sum of a graph is the smallest sum of colors among all proper colorings with natural numbers. The strength of a graph is the minimum number of colors necessary to obtain its chromatic sum. A natural generalization of chromatic sum is optimum cost chromatic partition (OCCP) problem, where the costs of colors can be arbitrary positive numbers. Existing results about chromatic sum, strength of a graph, and OCCP problem are presented together with some recent developments. The focus is on polynomial algorithms for some families of graphs and NP-completeness issues.
---
paper_title: Greedy Algorithms for the Minimum Sum Coloring Problem
paper_content:
Greedy algorithms play an important role in the practical resolution of NP-hard problems. A greedy algorithm is a basic heuristic that builds a solution by iteratively adding the locally best element into the solution according to certain criteria. A greedy algorithm can either be used on its own to obtain a “good” solution, or it can be integrated into global optimization methods, for example, to limit the search space in branch and bound algorithms, or to generate initial solutions in metaheuristics. In this paper we are interested in greedy algorithms for the Minimum Sum Coloring Problem (MSCP). Since MSCP is closely related to the basic Graph Coloring Problem (GCP), we start our study with GCP and then turn to MSCP. Concerning GCP, although a lot of work has been reported in the literature, little of it concerns greedy algorithms. The most widely used greedy algorithms remain DSATUR and
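For reference, DSATUR (mentioned at the end of the abstract) repeatedly colors the uncolored vertex whose neighborhood already uses the largest number of distinct colors, breaking ties by degree; a compact sketch:

```python
# Compact sketch of DSATUR: repeatedly color the uncolored vertex with the
# largest saturation degree (number of distinct colors among its neighbors),
# breaking ties by degree, using the smallest feasible color.
def dsatur(adj):
    color = {}
    while len(color) < len(adj):
        def saturation(v):
            return len({color[u] for u in adj[v] if u in color})
        v = max((u for u in adj if u not in color),
                key=lambda u: (saturation(u), len(adj[u])))
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return color

if __name__ == "__main__":
    # Odd cycle of length 5: DSATUR uses 3 colors, as required.
    adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    coloring = dsatur(adj)
    print(coloring, "sum of colors:", sum(coloring.values()))
```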
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
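One common way to recast MSCP as an unconstrained binary quadratic program (the precise model used in the paper may differ in its details) introduces binary variables x_{v,k} meaning that vertex v receives color k, and folds the constraints into penalty terms:

```latex
% x_{v,k} = 1 iff vertex v receives color k (k = 1, ..., K), P a large penalty.
\min \;
  \underbrace{\sum_{v \in V} \sum_{k=1}^{K} k \, x_{v,k}}_{\text{sum of colors}}
+ P \underbrace{\sum_{v \in V} \Bigl(1 - \sum_{k=1}^{K} x_{v,k}\Bigr)^{2}}_{\text{each vertex gets one color}}
+ P \underbrace{\sum_{(u,v) \in E} \sum_{k=1}^{K} x_{u,k} \, x_{v,k}}_{\text{no conflicting edge}}
\qquad x_{v,k} \in \{0,1\}
```

Here K is an assumed upper bound on the number of colors and P a sufficiently large penalty weight; both are modeling choices.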
---
paper_title: On sum coloring of graphs with parallel genetic algorithms
paper_content:
Chromatic number, chromatic sum and chromatic sum number are important graph coloring characteristics. The paper proves that a parallel metaheuristic like the parallel genetic algorithm (PGA) can be efficiently used for computing approximate sum colorings and finding upper bounds for chromatic sums and chromatic sum numbers for hard-to-color graphs. Suboptimal sum coloring with PGA usually gives much closer upper bounds than theoretical formulas known from the literature.
---
paper_title: Hybrid evolutionary search for the minimum sum coloring problem of graphs
paper_content:
Given a graph G, a proper k-coloring of G is an assignment of k colors {1, ..., k} to the vertices of G such that two adjacent vertices receive two different colors. The minimum sum coloring problem (MSCP) is to find a proper k-coloring while minimizing the sum of the colors assigned to the vertices. This paper presents a stochastic hybrid evolutionary search algorithm for computing upper and lower bounds of this NP-hard problem. The proposed algorithm relies on a joint use of two dedicated crossover operators to generate offspring solutions and an iterated double-phase tabu search procedure to improve offspring solutions. A distance-and-quality updating rule is used to maintain a healthy diversity of the population. We show extensive experimental results to demonstrate the effectiveness of the proposed algorithm and provide the first landscape analysis of MSCP to shed light on the behavior of the algorithm.
---
paper_title: A study of Breakout Local Search for the minimum sum coloring problem
paper_content:
Given an undirected graph G=(V,E), the minimum sum coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of G such that the total sum of the colors assigned to the vertices is minimized. In this paper, we present Breakout Local Search (BLS) for MSCP which combines some essential features of several well-established metaheuristics. BLS explores the search space by a joint use of local search and adaptive perturbation strategies. Tested on 27 commonly used benchmark instances, our algorithm shows competitive performance with respect to recently proposed heuristics and is able to find new record-breaking results for 4 instances.
---
paper_title: A local search heuristic for chromatic sum
paper_content:
A coloring of an undirected graph is a labelling of the vertices in the graph such that no two adjacent vertices receive the same label. The sum coloring problem asks to find a coloring, using natural numbers as labels, such that the total sum of the colors used is minimized. We design and test a local search algorithm, based on variable neighborhood search and iterated local search, that outperforms in several instances the currently existing benchmarks on this problem.
---
paper_title: A memetic algorithm for the minimum sum coloring problem
paper_content:
Given an undirected graph $G$, the Minimum Sum Coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of $G$ such that the total sum of the colors assigned to the vertices is minimized. This paper presents a memetic algorithm for MSCP based on a tabu search procedure with two neighborhoods and a multi-parent crossover operator. Experiments on a set of 77 well-known DIMACS and COLOR 2002-2004 benchmark instances show that the proposed algorithm achieves highly competitive results in comparison with five state-of-the-art algorithms. In particular, the proposed algorithm can improve the best known results for 17 instances. We also provide upper bounds for 18 additional instances for the first time.
---
paper_title: New Algorithm for the Sum Coloring Problem
paper_content:
In this paper we are interested in the elaboration of an approximate solution to the sum coloring problem (MSCP), which is an NP-hard problem derived from graph coloring (GCP). The problem (MSCP) consists in minimizing the sum of colors in a graph. Our resolution approach is based on a hybridization of a genetic algorithm and a local heuristic based on an improvement of the maximal independent set algorithm given by F. Glover [4].
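The independent-set ingredient of such a hybrid can be illustrated with a simple iterated heuristic: repeatedly extract a maximal independent set from the remaining vertices and assign it the next color. This sketches the general idea only, not Glover's improved procedure nor the paper's hybrid with a genetic algorithm.

```python
# Sketch of the iterated maximal-independent-set idea: greedily extract a
# maximal independent set from the uncolored vertices, give it the next color,
# and repeat. (Illustrates the general idea only.)
def mis_sum_coloring(adj):
    uncolored = set(adj)
    color, c = {}, 0
    while uncolored:
        c += 1
        # Build a maximal independent set, preferring low-degree vertices so
        # that early (cheap) color classes tend to be large.
        independent = set()
        for v in sorted(uncolored, key=lambda u: len(adj[u] & uncolored)):
            if adj[v].isdisjoint(independent):
                independent.add(v)
        for v in independent:
            color[v] = c
        uncolored -= independent
    return color, sum(color.values())

if __name__ == "__main__":
    # A path a-b-c-d-e: the optimal sum is 1+2+1+2+1 = 7.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "e"}, "e": {"d"}}
    print(mis_sum_coloring(adj))
```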
---
paper_title: A New Ant Colony Optimization Algorithm for the Lower Bound of Sum Coloring Problem
paper_content:
We consider an undirected graph G = (V, E); the minimum sum coloring problem (MSCP) asks to find a valid vertex coloring of G, using natural numbers (1, 2, ...), with the aim of minimizing the total sum of colors. In this paper we are interested in the elaboration of an approximate solution for the minimum sum coloring problem (MSCP); more precisely, we try to give a lower bound for MSCP by looking for a decomposition of the graph based on the metaheuristic of ant colony optimization (ACO). We test different instances to validate our approach.
---
paper_title: On a graph partition problem with application to VLSI layout
paper_content:
We discuss a graph partition problem with application to VLSI layout. It is not difficult to show that the general partition problem is NP-complete, so we restrict our attention to some special classes of graphs. A graph is called a circle graph if the nodes of the graph represent the chords of a circle and there is an edge between the two nodes if the corresponding chords intersect. Supowit in [17] had posed the partition problem of circle graphs as an open problem. However, it is also not too difficult to show that the partition problem on circle graphs remains NP-complete. We give an integer linear programming formulation for the general graph partition problem.
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
---
paper_title: Hybrid evolutionary search for the minimum sum coloring problem of graphs
paper_content:
Given a graph G, a proper k-coloring of G is an assignment of k colors {1, ..., k} to the vertices of G such that two adjacent vertices receive two different colors. The minimum sum coloring problem (MSCP) is to find a proper k-coloring while minimizing the sum of the colors assigned to the vertices. This paper presents a stochastic hybrid evolutionary search algorithm for computing upper and lower bounds of this NP-hard problem. The proposed algorithm relies on a joint use of two dedicated crossover operators to generate offspring solutions and an iterated double-phase tabu search procedure to improve offspring solutions. A distance-and-quality updating rule is used to maintain a healthy diversity of the population. We show extensive experimental results to demonstrate the effectiveness of the proposed algorithm and provide the first landscape analysis of MSCP to shed light on the behavior of the algorithm.
---
paper_title: On Sum Coloring of Graphs
paper_content:
The sum coloring problem asks to find a vertex coloring of a given graph G, using natural numbers, such that the total sum of the colors is minimized. A coloring which achieves this total sum is called an optimum coloring and the minimum number of colors needed in any optimum coloring of a graph is called the strength of the graph. We prove the NP-hardness of finding the vertex strength for graphs with Δ = 6. Polynomial time algorithms are presented for the sum coloring of chain bipartite graphs and k-split graphs. The edge sum coloring problem and the edge strength of a graph are defined similarly. We prove that the edge sum coloring and the edge strength problems are both NP-complete for k-regular graphs, k ≥ 3. Also we give a polynomial time algorithm to solve the edge sum coloring problem on trees.
---
paper_title: A one-to-one correspondence between potential solutions of the cluster deletion problem and the minimum sum coloring problem, and its application to P 4 -sparse graphs
paper_content:
In this note we show a one-to-one correspondence between potentially optimal solutions to the cluster deletion problem in a graph G and potentially optimal solutions for the minimum sum coloring problem in the complement graph of G. We apply this correspondence to polynomially solve the cluster deletion problem in a subclass of P4-sparse graphs that strictly includes P4-reducible graphs. We obtain a polynomial time algorithm for the cluster deletion for the family of P4-reducible graphs. We obtain a one-to-one correspondence between potentially optimal solutions to the cluster deletion (resp. minimum sum coloring) problem in a graph G (resp. its complement graph). We apply such correspondence in order to solve the cluster deletion problem on almost all P4-sparse graphs.
---
paper_title: Minimum Color Sum of Bipartite Graphs
paper_content:
The problem of minimum color sum of a graph is to color the vertices of the graph such that the sum (average) of all assigned colors is minimum. Recently it was shown that in general graphs this problem cannot be approximated within n^{1-ε}, for any ε > 0, unless NP = ZPP (Bar-Noy et al., Information and Computation 140 (1998), 183-202). In the same paper, a 9/8-approximation algorithm was presented for bipartite graphs. The hardness question for this problem on bipartite graphs was left open. In this paper we show that the minimum color sum problem for bipartite graphs admits no polynomial approximation scheme, unless P = NP. The proof is by L-reducing the problem of finding the maximum independent set in a graph whose maximum degree is four to this problem. This result indicates clearly that the minimum color sum problem is much harder than the traditional coloring problem, which is trivially solvable in bipartite graphs. As for the approximation ratio, we make a further step toward finding the precise threshold. We present a polynomial 10/9-approximation algorithm. Our algorithm uses a flow procedure in addition to the maximum independent set procedure used in previous solutions.
---
paper_title: An introduction to chromatic sums
paper_content:
We introduce the new concept of the chromatic sum of a graph G, the smallest possible total among all proper colorings of G using natural numbers. We show that computing the chromatic sum for arbitrary graphs is an NP-complete problem. Indeed, a polynomial algorithm for the chromatic sum would be easily modified to compute the chromatic number. Even for trees the chromatic sum is far from trivial. We construct a family of trees to demonstrate that for each k, some trees need k colors to achieve the minimum sum. In fact, we prove that our family gives the smallest trees with this property. Moreover, we show that asymptotically, for each value of k, almost all trees require more than k colors. Finally, we present a linear algorithm for computing the chromatic sum of an arbitrary tree.
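The linear-time algorithm for trees rests on a small dynamic program: since an optimal sum coloring of an n-vertex tree needs only O(log n) colors, one can compute, for every vertex and every color of a small palette, the cheapest coloring of its subtree. A compact sketch, where the palette bound is an illustrative (safe but not tight) assumption:

```python
# Dynamic program for the chromatic sum of a tree: best[v][c] is the cheapest
# total cost of v's subtree when v receives color c. A small palette suffices
# for trees; the bound below is a safe illustrative choice, not a tight one.
def tree_chromatic_sum(children, root, n_colors=None):
    """children: dict vertex -> list of child vertices (a rooted tree)."""
    if n_colors is None:
        n_colors = max(2, len(children).bit_length() + 1)

    def solve(v):
        child_tables = [solve(u) for u in children[v]]
        table = {}
        for c in range(1, n_colors + 1):
            total = c
            for tab in child_tables:
                # Each child may take any color different from its parent's.
                total += min(cost for col, cost in tab.items() if col != c)
            table[c] = total
        return table

    return min(solve(root).values())

if __name__ == "__main__":
    # A star with center r and 4 leaves: the optimal sum is 2 + 4*1 = 6.
    children = {"r": ["a", "b", "c", "d"], "a": [], "b": [], "c": [], "d": []}
    print(tree_chromatic_sum(children, "r"))
```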
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
---
paper_title: Coloring of trees with minimum sum of colors
paper_content:
The chromatic sum of a graph is the smallest sum of colors among all proper colorings with natural numbers. The strength is the minimum number of colors needed to achieve the chromatic sum. We construct for each positive integer k a tree with strength k that has maximum degree only 2k - 2. The result is best possible.
---
paper_title: A note on the strength and minimum color sum of bipartite graphs
paper_content:
The strength of a graph G is the smallest integer s such that there exists a minimum sum coloring of G using only the integers {1, ..., s}. For bipartite graphs of maximum degree Δ we show the following simple bound: s ≤ ⌈Δ/2⌉ + 1. As a consequence, there exists a quadratic time algorithm for determining the strength and minimum color sum of bipartite graphs of maximum degree Δ ≤ 4.
---
paper_title: On Sum Coloring and Sum Multi-Coloring for Restricted Families of Graphs
paper_content:
We consider the sum coloring (chromatic sum) problem and the sum multi-coloring problem for restricted families of graphs. In particular, we consider the graph classes of proper intersection graphs of axis-parallel rectangles, proper interval graphs, and unit disk graphs. All the above-mentioned graph classes belong to a more general graph class of (k+1)-clawfree graphs (respectively, for k=4,2,5). We prove that sum coloring is NP-hard for penny graphs and unit square graphs, which implies NP-hardness for unit disk graphs and proper intersection graphs of axis-parallel rectangles. We show a 2-approximation algorithm for unit square graphs, with the assumption that the geometric representation of the graph is given. For sum multi-coloring, we confirm that the greedy first-fit coloring, after ordering vertices by their demands, achieves a k-approximation for the preemptive version of sum multi-coloring on (k+1)-clawfree graphs. Finally, we study priority algorithms as a model for greedy algorithms for the sum coloring problem and the sum multi-coloring problem. We show various inapproximability results under several natural input representations.
---
paper_title: Sum Coloring of Bipartite Graphs with Bounded Degree
paper_content:
We consider the Chromatic Sum Problem on bipartite graphs which appears to be much harder than the classical Chromatic Number Problem. We prove that the Chromatic Sum Problem is NP-complete on planar bipartite graphs with Δ ≤ 5, but polynomial on bipartite graphs with Δ ≤ 3, for which we construct an O(n^2)-time algorithm. Hence, we tighten the borderline of intractability for this problem on bipartite graphs with bounded degree, namely: the case Δ = 3 is easy, Δ = 5 is hard. Moreover, we construct a 27/26-approximation algorithm for this problem thus improving the best known approximation ratio of 10/9.
---
paper_title: Greedy Algorithms for the Minimum Sum Coloring Problem
paper_content:
Greedy algorithms play an important role in the practical resolution of NP-hard problems. A greedy algorithm is a basic heuristic that builds a solution by iteratively adding the locally best element into the solution according to certain criteria. A greedy algorithm can either be used on its own to obtain a “good” solution, or it can be integrated into global optimization methods, for example, to limit the search space in branch and bound algorithms, or to generate initial solutions in metaheuristics. In this paper we are interested in greedy algorithms for the Minimum Sum Coloring Problem (MSCP). Since MSCP is closely related to the basic Graph Coloring Problem (GCP), we start our study with GCP and then turn to MSCP. Concerning GCP, although a lot of work has been reported in the literature, little of it concerns greedy algorithms. The most widely used greedy algorithms remain DSATUR and
---
paper_title: A memetic algorithm for the minimum sum coloring problem
paper_content:
Given an undirected graph $G$, the Minimum Sum Coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of $G$ such that the total sum of the colors assigned to the vertices is minimized. This paper presents a memetic algorithm for MSCP based on a tabu search procedure with two neighborhoods and a multi-parent crossover operator. Experiments on a set of 77 well-known DIMACS and COLOR 2002-2004 benchmark instances show that the proposed algorithm achieves highly competitive results in comparison with five state-of-the-art algorithms. In particular, the proposed algorithm can improve the best known results for 17 instances. We also provide upper bounds for 18 additional instances for the first time.
---
paper_title: A tabu search approach for the sum coloring problem
paper_content:
Abstract The sum coloring problem has many applications in scheduling. It is a variant of the vertex coloring problem where the objective is to minimize the sum of colors used in coloring the vertices. In this paper, we use tabu search to solve the sum coloring problem. Experiments are performed on instances extracted from the second DIMACS challenge. Results show significant improvements on some chromatic sum bounds.
---
paper_title: Hybrid Evolutionary Algorithms for Graph Coloring
paper_content:
A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.
---
paper_title: A study of Breakout Local Search for the minimum sum coloring problem
paper_content:
Given an undirected graph G=(V,E), the minimum sum coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of G such that the total sum of the colors assigned to the vertices is minimized. In this paper, we present Breakout Local Search (BLS) for MSCP which combines some essential features of several well-established metaheuristics. BLS explores the search space by a joint use of local search and adaptive perturbation strategies. Tested on 27 commonly used benchmark instances, our algorithm shows competitive performance with respect to recently proposed heuristics and is able to find new record-breaking results for 4 instances.
---
paper_title: Using tabu search techniques for graph coloring
paper_content:
Tabu search techniques are used for moving step by step towards the minimum value of a function. A tabu list of forbidden movements is updated during the iterations to avoid cycling and being trapped in local minima. Such techniques are adapted to graph coloring problems. We show that they provide almost optimal colorings of graphs having up to 1000 nodes and their efficiency is shown to be significantly superior to the famous simulated annealing.
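A minimal sketch of the tabu scheme described above, adapted to the usual k-coloring setting (fix k, minimize the number of conflicting edges, and forbid undoing a recent vertex/color move); the tenure value and move selection below are illustrative assumptions rather than the authors' exact settings:

import random

def tabu_kcoloring(adj, k, iterations=10000, tenure=7):
    vertices = list(adj)
    color = {v: random.randrange(k) for v in vertices}
    tabu = {}  # (vertex, color) -> iteration index until which that assignment is forbidden

    def conflicts(v):
        return sum(1 for u in adj[v] if color[u] == color[v])

    for it in range(iterations):
        conflicted = [v for v in vertices if conflicts(v) > 0]
        if not conflicted:
            return color  # proper k-coloring found
        v = random.choice(conflicted)
        # Best non-tabu recoloring of v, judged by the number of conflicts it would create.
        best_c = min((c for c in range(k) if c != color[v] and tabu.get((v, c), -1) < it),
                     key=lambda c: sum(1 for u in adj[v] if color[u] == c),
                     default=None)
        if best_c is None:
            continue
        tabu[(v, color[v])] = it + tenure  # forbid moving v back to its old color for a while
        color[v] = best_c
    return color  # may still contain conflicts if the iteration budget runs out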
---
paper_title: A local search heuristic for chromatic sum
paper_content:
A coloring of an undirected graph is a labelling of the vertices in the graph such that no two adjacent vertices receive the same label. The sum coloring problem asks to find a coloring, using natural numbers as labels, such that the total sum of the colors used is minimized. We design and test a local search algorithm, based on variable neighborhood search and iterated local search, that outperforms in several instances the currently existing benchmarks on this problem.
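The combination of descent and perturbation that the abstract describes fits the generic iterated local search template; the sketch below shows only that template, with the problem-specific operators passed in as placeholders rather than the authors' actual neighborhoods:

def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    # Improve the starting solution, then repeatedly perturb and re-improve,
    # keeping the best solution seen so far.
    best = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):
            best = candidate
    return best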
---
paper_title: Hybrid Evolutionary Algorithms for Graph Coloring
paper_content:
A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
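For readers unfamiliar with the reformulation, a generic textbook-style way to cast MSCP as an unconstrained binary quadratic program (not necessarily the exact model used in this paper) introduces binaries x_{ik} = 1 iff vertex i receives color k, an upper bound K on the number of colors, and a large penalty weight P:
\min_{x \in \{0,1\}^{n \times K}} \sum_{i=1}^{n} \sum_{k=1}^{K} k\, x_{ik} \;+\; P \Biggl[ \sum_{i=1}^{n} \Bigl( 1 - \sum_{k=1}^{K} x_{ik} \Bigr)^{2} + \sum_{(i,j) \in E} \sum_{k=1}^{K} x_{ik}\, x_{jk} \Biggr]
The first penalty term forces exactly one color per vertex and the second penalizes adjacent vertices sharing a color.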
---
paper_title: K.: On sum coloring of graphs with parallel genetic algorithms
paper_content:
Chromatic number, chromatic sum and chromatic sum number are important graph coloring characteristics. The paper proves that a parallel metaheuristic like the parallel genetic algorithm (PGA) can be efficiently used for computing approximate sum colorings and finding upper bounds for chromatic sums and chromatic sum numbers for hard-to-color graphs. Suboptimal sum coloring with PGA usually gives much closer upper bounds than theoretical formulas known from the literature.
---
paper_title: Hybrid evolutionary search for the minimum sum coloring problem of graphs
paper_content:
Given a graph G, a proper k-coloring of G is an assignment of k colors {1, ..., k} to the vertices of G such that two adjacent vertices receive two different colors. The minimum sum coloring problem (MSCP) is to find a proper k-coloring while minimizing the sum of the colors assigned to the vertices. This paper presents a stochastic hybrid evolutionary search algorithm for computing upper and lower bounds of this NP-hard problem. The proposed algorithm relies on a joint use of two dedicated crossover operators to generate offspring solutions and an iterated double-phase tabu search procedure to improve offspring solutions. A distance-and-quality updating rule is used to maintain a healthy diversity of the population. We show extensive experimental results to demonstrate the effectiveness of the proposed algorithm and provide the first landscape analysis of MSCP to shed light on the behavior of the algorithm.
---
paper_title: A memetic algorithm for the minimum sum coloring problem
paper_content:
Given an undirected graph $G$, the Minimum Sum Coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of $G$ such that the total sum of the colors assigned to the vertices is minimized. This paper presents a memetic algorithm for MSCP based on a tabu search procedure with two neighborhoods and a multi-parent crossover operator. Experiments on a set of 77 well-known DIMACS and COLOR 2002-2004 benchmark instances show that the proposed algorithm achieves highly competitive results in comparison with five state-of-the-art algorithms. In particular, the proposed algorithm can improve the best known results for 17 instances. We also provide upper bounds for 18 additional instances for the first time.
---
paper_title: Tight Bounds on the Chromatic Sum of a Connected Graph
paper_content:
The chromatic sum of a graph is introduced in the dissertation of Ewa Kubicka. It is the smallest possible total among all proper colorings of G using natural numbers. In this article we determine tight bounds on the chromatic sum of a connected graph with e edges,
---
paper_title: K.: On sum coloring of graphs with parallel genetic algorithms
paper_content:
Chromatic number, chromatic sum and chromatic sum number are important graph coloring characteristics. The paper proves that a parallel metaheuristic like the parallel genetic algorithm (PGA) can be efficiently used for computing approximate sum colorings and finding upper bounds for chromatic sums and chromatic sum numbers for hard-to-color graphs. Suboptimal sum coloring with PGA usually gives much closer upper bounds than theoretical formulas known from the literature.
---
paper_title: Lower Bounds for the Minimal Sum Coloring Problem
paper_content:
Abstract In this paper we present our study of the minimum sum coloring problem (MSCP). We propose a general lower bound for MSCP based on extraction of specific partial graphs. Also, we propose a lower bound using some decomposition into cliques. The experimental results show that our approach improves the results for most literature instances.
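The clique-based reasoning rests on a simple observation: a clique on q vertices needs q distinct colors, so its vertices contribute at least 1 + 2 + ... + q to any proper coloring. Hence, for any partition of V into vertex-disjoint cliques Q_1, ..., Q_m (stated here as the generic bound; the extraction strategy itself is the paper's contribution):
\Sigma(G) \;\ge\; \sum_{i=1}^{m} \frac{|Q_i|\,(|Q_i|+1)}{2}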
---
paper_title: Hybrid evolutionary search for the minimum sum coloring problem of graphs
paper_content:
Given a graph G, a proper k-coloring of G is an assignment of k colors {1, ..., k} to the vertices of G such that two adjacent vertices receive two different colors. The minimum sum coloring problem (MSCP) is to find a proper k-coloring while minimizing the sum of the colors assigned to the vertices. This paper presents a stochastic hybrid evolutionary search algorithm for computing upper and lower bounds of this NP-hard problem. The proposed algorithm relies on a joint use of two dedicated crossover operators to generate offspring solutions and an iterated double-phase tabu search procedure to improve offspring solutions. A distance-and-quality updating rule is used to maintain a healthy diversity of the population. We show extensive experimental results to demonstrate the effectiveness of the proposed algorithm and provide the first landscape analysis of MSCP to shed light on the behavior of the algorithm.
---
paper_title: Improved Lower Bounds for Sum Coloring via Clique Decomposition
paper_content:
Given an undirected graph $G = (V,E)$ with a set $V$ of vertices and a set $E$ of edges, the minimum sum coloring problem (MSCP) is to find a legal vertex coloring of $G$, using colors represented by natural numbers $1, 2, \ldots$ such that the total sum of the colors assigned to the vertices is minimized. This paper describes an approach based on the decomposition of the original graph into disjoint cliques for computing lower bounds for the MSCP. Basically, the proposed approach identifies and removes at each extraction iteration a maximum number of cliques of the same size (the largest possible) from the graph. Computational experiments show that this approach is able to improve on the current best lower bounds for 14 benchmark instances, and to prove optimality for the first time for 4 instances. We also report lower bounds for 24 more instances for which no such bounds are available in the literature. These new lower bounds are useful to estimate the quality of the upper bounds obtained with various heuristic approaches.
---
paper_title: A local search heuristic for chromatic sum
paper_content:
A coloring of an undirected graph is a labelling of the vertices in the graph such that no two adjacent vertices receive the same label. The sum coloring problem asks to find a coloring, using natural numbers as labels, such that the total sum of the colors used is minimized. We design and test a local search algorithm, based on variable neighborhood search and iterated local search, that outperforms in several instances the currently existing benchmarks on this problem.
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
---
paper_title: Hybrid evolutionary search for the minimum sum coloring problem of graphs
paper_content:
Given a graph G, a proper k-coloring of G is an assignment of k colors {1, ..., k} to the vertices of G such that two adjacent vertices receive two different colors. The minimum sum coloring problem (MSCP) is to find a proper k-coloring while minimizing the sum of the colors assigned to the vertices. This paper presents a stochastic hybrid evolutionary search algorithm for computing upper and lower bounds of this NP-hard problem. The proposed algorithm relies on a joint use of two dedicated crossover operators to generate offspring solutions and an iterated double-phase tabu search procedure to improve offspring solutions. A distance-and-quality updating rule is used to maintain a healthy diversity of the population. We show extensive experimental results to demonstrate the effectiveness of the proposed algorithm and provide the first landscape analysis of MSCP to shed light on the behavior of the algorithm.
---
paper_title: A study of Breakout Local Search for the minimum sum coloring problem
paper_content:
Given an undirected graph G=(V,E), the minimum sum coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of G such that the total sum of the colors assigned to the vertices is minimized. In this paper, we present Breakout Local Search (BLS) for MSCP which combines some essential features of several well-established metaheuristics. BLS explores the search space by a joint use of local search and adaptive perturbation strategies. Tested on 27 commonly used benchmark instances, our algorithm shows competitive performance with respect to recently proposed heuristics and is able to find new record-breaking results for 4 instances.
---
paper_title: A memetic algorithm for the minimum sum coloring problem
paper_content:
Given an undirected graph $G$, the Minimum Sum Coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of $G$ such that the total sum of the colors assigned to the vertices is minimized. This paper presents a memetic algorithm for MSCP based on a tabu search procedure with two neighborhoods and a multi-parent crossover operator. Experiments on a set of 77 well-known DIMACS and COLOR 2002-2004 benchmark instances show that the proposed algorithm achieves highly competitive results in comparison with five state-of-the-art algorithms. In particular, the proposed algorithm can improve the best known results for 17 instances. We also provide upper bounds for 18 additional instances for the first time.
---
paper_title: Solving the minimum sum coloring problem via binary quadratic programming
paper_content:
In recent years, binary quadratic programming (BQP) has been successively applied to solve several combinatorial optimization problems. We consider in this paper a study of using the BQP model to solve the minimum sum coloring problem (MSCP). For this purpose, we recast the MSCP with a quadratic model which is then solved via a recently proposed Path Relinking (PR) algorithm designed for the general BQP. Based on a set of MSCP benchmark instances, we investigate the performance of this solution approach compared with existing methods.
---
paper_title: A local search heuristic for chromatic sum
paper_content:
A coloring of an undirected graph is a labelling of the vertices in the graph such that no two adjacent vertices receive the same label. The sum coloring problem asks to find a coloring, using natural numbers as labels, such that the total sum of the colors used is minimized. We design and test a local search algorithm, based on variable neighborhood search and iterated local search, that outperforms in several instances the currently existing benchmarks on this problem.
---
paper_title: A memetic algorithm for the minimum sum coloring problem
paper_content:
Given an undirected graph $G$, the Minimum Sum Coloring problem (MSCP) is to find a legal assignment of colors (represented by natural numbers) to each vertex of $G$ such that the total sum of the colors assigned to the vertices is minimized. This paper presents a memetic algorithm for MSCP based on a tabu search procedure with two neighborhoods and a multi-parent crossover operator. Experiments on a set of 77 well-known DIMACS and COLOR 2002-2004 benchmark instances show that the proposed algorithm achieves highly competitive results in comparison with five state-of-the-art algorithms. In particular, the proposed algorithm can improve the best known results for 17 instances. We also provide upper bounds for 18 additional instances for the first time.
---
| Title: Algorithms for the Minimum Sum Coloring Problem: A Review
Section 1: Definitions and formulation of MSCP
Description 1: This section introduces the basic definitions and mathematical formulation of the Minimum Sum Coloring Problem (MSCP), including its objective and related graph theoretical concepts.
Section 2: Polynomial-time and k-approximation algorithms for MSCP
Description 2: This section reviews various polynomial-time and k-approximation algorithms that have been developed for specific classes of graphs and highlights their main characteristics and effectiveness.
Section 3: Heuristics and metaheuristics for MSCP
Description 3: This section provides a comprehensive review of heuristic and metaheuristic algorithms proposed for MSCP, categorizing them into greedy algorithms, local search heuristics, and evolutionary algorithms, and discussing their key ingredients and performance.
Section 4: Greedy algorithms
Description 4: This section delves into the details of various greedy algorithms developed for MSCP, discussing their methodologies and how they are used to generate initial solutions for other more complex algorithms.
Section 5: Local search heuristics
Description 5: This section examines local search heuristics, describing different neighborhood structures and transformation operators used to improve solutions, and categorizing them into single and multi-neighborhood searches.
Section 6: Evolutionary algorithms
Description 6: This section discusses evolutionary algorithms and their hybrid variants, focusing on memetic algorithms that combine genetic operators with local search improvements.
Section 7: Bounds for MSCP
Description 7: This section introduces theoretical and computational bounds of MSCP, explaining how they are derived and their significance in the context of evaluating the performance of different algorithms.
Section 8: Benchmark and performance evaluation
Description 8: This section introduces common benchmark instances used for assessing MSCP algorithms and summarizes the performance of the reviewed algorithms based on these benchmarks.
Section 9: Perspectives and conclusion
Description 9: This section discusses future research directions and concludes the review, emphasizing the importance of MSCP and possible advancements in solution methodologies. |
On TCP Performance in a Heterogeneous Network: A Survey | 9 | ---
paper_title: Improving the start-up behavior of a congestion control scheme for TCP
paper_content:
Based on experiments conducted in a network simulator and over real networks, this paper proposes changes to the congestion control scheme in current TCP implementations to improve its behavior during the start-up period of a TCP connection. The scheme, which includes Slow-start, Fast Retransmit, and Fast Recovery algorithms, uses acknowledgments from a receiver to dynamically calculate reasonable operating values for a sender's TCP parameters governing when and how much a sender can pump into the network. During the start-up period, because a TCP sender starts with default parameters, it often ends up sending too many packets and too fast, leading to multiple losses of packets from the same window. This paper shows that recovery from losses during this start-up period is often unnecessarily time-consuming. In particular, using the current Fast Retransmit algorithm, when multiple packets in the same window are lost, only one of the packet losses may be recovered by each Fast Retransmit; the rest are often recovered by Slow-start after a usually lengthy retransmission timeout. Thus, this paper proposes changes to the Fast Retransmit algorithm so that it can quickly recover from multiple packet losses without waiting unnecessarily for the timeout. These changes, tested in the simulator and on the real networks, show significant performance improvements, especially for short TCP transfers. The paper also proposes other changes to help minimize the number of packets lost during the start-up period.
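As background for the window dynamics discussed above, the idealized per-RTT growth of the TCP congestion window (exponential below ssthresh, linear afterwards) can be traced with a few lines of Python; this is a textbook-level model, not the code changes proposed in the paper:

def cwnd_trace(cwnd=1, ssthresh=8, rounds=10):
    # Record the congestion window (in segments) at the start of each round trip:
    # double per RTT in slow start, add one segment per RTT in congestion avoidance.
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = min(cwnd * 2, ssthresh) if cwnd < ssthresh else cwnd + 1
    return trace

# cwnd_trace() -> [1, 2, 4, 8, 9, 10, 11, 12, 13, 14]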
---
paper_title: Simulation-based comparisons of Tahoe, Reno and SACK TCP
paper_content:
This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered.
---
paper_title: Transport Protocols for Internet-Compatible Satellite Networks
paper_content:
We address the question of how well end-to-end transport connections perform in a satellite environment composed of one or more satellites in geostationary orbit (GEO) or low-altitude Earth orbit (LEO), in which the connection may traverse a portion of the wired Internet. We first summarize the various ways in which latency and asymmetry can impair the performance of the Internet's transmission control protocol (TCP), and discuss extensions to standard TCP that alleviate some of these performance problems. Through analysis, simulation, and experiments, we quantify the performance of state-of-the-art TCP implementations in a satellite environment. A key part of the experimental method is the use of traffic models empirically derived from Internet traffic traces. We identify those TCP implementations that can be expected to perform reasonably well, and those that can suffer serious performance degradation. An important result is that, even with the best satellite-optimized TCP implementations, moderate levels of congestion in the wide-area Internet can seriously degrade performance for satellite connections. For scenarios in which TCP performance is poor, we investigate the potential improvement of using a satellite gateway, proxy, or Web cache to "split" transport connections in a manner transparent to end users. Finally, we describe a new transport protocol for use internally within a satellite network or as part of a split connection. This protocol, which we call the satellite transport protocol (STP), is optimized for challenging network impairments such as high latency, asymmetry, and high error rates. Among its chief benefits are up to an order of magnitude reduction in the bandwidth used in the reverse path, as compared to standard TCP, when conducting large file transfers. This is a particularly important attribute for the kind of asymmetric connectivity likely to dominate satellite-based Internet access.
---
paper_title: Random early detection gateways for congestion avoidance
paper_content:
The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
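The probabilistic marking described above follows a simple piecewise-linear curve in the averaged queue size; the sketch below shows that basic curve only and omits RED's count-based probability adjustment and the EWMA queue averaging:

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    # No drops below min_th, forced drop at or above max_th,
    # and a linear ramp from 0 to max_p in between.
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# e.g. red_drop_probability(10, 5, 15, max_p=0.1) == 0.05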
---
paper_title: The Performance of TCP/IP for Networks with High Bandwidth-Delay Products and Random Loss
paper_content:
This paper examines the performance of TCP/IP, the Internet data transport protocol, over wide-area networks (WANs) in which data traffic could coexist with real-time traffic such as voice and video. Specifically, we attempt to develop a basic understanding, using analysis and simulation, of the properties of TCP/IP in a regime where: (1) the bandwidth-delay product of the network is high compared to the buffering in the network and (2) packets may incur random loss (e.g., due to transient congestion caused by fluctuations in real-time traffic, or wireless links in the path of the connection). The following key results are obtained. First, random loss leads to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. Second, for multiple connections sharing a bottleneck link, TCP is grossly unfair toward connections with higher round-trip delays. This means that a simple first in first out (FIFO) queueing discipline might not suffice for data traffic in WANs. Finally, while the Reno version of TCP produces less bursty traffic than the original Tahoe version, it is less robust than the latter when successive losses are closely spaced. We conclude by indicating modifications that may be required both at the transport and network layers to provide good end-to-end performance over high-speed WANs.
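To give the stated threshold a concrete scale (an illustrative calculation, not taken from the paper): with a bandwidth-delay product of W = 100 packets, the condition is reached already at a random-loss rate of about p = 10^{-4}, since
p \cdot W^{2} = 10^{-4} \cdot 100^{2} = 1
so even small loss rates can noticeably depress throughput on high bandwidth-delay paths.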
---
paper_title: Connections with multiple congested gateways in packet-switched networks part 1: one-way traffic
paper_content:
In this paper we explore the bias in TCP/IP networks against connections with multiple congested gateways. We consider the interaction between the bias against connections with multiple congested gateways, the bias of the TCP window modification algorithm against connections with longer roundtrip times, and the bias of Drop Tail and Random Drop gateways against bursty traffic. Using simulations and a heuristic analysis, we show that in a network with the window modification algorithm in 4.3 tahoe BSD TCP and with Random Drop or Drop Tail gateways, a longer connection with multiple congested gateways can receive unacceptably low throughput. We show that in a network with no bias against connections with longer roundtrip times and with no bias against bursty traffic, a connection with multiple congested gateways can receive an acceptable level of throughput.We discuss the application of several current measures of fairness to networks with multiple congested gateways, and show that different measures of fairness have quite different implications. One view is that each connection should receive the same throughput in bytes/second, regardless of roundtrip times or numbers of congested gateways. Another view is that each connection should receive the same share of the network's scarce congested resources. In general, we believe that the fairness criteria for connections with multiple congested gateways requires further consideration.
---
paper_title: A comparison of mechanisms for improving TCP performance over wireless links
paper_content:
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.
---
paper_title: Analysis of TCP with several bottleneck nodes
paper_content:
Many works have studied the performance of TCP by modeling the network as a single bottleneck node. We present a more general model taking into account all the nodes on the path, not only the main bottleneck. We show that, in addition to the main bottleneck, the other nodes can seriously affect the performance of TCP. They may cause an improvement in the performance by decreasing the burstiness of TCP traffic arriving at the main bottleneck. But, if the buffers in these nodes are not well dimensioned, the congestion may be shifted to them, which deteriorates the performance even though they are faster than the main bottleneck. We conclude our analysis with guidelines for the dimensioning of network buffers so as to improve the performance of TCP.
---
paper_title: A comparison of mechanisms for improving TCP performance over wireless links
paper_content:
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.
---
| Title: On TCP Performance in a Heterogeneous Network: A Survey
Section 1: Introduction
Description 1: Introduce the motivation for studying TCP performance in heterogeneous networks and provide an overview of the challenges.
Section 2: Overview of TCP
Description 2: Present an overview of the Transmission Control Protocol (TCP), including its mechanisms and phases like Slow Start (SS) and Congestion Avoidance (CA).
Section 3: Large Bandwidth-Delay Product
Description 3: Discuss the impact of large bandwidth-delay products (BDP) on TCP performance and explore related issues and solutions.
Section 4: Round-Trip Time
Description 4: Analyze the effects of long round-trip times (RTTs) on TCP performance and explore possible improvements and solutions.
Section 5: Improving Slow Start performance
Description 5: Provide a discussion on various solutions to improve TCP's Slow Start performance, including TCP level, application level, and network level solutions.
Section 6: Solutions for unfairness
Description 6: Examine issues of TCP unfairness and discuss both TCP-level and network-level solutions aimed at improving fairness in network bandwidth allocation.
Section 7: Non-congestion losses
Description 7: Consider the impact of non-congestion losses on TCP performance and review solutions to mitigate these losses, including hiding non-congestion losses, link-level, TCP-level, and end-to-end solutions.
Section 8: Bandwidth Asymmetry
Description 8: Investigate the problems arising from bandwidth asymmetry in networks and explore solutions from both the receiver side and the sender side.
Section 9: Conclusions
Description 9: Summarize the main problems facing TCP in heterogeneous networks and highlight the key solutions proposed in the literature, emphasizing the issues of TCP burstiness and coupling between congestion detection and error control. |
Introduction to the Special Issue: The Literature Review in Information Systems | 6 | ---
paper_title: Sociomateriality: Challenging the Separation of Technology, Work and Organization
paper_content:
Abstract We begin by juxtaposing the pervasive presence of technology in organizational work with its absence from the organization studies literature. Our analysis of four leading journals in the field confirms that over 95% of the articles published in top management research outlets do not take into account the role of technology in organizational life. We then examine the research that has been done on technology, and categorize this literature into two research streams according to their view of technology: discrete entities or mutually dependent ensembles. For each stream, we discuss three existing reviews spanning the last three decades of scholarship to highlight that while there have been many studies and approaches to studying organizational interactions and implications of technology, empirical research has produced mixed and often‐conflicting results. Going forward, we suggest that further work is needed to theorize the fusion of technology and work in organizations, and that additional perspe...
---
paper_title: Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques
paper_content:
Despite the fact that validating the measures of constructs is critical to building cumulative knowledge in MIS and the behavioral sciences, the process of scale development and validation continues to be a challenging activity. Undoubtedly, part of the problem is that many of the scale development procedures advocated in the literature are limited by the fact that they (1) fail to adequately discuss how to develop appropriate conceptual definitions of the focal construct, (2) often fail to properly specify the measurement model that relates the latent construct to its indicators, and (3) underutilize techniques that provide evidence that the set of items used to represent the focal construct actually measures what it purports to measure. Therefore, the purpose of the present paper is to integrate new and existing techniques into a comprehensive set of recommendations that can be used to give researchers in MIS and the behavioral sciences a framework for developing valid measures. First, we briefly elaborate upon some of the limitations of current scale development practices. Following this, we discuss each of the steps in the scale development process while paying particular attention to the differences that are required when one is attempting to develop scales for constructs with formative indicators as opposed to constructs with reflective indicators. Finally, we discuss several things that should be done after the initial development of a scale to examine its generalizability and to enhance its usefulness.
---
paper_title: Synthesizing information systems knowledge: A typology of literature reviews
paper_content:
Abstract In this article we develop a typology of review types and provide a descriptive insight into the most common reviews found in top IS journals. Our assessment reveals that the number of IS reviews has increased over the years. The majority of the 139 reviews are theoretical in nature, followed by narrative reviews, meta-analyses, descriptive reviews, hybrid reviews, critical reviews, and scoping reviews. Considering the calls for IS research to develop a cumulative tradition, we hope more review articles will be published in the future and encourage researchers who start a review to use our typology to position their contribution.
---
paper_title: Critical Discourse Analysis as a Review Methodology: An Empirical Example
paper_content:
Research disciplines and subdisciplines are steeped in epistemological beliefs and theoretical assumptions that guide and constrain research. These beliefs and assumptions both enable scientific inquiry and limit scientific progress. Theory and review papers tend to be a means for reproducing ideological assumptions. However, review papers can also challenge ideological assumptions by critically assessing taken-for-granted assumptions. Critical review methods are underdeveloped in the management disciplines. The information systems (IS) discipline must do more to improve the critical examination of its scientific discourse. In this paper, we present a method with guiding principles and steps for systematically conducting critical reviews of IS literature based on Habermasian strains of critical discourse analysis. We provide an empirical example of the method. The empirical example offers a critical review of behavioral information security research with a focus on employees’ security behaviors.
---
paper_title: Design science in information systems research
paper_content:
A silver halide photographic material which contains at least one blocked photographic reagent having in the molecule thereof at least one structure and capable of releasing a photogrpahically useful reagent by cleavage of the bond by a nucleophilic attack of a nucleophilic reagent on the structure and a subsequent intramolecular electron transfer reaction or an intramolecular nucleophilic reaction and which has at least one photosensitive silver halide emulsion layer. There is provided a photogrphic material which includes, in combination, a photographic reagent precursor capable of releasing a photographically useful reagent timely on photographic processing and a photosensitive silver halide emulsion layer.
---
paper_title: Efficacy of the Theory of Planned Behaviour: A meta-analytic review
paper_content:
The Theory of Planned Behaviour (TPB) has received considerable attention in the literature. The present study is a quantitative integration and review of that research. From a database of 185 independent studies published up to the end of 1997, the TPB accounted for 27% and 39% of the variance in behaviour and intention, respectively. The perceived behavioural control (PBC) construct accounted for significant amounts of variance in intention and behaviour, independent of theory of reasoned action variables. When behaviour measures were self-reports, the TPB accounted for 11% more of the variance in behaviour than when behaviour measures were objective or observed (R2s = .31 and .21, respectively). Attitude, subjective norm and PBC account for significantly more of the variance in individuals' desires than intentions or self-predictions, but intentions and self-predictions were better predictors of behaviour. The subjective norm construct is generally found to be a weak predictor of intentions. This is partly attributable to a combination of poor measurement and the need for expansion of the normative component. The discussion focuses on ways in which current TPB research can be taken forward in the light of the present review.
---
paper_title: Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review
paper_content:
Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. ::: ::: Drawing on Kuhn's notion of scientific paradigms, we developed a new method—meta-narrative review—for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding ‘storyline’ of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and ‘twists in the plot’. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of ‘diffusion of innovations’ in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the ‘same’ problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.
---
paper_title: Using Grounded Theory as a Method for Rigorously Reviewing Literature
paper_content:
This paper offers guidance for conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously analyzing a carefully chosen set of studies; it assures solidly legitimized, in-depth analyses of empirical facts and related insights. This includes the emergence of new themes, issues and opportunities; interrelationships and dependencies in or beyond a particular area; as well as inconsistencies. If carried out meticulously, reviewing a well-carved out piece of literature by following this guide is likely to lead to more integrated and fruitful theory emergence, something that would enrich many fields in the social sciences.
---
paper_title: Bayesian Structural Equation Models for Cumulative Theory Building in Information Systems―A Brief Tutorial Using BUGS and R
paper_content:
Structural equation models (SEM) are frequently used in information systems (IS) to analyze and test theoretical propositions. As IS researchers frequently reuse measurement instruments and adapt or extend theories, they frequently re-estimate regression relationships in their SEM that have been examined in previous studies. We advocate the use of Bayesian estimation of structural equation models as an aid to cumulative theory building; Bayesian statistics offer a statistically sound way to incorporate prior knowledge into SEM estimation, allowing researchers to keep a “running tally” of the best estimates of model parameters. ::: ::: This tutorial on the application of Bayesian principles to SEM estimation discusses when and why the use of Bayesian estimation should be considered by IS researchers, presents an illustrative example using best practices, and makes recommendations to guide IS researchers in the application of Bayesian SEM.
---
paper_title: BEYOND THE ‘MYTHICAL CENTRE’: AN AFFIRMATIVE POST-MODERN VIEW OF SERVQUAL RESEARCH IN INFORMATION SYSTEMS
paper_content:
Conventional approaches to literature reviews tend to perform a meta-analysis of previous literature, based on a modernist, normal science paradigm. These reviews typically seek a ‘mythical centre’ based on synthesis and consensus. This is not always appropriate for research areas that have been characterised by heterogeneous and inconsistent studies. In this paper, we examine the discourse associated with SERVQUAL research in information systems and electronic commerce. We identify seven distinct ‘storylines’ that have emerged. We conclude that a discourse-based approach to analysing literature gives a richer and more representative picture of the state of knowledge than conventional meta-analysis.
---
paper_title: Incorporating formative measures into covariance-based structural equation models
paper_content:
Formatively measured constructs have been increasingly used in information systems research. With few exceptions, however, extant studies have been relying on the partial least squares (PLS) approach to specify and estimate structural models involving constructs measured with formative indicators. This paper highlights the benefits of employing covariance structure analysis (CSA) when investigating such models and illustrates its application with the LISREL program. The aim is to provide practicing IS researchers with an understanding of key issues and potential problems associated with formatively measured constructs within a covariance-based modeling framework and encourage them to consider using CSA in their future research endeavors.
---
paper_title: Beyond synthesis: re-presenting heterogeneous research literature
paper_content:
This article examines the nature, role and function of the literature review in academic discourse. Researchers in information systems (IS) are often advised to espouse a neutral viewpoint and adopt the goal of synthesising previous literature when conducting a literature review. However, since research literature in many areas of IS is diverse and heterogeneous, this synthesis is not value neutral, but is a construction of the researchers. We suggest that other goals and viewpoints for reviewing and presenting previous literature are possible, and in some cases, desirable. Using the example of service quality literature, we use a lens of historical discourse, and techniques of soft systems analysis and rich pictures, to present previous research literature on ServQual-related research in IS and electronic commerce. We identify seven ‘stories’ from service quality research literature and analyse the clients, actors, transformations, world-view (weltanschauung), owners and environment in each story. We conclude that alternative presentations of research literature can offer fresh insights, especially in areas where the research literature is diffuse, contradictory and heterogeneous.
---
paper_title: Toward a richer diversity of genres in information systems research: new categorization and guidelines
paper_content:
European Journal of Information Systems (2012) 21, 469–478. doi:10.1057/ejis.2012.38 In this editorial I would like to make a general and effective call for more diversity in information systems research genres. This is a general call that goes beyond a particular journal and attempts to provide some practical guidelines. One can be sympathetic to the vision of opening up to a new wider set of research and presentation genres such as the one developed in my former editorials (Rowe, 2010, 2011), but without some guidelines this call for increasing diversity in IS research genres might hang fire. In other words, to encourage potential authors to be more confident in these endeavors, we need to indicate which lampposts might be worth approaching when building their research. Journals play an important institutional role in signaling and promoting certain categories of research. Hence, the two objectives of this editorial are as follows:
---
paper_title: On being ‘systematic’ in literature reviews
paper_content:
General guidelines for conducting literature reviews often do not address the question of literature searches and dealing with a potentially large number of identified sources. These issues are specifically addressed by so-called systematic literature reviews (SLR) that propose a strict protocol for the search and appraisal of literature. Moreover, SLR are claimed to be a ‘standardized method’ for literature reviews, that is, replicable, transparent, objective, unbiased, and rigorous, and thus superior to other approaches for conducting literature reviews. These are significant and consequential claims that — despite increasing adoption of SLR — remained largely unnoticed in the information systems (IS) literature. The objective of this debate is to draw attention of the IS community to SLR’s claims, to question their justification and reveal potential risks of their adoption. This is achieved by first examining the origins of SLR and the prescribed systematic literature review process and then by critically assessing their claims and implications. In this debate, we show that SLR are applicable and useful for a very specific kind of literature review, a meta study that identifies and summarizes evidence from earlier research. We also demonstrate that the claims that SLR provide superior quality are not justified. More importantly, we argue that SLR as a general approach to conducting literature reviews is highly questionable, concealing significant perils. The paper cautions that SLR could undermine critical engagement with literature and what it means to be scholarly in academic work.
---
paper_title: Stylized Facts as an Instrument for Literature Review and Cumulative Information Systems Research
paper_content:
The accumulation of scientific knowledge is an important objective of information systems (IS) research. Although different review approaches exist in the continuum between narrative reviews and meta-analyses, most reviews in IS are narrative or descriptive—with all related drawbacks concerning objectivity and reliability—because available underlying sources in IS typically do not fulfil the requirements of formal approaches such as meta-analyses. To discuss how cumulative IS research can be effectively advanced using a more formalized approach fitting the current situation in IS research, in this paper, we point out the potential of stylized facts (SFs). SFs are interesting, sometimes counterintuitive patterns in empirical data that focus on the most relevant aspects of observable phenomena by abstracting from details (stylization). SFs originate from the field of economics and have been successfully used in different fields of research for years. In this paper, we discuss their potential and challenges for literature reviews in IS. We supplement our argumentation with an application example reporting our experience with SFs. Because SFs show considerable potential for cumulative research, they seem to be a promising instrument for literature reviews and especially for theory development in IS.
---
paper_title: Critical Discourse Analysis as a Review Methodology: An Empirical Example
paper_content:
Research disciplines and subdisciplines are steeped in epistemological beliefs and theoretical assumptions that guide and constrain research. These beliefs and assumptions both enable scientific inquiry and limit scientific progress. Theory and review papers tend to be a means for reproducing ideological assumptions. However, review papers can also challenge ideological assumptions by critically assessing taken-for-granted assumptions. Critical review methods are underdeveloped in the management disciplines. The information systems (IS) discipline must do more to improve the critical examination of its scientific discourse. In this paper, we present a method with guiding principles and steps for systematically conducting critical reviews of IS literature based on Habermasian strains of critical discourse analysis. We provide an empirical example of the method. The empirical example offers a critical review of behavioral information security research with a focus on employees’ security behaviors.
---
paper_title: Stylized Facts as an Instrument for Literature Review and Cumulative Information Systems Research
paper_content:
The accumulation of scientific knowledge is an important objective of information systems (IS) research. Although different review approaches exist in the continuum between narrative reviews and meta-analyses, most reviews in IS are narrative or descriptive—with all related drawbacks concerning objectivity and reliability—because available underlying sources in IS typically do not fulfil the requirements of formal approaches such as meta-analyses. To discuss how cumulative IS research can be effectively advanced using a more formalized approach fitting the current situation in IS research, in this paper, we point out the potential of stylized facts (SFs). SFs are interesting, sometimes counterintuitive patterns in empirical data that focus on the most relevant aspects of observable phenomena by abstracting from details (stylization). SFs originate from the field of economics and have been successfully used in different fields of research for years. In this paper, we discuss their potential and challenges for literature reviews in IS. We supplement our argumentation with an application example reporting our experience with SFs. Because SFs show considerable potential for cumulative research, they seem to be a promising instrument for literature reviews and especially for theory development in IS.
---
paper_title: Critical Discourse Analysis as a Review Methodology: An Empirical Example
paper_content:
Research disciplines and subdisciplines are steeped in epistemological beliefs and theoretical assumptions that guide and constrain research. These beliefs and assumptions both enable scientific inquiry and limit scientific progress. Theory and review papers tend to be a means for reproducing ideological assumptions. However, review papers can also challenge ideological assumptions by critically assessing taken-for-granted assumptions. Critical review methods are underdeveloped in the management disciplines. The information systems (IS) discipline must do more to improve the critical examination of its scientific discourse. In this paper, we present a method with guiding principles and steps for systematically conducting critical reviews of IS literature based on Habermasian strains of critical discourse analysis. We provide an empirical example of the method. The empirical example offers a critical review of behavioral information security research with a focus on employees’ security behaviors.
---
paper_title: VALIDATION GUIDELINES FOR IS POSITIVIST RESEARCH
paper_content:
The issue of whether IS positivist researchers were validating their instruments sufficiently was initially raised fifteen years ago. Rigor in IS research is still one of the critical scientific issues facing the field. Without solid validation of the instruments that are used to gather data on which findings and interpretations are based, the very scientific basis of the profession is threatened. This study builds on four prior retrospectives of IS research that conclude that IS positivist researchers continue to face major barriers in instrument, statistical, and other forms of validation. It goes beyond these studies by offering analyses of the state-of-the-art of research validities and deriving specific heuristics for research practice in the validities. Some of these heuristics will, no doubt, be controversial. But we believe that it is time for the IS academic profession to bring such issues into the open for community debate. This article is a first step in that direction. Based on our interpretation of the importance of a long list of validities, this paper suggests heuristics for reinvigorating the quest for validation in IS research via content/construct validity, reliability, manipulation validity, and statistical conclusion validity. New guidelines for validation and new research directions are offered.
---
paper_title: Qualitative data analysis : an expanded sourcebook
paper_content:
Matthew B. Miles, Qualitative Data Analysis A Methods Sourcebook, Third Edition. The Third Edition of Miles & Huberman's classic research methods text is updated and streamlined by Johnny Saldana, author of The Coding Manual for Qualitative Researchers. Several of the data display strategies from previous editions are now presented in re-envisioned and reorganized formats to enhance reader accessibility and comprehension. The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting. Miles and Huberman's original research studies are profiled and accompanied with new examples from Saldana's recent qualitative work. The book's most celebrated chapter, "Drawing and Verifying Conclusions," is retained and revised, and the chapter on report writing has been greatly expanded, and is now called "Writing About Qualitative Research." Comprehensive and authoritative, Qualitative Data Analysis has been elegantly revised for a new generation of qualitative researchers. Johnny Saldana, The Coding Manual for Qualitative Researchers, Second Edition. The Second Edition of Johnny Saldana's international bestseller provides an in-depth guide to the multiple approaches available for coding qualitative data. Fully up-to-date, it includes new chapters, more coding techniques and an additional glossary. Clear, practical and authoritative, the book: describes how coding initiates qualitative data analysis; demonstrates the writing of analytic memos; discusses available analytic software; suggests how best to use the book for particular studies. In total, 32 coding methods are profiled that can be applied to a range of research genres from grounded theory to phenomenology to narrative inquiry. For each approach, Saldana discusses the method's origins, a description of the method, practical applications, and a clearly illustrated example with analytic follow-up. A unique and invaluable reference for students, teachers, and practitioners of qualitative inquiry, this book is essential reading across the social sciences. Stephanie D. H. Evergreen, Presenting Data Effectively Communicating Your Findings for Maximum Impact. This is a step-by-step guide to making the research results presented in reports, slideshows, posters, and data visualizations more interesting. Written in an easy, accessible manner, Presenting Data Effectively provides guiding principles for designing data presentations so that they are more likely to be heard, remembered, and used. The guidance in the book stems from the author's extensive study of research reporting, a solid review of the literature in graphic design and related fields, and the input of a panel of graphic design experts. Those concepts are then translated into language relevant to students, researchers, evaluators, and non-profit workers - anyone in a position to have to report on data to an outside audience. The book guides the reader through design choices related to four primary areas: graphics, type, color, and arrangement. As a result, readers can present data more effectively, with the clarity and professionalism that best represents their work.
---
| Title: Introduction to the Special Issue: The Literature Review in Information Systems
Section 1: A Renaissance of Literature Analysis as A Research Method
Description 1: Discuss the resurgence and increasing importance of literature review and research synthesis approaches and methods.
Section 2: The Motivation for Change
Description 2: Explain the reasons behind challenging the de-facto narrative literature review approach and the need for new methods.
Section 3: Introducing New Approaches
Description 3: Present new methods introduced in the special issue that offer innovative ways to conduct literature analysis.
Section 4: Challenges and Strategies
Description 4: Address the general challenges researchers face when conducting literature reviews and provide strategies to overcome them.
Section 5: Common Themes
Description 5: Highlight recurring themes across the papers in the special issue, emphasizing the importance of diversity, quality, and systematization in literature reviews.
Section 6: Conclusion
Description 6: Summarize the ongoing importance of literature review methods and suggest possible future developments in the field. |
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches | 11 | ---
paper_title: Multiparameter Receiver Operating Characteristic Analysis for Signal Detection and Classification
paper_content:
Receiver operating characteristic (ROC) analysis is a widely used evaluation tool for performance analysis in signal processing, communications, and medical diagnosis. It utilizes 2-D curves plotting detection rate (P_D) against false alarm rate (P_F) to assess the effectiveness of a detector or sensor/device for detection. However, P_D and P_F are actually dependent parameters resulting from a more crucial but implicit parameter hidden in the ROC curves, the threshold τ, which is determined by the cost of implementing a detector or sensor/device, except in the case that the Bayes theory is used for detection, where τ is completely determined by the Bayes cost. This paper extends the traditional ROC analysis for single-signal detection to detection and classification of multiple signals. It also explores relationships among the three parameters P_D, P_F, and τ, and further develops a new concept of multiparameter ROC analysis, which uses 3-D ROC curves plotted from the three parameters P_D, P_F, and τ to evaluate detection performance based on the interrelationship among P_D, P_F, and τ, rather than only the P_D and P_F used by 2-D ROC analysis. From a 3-D ROC curve, three 2-D ROC curves can also be derived: the conventional 2-D ROC curve of P_D versus P_F and two new 2-D ROC curves of P_D versus τ and P_F versus τ. In order to demonstrate the utility of 3-D ROC analysis, four applications are considered: hyperspectral target detection, medical diagnosis, chemical/biological agent detection, and biometric recognition.
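To make the threshold dependence concrete, the short sketch below traces an empirical ROC curve by sweeping the decision threshold and recording the resulting (P_F, P_D) pairs; the Gaussian scores and all names are illustrative and not taken from the paper.

```python
import numpy as np

def empirical_roc(scores_target, scores_background, thresholds):
    """Trace (tau, P_F, P_D) triples as the decision threshold sweeps."""
    rows = []
    for tau in thresholds:
        p_d = np.mean(scores_target > tau)      # detection rate at this tau
        p_f = np.mean(scores_background > tau)  # false-alarm rate at this tau
        rows.append((tau, p_f, p_d))
    return np.array(rows)

# toy detector outputs: background and target scores (illustrative only)
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 10_000)
target = rng.normal(2.0, 1.0, 10_000)
curve = empirical_roc(target, background, np.linspace(-4.0, 6.0, 101))
```

Keeping tau in the output is what turns the usual 2-D curve into the 3-D (P_D, P_F, τ) view discussed above.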
---
paper_title: Kernel Based Subspace Projection of Near Infrared Hyperspectral Images of Maize Kernels
paper_content:
In this paper we present an exploratory analysis of hyper-spectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyper spectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability on the data. Therefore we propose to use so-called kernel version of the two afore-mentioned methods. The kernel methods implicitly transform the data to a higher dimensional space using non-linear transformations while retaining the computational complexity. Analysis on our data example illustrates that the proposed kernel maximum autocorrelation factor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.
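As a rough illustration of the kind of nonlinear projection discussed here, the sketch below flattens a hypothetical hyperspectral cube and applies scikit-learn's kernel PCA; the kernel maximum autocorrelation factor transform of the paper additionally exploits spatial autocorrelation and is not reproduced here, and all array shapes and parameters are made up for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rows, cols, bands = 64, 64, 200                 # hypothetical cube dimensions
cube = np.random.rand(rows, cols, bands)        # placeholder reflectance data

X = cube.reshape(-1, bands)                     # one row per pixel spectrum
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1.0 / bands)
scores = kpca.fit_transform(X)                  # nonlinear component scores
component_images = scores.reshape(rows, cols, 3)
```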
---
paper_title: Earth system science related imaging spectroscopy - An assessment
paper_content:
The science of spectroscopy has existed for more than three centuries, and imaging spectroscopy for the Earth system for three decades. We first discuss the historical background of spectroscopy, followed by imaging spectroscopy, introducing a common definition for the latter. The relevance of imaging spectroscopy is then assessed using a comprehensive review of the cited literature. Instruments, technological advancements and (pre-)processing approaches are discussed to set the scene for application related advancements. We demonstrate these efforts using four examples that represent progress due to imaging spectroscopy, namely (i) bridging scaling gaps from molecules to ecosystems using coupled radiative transfer models, (ii) assessing surface heterogeneity including clumping, (iii) physically based (inversion) modeling, and (iv) assessing interaction of light with the Earth surface. Recent advances of imaging spectroscopy contributions to the Earth system sciences are discussed. We conclude by summarizing the achievements of thirty years of imaging spectroscopy and strongly recommend this community to increase its efforts to convince relevant stakeholders of the urgency to acquire the highest quality imaging spectrometer data for Earth observation from operational satellites capable of collecting consistent data for climatically-relevant periods of time.
---
paper_title: Anomaly detection from hyperspectral imagery
paper_content:
We develop anomaly detectors, i.e., detectors that do not presuppose a signature model of one or more dimensions, for three clutter models: the local normal model, the global normal mixture model, and the global linear mixture model. The local normal model treats the neighborhood of a pixel as having a normal probability distribution. The normal mixture model considers the observation from each pixel as arising from one of several possible classes such that each class has a normal probability distribution. The linear mixture model considers each observation to be a linear combination of fixed spectra, known as endmembers, that are, or may be, associated with materials in the scene, and the coefficients, interpreted as fractional abundance, are constrained to be nonnegative and sum to one. We show how the generalized likelihood ratio test (GLRT) may be used to derive anomaly detectors for the local normal and global normal mixture models. The anomaly detector applied with the linear mixture approach proceeds by identifying target like endmembers based on properties of the histogram of the abundance estimates and employing a matched filter in the space of abundance estimates. To overcome the limitations of the individual models, we develop a joint decision logic, based on a maximum entropy probability model and the GLRT, that utilizes multiple decision statistics, and we apply this approach using the detection statistics derived from the three clutter models. Examples demonstrate that the joint decision logic can improve detection performance in comparison with the individual anomaly detectors. We also describe the application of linear prediction filters to repeated images of the same area to detect changes that occur within the scene over time.
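For the normal background models mentioned above, the GLRT leads to Mahalanobis-distance style statistics; the following minimal sketch scores each pixel against a single global Gaussian background (in the spirit of the well-known RX detector) and is only an illustration, not the authors' joint decision logic.

```python
import numpy as np

def rx_anomaly_scores(X):
    """Mahalanobis-distance anomaly scores for pixel spectra.

    X : (num_pixels, num_bands) array of spectra.
    Returns one score per pixel; large values flag spectral anomalies.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)           # pseudo-inverse for stability
    diff = X - mu
    # diff_i^T * cov_inv * diff_i for every pixel i
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
```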
---
paper_title: Detection and Analysis of the Intestinal Ischemia Using Visible and Invisible Hyperspectral Imaging
paper_content:
Intestinal ischemia, or inadequate blood flow to the intestine, is caused by a variety of disorders and conditions. The quickness with which the problem is brought to medical attention for diagnosis and treatment has great effects on the outcome of ischemic injury. Recently, hyperspectral sensors have advanced and emerged as compact imaging tools that can be utilized in medical diagnostics. Hyperspectral imaging provides a powerful tool for noninvasive tissue analyses. In this paper, the hyperspectral camera, with visible and invisible wavelengths, has been evaluated for detection and analysis of intestinal ischemia during surgeries. This technique can help the surgeon to quickly find ischemic tissues. Two cameras, a visible-to-near-infrared camera (400-1000 nm) and an infrared camera (900-1700 nm) were used to capture the hyperspectral images. Vessels supplying an intestinal segment of a pig were clamped to simulate ischemic conditions. A key wavelength range that provides the best differentiation between normal and ischemic intestine was determined from all wavelengths that potentially reduces the amount of data collected in subsequent work. The data were classified using two filters that were designed to discriminate the ischemic intestinal regions.
---
paper_title: Quantitative determination of mineral types and abundances from reflectance spectra using principal components analysis
paper_content:
A procedure was developed for analyzing remote reflectance spectra, including multispectral images, that quantifies parameters such as types of mineral mixtures, the abundances of mixed minerals, and particle sizes. Principal components analysis (PCA) reduced the spectral dimensionality and allowed testing the uniqueness and validity of spectral mixing models. By analyzing variations in the overall spectral reflectance curves we identified the type of spectral mixture, quantified mineral abundances, and identified the effects of particle size. The results demonstrate an advantage in classification accuracy over classical forms of analysis that ignore effects of particle-size or mineral-mixture systematics on spectra. The approach is applicable to remote sensing data of planetary surfaces for quantitative determinations of mineral abundances.
---
paper_title: Pharmaceutical applications of vibrational chemical imaging and chemometrics: a review.
paper_content:
The emergence of chemical imaging (CI) has gifted spectroscopy an additional dimension. Chemical imaging systems complement chemical identification by acquiring spatially located spectra that enable visualization of chemical compound distributions. Such techniques are highly relevant to pharmaceutics in that the distribution of excipients and active pharmaceutical ingredient informs not only a product's behavior during manufacture but also its physical attributes (dissolution properties, stability, etc.). The rapid image acquisition made possible by the emergence of focal plane array detectors, combined with publication of the Food and Drug Administration guidelines for process analytical technology in 2001, has heightened interest in the pharmaceutical applications of CI, notably as a tool for enhancing drug quality and understanding process. Papers on the pharmaceutical applications of CI have been appearing in steadily increasing numbers since 2000. The aim of the present paper is to give an overview of infrared, near-infrared and Raman imaging in pharmaceutics. Sections 2 and 3 deal with the theory, device set-ups, mode of acquisition and processing techniques used to extract information of interest. Section 4 addresses the pharmaceutical applications.
---
paper_title: HYPERSPECTRAL REFLECTANCE AND FLUORESCENCE IMAGING SYSTEM FOR FOOD QUALITY AND SAFETY
paper_content:
This article presents a laboratory-based hyperspectral imaging system designed and developed by the Instrumentation and Sensing Laboratory, U.S. Department of Agriculture, Beltsville, Maryland. The spectral range is from 430 to 930 nm with spectral resolution of approximately 10 nm (full width at half maximum) and spatial resolution better than 1 mm. Our system is capable of reflectance and fluorescence measurements with the use of dual illumination sources where fluorescence emissions are measured with ultraviolet (UV-A) excitation. We present the calibrations and image-correction procedures for the system artifacts and heterogeneous responses caused by the optics, sensor, and lighting conditions throughout the spectrum region for reflectance and fluorescence. The results of the fluorescence correction method showed that the system responses throughout the spectrum region were normalized to within 0.5% error. The versatility of the hyperspectral imaging system was demonstrated with sample fluorescence and reflectance images of a normal apple and an apple with fungal contamination and bruised spots. The primary use of the imaging system in our laboratory is to conduct food safety and quality research. However, we envision that this unique system can be used in a number of scientific applications.
---
paper_title: Spectral unmixing
paper_content:
Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, endmember, and abundance estimates are important for identifying the material composition of mixtures.
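For reference, the linear mixing assumption discussed here is usually written per pixel as follows (the notation below is chosen for illustration): the observed spectrum is a noisy, abundance-weighted combination of endmember signatures, with the abundances nonnegative and summing to one.

```latex
% Linear mixing model for a single pixel (illustrative notation):
% y : observed L-band spectrum
% M : L x p matrix whose columns are the p endmember signatures
% a : p x 1 vector of fractional abundances
% n : additive noise
\mathbf{y} = \mathbf{M}\mathbf{a} + \mathbf{n},
\qquad a_i \ge 0 \;\;(i = 1,\dots,p),
\qquad \sum_{i=1}^{p} a_i = 1 .
```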
---
paper_title: Blind Decomposition of Transmission Light Microscopic Hyperspectral Cube Using Sparse Representation
paper_content:
In this paper, we address the problem of fully automated decomposition of hyperspectral images for transmission light microscopy. The hyperspectral images are decomposed into spectrally homogeneous compounds. The resulting compounds are described by their spectral characteristics and optical density. We present the multiplicative physical model of image formation in transmission light microscopy, justify reduction of a hyperspectral image decomposition problem to a blind source separation problem, and provide a method for hyperspectral restoration of separated compounds. In our approach, dimensionality reduction using principal component analysis (PCA) is followed by a blind source separation (BSS) algorithm. The BSS method is based on a sparsifying transformation of the observed images and a relative Newton optimization procedure. The presented method was verified on hyperspectral images of biological tissues. The method was compared to the existing approach based on nonnegative matrix factorization. Experiments showed that the presented method is faster and better separates the biological compounds from imaging artifacts. The results obtained in this work may be used for improving automatic microscope hardware calibration and computer-aided diagnostics.
---
paper_title: Detection algorithms for hyperspectral imaging applications
paper_content:
We introduce key concepts and issues including the effects of atmospheric propagation upon the data, spectral variability, mixed pixels, and the distinction between classification and detection algorithms. Detection algorithms for full pixel targets are developed using the likelihood ratio approach. Subpixel target detection, which is more challenging due to background interference, is pursued using both statistical and subspace models for the description of spectral variability. Finally, we provide some results which illustrate the performance of some detection algorithms using real hyperspectral imaging (HSI) data. Furthermore, we illustrate the potential deviation of HSI data from normality and point to some distributions that may serve in the development of algorithms with better or more robust performance. We therefore focus on detection algorithms that assume multivariate normal distribution models for HSI data.
---
paper_title: Classification of Sound and Stained Wheat Grains Using Visible and near Infrared Hyperspectral Image Analysis
paper_content:
Near infrared hyperspectral image analysis has been used to classify individual wheat grains representing 24 different Australian varieties as sound or as being discoloured by one of the commercially important blackpoint, field fungi or pink stains. The study used a training set of 188 grains and a test set of 665 grains. The spectra were smoothed and then standardised by dividing each spectrum by its mean, so that the analysis was based solely on spectral shape. Penalised discriminant analysis was first used for pixel classification and then a simple rule for grain classification was developed. Overall classification accuracies of 95% were achieved over the 420–2500 nm wavelength range, as well as reduced ranges of 420–1000 nm and 420–700 nm.
---
paper_title: The airborne visible/infrared imaging spectrometer (AVIRIS)
paper_content:
Abstract The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) is a facility consisting of a flight system, a ground data system, a calibration facility, and a full-time operations team. The facility was developed by the Jet Propulsion Laboratory (JPL) under funding from the National Aeronautics and Space Administration (NASA). NASA also provides funding for operations and maintenance. The flight system is a whisk-broom imager that acquires data in 224 narrow, contiguous spectral bands covering the solar reflected portion of the electromagnetic spectrum. It is flown aboard the NASA high altitude ER-2 research aircraft. The ground data system is a facility dedicated to the processing and distribution of data acquired by AVIRIS. It operates year round at JPL. The calibration facility consists of a calibration laboratory at JPL and a suite of field instruments and procedures for performing inflight calibration of AVIRIS. A small team of engineers, technicians and scientists supports a yearly operations schedule that includes 6 months of flight operations, 6 months of routine ground maintenance of the flight system, and year-round data processing and distribution. Details of the AVIRIS system, its performance history, and future plans are described.
---
paper_title: NIR spectrometry for counterfeit drug detection: A feasibility study
paper_content:
Express-methods for detection of counterfeit drugs are of vital necessity. Visual control, dissociating tests or simple color reaction tests reveal only very rough forgeries. The feasibility of information-rich NIR-measurements as an analytical method together with multivariate calibration for mathematical data processing for false drugs detection is demonstrated. Also, multivariate hyperspectral image analysis is applied providing additional diagnostic information. Hyperspectral imaging is becoming a useful diagnostic tool for identifying non-homogeneous spatial regions of drug formulation. Two types of drugs are used to demonstrate the applicability of these approaches.
---
paper_title: Hyperspectral imaging – an emerging process analytical tool for food quality and safety control
paper_content:
Hyperspectral imaging (HSI) is an emerging platform technology that integrates conventional imaging and spectroscopy to attain both spatial and spectral information from an object. Although HSI was originally developed for remote sensing, it has recently emerged as a powerful process analytical tool for non-destructive food analysis. This paper provides an introduction to hyperspectral imaging: HSI equipment, image acquisition and processing are described; current limitations and likely future applications are discussed. In addition, recent advances in the application of HSI to food safety and quality assessment are reviewed, such as contaminant detection, defect identification, constituent analysis and quality evaluation.
---
paper_title: Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
paper_content:
Abstract Imaging spectroscopy is of growing interest as a new approach to Earth remote sensing. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) was the first imaging sensor to measure the solar reflected spectrum from 400 nm to 2500 nm at 10 nm intervals. The calibration accuracy and signal-to-noise of AVIRIS remain unique. The AVIRIS system as well as the science research and applications have evolved significantly in recent years. The initial design and upgraded characteristics of the AVIRIS system are described in terms of the sensor, calibration, data system, and flight operation. This update on the characteristics of AVIRIS provides the context for the science research and applications that use AVIRIS data acquired in the past several years. Recent science research and applications are reviewed spanning investigations of atmospheric correction, ecology and vegetation, geology and soils, inland and coastal waters, the atmosphere, snow and ice hydrology, biomass burning, environmental hazards, satellite simulation and calibration, commercial applications, spectral algorithms, human infrastructure, as well as spectral modeling.
---
paper_title: Signal processing for hyperspectral image exploitation
paper_content:
Electro-optical remote sensing involves the acquisition of information about an object or scene without coming into physical contact with it. This is achieved by exploiting the fact that the materials comprising the various objects in a scene reflect, absorb, and emit electromagnetic radiation in ways characteristic of their molecular composition and shape. If the radiation arriving at the sensor is measured at each wavelength over a sufficiently broad spectral band, the resulting spectral signature, or simply spectrum, can be used (in principle) to uniquely characterize and identify any given material. An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification. This dimensionality reduction is implemented in a scene-dependent (adaptive) manner and may be implemented as a distinct step in the processing or as an integral part of the overall algorithm. The most widely used algorithm for dimensionality reduction is principal component analysis (PCA) or, equivalently, the Karhunen-Loeve transformation.
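As a concrete example of the dimensionality reduction step mentioned above, the following sketch performs plain PCA on a matrix of pixel spectra via the SVD; shapes and names are illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project spectra onto the top-k principal components.

    X : (num_pixels, num_bands) matrix of spectra.
    Returns (scores, components) with scores of shape (num_pixels, k).
    """
    Xc = X - X.mean(axis=0)                       # remove the mean spectrum
    # economy-size SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # top-k principal directions
    scores = Xc @ components.T                    # reduced representation
    return scores, components
```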
---
paper_title: Recent Advances in Techniques for Hyperspectral Image Processing
paper_content:
Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than thirty years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.
---
paper_title: Hyperspectral image data analysis
paper_content:
The fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the Earth's surface and, in particular, from the spatial, spectral, and temporal variations in that field. Rather than focusing on the spatial variations, which imagery perhaps best conveys, why not move on to look at how the spectral variations might be used. The idea was to enlarge the size of a pixel until it includes an area that is characteristic from a spectral response standpoint for the surface cover to be discriminated. The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, mall.
---
paper_title: Fuzzy Spectral and Spatial Feature Integration for Classification of Nonferrous Materials in Hyperspectral Data
paper_content:
Hyperspectral data allows the construction of more elaborate models to sample the properties of the nonferrous materials than the standard RGB color representation. In this paper, the nonferrous waste materials are studied as they cannot be sorted by classical procedures due to their color, weight and shape similarities. The experimental results presented in this paper reveal that factors such as the various levels of oxidization of the waste materials and the slight differences in their chemical composition preclude the use of the spectral features in a simplistic manner for robust material classification. To address these problems, the proposed FUSSER (fuzzy spectral and spatial classifier) algorithm detailed in this paper merges the spectral and spatial features to obtain a combined feature vector that is able to better sample the properties of the nonferrous materials than the single pixel spectral features when applied to the construction of multivariate Gaussian distributions. This approach allows the implementation of statistical region merging techniques in order to increase the performance of the classification process. To achieve an efficient implementation, the dimensionality of the hyperspectral data is reduced by constructing bio-inspired spectral fuzzy sets that minimize the amount of redundant information contained in adjacent hyperspectral bands. The experimental results indicate that the proposed algorithm increased the overall classification rate from 44% using RGB data up to 98% when the spectral-spatial features are used for nonferrous material classification.
---
paper_title: Spectral mixture modeling - A new analysis of rock and soil types at the Viking Lander 1 site. [on Mars]
paper_content:
A Viking Lander 1 image was modeled as mixtures of reflectance spectra of palagonite dust, gray andesitelike rock, and a coarse rocklike soil. The rocks are covered to varying degrees by dust but otherwise appear unweathered. Rocklike soil occurs as lag deposits in deflation zones around stones and on top of a drift and as a layer in a trench dug by the lander. This soil probably is derived from the rocks by wind abrasion and/or spallation. Dust is the major component of the soil and covers most of the surface. The dust is unrelated spectrally to the rock but is equivalent to the global-scale dust observed telescopically. A new method was developed to model a multispectral image as mixtures of end-member spectra and to compare image spectra directly with laboratory reference spectra. The method for the first time uses shade and secondary illumination effects as spectral end-members; thus the effects of topography and illumination on all scales can be isolated or removed. The image was calibrated absolutely from the laboratory spectra, in close agreement with direct calibrations. The method has broad applications to interpreting multispectral images, including satellite images.
---
paper_title: Forensic analysis of bioagents by X-ray and TOF-SIMS hyperspectral imaging.
paper_content:
Hyperspectral imaging combined with multivariate statistics is an approach to microanalysis that makes the maximum use of the large amount of data potentially collected in forensics analysis. This study examines the efficacy of using hyperspectral imaging-enabled microscopies to identify chemical signatures in simulated bioagent materials. This approach allowed for the ready discrimination between all samples in the test. In particular, the hyperspectral imaging approach allowed for the identification of particles with trace elements that would have been missed with a more traditional approach to forensic microanalysis. The importance of combining signals from multiple length scales and analytical sensitivities is discussed.
---
paper_title: Algorithm taxonomy for hyperspectral unmixing
paper_content:
In this paper, we introduce a set of taxonomies that hierarchically organize and specify algorithms associated with hyperspectral unmixing. Our motivation is to collectively organize and relate algorithms in order to assess the current state-of-the-art in the field and to facilitate objective comparisons between methods. The hyperspectral sensing community is populated by investigators with disparate scientific backgrounds and, speaking in their respective languages, efforts in spectral unmixing developed within disparate communities have inevitably led to duplication. We hope our analysis removes this ambiguity and redundancy by using a standard vocabulary, and that the presentation we provide clearly summarizes what has and has not been done. As we shall see, the framework for the taxonomies derives its organization from the fundamental, philosophical assumptions imposed on the problem, rather than the common calculations they perform, or the similar outputs they might yield.
---
paper_title: Optimal linear spectral unmixing
paper_content:
The optimal estimate of ground cover components of a linearly mixed spectral pixel in remote-sensing imagery is investigated. The problem is formulated as two consecutive constrained least-squares (LS) problems: the first problem concerns the estimation of the end-member spectra (EMS), and the second concerns the estimate, within each mixed pixel, of ground cover class proportions (CCPs) given the estimated EMS. For the EMS estimation problem, the authors propose a total least-squares (TLS) solution as an alternative to the conventional LS approach. The authors pose the CCP estimation problem as a constrained LS optimization problem. Then, they solve for the exact solution using a quadratic programming (QP) method, as opposed to the Lagrange multiplier (LM)-based approximate solution proposed by Settle and Drake (1993). Preliminary computer experiments indicated that the TLS-estimated EMS always leads to better estimates of the CCPs than the LS-estimated EMS.
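One common practical shortcut for abundance estimates that respect nonnegativity and, approximately, the sum-to-one constraint is to append a heavily weighted row of ones to the endmember matrix and solve a nonnegative least-squares problem; the sketch below follows that idea and is not the exact quadratic programming formulation used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(M, y, delta=1e3):
    """Approximate fully constrained least-squares unmixing of one pixel.

    M     : (bands, endmembers) endmember signature matrix
    y     : (bands,) observed pixel spectrum
    delta : weight that softly enforces the sum-to-one constraint
    """
    ones = np.ones((1, M.shape[1]))
    M_aug = np.vstack([delta * ones, M])   # extra row pushes sum(alpha) -> 1
    y_aug = np.concatenate([[delta], y])
    alpha, _ = nnls(M_aug, y_aug)          # nonnegativity handled by NNLS
    return alpha
```

Larger values of delta enforce the sum-to-one condition more strictly at the cost of a slightly worse spectral fit.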
---
paper_title: Survey of geometric and statistical unmixing algorithms for hyperspectral images
paper_content:
Spectral mixture analysis (also called spectral unmixing) has been an alluring exploitation goal since the earliest days of imaging spectroscopy. No matter the spatial resolution, the spectral signatures collected in natural environments are invariably a mixture of the signatures of the various materials found within the spatial extent of the ground instantaneous field view of the imaging instrument. In this paper, we give a comprehensive enumeration of the unmixing methods used in practice, because of their implementation in widely used software packages, and those published in the literature. We have structured the review according to the basic computational approach followed by the algorithms, with particular attention to those based on the computational geometry formulation, and statistical approaches with a probabilistic foundation. The quantitative assessment of some available techniques in both categories provides an opportunity to review recent advances and to anticipate future developments.
---
paper_title: Reflectance spectroscopy: Quantitative analysis techniques for remote sensing applications
paper_content:
Several methods for the analysis of remotely sensed reflectance data are compared, including empirical methods and scattering theories, both of which are important for solving remote sensing problems. The concept of the photon mean optical path length and the implications for use in modeling reflectance spectra are presented. It is shown that the mean optical path length in a particulate surface is in rough inverse proportion to the square root of the absorption coefficient. Thus, the stronger absorber a material is, the fewer photons will penetrate into the surface. The concept of apparent absorbance (−ln reflectance) is presented, and it is shown that absorption bands, which are Gaussian in shape when plotted as absorption coefficient (true absorbance) versus photon energy, are also Gaussians in apparent absorbance. However, the Gaussians in apparent absorbance have a smaller intensity and a width which is a factor of √2 larger. An apparent continuum in a reflectance spectrum is modeled as a mathematical function used to isolate a particular absorption feature for analysis. It is shown that a continuum should be removed by dividing it into the reflectance spectrum or subtracting it from the apparent absorbance and that the fitting of Gaussians to absorption features should be done using apparent absorbance versus photon energy. Kubelka-Munk theory is only valid for materials with small total absorption and for bihemispherical reflectance, which are rarely encountered in geologic remote sensing. It is shown that the recently advocated bidirectional reflectance theories have the potential for use in deriving mineral abundance from a reflectance spectrum.
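The two quantities central to this analysis can be stated compactly (symbols chosen here for illustration): the apparent absorbance of a measured reflectance spectrum R(λ), and the continuum-removed spectrum obtained by dividing R(λ) by a fitted continuum C(λ).

```latex
% Apparent absorbance and continuum removal (illustrative notation):
A(\lambda) = -\ln R(\lambda),
\qquad
R_{\mathrm{cr}}(\lambda) = \frac{R(\lambda)}{C(\lambda)} .
```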
---
paper_title: Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach
paper_content:
Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volume/dimensionality, without loss of critical information, so that it can be processed efficiently and assimilated by a human analyst. The authors describe a technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel vector onto a subspace which is orthogonal to the undesired signatures. This operation is an optimal interference suppression process in the least squares sense. Once the interfering signatures have been nulled, projecting the residual onto the signature of interest maximizes the signal-to-noise ratio and results in a single component image that represents a classification for the signature of interest. The orthogonal subspace projection (OSP) operator can be extended to k-signatures of interest, thus reducing the dimensionality of k and classifying the hyperspectral image simultaneously. The approach is applicable to both spectrally pure as well as mixed pixels.
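A minimal numerical sketch of the projection described above: the columns of U hold the undesired signatures, the projector onto their orthogonal complement nulls them, and the residual is matched against the signature of interest d. Variable names are illustrative.

```python
import numpy as np

def osp_detector(d, U, x):
    """Orthogonal subspace projection score for one pixel.

    d : (bands,) signature of interest
    U : (bands, k) matrix of undesired/interfering signatures
    x : (bands,) observed pixel spectrum
    """
    # For full-column-rank U, pinv(U) = (U^T U)^{-1} U^T, so U @ pinv(U)
    # projects onto the span of the undesired signatures.
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return d @ P @ x            # interference-suppressed matched filter
```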
---
paper_title: Least squares subspace projection approach to mixed pixel classification for hyperspectral images
paper_content:
An orthogonal subspace projection (OSP) method using linear mixture modeling was recently explored in hyperspectral image classification and has shown promise in signature detection, discrimination, and classification. In this paper, the OSP is revisited and extended by three unconstrained least squares subspace projection approaches, called signature space OSP, target signature space OSP, and oblique subspace projection, where the abundances of spectral signatures are not known a priori but need to be estimated, a situation to which the OSP cannot be directly applied. The proposed three subspace projection methods can be used not only to estimate signature abundance, but also to classify a target signature at subpixel scale so as to achieve subpixel detection. As a result, they can be viewed as a posteriori OSP as opposed to OSP, which can be thought of as a priori OSP. In order to evaluate these three approaches, their associated least squares estimation errors are cast as a signal detection problem in the framework of the Neyman-Pearson detection theory so that the effectiveness of their generated classifiers can be measured by receiver operating characteristic (ROC) analysis. All results are demonstrated by computer simulations and Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
---
paper_title: A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data
paper_content:
Linear spectral unmixing is a commonly accepted approach to mixed-pixel classification in hyperspectral imagery. This approach involves two steps. First, to find spectrally unique signatures of pure ground components, usually known as endmembers, and, second, to express mixed pixels as linear combinations of endmember materials. Over the past years, several algorithms have been developed for autonomous and supervised endmember extraction from hyperspectral data. Due to a lack of commonly accepted data and quantitative approaches to substantiate new algorithms, available methods have not been rigorously compared by using a unified scheme. In this paper, we present a comparative study of standard endmember extraction algorithms using a custom-designed quantitative and comparative framework that involves both the spectral and spatial information. The algorithms considered in this study represent substantially different design choices. A database formed by simulated and real hyperspectral data collected by the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) is used to investigate the impact of noise, mixture complexity, and use of radiance/reflectance data on algorithm performance. The results obtained indicate that endmember selection and subsequent mixed-pixel interpretation by a linear mixture model are more successful when methods combining spatial and spectral information are applied.
---
paper_title: Mixed pixels classification
paper_content:
There are two major approaches in spectral unmixing: linear and non-linear ones. They are appropriate for different types of mixture, namely checkerboard mixtures and intimate mixtures. The two approaches are briefly reviewed. Then in a carefully controlled laboratory experiment, the limitations and applicability of two of the methods (a linear and a non- linear one) are compared, in the context of unmixing an intimate mixture.© (1998) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
---
paper_title: Hyperspectral unmixing: geometrical, statistical, and sparse regression-based approaches
paper_content:
Hyperspectral instruments acquire electromagnetic energy scattered within their ground instantaneous field of view in hundreds of spectral channels with high spectral resolution. Very often, however, owing to low spatial resolution of the scanner or to the presence of intimate mixtures (mixing of the materials at a very small scale) in the scene, the spectral vectors (collection of signals acquired at different spectral bands from a given pixel) acquired by the hyperspectral scanners are actually mixtures of the spectral signatures of the materials present in the scene. Given a set of mixed spectral vectors, spectral mixture analysis (or spectral unmixing) aims at estimating the number of reference materials, also called endmembers, their spectral signatures, and their fractional abundances. Spectral unmixing is, thus, a source separation problem where, under a linear mixing model, the sources are the fractional abundances and the endmember spectral signatures are the columns of the mixing matrix. As such, the independent component analysis (ICA) framework came naturally to mind to unmix spectral data. However, the ICA crux assumption of source statistical independence is not satisfied in spectral applications, since the sources are fractions and, thus, non-negative and sum to one. As a consequence, ICA-based algorithms have severe limitations in the area of spectral unmixing, and this has fostered new unmixing research directions taking into account geometric and statistical characteristics of hyperspectral sources. This paper presents an overview of the principal research directions in hyperspectral unmixing. The presentation is organized into five main topics: (i) mixing models, (ii) signal subspace identification, (iii) geometrical-based spectral unmixing, (iv) statistical-based spectral unmixing, and (v) sparse regression-based unmixing. In each topic, we describe what physical or mathematical problems are involved and summarize state-of-the-art algorithms to address these problems.
---
paper_title: Image processing software for imaging spectrometry data analysis
paper_content:
Abstract The advent of a new generation of remote sensing instruments, called imaging spectrometers, promises to provide scientists a greatly enhanced capability for detailed observations of the earth's surface. These instruments collect image data in literally hundreds of spectral channels simultaneously from the near ultraviolet through the short wavelength infrared, and are capable in many cases of providing direct surface materials identification in a manner similar to that used in laboratory reflectance spectroscopy. The volume and complexity of data produced by these instruments offers a significant challenge to traditional multispectral image analysis methods, and in fact requires the development of new approaches to efficiently manage and analyze these data sets. This paper describes a software system specifically designed to provide the science user with a powerful set of tools for carrying out exploratory analysis of imaging spectrometer data utilizing only modest computational resources.
---
paper_title: Theory of Reflectance and Emittance Spectroscopy
paper_content:
Acknowledgements 1. Introduction 2. Electromagnetic wave propagation 3. The absorption of light 4. Specular reflection 5. Single particle scattering: perfect spheres 6. Single particle scattering: irregular particles 7. Propagation in a nonuniform medium: the equation of radiative transfer 8. The bidirectional reflectance of a semi-infinite medium 9. The opposition effect 10. A miscellany of bidirectional reflectances and related quantities 11. Integrated reflectances and planetary photometry 12. Photometric effects of large scale roughness 13. Polarization 14. Reflectance spectroscopy 15. Thermal emission and emittance spectroscopy 16. Simultaneous transport of energy by radiation and conduction Appendix A. A brief review of vector calculus Appendix B. Functions of a complex variable Appendix C. The wave equation in spherical coordinates Appendix D. Fraunhoffer diffraction by a circular hole Appendix E. Table of symbols Bibliography Index.
---
paper_title: On the relationship between spectral unmixing and subspace projection
paper_content:
A linear transformation was recently recommended for application on hyperspectral imagery. This note shows that the method is completely equivalent to extracting proportional ground cover by standard means, but is less efficient than the more usual methods of spectral unmixing.
---
paper_title: Nonlinear spectral mixing models for vegetative and soil surfaces
paper_content:
Abstract In this article we apply an analytical solution of the radiosity equation to compute vegetation indices, reflectance spectra, and the spectral bidirectional reflectance distribution function for simple canopy geometries. We show that nonlinear spectral mixing occurs due to multiple reflection and transmission from surfaces. We compare radiosity-derived spectra with single scattering or linear mixing models. We also develop a simple model to predict the reflectance spectrum of binary and ternary mineral mixtures of faceted surfaces. The two facet model is validated by measurements of the reflectance.
---
paper_title: Confidence in linear spectral unmixing of single pixels
paper_content:
The authors propose a method that estimates the membership of a mixed pixel in various possible mixture proportions. The method relies on the creation of model mixtures and the estimation of their local density in the vicinity of the pixel under consideration.
---
paper_title: Spectral Mixture Analysis of Hyperspectral Scenes Using Intelligently Selected Training Samples
paper_content:
In this letter, we address the use of artificial neural networks for spectral mixture analysis of hyperspectral scenes. We specifically focus on the issue of how to effectively train neural network architectures in the context of spectral mixture analysis applications. To address this issue, a multilayer perceptron neural architecture is combined with techniques for intelligent selection and labeling of training samples directly obtained from the input data, thus maximizing the information that can be obtained from those samples while reducing the need for a priori information about the scene. The proposed approach is compared to unconstrained and fully constrained linear mixture models using hyperspectral data sets acquired (in the laboratory) from artificial forest scenes, using the compact airborne spectrographic imaging system. The Spreading of Photons for Radiation INTerception (SPRINT) canopy model, which assumes detailed knowledge about object geometry, was employed to evaluate the results obtained by the different methods. Our results show that the proposed approach, when trained with both pure and mixed training samples (generated automatically without prior information) can provide similar results to those provided by SPRINT, using very few labeled training samples. An application to real airborne data using a set of hyperspectral images collected at different altitudes by the digital airborne imaging spectrometer 7915 and the reflective optics system imaging spectrometer, operating simultaneously at multiple spatial resolutions, is also presented and discussed.
---
paper_title: The Kubelka-Munk Diffuse Reflectance Formula Revisited
paper_content:
Abstract We use an integral equation approach to finding the Kubelka-Munk (KM) diffuse reflectance formula and extend the result by finding the apparent path length and total intensity distribution inside an infinite, homogeneous, diffusely reflecting medium with isotropic scattering. We then expand the approach to three dimensions to show that the KM formula is correct for total diffuse reflectance when scattering, excitation, and detection are all isotropic. We obtain simple and exact results for the angular distribution of diffuse reflection and for the total diffuse reflectance when the incident light has an isotropic angular distribution, when it strikes at a single angle of elevation, and when it has the steady-state angular distribution. This work includes some results that employ Chandrasekhar's H function, so we also provide a program for the rapid evaluation of H.
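For context, the classical Kubelka-Munk result relates the diffuse reflectance R∞ of an optically thick layer to the ratio of its absorption (K) and scattering (S) coefficients through the remission function, stated here in its standard form for reference.

```latex
% Kubelka-Munk remission function for an optically thick layer:
F(R_\infty) = \frac{(1 - R_\infty)^2}{2 R_\infty} = \frac{K}{S} .
```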
---
paper_title: N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data
paper_content:
The analysis of hyperspectral data sets requires the determination of certain basis spectra called 'end-members.' Once these spectra are found, the image cube can be 'unmixed' into the fractional abundance of each material in each pixel. There exist several techniques for accomplishing the determination of the end-members, most of which involve the intervention of a trained geologist. Often these end-members are assumed to be present in the image, in the form of pure, or unmixed, pixels. In this paper a method based upon the geometry of convex sets is proposed to find a unique set of purest pixels in an image. The technique is based on the fact that in N spectral dimensions, the N-volume contained by a simplex formed of the purest pixels is larger than any other volume formed from any other combination of pixels. The algorithm works by 'inflating' a simplex inside the data, beginning with a random set of pixels. For each pixel and each end-member, the end-member is replaced with the spectrum of the pixel and the volume is recalculated. If it increases, the spectrum of the new pixel replaces that end-member. This procedure is repeated until no more replacements are done. This algorithm successfully derives end-members in a synthetic data set, and appears robust with less than perfect data. Spectral end-members have been extracted for the AVIRIS Cuprite data set which closely match reference spectra, and resulting abundance maps match published mineral maps.
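A compact sketch of the simplex-inflation idea described above: after the data have been reduced to p-1 dimensions (for example with PCA), candidate pixels are swapped into the current endmember set and kept only when the simplex volume grows. This is a simplified illustration, not the authors' exact implementation; all names are made up.

```python
import numpy as np

def simplex_volume(E):
    """Volume (up to a constant factor) of the simplex whose p vertices are
    the rows of E, given in (p-1)-dimensional reduced coordinates."""
    V = np.vstack([np.ones(E.shape[0]), E.T])   # augment with a row of ones
    return abs(np.linalg.det(V))

def n_findr(Y, p, n_iters=3, seed=0):
    """Y : (num_pixels, p-1) pixels in reduced coordinates. Returns indices
    of p pixels forming an (approximately) largest-volume simplex."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(Y.shape[0], size=p, replace=False)
    best = simplex_volume(Y[idx])
    for _ in range(n_iters):
        for j in range(p):                      # try to replace endmember j
            for i in range(Y.shape[0]):
                trial = idx.copy()
                trial[j] = i
                vol = simplex_volume(Y[trial])
                if vol > best:                  # keep swap if volume grows
                    best, idx = vol, trial
    return idx
```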
---
paper_title: Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles
paper_content:
Morphological profiles (MPs) have been proposed in recent literature as aiding tools to achieve better results for classification of remotely sensed data. MPs are in general built using features containing most of the information content of the data, such as the components derived from principal component analysis (PCA). Recently, nonlinear PCA (NLPCA), performed by an autoassociative neural network, has emerged as a good unsupervised technique to fit the information content of hyperspectral data into few components. The aim of this letter is to investigate the classification accuracies obtained using extended MPs built from the features of NLPCA. A comparison of the two approaches has been validated on two different data sets having different spatial and spectral resolutions/coverages, over the same ground truth, and also using two different classification algorithms. The results show that NLPCA permits one to obtain better classification accuracies than using linear PCA.
---
paper_title: A comparison of kernel functions for intimate mixture models
paper_content:
In previous work, kernel methods were introduced as a way to generalize the linear mixing model. This work led to a new set of algorithms that performed the unmixing of hyperspectral imagery in a reproducing kernel Hilbert space. By processing the imagery in this space, different types of unmixing could be introduced - including an approximation of intimate mixtures. Whereas previous research focused on developing the mathematical foundation for kernel unmixing, this paper focuses on the selection of the kernel function. Experiments are conducted on real-world hyperspectral data using a linear, a radial-basis function, a polynomial, and a proposed physics-based kernel. Results show which kernels provide the best ability to perform intimate unmixing.
---
paper_title: Mapping intimate mixtures using an adaptive kernel-based technique
paper_content:
In previous work, kernel methods were introduced as a way to generalize the linear mixing model for hyperspectral data. This work led to a new adaptive kernel unmixing method that both identified and unmixed linearly and intimately mixed pixels. However, the results from this previous research were limited to lab-based data where the endmembers were known a priori and atmospheric effects were absent. This paper documents the results of the adaptive kernel-based unmixing techniques on real-world hyperspectral data collected over Smith Island, Virginia, USA. The results show that the adaptive kernel unmixing method can readily identify where nonlinear mixtures exist in the image even when perfect knowledge of the endmembers and the reflectance cannot be known.
---
paper_title: On the use of small training sets for neural network-based characterization of mixed pixels in remotely sensed hyperspectral images
paper_content:
In this work, neural network-based models involved in hyperspectral image spectra separation are considered. Focus is on how to select the most highly informative samples for effectively training the neural architecture. This issue is addressed here by several new algorithms for intelligent selection of training samples: (1) a border-training algorithm (BTA) which selects training samples located in the vicinity of the hyperplanes that can optimally separate the classes; (2) a mixed-signature algorithm (MSA) which selects the most spectrally mixed pixels in the hyperspectral data as training samples; and (3) a morphological-erosion algorithm (MEA) which incorporates spatial information (via mathematical morphology concepts) to select spectrally mixed training samples located in spatially homogeneous regions. These algorithms, along with other standard techniques based on orthogonal projections and a simple Maximin-distance algorithm, are used to train a multi-layer perceptron (MLP), selected in this work as a representative neural architecture for spectral mixture analysis. Experimental results are provided using both a database of nonlinear mixed spectra with absolute ground truth and a set of real hyperspectral images, collected at different altitudes by the digital airborne imaging spectrometer (DAIS 7915) and reflective optics system imaging spectrometer (ROSIS) operating simultaneously at multiple spatial resolutions.
---
paper_title: Pixel Unmixing in Hyperspectral Data by Means of Neural Networks
paper_content:
Neural networks (NNs) are recognized as very effective techniques when facing complex retrieval tasks in remote sensing. In this paper, the potential of NNs has been applied in solving the unmixing problem in hyperspectral data. In its complete form, the processing scheme uses an NN architecture consisting of two stages: the first stage reduces the dimension of the input vector, while the second stage performs the mapping from the reduced input vector to the abundance percentages. The dimensionality reduction is performed by the so-called autoassociative NNs, which yield a nonlinear principal component analysis of the data. The evaluation of the whole performance is carried out for different sets of experimental data. The first one is provided by the Airborne Hyperspectral Scanner. The second set consists of images from the Compact High-Resolution Imaging Spectrometer on board the Project for On-Board Autonomy satellite, and it includes multiangle and multitemporal acquisitions. The third set is represented by Airborne Visible/InfraRed Imaging Spectrometer measurements. A quantitative performance analysis has been carried out in terms of effectiveness in the dimensionality reduction phase and in terms of the accuracy in the final estimation. The results obtained, when compared with those produced by appropriate benchmark techniques, show the advantages of this approach.
---
paper_title: A generalized kernel for areal and intimate mixtures
paper_content:
In previous work, kernel methods were introduced as a way to generalize the linear mixing model for hyperspectral data. This work led to a new physics-based kernel that allowed accurate unmixing of intimate mixtures. Unfortunately, the new physics-based kernel did not perform well on linear mixtures; thus, different kernels had to be used for different mixtures. Ideally, a single unified kernel that can perform both unmixing of areal and intimate mixtures would be desirable. This paper presents such a kernel that can automatically identify the underlying mixture type from the data and perform the correct unmixing method. Results on real-world, ground-truthed intimate and linear mixtures demonstrate the ability of this new data-driven kernel to perform generalized unmixing of hyperspectral data.
---
paper_title: Bilinear models for nonlinear unmixing of hyperspectral images
paper_content:
This paper compares several nonlinear models recently introduced for hyperspectral image unmixing. All these models consist of bilinear models that have shown interesting properties for hyperspectral images subjected to multipath effects. The first part of this paper presents different algorithms allowing the parameters of these models to be estimated. The relevance and flexibility of these models for spectral unmixing are then investigated by comparing the reconstruction errors and spectral angle mappers computed from synthetic and real data sets. This kind of study is important to determine which mixture model should be used in practical applications for hyperspectral image unmixing.
---
paper_title: A Model of Spectral Albedo of Particulate Surfaces: Implications for Optical Properties of the Moon
paper_content:
A simple one-dimensional geometrical-optics model for spectral albedo of powdered surfaces, in particular of lunar regolith, is presented. As distinct from, e.g., the Kubelka–Munk formula, which deals with two effective parameters of a medium, the suggested model uses spectra of optical constants of the medium materials. Besides, our model is invertible, i.e., allows estimations of spectral absorption using albedo spectrum, if a priori data on the real part of refractive index and surface porosity are known. The model has been applied to interpret optical properties of the Moon. In particular, it has been shown that: (1) both color indices and depth of absorption bands for regolith-like surfaces depend on particle size, which should be taken into account when correlations between these optical characteristics and abundance of Fe and Ti in the lunar regolith are studied; (2) fine-grained reduced iron occurring in regolith particles affects band minima positions in reflectance spectra of lunar pyroxenes and, consequently, affects the result of determination of pyroxene types and Fe abundance by Adams' method.
---
paper_title: A quantitative and comparative analysis of linear and nonlinear spectral mixture models using radial basis function neural networks
paper_content:
A radial basis function neural network (RBFNN) is developed to examine two mixing models, linear and nonlinear spectral mixtures, which describe the spectra collected by both airborne and laboratory-based spectrometers. The authors examine the possibility that there may be naturally occurring situations where the typically used linear model may not provide the most accurate resultant spectral description. Under such a circumstance, a nonlinear model may better describe the mixing mechanism.
---
paper_title: Spectral unmixing
paper_content:
Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, endmember, and abundance estimates are important for identifying the material composition of mixtures.
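As a concrete illustration of the linear mixing assumption discussed above, the sketch below estimates the abundances of one pixel by non-negative least squares, with the sum-to-one constraint handled approximately through the common row-augmentation trick; it is a generic sketch, not the implementation of any particular paper, and the weight delta is an assumed tuning parameter.

import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, delta=1e3):
    # pixel:      (bands,) observed spectrum
    # endmembers: (bands, p) matrix whose columns are endmember spectra
    # delta:      weight of the soft sum-to-one constraint (larger = stricter)
    p = endmembers.shape[1]
    A = np.vstack([endmembers, delta * np.ones((1, p))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances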
---
paper_title: Calculation of Geodesic Distances in Nonlinear Mixing Models: Application to the Generalized Bilinear Model
paper_content:
Recently, several nonlinear techniques have been proposed in hyperspectral image processing for classification and unmixing applications. A popular data-driven approach for treating nonlinear problems employs the geodesic distances on the data manifold as property of interest. These geodesic distances are approximated by the shortest path distances in a nearest neighbor graph constructed in the data cloud. Although this approach often works well in practical applications, the graph-based approximation of these geodesic distances often fails to capture correctly the true nonlinear structure of the manifold, causing deviations in the subsequent algorithms. On the other hand, several model-based nonlinear techniques have been introduced as well and have the advantage that one can, in theory, calculate the geodesic distances analytically. In this letter, we demonstrate how one can calculate the true geodesics, and their lengths, on any manifold induced by a nonlinear hyperspectral mixing model. We introduce the required techniques from differential geometry, show how the constraints on the abundances can be integrated in these techniques, and present a numerical method for finding a solution of the geodesic equations. We demonstrate this technique on the recently developed generalized bilinear model, which is a flexible model for the nonlinearities introduced by secondary reflections. As an application of the technique, we demonstrate that multidimensional scaling applied to these geodesic distances can be used as a preprocessing step to linear unmixing, yielding better unmixing results on nonlinear data when compared to principal component analysis and outperforming ISOMAP.
---
paper_title: Nonlinear unmixing of hyperspectral images using radial basis functions and orthogonal least squares
paper_content:
This paper studies a linear radial basis function network (RBFN) for unmixing hyperspectral images. The proposed RBFN assumes that the observed pixel reflectances are nonlinear mixtures of known end-members (extracted from a spectral library or estimated with an end-member extraction algorithm), with unknown proportions (usually referred to as abundances). We propose to estimate the model abundances using a linear combination of radial basis functions whose weights are estimated using training samples. The main contribution of this paper is to study an orthogonal least squares algorithm which allows the number of RBFN centers involved in the abundance estimation to be significantly reduced. The resulting abundance estimator is combined with a fully constrained estimation procedure ensuring positivity and sum-to-one constraints for the abundances. The performance of the nonlinear unmixing strategy is evaluated with simulations conducted on synthetic and real data.
---
paper_title: Kernel fully constrained least squares abundance estimates
paper_content:
A critical step for fitting a linear mixing model to hyperspectral imagery is the estimation of the abundances. The abundances are the percentage of each end member within a given pixel; therefore, they should be non-negative and sum to one. With the advent of kernel based algorithms for hyperspectral imagery, kernel based abundance estimates have become necessary. This paper presents such an algorithm that estimates the abundances in the kernel feature space while maintaining the non-negativity and sum-to-one constraints. The usefulness of the algorithm is shown using the AVIRIS Cuprite, Nevada image.
---
paper_title: Quantitative abundance estimates from bidirectional reflectance measurements
paper_content:
A simplified approach to the problem of determining the relative proportion of minerals in a mixture from a reflectance spectrum of the mixture is presented. Fundamental to this approach is a priori information concerning reflectance spectra of the minerals in the mixture and some estimate of the particle sizes of the mixture components. Reflectance spectra of intimate mixtures are a systematic but nonlinear combination of the spectra of the minerals in the mixtures. Equations for bidirectional reflectance are used to linearize the systematics of spectral mixing for intimate mixtures. The equations are simplified by assuming that particulate media scatter light isotropically at phase angles between 15° and 40°. This method for linearizing the mixing systematics is used to determine mineral relative geometric cross sections (proportional to mass fraction/density x particle diameter) from reflectance spectra of mixtures of igneous rock-forming minerals (olivine, magnetite, enstatite, and anorthite) and to determine endmember relative geometric cross sections from reflectance spectra of mixtures of terrestrial desert soils. Since particle diameters are known, the mass fractions of the mixture components are also calculated. For materials without strongly absorbing components, the accuracy of abundance determinations is better than 5%. The results indicate that the method presented can be used to accurately determine the relative proportions of components (minerals or complex endmembers) in a mixture from a reflectance spectrum of the mixture given information of the endmembers in the mixture, reflectance spectra of the endmembers, and an estimate of particle sizes of the respective mixture components.
---
paper_title: Nonlinear Hyperspectral Mixture Analysis for tree cover estimates in orchards
paper_content:
Accurate monitoring of spatial and temporal variation in tree cover provides essential information for steering management practices in orchards. In this light, the present study investigates the potential of Hyperspectral Mixture Analysis. Specific focus lies on a thorough study of non-linear mixing effects caused by multiple photon scattering. In a series of experiments the importance of multiple scattering is demonstrated while a novel conceptual Nonlinear Spectral Mixture Analysis approach is presented and successfully tested on in situ measured mixed pixels in Citrus sinensis L. orchards. The rationale behind the approach is the redistribution of nonlinear fractions (i.e., virtual fractions) among the actual physical ground cover entities (e.g., tree, soil). These ‘virtual’ fractions, which account for the extent and nature of multiple photon scattering only have a physical meaning at the spectral level but cannot be interpreted as an actual physical part of the ground cover. Results illustrate that the effect of multiple scattering on Spectral Mixture Analysis is significant as the linear approach provides a mean relative root mean square error (RMSE) for tree cover fraction estimates of 27%. While traditional nonlinear approaches only slightly reduce this error (RMSE = 23%), important improvements are obtained for the novel Nonlinear Spectral Mixture Analysis approach (RMSE = 12%).
---
paper_title: Nonlinear unmixing of hyperspectral images using a generalized bilinear model
paper_content:
This paper studies a generalized bilinear model and a hierarchical Bayesian algorithm for unmixing hyperspectral images. The proposed model is a generalization of the accepted linear mixing model but also of a bilinear model recently introduced in the literature. Appropriate priors are chosen for its parameters in particular to satisfy the positivity and sum-to-one constraints for the abundances. The joint posterior distribution of the unknown parameter vector is then derived. A Metropolis-within-Gibbs algorithm is proposed which allows samples distributed according to the posterior of interest to be generated and to estimate the unknown model parameters. The performance of the resulting unmixing strategy is evaluated via simulations conducted on synthetic and real data.
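A minimal forward-model sketch of the generalized bilinear mixing idea studied above may help fix notation: the pixel is the linear mixture plus pairwise endmember interaction terms weighted by coefficients in [0, 1]. The function and parameter names are illustrative, and the Bayesian estimation machinery of the paper is not reproduced.

import numpy as np

def generalized_bilinear_pixel(endmembers, abundances, gamma, noise_std=0.0, seed=None):
    # endmembers: (bands, p) columns are endmember spectra
    # abundances: (p,) non-negative fractions summing to one
    # gamma:      dict mapping (i, j) with i < j to interaction coefficients in [0, 1]
    rng = np.random.default_rng(seed)
    pixel = endmembers @ abundances                      # linear part
    p = endmembers.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            g = gamma.get((i, j), 0.0)
            # second-order term: elementwise product of the two endmember spectra
            pixel = pixel + g * abundances[i] * abundances[j] * (endmembers[:, i] * endmembers[:, j])
    if noise_std > 0:
        pixel = pixel + rng.normal(0.0, noise_std, size=pixel.shape)
    return pixel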
---
paper_title: Supervised Nonlinear Spectral Unmixing Using a Postnonlinear Mixing Model for Hyperspectral Imagery
paper_content:
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
---
paper_title: Nonlinear Spectral Mixture Analysis for Hyperspectral Imagery in an Unknown Environment
paper_content:
Nonlinear spectral mixture analysis for hyperspectral imagery is investigated without prior information about the image scene. A simple but effective nonlinear mixture model is adopted, where the multiplication of each pair of endmembers results in a virtual endmember representing multiple scattering effect during pixel construction process. The analysis is followed by linear unmixing for abundance estimation. Due to a large number of nonlinear terms being added in an unknown environment, the following abundance estimation may contain some errors if most of the endmembers do not really participate in the mixture of a pixel. We take advantage of the developed endmember variable linear mixture model (EVLMM) to search the actual endmember set for each pixel, which yields more accurate abundance estimation in terms of smaller pixel reconstruction error, smaller residual counts, and more pixel abundances satisfying sum-to-one and nonnegativity constraints.
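The 'virtual endmember' construction described above can be sketched directly: the endmember matrix is augmented with the elementwise products of endmember pairs and the augmented system is then unmixed linearly. The non-negative least squares call is used here purely for illustration and does not reproduce the endmember-variable model of the paper.

import numpy as np
from scipy.optimize import nnls

def augment_with_virtual_endmembers(endmembers):
    # Append the elementwise product of each endmember pair as a virtual endmember.
    p = endmembers.shape[1]
    columns = [endmembers[:, i] for i in range(p)]
    for i in range(p):
        for j in range(i + 1, p):
            columns.append(endmembers[:, i] * endmembers[:, j])
    return np.column_stack(columns)

def nonlinear_unmix(pixel, endmembers):
    augmented = augment_with_virtual_endmembers(endmembers)
    coefficients, _ = nnls(augmented, pixel)
    p = endmembers.shape[1]
    # The first p coefficients belong to the physical endmembers; the remainder
    # absorb the multiple-scattering (virtual) contributions.
    return coefficients[:p], coefficients[p:]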
---
paper_title: Comparative study between a new nonlinear model and common linear model for analysing laboratory simulated-forest hyperspectral data
paper_content:
The spectral unmixing of mixed pixels is a key factor in remote sensing images, especially for hyperspectral imagery. A commonly used approach to spectral unmixing has been linear unmixing. However, the question of whether linear or nonlinear processes dominate spectral signatures of mixed pixels is still an unresolved matter. In this study, we put forward a new nonlinear model for inferring end-member fractions within hyperspectral scenes. This study focuses on comparing the nonlinear model with a linear model. A detailed comparative analysis of the fractions 'sunlit crown', 'sunlit background' and 'shadow' between the two methods was carried out through visualization and comparison with supervised classification, using a database of laboratory simulated-forest scenes. Our results show that the nonlinear model of spectral unmixing outperforms the linear model, especially in the scenes with translucent crown on a white background. A nonlinear mixture model is needed to account for the multiple scattering between tree crowns and background.
---
paper_title: Modal mineralogy of planetary surfaces from visible and near-infrared spectral data
paper_content:
Real planetary surfaces are composed of several to many different minerals and ices. Deconvolving a reflectance spectrum to material abundance in an unambiguous way is difficult, because the spectra are complex nonlinear functions of grain size, abundance, and material opacity. Multiple scattering models can provide approximate solutions to the radiative transfer in a particulate medium. The paper examines the different approaches which deal with the theory of radiative transfer on atmosphereless bodies. We present the relative merits of two scattering theories based on the equivalent slab model: the extensively used Hapke theory [1] and the Shkuratov theory [2]. The performances of the two models for determining mineral abundance in multicomponent mixtures are also evaluated using laboratory data. Finally, one application on real planetary surfaces will be shown.
---
paper_title: Nonlinear mixture model for hyperspectral unmixing
paper_content:
This paper addresses the problem of unmixing hyperspectral images when the light suffers multiple interactions among distinct endmembers. In these scenarios, linear unmixing has poor accuracy since the multiple light scattering effects are not accounted for by the linear mixture model. Herein, a nonlinear scenario composed of a single layer of vegetation above the soil is considered. For this class of scene, the adopted mixing model takes into account the second-order scattering interactions; higher order interactions are assumed negligible. A semi-supervised unmixing method is proposed and evaluated with simulated and real hyperspectral data sets.
---
paper_title: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
paper_content:
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t−τ), obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
---
paper_title: Compressed sensing
paper_content:
Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2−1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program - Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
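The ℓ1-minimization (Basis Pursuit) recovery step referred to in the two abstracts above can be posed as a linear program through the standard x = u − v splitting; the sketch below uses scipy's generic LP solver for illustration only and makes no claim of matching the papers' exact formulations or performance.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # Solve min ||x||_1 subject to A x = b by writing x = u - v with u, v >= 0.
    n = A.shape[1]
    c = np.ones(2 * n)                          # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                   # A u - A v = b
    result = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = result.x[:n], result.x[n:]
    return u - v

# Toy usage: attempt to recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
A = rng.normal(size=(40, 100))
x_hat = basis_pursuit(A, A @ x_true)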
---
paper_title: Emergence of simple-cell receptive field properties by learning a sparse code for natural images
paper_content:
The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs.
---
paper_title: Algorithm taxonomy for hyperspectral unmixing
paper_content:
In this paper, we introduce a set of taxonomies that hierarchically organize and specify algorithms associated with hyperspectral unmixing. Our motivation is to collectively organize and relate algorithms in order to assess the current state-of-the-art in the field and to facilitate objective comparisons between methods. The hyperspectral sensing community is populated by investigators with disparate scientific backgrounds and, speaking in their respective languages, efforts in spectral unmixing developed within disparate communities have inevitably led to duplication. We hope our analysis removes this ambiguity and redundancy by using a standard vocabulary, and that the presentation we provide clearly summarizes what has and has not been done. As we shall see, the framework for the taxonomies derives its organization from the fundamental, philosophical assumptions imposed on the problem, rather than the common calculations they perform, or the similar outputs they might yield.
---
paper_title: Survey of geometric and statistical unmixing algorithms for hyperspectral images
paper_content:
Spectral mixture analysis (also called spectral unmixing) has been an alluring exploitation goal since the earliest days of imaging spectroscopy. No matter the spatial resolution, the spectral signatures collected in natural environments are invariably a mixture of the signatures of the various materials found within the spatial extent of the ground instantaneous field view of the imaging instrument. In this paper, we give a comprehensive enumeration of the unmixing methods used in practice, because of their implementation in widely used software packages, and those published in the literature. We have structured the review according to the basic computational approach followed by the algorithms, with particular attention to those based on the computational geometry formulation, and statistical approaches with a probabilistic foundation. The quantitative assessment of some available techniques in both categories provides an opportunity to review recent advances and to anticipate future developments.
---
paper_title: Spectral unmixing
paper_content:
Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, endmember, and abundance estimates are important for identifying the material composition of mixtures.
---
paper_title: Mixed pixels classification
paper_content:
There are two major approaches in spectral unmixing: linear and non-linear ones. They are appropriate for different types of mixture, namely checkerboard mixtures and intimate mixtures. The two approaches are briefly reviewed. Then in a carefully controlled laboratory experiment, the limitations and applicability of two of the methods (a linear and a non-linear one) are compared, in the context of unmixing an intimate mixture.
---
paper_title: Minimum-volume transforms for remotely sensed data
paper_content:
Scatter diagrams for multispectral remote sensing data tend to be triangular, in the two-band case, pyramidal for three bands, and so on. They radiate away from the so-called dark point, which represents the scanner's response to an un-illuminated target. A minimum-volume transform may be described (provisionally) as a nonorthogonal linear transformation of the multivariate data to new axes passing through the dark point, with directions chosen such that they (for two bands), or the new coordinate planes (for three bands, etc.) embrace the data cloud as tightly as possible. The reason for the observed shapes of scatter diagrams is to be found in the theory of linear mixing at the subfootprint scale. Thus, suitably defined, minimum-volume transforms can often be used to unmix images into new spatial variables showing the proportions of the different cover types present, a type of enhancement that is not only intense, but physically meaningful. The present paper furnishes details for constructing computer programs to effect this operation. It will serve as a convenient technical source that may be referenced in subsequent, more profusely illustrated publications that address the intended application, the mapping of surface mineralogy.
---
paper_title: Hyperspectral unmixing: geometrical, statistical, and sparse regression-based approaches
paper_content:
Hyperspectral instruments acquire electromagnetic energy scattered within their ground instantaneous field view in hundreds of spectral channels with high spectral resolution. Very often, however, owing to low spatial resolution of the scanner or to the presence of intimate mixtures (mixing of the materials at a very small scale) in the scene, the spectral vectors (collection of signals acquired at different spectral bands from a given pixel) acquired by the hyperspectral scanners are actually mixtures of the spectral signatures of the materials present in the scene. Given a set of mixed spectral vectors, spectral mixture analysis (or spectral unmixing) aims at estimating the number of reference materials, also called endmembers, their spectral signatures, and their fractional abundances. Spectral unmixing is, thus, a source separation problem where, under a linear mixing model, the sources are the fractional abundances and the endmember spectral signatures are the columns of the mixing matrix. As such, the independent component analysis (ICA) framework came naturally to mind to unmix spectral data. However, the ICA crux assumption of source statistical independence is not satisfied in spectral applications, since the sources are fractions and, thus, non-negative and sum to one. As a consequence, ICA-based algorithms have severe limitations in the area of spectral unmixing, and this has fostered new unmixing research directions taking into account geometric and statistical characteristics of hyperspectral sources. This paper presents an overview of the principal research directions in hyperspectral unmixing. The presentation is organized into five main topics: (i) mixing models, (ii) signal subspace identification, (iii) geometrical-based spectral unmixing, (iv) statistical-based spectral unmixing, and (v) sparse regression-based unmixing. In each topic, we describe what physical or mathematical problems are involved and summarize state-of-the-art algorithms to address these problems.
---
paper_title: Spectral mixture modeling - A new analysis of rock and soil types at the Viking Lander 1 site. [on Mars]
paper_content:
A Viking Lander 1 image was modeled as mixtures of reflectance spectra of palagonite dust, gray andesitelike rock, and a coarse rocklike soil. The rocks are covered to varying degrees by dust but otherwise appear unweathered. Rocklike soil occurs as lag deposits in deflation zones around stones and on top of a drift and as a layer in a trench dug by the lander. This soil probably is derived from the rocks by wind abrasion and/or spallation. Dust is the major component of the soil and covers most of the surface. The dust is unrelated spectrally to the rock but is equivalent to the global-scale dust observed telescopically. A new method was developed to model a multispectral image as mixtures of end-member spectra and to compare image spectra directly with laboratory reference spectra. The method for the first time uses shade and secondary illumination effects as spectral end-members; thus the effects of topography and illumination on all scales can be isolated or removed. The image was calibrated absolutely from the laboratory spectra, in close agreement with direct calibrations. The method has broad applications to interpreting multispectral images, including satellite images.
---
paper_title: Hyperspectral Subspace Identification
paper_content:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
---
paper_title: Improved Manifold Coordinate Representations of Large-Scale Hyperspectral Scenes
paper_content:
In recent publications, we have presented a data-driven approach to representing the nonlinear structure of hyperspectral imagery using manifold coordinates. The approach relies on graph methods to derive geodesic distances on the high-dimensional hyperspectral data manifold. From these distances, a set of intrinsic manifold coordinates that parameterizes the data manifold is derived. Scaling the solution relied on divide-conquer-and-merge strategies for the manifold coordinates because of the computational and memory scaling of the geodesic coordinate calculations. In this paper, we improve the scaling performance of isometric mapping (ISOMAP) and achieve full-scene global manifold coordinates while removing artifacts generated by the original methods. The CPU time of the enhanced ISOMAP approach scales as O(N log^2(N)), where N is the number of samples, while the memory requirement is bounded by O(N log(N)). Full hyperspectral scenes of O(10^6) samples or greater are obtained via a reconstruction algorithm, which allows insertion of large numbers of samples into a representative "backbone" manifold obtained for a smaller but representative set of O(10^5) samples. We provide a classification example using a coastal hyperspectral scene to illustrate the approach.
---
paper_title: A new look at the statistical model identification
paper_content:
The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), which is designed for the purpose of statistical identification, is introduced. When there are several competing models the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC defined by AIC = (−2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
---
paper_title: An information theoretic comparison of projection pursuit and principal component features for classification of Landsat TM imagery of central Colorado
paper_content:
Projection pursuit (PP) and principal component analysis (PCA) projections derived from Landsat Thematic Mapper (TM) imagery of central Colorado were compared. While PCA is a simple subset of the general class of PP algorithms, it cannot distinguish Gaussian from non-Gaussian distributions, since it maximizes projected variance. PP algorithms, which maximize higher-order statistics, can be used to find skew or multi-modal projections in order to reveal underlying class structure. These data projections have greater fidelity to underlying land-cover distributions. On sequestered test data, PP projections improved separation of individual categories from a few percent to as much as 24%. PP performance exceeded that of PCA for all but one of the 14 land-cover categories.
---
paper_title: Statistical Signal Processing Detection Estimation And Time Series Analysis
paper_content:
---
paper_title: Hyperspectral Subspace Identification
paper_content:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
---
paper_title: Automatic reduction of hyperspectral imagery using wavelet spectral analysis
paper_content:
Hyperspectral imagery provides richer information about materials than multispectral imagery. The new larger data volumes from hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding spectral signature is still difficult because of the immense volume of data. Conventional classification methods may not be used without dimension reduction preprocessing. This is due to the curse of dimensionality, which refers to the fact that the sample size needed to estimate a function of several variables to a given degree of accuracy grows exponentially with the number of variables. Principal component analysis (PCA) has been the technique of choice for dimension reduction. However, PCA is computationally expensive and does not eliminate anomalies that can be seen at one arbitrary band. Spectral data reduction using automatic wavelet decomposition could be useful. This is because it preserves the distinctions among spectral signatures. It is also computed in automatic fashion and can filter data anomalies. This is due to the intrinsic properties of wavelet transforms that preserves high- and low-frequency features, therefore preserving peaks and valleys found in typical spectra. Compared to PCA, for the same level of data reduction, we show that automatic wavelet reduction yields better or comparable classification accuracy for hyperspectral data, while achieving substantial computational savings.
---
paper_title: A transformation for ordering multispectral data in terms of image quality with implications for noise removal
paper_content:
A transformation known as the maximum noise fraction (MNF) transformation, which always produces new components ordered by image quality, is presented. It can be shown that this transformation is equivalent to principal components transformations when the noise variance is the same in all bands and that it reduces to a multiple linear regression when noise is in one band only. Noise can be effectively removed from multispectral data by transforming to the MNF space, smoothing or rejecting the most noisy components, and then retransforming to the original space. In this way, more intense smoothing can be applied to the MNF components with high noise and low signal content than could be applied to each band of the original data. The MNF transformation requires knowledge of both the signal and noise covariance matrices. Except when the noise is in one band only, the noise covariance matrix needs to be estimated. One procedure for doing this is discussed and examples of cleaned images are presented.
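A compact sketch of the noise-whitening-then-PCA view of the MNF transform described above follows; the noise covariance is estimated from differences between horizontally adjacent pixels, which is one common heuristic and is an assumption of this sketch rather than the procedure of the paper.

import numpy as np

def mnf_transform(cube):
    # cube: hyperspectral image of shape (rows, cols, bands)
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    # Crude noise estimate from shift differences between neighbouring pixels.
    diffs = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    noise_cov = np.cov(diffs, rowvar=False) / 2.0
    # Whiten the noise, then apply a principal components transform.
    evals_n, evecs_n = np.linalg.eigh(noise_cov)
    whitener = evecs_n @ np.diag(1.0 / np.sqrt(np.maximum(evals_n, 1e-12))) @ evecs_n.T
    Xw = (X - X.mean(axis=0)) @ whitener
    evals_s, evecs_s = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(evals_s)[::-1]           # components ordered by decreasing SNR
    return (Xw @ evecs_s[:, order]).reshape(rows, cols, bands)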
---
paper_title: Real-time analysis of hyperspectral data sets using NRL's ORASIS algorithm
paper_content:
The Covered Lantern project was initiated by the central MASINT Technology Coordination Office to demonstrate the tactical use of hyperspectral imagery with real time processing capability. We report on the design and use of the HYCORDER system developed for Covered Lantern that was tested in June 1995. The HYCORDER system consisted of an imaging spectrometer flying in a Pioneer Uncrewed Aeronautical Vehicle and a ground-based real-time analysis and visualization system. The camera was intensified allowing dawn to dusk operation. The spectral information was downlinked to the analysis system as standard analog video. The analysis system was constructed from 17 Texas Instruments C44 DSPs controlled by a 200 MHz Pentium Pro PC. A real time, parallel version of NRL's optical real-time adaptive spectral identification system algorithm was developed for this system. The system was capable of running continuously, allowing for broad area coverage. The algorithm was adaptive, accommodating changing lighting conditions and terrain. The general architecture of the algorithm will be discussed as well as results from the test.
---
paper_title: Manifold learning techniques for the analysis of hyperspectral ocean data
paper_content:
A useful technique in hyperspectral data analysis is dimensionality reduction, which replaces the original high dimensional data with low dimensional representations. Usually this is done with linear techniques such as linear mixing or principal components (PCA). While often useful, there is no a priori reason for believing that the data is actually linear. Lately there has been renewed interest in modeling high dimensional data using nonlinear techniques such as manifold learning (ML). In ML, the data is assumed to lie on a low dimensional, possibly curved surface (or manifold). The goal is to discover this manifold and therefore find the best low dimensional representation of the data. Recently, researchers at the Naval Research Lab have begun to model hyperspectral data using ML. We continue this work by applying ML techniques to hyperspectral ocean water data. We focus on water since there are underlying physical reasons for believing that the data lies on a certain type of nonlinear manifold. In particular, ocean data is influenced by three factors: the water parameters, the bottom type, and the depth. For fixed water and bottom types, the spectra that arise by varying the depth will lie on a nonlinear, one dimensional manifold (i.e. a curve). Generally, water scenes will contain a number of different water and bottom types, each combination of which leads to a distinct curve. In this way, the scene may be modeled as a union of one dimensional curves. In this paper, we investigate the use of manifold learning techniques to separate the various curves, thus partitioning the scene into homogeneous areas. We also discuss ways in which these techniques may be able to derive various scene characteristics such as bathymetry.
---
paper_title: Constrained band selection for hyperspectral imagery
paper_content:
Constrained energy minimization (CEM) has been shown to be effective in hyperspectral target detection. It linearly constrains a desired target signature while minimizing interfering effects caused by other unknown signatures. This paper explores this idea for band selection and develops a new approach to band selection, referred to as constrained band selection (CBS) for hyperspectral imagery. It interprets a band image as a desired target signature vector while considering other band images as unknown signature vectors. As a result, the proposed CBS using the concept of the CEM to linearly constrain a band image, while also minimizing band correlation or dependence provided by other band images, is referred to as CEM-CBS. Four different criteria referred to as Band Correlation Minimization (BCM), Band Correlation Constraint (BCC), Band Dependence Constraint (BDC), and Band Dependence Minimization (BDM) are derived for CEM-CBS. Since dimensionality resulting from conversion of a band image to a vector may be huge, the CEM-CBS is further reinterpreted as linearly constrained minimum variance (LCMV)-based CBS by constraining a band image as a matrix where the same four criteria, BCM, BCC, BDC, and BDM, can also be used for LCMV-CBS. In order to determine the number of bands required to select p, a recently developed concept, called virtual dimensionality, is used to estimate the p. Once the p is determined, a set of p desired bands can be selected by the CEM/LCMV-CBS. Finally, experiments are conducted to substantiate the proposed CEM/LCMV-CBS four criteria, BCM, BCC, BDC, and BDM, in comparison with variance-based band selection, information divergence-based band selection, and uniform band selection.
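The constrained energy minimization filter underlying this band-selection approach has the closed form w = R^{-1} d / (d^T R^{-1} d), where R is the sample correlation matrix and d the desired signature; a minimal sketch follows, with the small ridge term added for numerical stability as an assumption of this sketch.

import numpy as np

def cem_filter_output(X, d, ridge=1e-6):
    # X: (n_samples, dim) data matrix; d: (dim,) desired signature vector.
    R = (X.T @ X) / X.shape[0] + ridge * np.eye(X.shape[1])   # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)                                  # CEM weight vector
    return X @ w                                               # filter response per sample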
---
paper_title: Spatially Coherent Nonlinear Dimensionality Reduction and Segmentation of Hyperspectral Images
paper_content:
The nonlinear dimensionality reduction and its effects on vector classification and segmentation of hyperspectral images are investigated in this letter. In particular, the way dimensionality reduction influences and helps classification and segmentation is studied. The proposed framework takes into account the nonlinear nature of high-dimensional hyperspectral images and projects onto a lower dimensional space via a novel spatially coherent locally linear embedding technique. The spatial coherence is introduced by comparing pixels based on their local surrounding structure in the image domain and not just on their individual values as classically done. This spatial coherence in the image domain across the multiple bands defines the high-dimensional local neighborhoods used for the dimensionality reduction. This spatial coherence concept is also extended to the segmentation and classification stages that follow the dimensionality reduction, introducing a modified vector angle distance. We present the underlying concepts of the proposed framework and experimental results showing the significant classification improvements.
---
paper_title: Unsupervised hyperspectral image analysis with projection pursuit
paper_content:
Principal components analysis (PCA) is effective at compressing information in multivariate data sets by computing orthogonal projections that maximize the amount of data variance. Unfortunately, information content in hyperspectral images does not always coincide with such projections. The authors propose an application of projection pursuit (PP), which seeks to find a set of projections that are "interesting," in the sense that they deviate from the Gaussian distribution assumption. Once these projections are obtained, they can be used for image compression, segmentation, or enhancement for visual analysis. To find these projections, a two-step iterative process is followed where they first search for a projection that maximizes a projection index based on the information divergence of the projection's estimated probability distribution from the Gaussian distribution and then reduce the rank by projecting the data onto the subspace orthogonal to the previous projections. To calculate each projection, they use a simplified approach to maximizing the projection index, which does not require an optimization algorithm. It searches for a solution by obtaining a set of candidate projections from the data and choosing the one with the highest projection index. The effectiveness of this method is demonstrated through simulated examples as well as data from the hyperspectral digital imagery collection experiment (HYDICE) and the spatially enhanced broadband array spectrograph system (SEBASS).
---
paper_title: Estimation of number of spectrally distinct signal sources in hyperspectral imagery
paper_content:
With very high spectral resolution, hyperspectral sensors can now uncover many unknown signal sources which cannot be identified by visual inspection or a priori. In order to account for such unknown signal sources, we introduce a new definition, referred to as virtual dimensionality (VD) in this paper. It is defined as the minimum number of spectrally distinct signal sources that characterize the hyperspectral data from the perspective view of target detection and classification. It is different from the commonly used intrinsic dimensionality (ID) in the sense that the signal sources are determined by the proposed VD based only on their distinct spectral properties. These signal sources may include unknown interfering sources, which cannot be identified by prior knowledge. With this new definition, three Neyman-Pearson detection theory-based thresholding methods are developed to determine the VD of hyperspectral imagery, where eigenvalues are used to measure signal energies in a detection model. In order to evaluate the performance of the proposed methods, two information criteria, an information criterion (AIC) and minimum description length (MDL), and the factor analysis-based method proposed by Malinowski, are considered for comparative analysis. As demonstrated in computer simulations, all the methods and criteria studied in this paper may work effectively when noise is independent identically distributed. This is, unfortunately, not true when some of them are applied to real image data. Experiments show that all the three eigenthresholding based methods (i.e., the Harsanyi-Farrand-Chang (HFC), the noise-whitened HFC (NWHFC), and the noise subspace projection (NSP) methods) produce more reliable estimates of VD compared to the AIC, MDL, and Malinowski's empirical indicator function, which generally overestimate VD significantly. In summary, three contributions are made in this paper, 1) an introduction of the new definition of VD, 2) three Neyman-Pearson detection theory-based thresholding methods, HFC, NWHFC, and NSP derived for VD estimation, and 3) experiments that show the AIC and MDL commonly used in passive array processing and the second-order statistic-based Malinowski's method are not effective measures in VD estimation.
---
paper_title: Modeling by shortest data description
paper_content:
The number of digits it takes to write down an observed sequence x_1, ..., x_N of a time series depends on the model with its parameters that one assumes to have generated the observed data. Accordingly, by finding the model which minimizes the description length one obtains estimates of both the integer-valued structure parameters and the real-valued system parameters.
---
paper_title: Curvilinear Component Analysis: a Self-Organizing Neural Network for Nonlinear Mapping of Data Sets
paper_content:
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
---
paper_title: Principal Component Analysis
paper_content:
When large multivariate datasets are analyzed, it is often desirable to reduce their dimensionality. Principal component analysis is one technique for doing this. It replaces the p original variables by a smaller number, q, of derived variables, the principal components, which are linear combinations of the original variables. Often, it is possible to retain most of the variability in the original variables with q very much smaller than p. Despite its apparent simplicity, principal component analysis has a number of subtleties, and it has many uses and extensions. A number of choices associated with the technique are briefly discussed, namely, covariance or correlation, how many components, and different normalization constraints, as well as confusion with factor analysis. Various uses and extensions are outlined.
---
paper_title: Detection of signals by information theoretic criteria
paper_content:
A new approach is presented to the problem of detecting the number of signals in a multichannel time-series, based on the application of the information theoretic criteria for model selection introduced by Akaike (AIC) and by Schwartz and Rissanen (MDL). Unlike the conventional hypothesis testing based approach, the new approach does not require any subjective threshold settings; the number of signals is obtained merely by minimizing the AIC or the MDL criteria. Simulation results that illustrate the performance of the new method for the detection of the number of signals received by a sensor array are presented.
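A sketch of the eigenvalue-based order-selection rule discussed above, in the usual Wax-Kailath form, is given below: for each candidate number of signals k, the criterion compares the arithmetic and geometric means of the smallest eigenvalues and adds a penalty term, and the minimizing k is returned. This is a generic sketch, not the authors' code.

import numpy as np

def estimate_num_signals(eigvals, n_samples, criterion="MDL"):
    # eigvals:   eigenvalues of the sample covariance matrix (any order, all positive)
    # n_samples: number of observation vectors used to form the covariance
    eigvals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = len(eigvals)
    scores = []
    for k in range(p):
        tail = eigvals[k:]
        arith = tail.mean()
        geom = np.exp(np.mean(np.log(tail)))
        loglik = n_samples * (p - k) * np.log(arith / geom)
        penalty = k * (2 * p - k)
        if criterion == "AIC":
            scores.append(2.0 * loglik + 2.0 * penalty)
        else:  # MDL
            scores.append(loglik + 0.5 * penalty * np.log(n_samples))
    return int(np.argmin(scores))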
---
paper_title: Enhancement of high spectral resolution remote-sensing data by a noise-adjusted principal components transform
paper_content:
High-spectral-resolution remote-sensing data are first transformed so that the noise covariance matrix becomes the identity matrix. Then the principal components transform is applied. This transform is equivalent to the maximum noise fraction transform and is optimal in the sense that it maximizes the signal-to-noise ratio (SNR) in each successive transform component, just as the principal component transform maximizes the data variance in successive components. Application of this transform requires knowledge or an estimate of the noise covariance matrix of the data. The effectiveness of this transform for noise removal is demonstrated in both the spatial and spectral domains. Results that demonstrate the enhancement of geological mapping and detection of alteration mineralogy in data from the Pilbara region of Western Australia, including mapping of the occurrence of pyrophyllite over an extended area, are presented.
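A rough sketch of the two-step transform described above (noise whitening followed by an ordinary principal components transform) is given below; it assumes the noise covariance matrix has already been estimated elsewhere, and the variable names are ours.

```python
import numpy as np

def napc_transform(X, noise_cov, q):
    """X: (n_pixels, bands) spectra; noise_cov: (bands, bands) noise covariance."""
    # Step 1: whiten the noise so that its covariance becomes the identity matrix
    w, V = np.linalg.eigh(noise_cov)
    F = V @ np.diag(1.0 / np.sqrt(w))
    Xw = (X - X.mean(axis=0)) @ F
    # Step 2: ordinary PCA on the noise-whitened data maximizes SNR per component
    vals, vecs = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(vals)[::-1][:q]
    return Xw @ vecs[:, order]
```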
---
paper_title: Exploiting manifold geometry in hyperspectral imagery
paper_content:
A new algorithm for exploiting the nonlinear structure of hyperspectral imagery is developed and compared against the de facto standard of linear mixing. This new approach seeks a manifold coordinate system that preserves geodesic distances in the high-dimensional hyperspectral data space. Algorithms for deriving manifold coordinates, such as isometric mapping (ISOMAP), have been developed for other applications. ISOMAP guarantees a globally optimal solution, but is computationally practical only for small datasets because of computational and memory requirements. Here, we develop a hybrid technique to circumvent ISOMAP's computational cost. We divide the scene into a set of smaller tiles. The manifolds derived from the individual tiles are then aligned and stitched together to recomplete the scene. Several alignment methods are discussed. This hybrid approach exploits the fact that ISOMAP guarantees a globally optimal solution for each tile and the presumed similarity of the manifold structures derived from different tiles. Using land-cover classification of hyperspectral imagery in the Virginia Coast Reserve as a test case, we show that the new manifold representation provides better separation of spectrally similar classes than one of the standard linear mixing models. Additionally, we demonstrate that this technique provides a natural data compression scheme, which dramatically reduces the number of components needed to model hyperspectral data when compared with traditional methods such as the minimum noise fraction transform.
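As a hedged illustration of the per-tile step, the fragment below derives ISOMAP manifold coordinates for the pixels of one spatial tile using scikit-learn; the tiling, alignment, and stitching described in the paper are not reproduced here, and the parameter values are arbitrary.

```python
from sklearn.manifold import Isomap

def tile_manifold_coords(tile_pixels, n_components=10, n_neighbors=12):
    """tile_pixels: (n_pixels_in_tile, bands) spectra from one spatial tile."""
    embedding = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    return embedding.fit_transform(tile_pixels)   # geodesic-distance-preserving coordinates
```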
---
paper_title: Applying nonlinear manifold learning to hyperspectral data for land cover classification
paper_content:
The shortest path k-nearest neighbor classifier (SkNN), that utilizes nonlinear manifold learning, is proposed for analysis of hyperspectral data. In contrast to classifiers that deal with the high dimensional feature space directly, this approach uses the pairwise distance matrix over a nonlinear manifold to classify novel observations. Because manifold learning preserves the local pairwise distances and updates distances of a sample to samples beyond the user-defined neighborhood along the shortest path on the manifold, similar samples are moved into closer proximity. High classification accuracies are achieved by using the simple k-nearest neighbor (kNN) classifier. SkNN was applied to hyperspectral data collected by the Hyperion sensor on the EO1 satellite over the Okavango Delta of Botswana. Classification accuracies and generalization capability are compared to those achieved by the best basis binary hierarchical classifier, the hierarchical support vector machine classifier, and the k-nearest neighbor classifier on both the original data and a subset of its principal components.
---
paper_title: Information-theory-based band selection and utility evaluation for reflective spectral systems
paper_content:
We have developed a methodology for wavelength band selection. This methodology can be used in system design studies to provide an optimal sensor cost, data reduction, and data utility trade-off relative to a specific application. The methodology combines an information theory-based criterion for band selection with a genetic algorithm to search for a near-optimal solution. We have applied this methodology to 612 material spectra from a combined database to determine the band locations for 6, 9, 15, 30, and 60-band sets in the 0.42 to 2.5 microns spectral region that permit the best material separation. These optimal band sets were then evaluated in terms of their utility related to anomaly detection and material identification using multi-band data cubes generated from two HYDICE cubes. The optimal band locations and their corresponding entropies are given in this paper. Our optimal band locations for the 6, 9, and 15-band sets are compared to the bands of existing multi-band systems such as Landsat 7, Multispectral Thermal Imager, Advanced Land Imager, Daedalus, and M7. Also presented are the anomaly detection and material identification results obtained from our generated multi-band data cubes. Comparisons are made between these exploitation results and those obtained from the original 210-band HYDICE data cubes.
---
paper_title: Hyperspectral image processing using locally linear embedding
paper_content:
We describe a method of processing hyperspectral images of natural scenes that uses a combination of k-means clustering and locally linear embedding (LLE). The primary goal is to assist anomaly detection by preserving spectral uniqueness among the pixels. In order to reduce redundancy among the pixels, adjacent pixels which are spectrally similar are grouped using the k-means clustering algorithm. Representative pixels from each cluster are chosen and passed to the LLE algorithm, where the high dimensional spectral vectors are encoded by a low dimensional mapping. Finally, monochromatic and tri-chromatic images are constructed from the k-means cluster assignments and LLE vector mappings. The method generates images where differences in the original spectra are reflected in differences in the output vector assignments. An additional benefit of mapping to a lower dimensional space is reduced data size. When spectral irregularities are added to a patch of the hyperspectral images, again the method successfully generated color assignments that detected the changes in the spectra.
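A minimal sketch of the cluster-then-embed pipeline described above is shown below using scikit-learn; the cluster count, neighborhood size, and output dimensionality are illustrative choices rather than values from the paper.

```python
from sklearn.cluster import KMeans
from sklearn.manifold import LocallyLinearEmbedding

def cluster_then_embed(pixels, n_clusters=256, out_dim=3, n_neighbors=10):
    """pixels: (n_pixels, bands) hyperspectral spectra."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    representatives = km.cluster_centers_                 # one spectrum per cluster
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=out_dim)
    coords = lle.fit_transform(representatives)           # low-dimensional mapping
    return coords[km.labels_]                             # assign each pixel its cluster's coordinates
```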
---
paper_title: Noise reduction of hyperspectral imagery using hybrid spatial-spectral derivative-domain wavelet shrinkage
paper_content:
In this paper, a new noise reduction algorithm is introduced and applied to the problem of denoising hyperspectral imagery. This algorithm resorts to the spectral derivative domain, where the noise level is elevated, and benefits from the dissimilarity of the signal regularity in the spatial and the spectral dimensions of hyperspectral images. The performance of the new algorithm is tested on two different hyperspectral datacubes: an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) datacube that is acquired in a vegetation-dominated site and a simulated AVIRIS datacube that simulates a geological site. The new algorithm provides signal-to-noise-ratio improvement up to 84.44% and 98.35% in the first and the second datacubes, respectively.
---
paper_title: Hyperspectral Subspace Identification
paper_content:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
---
paper_title: Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering
paper_content:
We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
---
paper_title: A Convex Analysis-Based Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing
paper_content:
Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing approaches to hyperspectral unmixing rely on the pure-pixel assumption, which may be violated for highly mixed data. A heuristic unmixing criterion without requiring the pure-pixel assumption has been reported by Craig: The endmember estimates are determined by the vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, using convex analysis, we show that the hyperspectral unmixing by Craig's criterion can be formulated as an optimization problem of finding a minimum-volume enclosing simplex (MVES). An algorithm that cyclically solves the MVES problem via linear programs (LPs) is also proposed. Some Monte Carlo simulations are provided to demonstrate the efficacy of the proposed MVES algorithm.
---
paper_title: Minimum-volume transforms for remotely sensed data
paper_content:
Scatter diagrams for multispectral remote sensing data tend to be triangular, in the two-band case, pyramidal for three bands, and so on. They radiate away from the so-called darkpoint, which represents the scanner's response to an un-illuminated target. A minimum-volume transform may be described (provisionally) as a nonorthogonal linear transformation of the multivariate data to new axes passing through the dark point, with directions chosen such that they (for two bands), or the new coordinate planes (for three bands, etc.) embrace the data cloud as tightly as possible. The reason for the observed shapes of scatter diagrams is to be found in the theory of linear mixing at the subfootprint scale. Thus, suitably defined, minimum-volume transforms can often be used to unmix images into new spatial variables showing the proportions of the different cover types present, a type of enhancement that is not only intense, but physically meaningful. The present paper furnishes details for constructing computer programs to effect this operation. It will serve as a convenient technical source that may be referenced in subsequent, more profusely illustrated publications that address the intended application, the mapping of surface mineralogy.
---
paper_title: Spectral mixture analysis for subpixel vegetation fractions in the urban environment: How to incorporate endmember variability?
paper_content:
In the urban environment both quality of life and surface biophysical processes are closely related to the presence of vegetation. Spectral mixture analysis (SMA) has been frequently used to derive subpixel vegetation information from remotely sensed imagery in urban areas, where the underlying landscapes are assumed to be composed of a few fundamental components, called endmembers. A critical step in SMA is to identify the endmembers and their corresponding spectral signatures. A common practice in SMA assumes a constant spectral signature for each endmember. In fact, the spectral signatures of endmembers may vary from pixel to pixel due to changes in biophysical (e.g. leaves, stems and bark) and biochemical (e.g. chlorophyll content) composition. This study developed a Bayesian Spectral Mixture Analysis (BSMA) model to understand the impact of endmember variability on the derivation of subpixel vegetation fractions in an urban environment. BSMA incorporates endmember spectral variability in the unmixing process based on Bayes Theorem. In traditional SMA, each endmember is represented by a constant signature, while BSMA uses the endmember signature probability distribution in the analysis. BSMA has the advantage of maximally capturing the spectral variability of an image with the least number of endmembers. In this study, the BSMA model is first applied to simulated images, and then to Ikonos and Landsat ETM+ images. BSMA leads to an improved estimate of subpixel vegetation fractions, and provides uncertainty information for the estimates. The study also found that the traditional SMA using the statistical means of the signature distributions as endmember signatures produces subpixel endmember fractions with almost the same and sometimes even better accuracy than those from BSMA except without uncertainty information for the estimates. However, using the modes of signature distributions as endmembers may result in serious bias in subpixel endmember fractions derived from traditional SMA.
---
paper_title: Endmember bundles: a new approach to incorporating endmember variability into spectral mixture analysis
paper_content:
Accuracy of vegetation cover fractions, computed with spectral mixture analysis, may be compromised by variation in canopy structure and biochemistry when a single endmember represents top-of-canopy reflectance. In this article, endmember variability is incorporated into mixture analysis by representing each endmember by a set or bundle of spectra, each of which could reasonably be the reflectance of an instance of the endmember. Endmember bundles are constructed from the data itself by an extension to a previously described method of manually deriving endmembers from remotely sensed data. Applied to remotely sensed images, bundle unmixing produces maximum and minimum fraction images bounding the correct cover fractions and specifying error due to endmember variability. In this article, endmember bundles and bounding fraction images were created for an airborne visible/infrared imaging spectrometer (AVIRIS) subscene simulated with a canopy radiative transfer/geometric-optical model. Variation in endmember reflectance was achieved using ranges of parameter values including leaf area index (LAI) and tissue optical properties observed in a North Texas savanna. The subscene's spatial pattern was based on a 1992 Landsat Thematic Mapper image of the study region. Bounding fraction images bracketed the cover fractions of the simulated data for 98% of the pixels for soil, 97% for senescent grass and 93% for trees. Averages of bounding images estimated fractional coverage used in the simulation with an average error of ≤0.05, a significant improvement over previous methods with important implications for regional-scale research on vegetation extent and dynamics.
---
paper_title: Sequential N-FINDR algorithms
paper_content:
The N-finder algorithm (N-FINDR) is probably one of the most popular and widely used algorithms for endmember extraction. Three major obstacles need to be overcome in its practical implementation. One is that the number of endmembers must be known a priori. A second is the use of random initial endmembers to initialize N-FINDR, which results in inconsistent final results of extracted endmembers. A third is its very expensive computational cost caused by an exhaustive search. While the first two issues can be resolved by a recently developed concept, virtual dimensionality (VD), and by custom-designed initialization algorithms respectively, the third issue remains challenging. This paper addresses the latter issue by re-designing N-FINDR to generate one endmember at a time sequentially, in a successive fashion, to ease computational complexity. The resulting algorithm is called SeQuential N-FINDR (SQ N-FINDR), as opposed to the original N-FINDR, referred to as SiMultaneous N-FINDR (SM N-FINDR), which generates all endmembers simultaneously at once. Two variants of SQ N-FINDR can be further derived to reduce computational complexity. Interestingly, experimental results show that SQ N-FINDR can perform as well as SM N-FINDR if initial endmembers are appropriately selected. Keywords: endmember extraction; mixed pixels; subpixel targets; Target Embeddedness (TE); Target Implantation (TI).
---
paper_title: Autonomous single-pass endmember approximation using lattice auto-associative memories
paper_content:
We propose a novel method for the autonomous determination of endmembers that employs recent results from the theory of lattice based auto-associative memories. In contrast to several other existing methods, the endmembers determined by the proposed method are physically linked to the data set spectra. Numerical examples are provided to illustrate lattice theoretical concepts and a hyperspectral image subcube, from the Cuprite site in Nevada, is used to find all endmember candidates in a single pass.
---
paper_title: N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data
paper_content:
The analysis of hyperspectral data sets requires the determination of certain basis spectra called 'end-members.' Once these spectra are found, the image cube can be 'unmixed' into the fractional abundance of each material in each pixel. There exist several techniques for accomplishing the determination of the end-members, most of which involve the intervention of a trained geologist. Often these-end-members are assumed to be present in the image, in the form of pure, or unmixed, pixels. In this paper a method based upon the geometry of convex sets is proposed to find a unique set of purest pixels in an image. The technique is based on the fact that in N spectral dimensions, the N-volume contained by a simplex formed of the purest pixels is larger than any other volume formed from any other combination of pixels. The algorithm works by 'inflating' a simplex inside the data, beginning with a random set of pixels. For each pixel and each end-member, the end-member is replaced with the spectrum of the pixel and the volume is recalculated. If it increases, the spectrum of the new pixel replaces that end-member. This procedure is repeated until no more replacements are done. This algorithm successfully derives end-members in a synthetic data set, and appears robust with less than perfect data. Spectral end-members have been extracted for the AVIRIS Cuprite data set which closely match reference spectra, and resulting abundance maps match published mineral maps.
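The simplex-inflation procedure can be sketched compactly. The version below is a naive illustration (it recomputes determinant-based volumes for every candidate replacement and assumes the data have already been reduced to p-1 dimensions); it is not the authors' implementation.

```python
import numpy as np

def simplex_volume(E):
    """E: (p, p-1) matrix of endmember candidates in reduced dimensions."""
    return abs(np.linalg.det((E[1:] - E[0]).T))   # proportional to the simplex volume

def nfindr(data, p, seed=0):
    """data: (n_pixels, p-1) pixels already reduced to p-1 dimensions."""
    rng = np.random.default_rng(seed)
    E = data[rng.choice(len(data), size=p, replace=False)].copy()
    improved = True
    while improved:                               # keep inflating until no swap helps
        improved = False
        for j in range(p):
            for x in data:
                trial = E.copy()
                trial[j] = x                      # tentatively replace endmember j
                if simplex_volume(trial) > simplex_volume(E):
                    E, improved = trial, True
    return E
```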
---
paper_title: The sequential maximum angle convex cone (SMACC) endmember model
paper_content:
A new endmember extraction method has been developed that is based on a convex cone model for representing vector data. The endmembers are selected directly from the data set. The algorithm for finding the endmembers is sequential: the convex cone model starts with a single endmember and increases incrementally in dimension. Abundance maps are simultaneously generated and updated at each step. A new endmember is identified based on the angle it makes with the existing cone. The data vector making the maximum angle with the existing cone is chosen as the next endmember to add to enlarge the endmember set. The algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain positive or zero. The algorithm terminates when all of the data vectors are within the convex cone, to some tolerance. The method offers advantages for hyperspectral data sets where high correlation among channels and pixels can impair un-mixing by standard techniques. The method can also be applied as a band-selection tool, finding end-images that are unique and forming a convex cone for modeling the remaining hyperspectral channels. The method is described and applied to hyperspectral data sets.
---
paper_title: A New Growing Method for Simplex-Based Endmember Extraction Algorithm
paper_content:
A new growing method for simplex-based endmember extraction algorithms (EEAs), called simplex growing algorithm (SGA), is presented in this paper. It is a sequential algorithm to find a simplex with the maximum volume every time a new vertex is added. In order to terminate this algorithm a recently developed concept, virtual dimensionality (VD), is implemented as a stopping rule to determine the number of vertices required for the algorithm to generate. The SGA improves one commonly used EEA, the N-finder algorithm (N-FINDR) developed by Winter, by including a process of growing simplexes one vertex at a time until it reaches a desired number of vertices estimated by the VD, which results in a tremendous reduction of computational complexity. Additionally, it also judiciously selects an appropriate initial vector to avoid a dilemma caused by the use of random vectors as its initial condition in the N-FINDR where the N-FINDR generally produces different sets of final endmembers if different sets of randomly generated initial endmembers are used. In order to demonstrate the performance of the proposed SGA, the N-FINDR and two other EEAs, pixel purity index, and vertex component analysis are used for comparison.
---
paper_title: Two lattice computing approaches for the unsupervised segmentation of hyperspectral images
paper_content:
Endmembers for the spectral unmixing analysis of hyperspectral images are sets of affinely independent vectors, which define a convex polytope covering the data points that represent the pixel image spectra. Strong lattice independence (SLI) is a property defined in the context of lattice associative memories convergence analysis. Recent results show that SLI implies affine independence, confirming the value of lattice associative memories for the study of endmember induction algorithms. In fact, SLI vector sets can be easily deduced from the vectors composing the lattice auto-associative memories (LAM). However, the number of candidate endmembers found by this algorithm is very large, so that some selection algorithm is needed to obtain the full benefits of the approach. In this paper we explore the unsupervised segmentation of hyperspectral images based on the abundance images computed, first, by an endmember selection algorithm and, second, by a previously proposed heuristically defined algorithm. We find their results comparable on a qualitative basis.
---
paper_title: Vertex component analysis: a fast algorithm to unmix hyperspectral data
paper_content:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
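A much-simplified sketch of the projection idea behind VCA follows: each new vertex is the pixel with the largest projection onto a direction orthogonal to the span of the endmembers found so far. The published algorithm additionally uses SNR-dependent subspace projections and scaling steps that are omitted here; all names are illustrative.

```python
import numpy as np

def vca_like(data, p, seed=0):
    """data: (n_pixels, bands); returns indices of p candidate endmember pixels."""
    rng = np.random.default_rng(seed)
    n_pixels, bands = data.shape
    indices, E = [], np.zeros((bands, 0))
    for _ in range(p):
        w = rng.standard_normal(bands)            # random direction
        if E.shape[1] > 0:
            w -= E @ np.linalg.pinv(E) @ w        # remove the part lying in span(E)
        k = int(np.argmax(np.abs(data @ w)))      # extreme pixel along w
        indices.append(k)
        E = np.column_stack([E, data[k]])
    return indices
```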
---
paper_title: Mapping target signatures via partial unmixing of AVIRIS data
paper_content:
A complete spectral unmixing of a complicated AVIRIS scene may not always be possible or even desired. High quality data of spectrally complex areas are very high dimensional and are consequently difficult to fully unravel. Partial unmixing provides a method of solving only that fraction of the data inversion problem that directly relates to the specific goals of the investigation. Many applications of imaging spectrometry can be cast in the form of the following question: 'Are my target signatures present in the scene, and if so, how much of each target material is present in each pixel?' This is a partial unmixing problem. The number of unmixing endmembers is one greater than the number of spectrally defined target materials. The one additional endmember can be thought of as the composite of all the other scene materials, or 'everything else'. Several workers have proposed partial unmixing schemes for imaging spectrometry data, but each has significant limitations for operational application. The low probability detection methods described by Farrand and Harsanyi and the foreground-background method of Smith et al are both examples of such partial unmixing strategies. The new method presented here builds on these innovative analysis concepts, combining their different positive attributes while attempting to circumvent their limitations. This new method partially unmixes AVIRIS data, mapping apparent target abundances, in the presence of an arbitrary and unknown spectrally mixed background. It permits the target materials to be present in abundances that drive significant portions of the scene covariance. Furthermore it does not require a priori knowledge of the background material spectral signatures. The challenge is to find the proper projection of the data that hides the background variance while simultaneously maximizing the variance amongst the targets.
---
paper_title: A Simplex Volume Maximization Framework for Hyperspectral Endmember Extraction
paper_content:
In the late 1990s, Winter proposed an endmember extraction belief that has much impact on endmember extraction techniques in hyperspectral remote sensing. The idea is to find a maximum-volume simplex whose vertices are drawn from the pixel vectors. Winter's belief has stimulated much interest, resulting in many different variations of pixel search algorithms, widely known as N-FINDR, being proposed. In this paper, we take a continuous optimization perspective to revisit Winter's belief, where the aim is to provide an alternative framework of formulating and understanding Winter's belief in a systematic manner. We first prove that, fundamentally, the existence of pure pixels is not only sufficient for the Winter problem to perfectly identify the ground-truth endmembers but also necessary. Then, under the umbrella of the Winter problem, we derive two methods using two different optimization strategies. One is by alternating optimization. The resulting algorithm turns out to be an N-FINDR variant, but, with the proposed formulation, we can pin down some of its convergence characteristics. Another is by successive optimization; interestingly, the resulting algorithm is found to exhibit some similarity to vertex component analysis. Hence, the framework provides linkage and alternative interpretations to these existing algorithms. Furthermore, we propose a robust worst case generalization of the Winter problem for accounting for perturbed pixel effects in the noisy scenario. An algorithm combining alternating optimization and projected subgradients is devised to deal with the problem. We use both simulations and real data experiments to demonstrate the viability and merits of the proposed algorithms.
---
paper_title: A lattice matrix method for hyperspectral image unmixing
paper_content:
In this manuscript we propose a method for the autonomous determination of endmembers in hyperspectral imagery based on recent theoretical advancements on lattice auto-associative memories. Given a hyperspectral image, the lattice algebra approach finds in a single-pass all possible candidate endmembers from which various affinely independent sets of final endmembers may be derived. In contrast to other endmember detection methods, the endmembers found using two dual canonical lattice matrices are geometrically linked to the data set spectra. The mathematical foundation of the proposed method is first described in some detail followed by application examples that illustrate the key steps of the proposed lattice based method.
---
paper_title: Unmixing of Hyperspectral Images using Bayesian Non-negative Matrix Factorization with Volume Prior
paper_content:
Hyperspectral imaging can be used in assessing the quality of foods by decomposing the image into constituents such as protein, starch, and water. Observed data can be considered a mixture of underlying characteristic spectra (endmembers), and estimating the constituents and their abundances requires efficient algorithms for spectral unmixing. We present a Bayesian spectral unmixing algorithm employing a volume constraint and propose an inference procedure based on Gibbs sampling. We evaluate the method on synthetic and real hyperspectral data of wheat kernels. Results show that our method performs as well as or better than existing volume constrained methods. Further, our method gives credible intervals for the endmembers and abundances, which allows us to assess the confidence of the results.
---
paper_title: Piece-wise convex spatial-spectral unmixing of hyperspectral imagery using possibilistic and fuzzy clustering
paper_content:
Imaging spectroscopy refers to methods for identifying materials in a scene using cameras that digitize light into hundreds of spectral bands. Each pixel in these images consists of vectors representing the amount of light reflected in the different spectral bands from the physical location corresponding to the pixel. Images of this type are called hyperspectral images. Hyperspectral image analysis differs from traditional image analysis in that, in addition to the spatial information inherent in an image, there is abundant spectral information at the pixel or sub-pixel level that can be used to identify materials in the scene. Spectral unmixing techniques attempt to identify the material spectra in a scene down to the sub-pixel level. In this paper, a piece-wise convex hyperspectral unmixing algorithm using both spatial and spectral image information is presented. The proposed method incorporates possibilistic and fuzzy clustering methods. The typicality and membership estimates from those methods can be combined with traditional material proportion estimates to produce more meaningful proportion estimates than obtained with previous spectral unmixing algorithms. An analysis of the utility of using all three estimates produce a better estimate is given using real hyperspectral imagery.
---
paper_title: Collaborative nonnegative matrix factorization for remotely sensed hyperspectral unmixing
paper_content:
In this paper, we develop a new algorithm for hyperspectral unmixing which can provide suitable endmembers (and their corresponding abundances) in a single step. Hence, the algorithm does not require a previous subspace identification step to estimate the number of endmembers as it can cope with the two most likely scenarios in practice (i.e., the number of endmembers is correctly determined or overestimated a priori). The proposed approach, termed collaborative NMF (CoNMF), uses a collaborative regularization prior which forces the abundances corresponding to the overestimated endmembers to zero, such that it is guaranteed that only the true endmembers have fractional abundance contributions and the estimation of the number of endmembers is not required in advance. The obtained experimental results demonstrate that the proposed method exhibits very good performance in case the number of endmembers is not available a priori.
---
paper_title: Spatially-smooth piece-wise convex endmember detection
paper_content:
An endmember detection and spectral unmixing algorithm that uses both spatial and spectral information is presented. This method, Spatial Piece-wise Convex Multiple Model Endmember Detection (Spatial P-COMMEND), autonomously estimates multiple sets of endmembers and performs spectral unmixing for input hyperspectral data. Spatial P-COMMEND does not restrict the estimated endmembers to define a single convex region during spectral unmixing. Instead, a piece-wise convex representation is used that can effectively represent non-convex hyperspectral data. Spatial P-COMMEND drives neighboring pixels to be unmixed by the same set of endmembers encouraging spatially-smooth unmixing results.
---
paper_title: Minimum Volume Simplex Analysis: A Fast Algorithm to Unmix Hyperspectral Data
paper_content:
This paper presents a new method of minimum volume class for hyperspectral unmixing, termed minimum volume simplex analysis (MVSA). The underlying mixing model is linear; i.e., the mixed hyperspectral vectors are modeled by a linear mixture of the end-member signatures weighted by the correspondent abundance fractions. MVSA approaches hyperspectral unmixing by fitting a minimum volume simplex to the hyperspectral data, constraining the abundance fractions to belong to the probability simplex. The resulting optimization problem is solved by implementing a sequence of quadratically constrained subproblems. In a final step, the hard constraint on the abundance fractions is replaced with a hinge type loss function to account for outliers and noise. We illustrate the state-of-the-art performance of the MVSA algorithm in unmixing simulated data sets. We are mainly concerned with the realistic scenario in which the pure pixel assumption (i.e., there exists at least one pure pixel per end member) is not fulfilled. In these conditions, the MVSA yields much better performance than the pure pixel based algorithms.
---
paper_title: Bayesian nonnegative Matrix Factorization with volume prior for unmixing of hyperspectral images
paper_content:
In hyperspectral image analysis the objective is to unmix a set of acquired pixels into pure spectral signatures (end-members) and corresponding fractional abundances. The Non-negative Matrix Factorization (NMF) methods have received a lot of attention for this unmixing process. Many of these NMF based unmixing algorithms are based on sparsity regularization encouraging pure spectral endmembers, but this is not optimal for certain applications, such as foods, where abundances are not sparse. The pixels will theoretically lie on a simplex and hence the endmembers can be estimated as the vertices of the smallest enclosing simplex. In this context we present a Bayesian framework employing a volume constraint for the NMF algorithm, where the posterior distribution is numerically sampled from using a Gibbs sampling procedure. We evaluate the method on synthetic and real hyperspectral data of wheat kernels.
---
paper_title: A comparison of deterministic and probabilistic approaches to endmember representation
paper_content:
The piece-wise convex multiple model endmember detection algorithm (P-COMMEND) and the Piece-wise Convex End-member detection (PCE) algorithm autonomously estimate many sets of endmembers to represent a hyperspectral image. A piece-wise convex model with several sets of endmembers is more effective for representing non-convex hyperspectral imagery over the standard convex geometry model (or linear mixing model). The terms of the objective function in P-COMMEND are based on geometric properties of the input data and the endmember estimates. In this paper, the P-COMMEND algorithm is extended to autonomously determine the number of sets of endmembers needed. The number of sets of endmembers, or convex regions, is determined by incorporating the competitive agglomeration algorithm into P-COMMEND. Results are shown comparing the Competitive Agglomeration P-COMMEND (CAP) algorithm to results found using the statistical PCE endmember detection method.
---
paper_title: Modelling cognitive representations
paper_content:
This thesis analyzes the modelling of visual cognitive representations based on extracting cognitive components from the MNIST dataset of handwritten digits using simple unsupervised linear and non-linear matrix factorizations, both non-negative and unconstrained, based on gradient descent learning. We introduce two different classes of generative models for modelling the cognitive data: Mixture Models and Deep Network Models. Mixture models based on K-Means, Gaussian and factor analyzer kernel functions are presented as simple generative models in a general framework. From simulations we analyze the generative properties of these models and show how they prove insufficient to properly model the complex distribution of the visual cognitive data. Motivated by the introduction of deep belief nets by Hinton et al. [12] we propose a simpler generative deep network model based on cognitive components. A theoretical framework is presented as individual modules for building a generative hierarchical network model. We analyze the performance in terms of classification and generation of MNIST digits and show how our simplifications compared to Hinton et al. [12] lead to degraded performance. In this respect we outline the differences and conjecture obvious improvements.
---
paper_title: A Variable Splitting Augmented Lagrangian Approach to Linear Spectral Unmixing
paper_content:
This paper presents a new linear hyperspectral unmixing method of the minimum volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig's seminal ideas, hyperspectral linear unmixing amounts to finding the minimum volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able to solve problems far beyond the reach of the current state-of-the-art algorithms. The effectiveness of SISAL is illustrated with simulated data.
---
paper_title: Fully Constrained Least Squares Spectral Unmixing by Simplex Projection
paper_content:
We present a new algorithm for linear spectral mixture analysis, which is capable of supervised unmixing of hyperspectral data while respecting the constraints on the abundance coefficients. This simplex-projection unmixing algorithm is based upon the equivalence of the fully constrained least squares problem and the problem of projecting a point onto a simplex. We introduce several geometrical properties of high-dimensional simplices and combine them to yield a recursive algorithm for solving the simplex-projection problem. A concrete implementation of the algorithm for large data sets is provided, and the algorithm is benchmarked against well-known fully constrained least squares unmixing (FCLSU) techniques, on both artificial data sets and real hyperspectral data collected over the Cuprite mining region. Unlike previous algorithms for FCLSU, the presented algorithm possesses no optimization steps and is completely analytical, severely reducing the required processing power.
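The core geometric operation, Euclidean projection of a point onto the probability simplex, can be sketched with the standard sorting-based procedure below. This is only the building block; the paper's recursive simplex-projection unmixing algorithm, which applies it in the geometry defined by the endmembers, is more involved.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                                   # sort in descending order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)                      # nonnegative and summing to one
```

As a rough usage note, an approximate abundance vector for a pixel can be obtained by projecting an unconstrained least-squares solution onto the simplex with this routine, although that shortcut is not equivalent to the exact fully constrained solution the paper derives.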
---
paper_title: ICE: a statistical approach to identifying endmembers in hyperspectral images
paper_content:
Several of the more important endmember-finding algorithms for hyperspectral data are discussed and some of their shortcomings highlighted. A new algorithm - iterated constrained endmembers (ICE) - which attempts to address these shortcomings is introduced. An example of its use is given. There is also a discussion of the advantages and disadvantages of normalizing spectra before the application of ICE or other endmember-finding algorithms.
---
paper_title: Endmember Extraction From Highly Mixed Data Using Minimum Volume Constrained Nonnegative Matrix Factorization
paper_content:
Endmember extraction is a process to identify the hidden pure source signals from the mixture. In the past decade, numerous algorithms have been proposed to perform this estimation. One commonly used assumption is the presence of pure pixels in the given image scene, which are detected to serve as endmembers. When such pixels are absent, the image is referred to as the highly mixed data, for which these algorithms at best can only return certain data points that are close to the real endmembers. To overcome this problem, we present a novel method without the pure-pixel assumption, referred to as the minimum volume constrained nonnegative matrix factorization (MVC-NMF), for unsupervised endmember extraction from highly mixed image data. Two important facts are exploited: First, the spectral data are nonnegative; second, the simplex volume determined by the endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. The proposed method takes advantage of the fast convergence of NMF schemes, and at the same time eliminates the pure-pixel assumption. The experimental results based on a set of synthetic mixtures and a real image scene demonstrate that the proposed method outperforms several other advanced endmember detection approaches
---
paper_title: Multispectral and hyperspectral image analysis with convex cones
paper_content:
A new approach to multispectral and hyperspectral image analysis is presented. This method, called convex cone analysis (CCA), is based on the fact that some physical quantities such as radiance are nonnegative. The vectors formed by discrete radiance spectra are linear combinations of nonnegative components, and they lie inside a nonnegative, convex region. The object of CCA is to find the boundary points of this region, which can be used as endmember spectra for unmixing or as target vectors for classification. To implement this concept, the authors find the eigenvectors of the sample spectral correlation matrix of the image. Given the number of endmembers or classes, they select as many eigenvectors corresponding to the largest eigenvalues. These eigenvectors are used as a basis to form linear combinations that have only nonnegative elements, and thus they lie inside a convex cone. The vertices of the convex cone will be those points whose spectral vector contains as many zero elements as the number of eigenvectors minus one. Accordingly, a mixed pixel can be decomposed by identifying the vertices that were used to form its spectrum. An algorithm for finding the convex cone boundaries is presented, and applications to unsupervised unmixing and classification are demonstrated with simulated data as well as experimental data from the hyperspectral digital imagery collection experiment (HYDICE).
---
paper_title: Multiple model endmember detection based on spectral and spatial information
paper_content:
We introduce a new spectral mixture analysis approach. Unlike most available approaches that only use the spectral information, this approach uses the spectral and spatial information available in the hyperspectral data. Moreover, it does not assume a global convex geometry model that encompasses all the data but rather multiple local convex models. Both the multiple model boundaries and the model's endmembers and abundances are fuzzy. This allows points to belong to multiple groups with different membership degrees. Our approach is based on minimizing a joint objective function to simultaneously learn the underlying fuzzy multiple convex geometry models and find a robust estimate of the model's endmembers and abundances.
---
paper_title: Algorithms for Non-negative Matrix Factorization
paper_content:
Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
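The multiplicative update rules for the least-squares objective ||V - WH||_F^2 analyzed in this paper can be written in a few lines; the small epsilon guarding against division by zero and the fixed iteration count are implementation conveniences, not part of the original analysis.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, seed=0, eps=1e-9):
    """V: (m, n) nonnegative matrix; returns nonnegative factors W (m, rank), H (rank, n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H
```

In spectral-unmixing terms, with V arranged as bands by pixels, the columns of W can be read as endmember-like signatures and H as unconstrained abundance estimates; plain NMF does not enforce the sum-to-one constraint.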
---
paper_title: Fully Constrained Linear Spectral Unmixing: Analytic Solution Using Fuzzy Sets
paper_content:
The linear mixture model is a convenient way to describe image pixels as a linear combination of pure spectra - termed end-members. The fractional contribution from each end-member is calculated through inversion of the linear model. Despite the simplicity of the model, a nonnegativity constraint that is imposed on the fractions leads to an unmixing problem for which it is hard to find a closed analytical solution. Current solutions to this problem involve iterative algorithms, which are computationally intensive and not appropriate for unmixing a large number of pixels. This paper presents an algorithm to build fuzzy membership functions that are equivalent to the least square solution of the fully constrained linear spectral unmixing problem. The efficiency and effectiveness of the proposed solution is demonstrated using both simulated and real data.
---
paper_title: Chance-Constrained Robust Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing
paper_content:
Effective unmixing of hyperspectral data cube under a noisy scenario has been a challenging research problem in remote sensing arena. A branch of existing hyperspectral unmixing algorithms is based on Craig's criterion, which states that the vertices of the minimum-volume simplex enclosing the hyperspectral data should yield high fidelity estimates of the endmember signatures associated with the data cloud. Recently, we have developed a minimum-volume enclosing simplex (MVES) algorithm based on Craig's criterion and validated that the MVES algorithm is very useful to unmix highly mixed hyperspectral data. However, the presence of noise in the observations expands the actual data cloud, and as a consequence, the endmember estimates obtained by applying Craig-criterion-based algorithms to the noisy data may no longer be in close proximity to the true endmember signatures. In this paper, we propose a robust MVES (RMVES) algorithm that accounts for the noise effects in the observations by employing chance constraints. These chance constraints in turn control the volume of the resulting simplex. Under the Gaussian noise assumption, the chance-constrained MVES problem can be formulated into a deterministic nonlinear program. The problem can then be conveniently handled by alternating optimization, in which each subproblem involved is handled by using sequential quadratic programming solvers. The proposed RMVES is compared with several existing benchmark algorithms, including its predecessor, the MVES algorithm. Monte Carlo simulations and real hyperspectral data experiments are presented to demonstrate the efficacy of the proposed RMVES algorithm.
---
paper_title: Robust Endmember detection using L1 norm factorization
paper_content:
The results from L1-Endmembers display the algorithm's stability and accuracy with increasing levels of noise. The algorithm was extremely stable in the number of endmembers when compared to the SPICE algorithm and the Virtual Dimensionality methods for estimating the number of endmembers. Furthermore, the results shown for this algorithm were generated with the same parameter set for all of the data sets, from two-dimensional data to 51-dimensional real hyperspectral data. This indicates L1-Endmembers may lack sensitivity to parameter value settings.
---
paper_title: Hyperspectral Unmixing via $L_{1/2}$ Sparsity-Constrained Nonnegative Matrix Factorization
paper_content:
Hyperspectral unmixing is a crucial preprocessing step for material classification and recognition. In the last decade, nonnegative matrix factorization (NMF) and its extensions have been intensively studied to unmix hyperspectral imagery and recover the material end-members. As an important constraint for NMF, sparsity has been modeled making use of the L1 regularizer. Unfortunately, the L1 regularizer cannot enforce further sparsity when the full additivity constraint of material abundances is used, hence limiting the practical efficacy of NMF methods in hyperspectral unmixing. In this paper, we extend the NMF method by incorporating the L1/2 sparsity constraint, which we name L1/2-NMF. The L1/2 regularizer not only induces sparsity but is also a better choice among Lq (0 < q < 1) regularizers. We propose an iterative estimation algorithm for L1/2-NMF, which provides sparser and more accurate results than those delivered using the L1 norm. We illustrate the utility of our method on synthetic and real hyperspectral data and compare our results to those yielded by other state-of-the-art methods.
---
paper_title: PCE: Piecewise Convex Endmember Detection
paper_content:
A new hyperspectral endmember detection method that represents endmembers as distributions, autonomously partitions the input data set into several convex regions, and simultaneously determines endmember distributions (EDs) and proportion values for each convex region is presented. Spectral unmixing methods that treat endmembers as distributions or hyperspectral images as piecewise convex data sets have not been previously developed. Piecewise convex endmember (PCE) detection can be viewed in two parts. The first part, the ED detection algorithm, estimates a distribution for each endmember rather than estimating a single spectrum. By using EDs, PCE can incorporate an endmember's inherent spectral variation and the variation due to changing environmental conditions. ED uses a new sparsity-promoting polynomial prior while estimating abundance values. The second part of PCE partitions the input hyperspectral data set into convex regions and estimates EDs and proportions for each of these regions. The number of convex regions is determined autonomously using the Dirichlet process. PCE is effective at handling highly mixed hyperspectral images where all of the pixels in the scene contain mixtures of multiple endmembers. Furthermore, each convex region found by PCE conforms to the convex geometry model for hyperspectral imagery. This model requires that the proportions associated with a pixel be nonnegative and sum to one. Algorithm results on hyperspectral data indicate that PCE produces endmembers that represent the true ground-truth classes of the input data set. The algorithm can also effectively represent endmembers as distributions, thus incorporating an endmember's spectral variability.
---
paper_title: Algorithm taxonomy for hyperspectral unmixing
paper_content:
In this paper, we introduce a set of taxonomies that hierarchically organize and specify algorithms associated with hyperspectral unmixing. Our motivation is to collectively organize and relate algorithms in order to assess the current state-of-the-art in the field and to facilitate objective comparisons between methods. The hyperspectral sensing community is populated by investigators with disparate scientific backgrounds and, speaking in their respective languages, efforts in spectral unmixing developed within disparate communities have inevitably led to duplication. We hope our analysis removes this ambiguity and redundancy by using a standard vocabulary, and that the presentation we provide clearly summarizes what has and has not been done. As we shall see, the framework for the taxonomies derives its organization from the fundamental, philosophical assumptions imposed on the problem, rather than the common calculations they perform, or the similar outputs they might yield.
---
paper_title: Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery
paper_content:
This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for non-negativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images.
---
paper_title: Implementation strategies for hyperspectral unmixing using Bayesian source separation
paper_content:
Bayesian positive source separation (BPSS) is a useful unsupervised approach for hyperspectral data unmixing, where numerical nonnegativity of spectra and abundances has to be ensured, such as in remote sensing. Moreover, it is sensible to impose a sum-to-one (full additivity) constraint to the estimated source abundances in each pixel. Even though nonnegativity and full additivity are two necessary properties to get physically interpretable results, the use of BPSS algorithms has so far been limited by high computation time and large memory requirements due to the Markov chain Monte Carlo calculations. An implementation strategy that allows one to apply these algorithms on a full hyperspectral image, as it is typical in earth and planetary science, is introduced. The effects of pixel selection and the impact of such sampling on the relevance of the estimated component spectra and abundance maps, as well as on the computation times, are discussed. For that purpose, two different data sets have been used: a synthetic one and a real hyperspectral image from Mars.
---
paper_title: Hyperspectral Unmixing Based on Mixtures of Dirichlet Components
paper_content:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to the statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed where the following are observed: 1) The number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
---
paper_title: Bayesian separation of spectral sources under non-negativity and full additivity constraints
paper_content:
This paper addresses the problem of separating spectral sources which are linearly mixed with unknown proportions. The main difficulty of the problem is to ensure the full additivity (sum-to-one) of the mixing coefficients and non-negativity of sources and mixing coefficients. A Bayesian estimation approach based on Gamma priors was recently proposed to handle the non-negativity constraints in a linear mixture model. However, incorporating the full additivity constraint requires further developments. This paper studies a new hierarchical Bayesian model appropriate to the non-negativity and sum-to-one constraints associated to the regressors and regression coefficients of linear mixtures. The estimation of the unknown parameters of this model is performed using samples generated using an appropriate Gibbs sampler. The performance of the proposed algorithm is evaluated through simulation results conducted on synthetic mixture models. The proposed approach is also applied to the processing of multicomponent chemical mixtures resulting from Raman spectroscopy.
---
paper_title: Unmixing Hyperspectral Data
paper_content:
In hyperspectral imagery one pixel typically consists of a mixture of the reflectance spectra of several materials, where the mixture coefficients correspond to the abundances of the constituting materials. We assume linear combinations of reflectance spectra with some additive normal sensor noise and derive a probabilistic MAP framework for analyzing hyperspectral data. As the material reflectance characteristics are not known a priori, we face the problem of unsupervised linear unmixing. The incorporation of different prior information (e.g. positivity and normalization of the abundances) naturally leads to a family of interesting algorithms, for example in the noise-free case yielding an algorithm that can be understood as constrained independent component analysis (ICA). Simulations underline the usefulness of our theory.
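A minimal sketch of how abundances can be estimated under the positivity and normalization priors mentioned above, not the paper's MAP framework itself: non-negative least squares with the sum-to-one constraint approximated by the common trick of appending a heavily weighted row of ones. The endmember matrix E, the pixel y, and the weight delta are assumed placeholders.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(E, y, delta=1e3):
        """E: (bands, n_endmembers) endmember matrix, y: (bands,) pixel spectrum."""
        E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])  # append sum-to-one row
        y_aug = np.append(y, delta)                               # matching target value
        a, _ = nnls(E_aug, y_aug)                                 # non-negative least squares
        return a

    # toy usage with synthetic data
    rng = np.random.default_rng(0)
    E = rng.random((50, 3))                       # 3 hypothetical endmembers over 50 bands
    a_true = np.array([0.6, 0.3, 0.1])
    y = E @ a_true + 0.001 * rng.standard_normal(50)
    print(unmix_pixel(E, y))                      # close to a_true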
---
paper_title: Does independent component analysis play a role in unmixing hyperspectral data
paper_content:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible, and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performances. We conclude that the accuracy of these methods tends to improve with the increase of the signature variability, of the number of endmembers, and of the signal-to-noise ratio. In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
---
paper_title: Blind Decomposition of Transmission Light Microscopic Hyperspectral Cube Using Sparse Representation
paper_content:
In this paper, we address the problem of fully automated decomposition of hyperspectral images for transmission light microscopy. The hyperspectral images are decomposed into spectrally homogeneous compounds. The resulting compounds are described by their spectral characteristics and optical density. We present the multiplicative physical model of image formation in transmission light microscopy, justify reduction of a hyperspectral image decomposition problem to a blind source separation problem, and provide a method for hyperspectral restoration of separated compounds. In our approach, dimensionality reduction using principal component analysis (PCA) is followed by a blind source separation (BSS) algorithm. The BSS method is based on sparsifying transformation of observed images and relative Newton optimization procedure. The presented method was verified on hyperspectral images of biological tissues. The method was compared to the existing approach based on nonnegative matrix factorization. Experiments showed that the presented method is faster and better separates the biological compounds from imaging artifacts. The results obtained in this work may be used for improving automatic microscope hardware calibration and computer-aided diagnostics.
---
paper_title: Analyzing hyperspectral data with independent component analysis
paper_content:
Hyperspectral image sensors provide images with a large number of contiguous spectral channels per pixel and enable information about different materials within a pixel to be obtained. The problem of spectrally unmixing materials may be viewed as a specific case of the blind source separation problem where data consists of mixed signals and the goal is to determine the contribution of each mineral to the mix without prior knowledge of the minerals in the mix. The technique of independent component analysis (ICA) assumes that the spectral components are close to statistically independent and provides an unsupervised method for blind source separation. We introduce contextual ICA in the context of hyperspectral data analysis and apply the method to mineral data from synthetically mixed minerals and real image signatures.
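A minimal sketch of the general ICA-based unmixing idea, not the contextual ICA of the paper: plain FastICA applied to a hyperspectral cube reshaped to one spectrum per row. The array `cube` of shape (rows, cols, bands) and the number of sources are assumed inputs.

    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_unmix(cube, n_sources):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)                  # one spectrum per row
        ica = FastICA(n_components=n_sources, random_state=0)
        S = ica.fit_transform(X)                     # (pixels, n_sources) source maps
        A = ica.mixing_                              # (bands, n_sources) mixing matrix
        return S.reshape(rows, cols, n_sources), A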
---
paper_title: Endmember Extraction From Highly Mixed Data Using Minimum Volume Constrained Nonnegative Matrix Factorization
paper_content:
Endmember extraction is a process to identify the hidden pure source signals from the mixture. In the past decade, numerous algorithms have been proposed to perform this estimation. One commonly used assumption is the presence of pure pixels in the given image scene, which are detected to serve as endmembers. When such pixels are absent, the image is referred to as the highly mixed data, for which these algorithms at best can only return certain data points that are close to the real endmembers. To overcome this problem, we present a novel method without the pure-pixel assumption, referred to as the minimum volume constrained nonnegative matrix factorization (MVC-NMF), for unsupervised endmember extraction from highly mixed image data. Two important facts are exploited: First, the spectral data are nonnegative; second, the simplex volume determined by the endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. The proposed method takes advantage of the fast convergence of NMF schemes, and at the same time eliminates the pure-pixel assumption. The experimental results based on a set of synthetic mixtures and a real image scene demonstrate that the proposed method outperforms several other advanced endmember detection approaches
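A minimal sketch of the NMF machinery that MVC-NMF builds on: the standard multiplicative updates for a non-negative factorization X ≈ EA. The minimum-volume penalty that distinguishes MVC-NMF is omitted here, so this shows only the data-fit part; X (bands x pixels) and the number of endmembers k are assumed inputs.

    import numpy as np

    def nmf(X, k, n_iter=500, eps=1e-9):
        rng = np.random.default_rng(0)
        E = rng.random((X.shape[0], k)) + eps        # endmembers (bands x k)
        A = rng.random((k, X.shape[1])) + eps        # abundances (k x pixels)
        for _ in range(n_iter):
            A *= (E.T @ X) / (E.T @ E @ A + eps)     # multiplicative update for abundances
            E *= (X @ A.T) / (E @ A @ A.T + eps)     # multiplicative update for endmembers
        return E, A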
---
paper_title: Hyperspectral unmixing algorithm via dependent component analysis
paper_content:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometrically based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
---
paper_title: On the decomposition of Mars hyperspectral data by ICA and Bayesian positive source separation
paper_content:
The surface of Mars is currently being imaged with an unprecedented combination of spectral and spatial resolution. This high resolution, and its spectral range, gives the ability to pinpoint chemical species on the surface and in the atmosphere of Mars more accurately than before. The subject of this paper is to present a method to extract information on these chemicals from hyperspectral images. A first approach, based on independent component analysis (ICA) [P. Comon, Independent component analysis, a new concept? Signal Process. 36 (3) (1994) 287-314], is able to extract artifacts and locations of CO2 and H2O ices. However, since the independence assumption and some basic properties (such as the positivity of images and spectra) are not verified, the reliability of all the independent components (ICs) is weak. To improve the component extraction and consequently the endmember classification, a combination of spatial ICA with spectral Bayesian positive source separation (BPSS) [S. Moussaoui, D. Brie, A. Mohammad-Djafari, C. Carteret, Separation of non-negative mixture of non-negative sources using a Bayesian approach and MCMC sampling, IEEE Trans. Signal Process. 54 (11) (2006) 4133-4145] is proposed. To reduce the computational burden, the basic idea is to use spatial ICA to yield a rough classification of pixels, which allows selection of a small but relevant number of pixels. Then, BPSS is applied for the estimation of the source spectra using the spectral mixtures provided by this reduced set of pixels. Finally, the abundances of the components are assessed on all pixels of the images. Results of this approach are shown and evaluated by comparison with available reference spectra.
---
paper_title: Separation of Non-Negative Mixture of Non-Negative Sources Using a Bayesian Approach and MCMC Sampling
paper_content:
This paper addresses blind-source separation in the case where both the source signals and the mixing coefficients are non-negative. The problem is referred to as non-negative source separation and the main application concerns the analysis of spectrometric data sets. The separation is performed in a Bayesian framework by encoding non-negativity through the assignment of Gamma priors on the distributions of both the source signals and the mixing coefficients. A Markov chain Monte Carlo (MCMC) sampling procedure is proposed to simulate the resulting joint posterior density from which marginal posterior mean estimates of the source signals and mixing coefficients are obtained. Results obtained with synthetic and experimental spectra are used to discuss the problem of non-negative source separation and to illustrate the effectiveness of the proposed method
---
paper_title: Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
paper_content:
Imaging spectroscopy is of growing interest as a new approach to Earth remote sensing. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) was the first imaging sensor to measure the solar reflected spectrum from 400 nm to 2500 nm at 10 nm intervals. The calibration accuracy and signal-to-noise of AVIRIS remain unique. The AVIRIS system as well as the science research and applications have evolved significantly in recent years. The initial design and upgraded characteristics of the AVIRIS system are described in terms of the sensor, calibration, data system, and flight operation. This update on the characteristics of AVIRIS provides the context for the science research and applications that use AVIRIS data acquired in the past several years. Recent science research and applications are reviewed spanning investigations of atmospheric correction, ecology and vegetation, geology and soils, inland and coastal waters, the atmosphere, snow and ice hydrology, biomass burning, environmental hazards, satellite simulation and calibration, commercial applications, spectral algorithms, human infrastructure, as well as spectral modeling.
---
paper_title: Bayesian linear unmixing of hyperspectral images corrupted by colored Gaussian noise with unknown covariance matrix
paper_content:
This paper addresses the problem of unmixing hyperspectral images contaminated by additive colored noise. Each pixel of the image is modeled as a linear combination of pure materials (denoted as end-members) corrupted by an additive zero-mean Gaussian noise sequence with unknown covariance matrix. Appropriate priors are defined ensuring positivity and additivity constraints on the mixture coefficients (denoted as abundances). These coefficients as well as the noise covariance matrix are then estimated from their joint posterior distribution. A Gibbs sampling strategy generates abundances and noise covariance matrices distributed according to the joint posterior. These samples are then averaged for minimum mean square error estimation.
---
paper_title: Unsupervised signature extraction and separation in hyperspectral images: a noise-adjusted fast independent component analysis approach
paper_content:
Multispectral/hyperspectral imaging spectrometry in earth remote sensing applications mostly focuses on determining the identity and abundance of materials in a geographic area of interest. Without any prior knowledge, however, it is generally very difficult to identify and determine how many endmembers reside in a scene. We cope with this limitation by estimating the number of endmembers using a noise-adjusted version of the transformed Gerschgorin disk approach (NATGD). This estimated result is then applied to a noise-adjusted version of fast independent component analysis (NAFICA). Experimental results indicate that NAFICA offers a new approach for unsupervised signature extraction and separation in hyperspectral images.
---
paper_title: Signal processing for hyperspectral image exploitation
paper_content:
Electro-optical remote sensing involves the acquisition of information about an object or scene without coming into physical contact with it. This is achieved by exploiting the fact that the materials comprising the various objects in a scene reflect, absorb, and emit electromagnetic radiation in ways characteristic of their molecular composition and shape. If the radiation arriving at the sensor is measured at each wavelength over a sufficiently broad spectral band, the resulting spectral signature, or simply spectrum, can be used (in principle) to uniquely characterize and identify any given material. An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification. This dimensionality reduction is implemented in a scene-dependent (adaptive) manner and may be implemented as a distinct step in the processing or as an integral part of the overall algorithm. The most widely used algorithm for dimensionality reduction is principal component analysis (PCA) or, equivalently, Karhunen-Loeve transformation.
---
paper_title: Hyperspectral image data analysis
paper_content:
The fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the Earth's surface and, in particular, from the spatial, spectral, and temporal variations in that field. Rather than focusing on the spatial variations, which imagery perhaps best conveys, why not move on to look at how the spectral variations might be used. The idea was to enlarge the size of a pixel until it includes an area that is characteristic from a spectral response standpoint for the surface cover to be discriminated. The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, mall.
---
paper_title: Fuzzy Spectral and Spatial Feature Integration for Classification of Nonferrous Materials in Hyperspectral Data
paper_content:
Hyperspectral data allows the construction of more elaborate models to sample the properties of the nonferrous materials than the standard RGB color representation. In this paper, the nonferrous waste materials are studied as they cannot be sorted by classical procedures due to their color, weight and shape similarities. The experimental results presented in this paper reveal that factors such as the various levels of oxidization of the waste materials and the slight differences in their chemical composition preclude the use of the spectral features in a simplistic manner for robust material classification. To address these problems, the proposed FUSSER (fuzzy spectral and spatial classifier) algorithm detailed in this paper merges the spectral and spatial features to obtain a combined feature vector that is able to better sample the properties of the nonferrous materials than the single pixel spectral features when applied to the construction of multivariate Gaussian distributions. This approach allows the implementation of statistical region merging techniques in order to increase the performance of the classification process. To achieve an efficient implementation, the dimensionality of the hyperspectral data is reduced by constructing bio-inspired spectral fuzzy sets that minimize the amount of redundant information contained in adjacent hyperspectral bands. The experimental results indicate that the proposed algorithm increased the overall classification rate from 44% using RGB data up to 98% when the spectral-spatial features are used for nonferrous material classification.
---
paper_title: Atomic Decomposition by Basis Pursuit
paper_content:
The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
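A minimal sketch of the Basis Pursuit principle as a linear program, not the primal-dual interior-point code described in the paper: minimize ||x||_1 subject to Ax = b by splitting x into positive and negative parts and calling scipy's generic LP solver. A and b are assumed inputs.

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, b):
        m, n = A.shape
        c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])               # equality constraint: A(u - v) = b
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
        u, v = res.x[:n], res.x[n:]
        return u - v                            # recovered coefficient vector x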
---
paper_title: On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations
paper_content:
An underdetermined linear system of equations Ax = b with nonnegativity constraint x ≥ 0 is considered. It is shown that for matrices A with a row-span intersecting the positive orthant, if this problem admits a sufficiently sparse solution, it is necessarily unique. The bound on the required sparsity depends on a coherence property of the matrix A. This coherence measure can be improved by applying a conditioning stage on A, thereby strengthening the claimed result. The obtained uniqueness theorem relies on an extended theoretical analysis of the ℓ0-ℓ1 equivalence developed here as well, considering a matrix A with arbitrary column norms, and an arbitrary monotone element-wise concave penalty replacing the ℓ1-norm objective function. Finally, from a numerical point of view, a greedy algorithm-a variant of the matching pursuit-is presented, such that it is guaranteed to find this sparse solution. It is further shown how this algorithm can benefit from well-designed conditioning of A.
---
paper_title: Least Angle Regression
paper_content:
The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method;
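A minimal usage sketch of the LARS-based Lasso path described above, using scikit-learn's lars_path rather than the authors' own code; the design matrix, response, and sparse ground truth are synthetic placeholders.

    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    beta = np.zeros(20); beta[:3] = [2.0, -1.5, 1.0]        # sparse ground truth
    y = X @ beta + 0.1 * rng.standard_normal(100)

    alphas, active, coefs = lars_path(X, y, method="lasso")  # coefs: (20, n_steps) path
    print(active)        # order in which variables enter the model along the path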
---
paper_title: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition
paper_content:
We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks, e.g., affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively.
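A minimal numpy sketch of the OMP idea described above (not the recursive implementation of the paper): greedily select the dictionary atom most correlated with the residual, then re-fit all selected atoms by least squares so the residual stays orthogonal to the chosen subspace. The dictionary D (columns assumed unit-norm) and signal y are assumed inputs.

    import numpy as np

    def omp(D, y, n_nonzero):
        residual = y.copy()
        support = []
        x = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            j = int(np.argmax(np.abs(D.T @ residual)))        # best-matching atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef               # residual orthogonal to span
        x[support] = coef
        return x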
---
paper_title: Learning Sparse Codes for Hyperspectral Imagery
paper_content:
The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution, and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
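A minimal sketch of learning a sparse coding dictionary for pixel spectra, using scikit-learn's MiniBatchDictionaryLearning rather than the authors' modified learning rule; `spectra` is a random placeholder for a real (pixels, bands) HSI matrix, and the atom count and sparsity level are arbitrary choices.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    spectra = rng.random((1000, 50))              # placeholder for real HSI pixel spectra

    dl = MiniBatchDictionaryLearning(n_components=40,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=3,
                                     random_state=0)
    codes = dl.fit(spectra).transform(spectra)    # sparse codes, a few atoms per pixel
    dictionary = dl.components_                   # (40, bands) learned spectral atoms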
---
paper_title: Spectral unmixing
paper_content:
Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, endmember and abundance estimates, are important for identifying the material composition of mixtures.
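A minimal sketch of the linear mixing model that this survey assumes: mixed pixels are non-negative, sum-to-one combinations of endmember spectra plus noise. The endmembers, abundances, and noise level below are random placeholders, not library spectra.

    import numpy as np

    rng = np.random.default_rng(0)
    bands, n_end, n_pix = 100, 4, 500
    E = rng.random((bands, n_end))                            # endmember matrix (columns = spectra)
    A = rng.dirichlet(alpha=np.ones(n_end), size=n_pix).T     # abundances: >= 0, columns sum to 1
    Y = E @ A + 0.005 * rng.standard_normal((bands, n_pix))   # observed mixed pixels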
---
paper_title: Compressed sensing
paper_content:
Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^(1/2 - 1/p)). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing). The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
---
paper_title: On the use of spectral libraries to perform sparse unmixing of hyperspectral data
paper_content:
In recent years, the increasing availability of spectral libraries has opened a new path toward solving the hyperspectral unmixing problem in a semi-supervised fashion. The spectrally pure constituent materials (called endmembers) can be derived from a (potentially very large) spectral library and used for unmixing purposes. The advantage of this approach is that the results of the unmixing process do not depend on the availability of pure pixels in the original hyperspectral data nor on the ability of an endmember extraction algorithm to identify such endmembers. However, because spectral libraries are usually very large, this approach generally results in a sparse solution. In this paper, we investigate the sensitivity of sparse unmixing techniques to certain characteristics of real and synthetic spectral libraries, including parameters such as mutual coherence and spectral similarity between the signatures contained in the library. Our main goal is to illustrate, via detailed experimental assessment, the potential of using spectral libraries to solve the spectral unmixing problem.
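A minimal sketch of the mutual coherence measure mentioned above: the largest absolute inner product between distinct, ℓ2-normalized library signatures. The `library` array of shape (bands, n_signatures) is an assumed input.

    import numpy as np

    def mutual_coherence(library):
        D = library / np.linalg.norm(library, axis=0, keepdims=True)  # normalize columns
        G = np.abs(D.T @ D)                                           # Gram matrix of signatures
        np.fill_diagonal(G, 0.0)                                      # ignore self-products
        return G.max()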
---
paper_title: L1 unmixing and its application to hyperspectral image enhancement
paper_content:
Because hyperspectral imagery is generally low resolution, it is possible for one pixel in the image to contain several materials. The process of determining the abundance of representative materials in a single pixel is called spectral unmixing. We discuss the L1 unmixing model and fast computational approaches based on Bregman iteration. We then use the unmixing information and Total Variation (TV) minimization to produce a higher resolution hyperspectral image in which each pixel is driven towards a "pure" material. This method produces images with higher visual quality and can be used to indicate the subpixel location of features.
---
paper_title: Matching pursuits with time-frequency dictionaries
paper_content:
The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
---
paper_title: Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
paper_content:
We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality image database. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent and sometimes surpassing recently published leading alternative denoising methods
---
paper_title: Sparse Unmixing of Hyperspectral Data
paper_content:
Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the endmember signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in a semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene. In practice, this is a combinatorial problem which calls for efficient linear sparse regression (SR) techniques based on sparsity-inducing regularizers, since the number of endmembers participating in a mixed pixel is usually very small compared with the (ever-growing) dimensionality (and availability) of spectral libraries. Linear SR is an area of very active research, with strong links to compressed sensing, basis pursuit (BP), BP denoising, and matching pursuit. In this paper, we study the linear spectral unmixing problem in light of recent theoretical results published in those areas. Furthermore, we provide a comparison of several available and new linear SR algorithms, with the ultimate goal of analyzing their potential in solving the spectral unmixing problem by resorting to available spectral libraries. Our experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra. This opens new perspectives for spectral unmixing, since the abundance estimation process no longer depends on the availability of pure spectral signatures in the input data nor on the capacity of a certain endmember extraction algorithm to identify such pure signatures.
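A minimal sketch of library-based sparse unmixing in the spirit described above, not one of the exact solvers compared in the paper: a non-negative Lasso selects a few library signatures per pixel. The `library` matrix (bands x n_signatures), `pixel` spectrum, and regularization weight are assumed inputs.

    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_unmix(library, pixel, lam=1e-3):
        model = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=10000)
        model.fit(library, pixel)          # l1 penalty drives most abundances to zero
        return model.coef_                 # mostly-zero abundance vector over the library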
---
paper_title: Iterative Spectral Unmixing for Optimizing Per-Pixel Endmember Sets
paper_content:
Fractional abundances predicted for a given pixel using spectral mixture analysis (SMA) are most accurate when only the endmembers that comprise it are used, with larger errors occurring if inappropriate endmembers are included in the unmixing process. This paper presents an iterative implementation of SMA (ISMA) to determine optimal per-pixel endmember sets from the image endmember set using two steps: 1) an iterative unconstrained unmixing, which removes one endmember per iteration based on minimum abundance and 2) analysis of the root-mean-square error as a function of iteration to locate the critical iteration defining the optimal endmember set. The ISMA was tested using simulated data at various signal-to-noise ratios (SNRs), and the results were compared with those of published unmixing methods. The ISMA method correctly selected the optimal endmember set 96% of the time for SNR of 100 : 1. As a result, per-pixel errors in fractional abundances were lower than for unmixing each pixel using the full endmember set. ISMA was also applied to Airborne Visible/Infrared Imaging Spectrometer hyperspectral data of Cuprite, NV. Results show that the ISMA is effective in obtaining abundance fractions that are physically realistic (sum close to one and nonnegative) and is more effective at selecting endmembers that occur within a pixel as opposed to those that are simply used to improve the goodness of fit of the model but not part of the mixture
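A minimal sketch of the iterative idea behind ISMA as described above: repeatedly unmix with unconstrained least squares and drop the endmember with the smallest estimated abundance, recording the RMSE at each iteration; locating the "critical iteration" from the RMSE curve is left out. E (bands x endmembers) and the pixel y are assumed inputs.

    import numpy as np

    def isma_prune(E, y, min_endmembers=1):
        idx = list(range(E.shape[1]))
        history = []
        while len(idx) >= min_endmembers:
            a, *_ = np.linalg.lstsq(E[:, idx], y, rcond=None)   # unconstrained unmixing
            rmse = np.sqrt(np.mean((y - E[:, idx] @ a) ** 2))
            history.append((list(idx), a.copy(), rmse))
            if len(idx) == min_endmembers:
                break
            idx.pop(int(np.argmin(a)))            # remove the least-abundant endmember
        return history                            # (endmember set, abundances, RMSE) per step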
---
paper_title: Sparsity and Incoherence in Compressive Sampling
paper_content:
We consider the problem of reconstructing a sparse signal x_0 in R^n from a limited number of linear measurements. Given m randomly selected samples of Ux_0, where U is an orthonormal matrix, we show that ℓ_1 minimization recovers x_0 exactly when the number of measurements exceeds m ≥ Const · µ^2(U) · S · log n, where S is the number of nonzero components in x_0, and µ is the largest entry in U properly normalized: µ(U) = sqrt(n) · max_{k,j} |U_{k,j}|. The smaller µ, the fewer samples needed. The result holds for "most" sparse signals x_0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x_0 for each nonzero entry on T and the observed values of Ux_0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal since any method succeeding with the same probability would require just about this many samples.
---
paper_title: Stable signal recovery from incomplete and inaccurate measurements
paper_content:
Suppose we wish to recover an n-dimensional real-valued vector x_0 (e.g. a digital signal or image) from incomplete and contaminated observations y = A x_0 + e; A is an n by m matrix with far fewer rows than columns (n << m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x* to the l1-regularization problem min ||x||_1 subject to ||Ax - y||_2 <= epsilon, where epsilon is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level, ||x* - x_0||_2 <= C epsilon. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x_0 is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of x_0; then stable recovery occurs for almost any set of p coefficients provided that the number of nonzeros is of the order of n/[log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
---
paper_title: Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing
paper_content:
Convex optimization problems are common in hyperspectral unmixing. Examples include: the constrained least squares (CLS) and the fully constrained least squares (FCLS) problems, which are used to compute the fractional abundances in linear mixtures of known spectra; the constrained basis pursuit (CBP) problem, which is used to find sparse (i.e., with a small number of non-zero terms) linear mixtures of spectra from large libraries; the constrained basis pursuit denoising (CBPDN) problem, which is a generalization of BP that admits modeling errors. In this paper, we introduce two new algorithms to efficiently solve these optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family. The algorithms are termed SUnSAL (sparse unmixing by variable splitting and augmented Lagrangian) and C-SUnSAL (constrained SUnSAL). C-SUnSAL solves the CBP and CBPDN problems, while SUnSAL solves CLS and FCLS, as well as a more general version thereof, called constrained sparse regression (CSR). C-SUnSAL and SUnSAL are shown to outperform off-the-shelf methods in terms of speed and accuracy.
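A minimal ADMM sketch in the spirit of the constrained sparse regression problems listed above, not a reproduction of SUnSAL or C-SUnSAL: minimize 0.5||Ax - y||^2 + lam*||x||_1 subject to x >= 0. The sum-to-one variant and the exact derivations of the paper are omitted; A, y, lam, and rho are assumed inputs.

    import numpy as np

    def admm_nn_lasso(A, y, lam=1e-3, rho=1.0, n_iter=200):
        n = A.shape[1]
        AtA, Aty = A.T @ A, A.T @ y
        Q = np.linalg.inv(AtA + rho * np.eye(n))      # cached system matrix for x-update
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        for _ in range(n_iter):
            x = Q @ (Aty + rho * (z - u))                              # quadratic subproblem
            z = np.maximum(np.abs(x + u) - lam / rho, 0) * np.sign(x + u)  # soft threshold
            z = np.maximum(z, 0)                                       # project onto x >= 0
            u += x - z                                                  # dual (scaled) update
        return z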
---
paper_title: Sparse Approximate Solutions to Linear Systems
paper_content:
The following problem is considered: given a matrix $A$ in ${\bf R}^{m \times n}$, ($m$ rows and $n$ columns), a vector $b$ in ${\bf R}^m$, and ${\bf \epsilon} > 0$, compute a vector $x$ satisfying $\| Ax - b \|_2 \leq {\bf \epsilon}$ if such exists, such that $x$ has the fewest number of non-zero entries over all such vectors. It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18 \mbox{ Opt} ({\bf \epsilon}/2) \|{\bf A}^+\|^2_2 \ln(\|b\|_2/{\bf \epsilon}) \right\rceil$ non-zero entries, where $\mbox{Opt}({\bf \epsilon}/2)$ is the optimum number of nonzero entries at error ${\bf \epsilon}/2$, ${\bf A}$ is the matrix obtained by normalizing each column of $A$ with respect to the $L_2$ norm, and ${\bf A}^+$ is its pseudo-inverse.
---
paper_title: Robust hyperspectral data unmixing with spatial and spectral regularized NMF
paper_content:
This paper considers the problem of unsupervised hyperspectral data unmixing under the linear spectral mixing model assumption (LSMM). The aim is to recover both endmember spectra and abundance fractions. The problem is ill-posed and needs some additional information to be solved. We consider here Non-negative Matrix Factorization (NMF), which is degenerate on its own but has the advantage of low complexity and the ability to easily include physical constraints. In addition to the abundance sum-to-one constraint, we propose to introduce relevant information through spatial and spectral regularization of the NMF, derived from the analysis of the hyperspectral data. We use an alternate projected gradient to minimize the regularized reconstruction error function. This algorithm, called MDMD-NMF for Minimum Spectral Dispersion Maximum Spatial Dispersion NMF, allows simultaneous estimation of the number of endmembers and the abundance fractions, and accurately recovers more than 10 endmembers without any pure pixel in the scene.
---
paper_title: Piece-wise convex spatial-spectral unmixing of hyperspectral imagery using possibilistic and fuzzy clustering
paper_content:
Imaging spectroscopy refers to methods for identifying materials in a scene using cameras that digitize light into hundreds of spectral bands. Each pixel in these images consists of vectors representing the amount of light reflected in the different spectral bands from the physical location corresponding to the pixel. Images of this type are called hyperspectral images. Hyperspectral image analysis differs from traditional image analysis in that, in addition to the spatial information inherent in an image, there is abundant spectral information at the pixel or sub-pixel level that can be used to identify materials in the scene. Spectral unmixing techniques attempt to identify the material spectra in a scene down to the sub-pixel level. In this paper, a piece-wise convex hyperspectral unmixing algorithm using both spatial and spectral image information is presented. The proposed method incorporates possibilistic and fuzzy clustering methods. The typicality and membership estimates from those methods can be combined with traditional material proportion estimates to produce more meaningful proportion estimates than obtained with previous spectral unmixing algorithms. An analysis of the utility of using all three estimates to produce a better estimate is given using real hyperspectral imagery.
---
paper_title: SVM- and MRF-Based Method for Accurate Classification of Hyperspectral Images
paper_content:
The high number of spectral bands acquired by hyperspectral sensors increases the capability to distinguish physical materials and objects, presenting new challenges to image analysis and classification. This letter presents a novel method for accurate spectral-spatial classification of hyperspectral images. The proposed technique consists of two steps. In the first step, a probabilistic support vector machine pixelwise classification of the hyperspectral image is applied. In the second step, spatial contextual information is used for refining the classification results obtained in the first step. This is achieved by means of a Markov random field regularization. Experimental results are presented for three hyperspectral airborne images and compared with those obtained by recently proposed advanced spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
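A minimal sketch of the first step only (probabilistic SVM pixelwise classification); the MRF spatial regularization of the second step is not implemented here. X_train, y_train, and the hyperspectral `cube` of shape (rows, cols, bands) are assumed inputs.

    import numpy as np
    from sklearn.svm import SVC

    def pixelwise_svm_proba(X_train, y_train, cube):
        rows, cols, bands = cube.shape
        clf = SVC(kernel="rbf", probability=True, random_state=0)
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(cube.reshape(-1, bands))   # class posteriors per pixel
        return proba.reshape(rows, cols, -1)                 # input to a spatial prior / MRF step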
---
paper_title: Integration of spatial-spectral information for the improved extraction of endmembers
paper_content:
Spectral-based image endmember extraction methods hinge on the ability to discriminate between pixels based on spectral characteristics alone. Endmembers with distinct spectral features (high spectral contrast) are easy to select, whereas those with minimal unique spectral information (low spectral contrast) are more problematic. Spectral contrast, however, is dependent on the endmember assemblage, such that as the assemblage changes so does the “relative” spectral contrast of each endmember to all other endmembers. It is then possible for an endmember to have low spectral contrast with respect to the full image, but have high spectral contrast within a subset of the image. The spatial–spectral endmember extraction tool (SSEE) works by analyzing a scene in parts (subsets), such that we increase the spectral contrast of low contrast endmembers, thus improving the potential for these endmembers to be selected. The SSEE method comprises three main steps: 1) application of singular value decomposition (SVD) to determine a set of basis vectors that describe most of the spectral variance for subsets of the image; 2) projection of the full image data set onto the locally defined basis vectors to determine a set of candidate endmember pixels; and, 3) imposing spatial constraints for averaging spectrally similar endmembers, allowing for separation of endmembers that are spectrally similar, but spatially independent. The SSEE method is applied to two real hyperspectral data sets to demonstrate the effects of imposing spatial constraints on the selection of endmembers. The results show that the SSEE method is an effective approach to extracting image endmembers. Specific improvements include the extraction of physically meaningful, low contrast endmembers that occupy unique image regions.
---
paper_title: Spatially-smooth piece-wise convex endmember detection
paper_content:
An endmember detection and spectral unmixing algorithm that uses both spatial and spectral information is presented. This method, Spatial Piece-wise Convex Multiple Model Endmember Detection (Spatial P-COMMEND), autonomously estimates multiple sets of endmembers and performs spectral unmixing for input hyperspectral data. Spatial P-COMMEND does not restrict the estimated endmembers to define a single convex region during spectral unmixing. Instead, a piece-wise convex representation is used that can effectively represent non-convex hyperspectral data. Spatial P-COMMEND drives neighboring pixels to be unmixed by the same set of endmembers encouraging spatially-smooth unmixing results.
---
paper_title: Hyperspectral Image Segmentation Using a New Bayesian Approach With Active Learning
paper_content:
This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps. First, we use a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. This is done by using a recently introduced logistic regression via splitting and augmented Lagrangian algorithm. Second, we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information. In order to reduce the cost of acquiring large training sets, active learning is performed based on the MLR posterior probabilities. Another contribution of this paper is the introduction of a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling. Furthermore, we have implemented our proposed method in an efficient way. For instance, in order to obtain the time-consuming maximum a posteriori segmentation, we use the α-expansion min-cut-based integer optimization algorithm. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image analysis methods.
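A minimal sketch of two ingredients described above: multinomial logistic regression posteriors and an entropy-based choice of which unlabeled pixels to query next. The exact modified breaking ties criterion and the multilevel logistic spatial prior are not reproduced; the labeled/unlabeled arrays are assumed inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def select_queries(X_labeled, y_labeled, X_unlabeled, n_queries=10):
        mlr = LogisticRegression(max_iter=1000)
        mlr.fit(X_labeled, y_labeled)
        p = mlr.predict_proba(X_unlabeled)                   # class posteriors
        entropy = -np.sum(p * np.log(p + 1e-12), axis=1)     # uncertainty per sample
        return np.argsort(entropy)[-n_queries:]              # most uncertain pixels to label next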
---
paper_title: Semi-Supervised Linear Spectral Unmixing Using a Hierarchical Bayesian Model for Hyperspectral Imagery
paper_content:
This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
---
paper_title: Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles
paper_content:
Classification of hyperspectral data with high spatial resolution from urban areas is discussed. An approach has been proposed which is based on using several principal components from the hyperspectral data and building morphological profiles. These profiles can be used all together in one extended morphological profile. A shortcoming of the approach is that it is primarily designed for classification of urban structures and it does not fully utilize the spectral information in the data. Similarly, a pixel-wise classification solely based on the spectral content can be performed, but it lacks information on the structure of the features in the image. An extension is proposed in this paper in order to overcome these dual problems. The proposed method is based on the data fusion of the morphological information and the original hyperspectral data: the two vectors of attributes are concatenated. After a reduction of the dimensionality using Decision Boundary Feature Extraction, the final classification is achieved using a Support Vector Machines classifier. The proposed approach is tested in experiments on ROSIS data from urban areas. Significant improvements are achieved in terms of accuracies when compared to results of approaches based on the use of morphological profiles based on PCs only and conventional spectral classification.
---
paper_title: Spectral and Spatial Complexity-Based Hyperspectral Unmixing
paper_content:
Hyperspectral unmixing, which decomposes pixel spectra into a collection of constituent spectra, is a preprocessing step for hyperspectral applications like target detection and classification. It can be considered as a blind source separation (BSS) problem. Independent component analysis, which is a widely used method for performing BSS, models a mixed pixel as a linear mixture of its constituent spectra weighted by the corresponding abundance fractions (sources). The sources are assumed to be independent and stationary. However, in many instances, this assumption is not valid. In this paper, a complexity-based BSS algorithm is introduced, which studies the complexity of sources instead of their independence. We extend the 1-D temporal complexity measure, called complexity pursuit, which was proposed by Stone, to a 2-D spatial complexity measure, named spatial complexity BSS (SCBSS), to describe the spatial autocorrelation of each abundance fraction. Further, the temporal complexity of the spectrum is combined with SCBSS to account for spectral smoothness, which is termed spectral and spatial complexity BSS. More importantly, a strict theoretic interpretation is given, showing that the complexity-based BSS is very suitable for hyperspectral unmixing. Experimental results on synthetic and real hyperspectral data demonstrate the advantages of the proposed two algorithms with respect to other methods.
---
paper_title: Spectral–Spatial Classification of Hyperspectral Imagery Based on Partitional Clustering Techniques
paper_content:
A new spectral-spatial classification scheme for hyperspectral images is proposed. The method combines the results of a pixelwise support vector machine classification and the segmentation map obtained by partitional clustering using majority voting. The ISODATA algorithm and Gaussian mixture resolving techniques are used for image clustering. Experimental results are presented for two hyperspectral airborne images. The developed classification scheme improves the classification accuracies and provides classification maps with more homogeneous regions, when compared to pixelwise classification. The proposed method performs particularly well for classification of images with large spatial structures and when different classes have dissimilar spectral responses and a comparable number of pixels.
---
paper_title: Enhancing hyperspectral image unmixing with spatial correlations
paper_content:
This paper describes a new algorithm for hyperspectral image unmixing. Most unmixing algorithms proposed in the literature do not take into account the possible spatial correlations between the pixels. In this paper, a Bayesian model is introduced to exploit these correlations. The image to be unmixed is assumed to be partitioned into regions (or classes) where the statistical properties of the abundance coefficients are homogeneous. A Markov random field is then proposed to model the spatial dependencies between the pixels within any class. Conditionally upon a given class, each pixel is modeled by using the classical linear mixing model with additive white Gaussian noise. For this model, the posterior distributions of the unknown parameters and hyperparameters allow the parameters of interest to be inferred. These parameters include the abundances for each pixel, the means and variances of the abundances for each class, as well as a classification map indicating the classes of all pixels in the image. To overcome the complexity of the posterior distribution, we consider a Markov chain Monte Carlo method that generates samples asymptotically distributed according to the posterior. The generated samples are then used for parameter and hyperparameter estimation. The accuracy of the proposed algorithms is illustrated on synthetic and real data.
---
paper_title: Spectral–Spatial Hyperspectral Image Segmentation Using Subspace Multinomial Logistic Regression and Markov Random Fields
paper_content:
This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov-Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial-contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain.
---
paper_title: Semisupervised Hyperspectral Image Segmentation Using Multinomial Logistic Regression With Active Learning
paper_content:
This paper presents a new semisupervised segmentation algorithm, suited to high-dimensional data, of which remotely sensed hyperspectral image data sets are an example. The algorithm implements two main steps: 1) semisupervised learning of the posterior class distributions followed by 2) segmentation, which infers an image of class labels from a posterior distribution built on the learned class distributions and on a Markov random field. The posterior class distributions are modeled using multinomial logistic regression, where the regressors are learned using both labeled and, through a graph-based technique, unlabeled samples. Such unlabeled samples are actively selected based on the entropy of the corresponding class label. The prior on the image of labels is a multilevel logistic model, which enforces segmentation results in which neighboring labels belong to the same class. The maximum a posteriori segmentation is computed by the α-expansion min-cut-based integer optimization algorithm. Our experimental results, conducted using synthetic and real hyperspectral image data sets collected by the Airborne Visible/Infrared Imaging Spectrometer system of the National Aeronautics and Space Administration Jet Propulsion Laboratory over the regions of Indian Pines, IN, and Salinas Valley, CA, reveal that the proposed approach can provide classification accuracies that are similar or higher than those achieved by other supervised methods for the considered scenes. Our results also indicate that the use of a spatial prior can greatly improve the final results with respect to a case in which only the learned class densities are considered, confirming the importance of jointly considering spatial and spectral information in hyperspectral image segmentation.
---
paper_title: Bayesian Hyperspectral Image Segmentation With Discriminative Class Learning
paper_content:
This paper introduces a new supervised technique to segment hyperspectral images: the Bayesian segmentation based on discriminative classification and on a multilevel logistic (MLL) spatial prior. The approach is Bayesian and exploits both spectral and spatial information. Given a spectral vector, the posterior class probability distribution is modeled using multinomial logistic regression (MLR) which, being a discriminative model, makes it possible to learn the boundaries between the decision regions directly and, thus, to successfully deal with high-dimensionality data. To control the machine complexity and, thus, its generalization capacity, the prior on the multinomial logistic vector is assumed to follow a componentwise independent Laplacian density. The vector of weights is computed via the fast sparse multinomial logistic regression (FSMLR), a variation of the sparse multinomial logistic regression (SMLR), conceived to deal with large data sets beyond the reach of the SMLR. To avoid the high computational complexity involved in estimating the Laplacian regularization parameter, we have also considered the Jeffreys prior, as it does not depend on any hyperparameter. The prior probability distribution on the class-label image is an MLL Markov-Gibbs distribution, which promotes segmentation results with equal neighboring class labels. The α-expansion optimization algorithm, a powerful graph-cut-based integer optimization tool, is used to compute the maximum a posteriori segmentation. The effectiveness of the proposed methodology is illustrated by comparing its performance with the state-of-the-art methods on synthetic and real hyperspectral image data sets. The reported results give clear evidence of the relevance of using both spatial and spectral information in hyperspectral image segmentation.
---
paper_title: Spatial Preprocessing for Endmember Extraction
paper_content:
Endmember extraction is the process of selecting a collection of pure signature spectra of the materials present in a remotely sensed hyperspectral scene. These pure signatures are then used to decompose the scene into abundance fractions by means of a spectral unmixing algorithm. Most techniques available in the endmember extraction literature rely on exploiting the spectral properties of the data alone. As a result, the search for endmembers in a scene is conducted by treating the data as a collection of spectral measurements with no spatial arrangement. In this paper, we propose a novel strategy to incorporate spatial information into the traditional spectral-based endmember search process. Specifically, we propose to estimate, for each pixel vector, a scalar spatially derived factor that relates to the spectral similarity of pixels lying within a certain spatial neighborhood. This scalar value is then used to weigh the importance of the spectral information associated to each pixel in terms of its spatial context. Two key aspects of the proposed methodology are given as follows: 1) No modification of existing image spectral-based endmember extraction methods is necessary in order to apply the proposed approach. 2) The proposed preprocessing method enhances the search for image spectral endmembers in spatially homogeneous areas. Our experimental results, which were obtained using both synthetic and real hyperspectral data sets, indicate that the spectral endmembers obtained after spatial preprocessing can be used to accurately model the original hyperspectral scene using a linear mixture model. The proposed approach is suitable for jointly combining spectral and spatial information when searching for image-derived endmembers in highly representative hyperspectral image data sets.
---
paper_title: Spatial/spectral endmember extraction by multidimensional morphological operations
paper_content:
Spectral mixture analysis provides an efficient mechanism for the interpretation and classification of remotely sensed multidimensional imagery. It aims to identify a set of reference signatures (also known as endmembers) that can be used to model the reflectance spectrum at each pixel of the original image. Thus, the modeling is carried out as a linear combination of a finite number of ground components. Although spectral mixture models have proved to be appropriate for the purpose of large hyperspectral dataset subpixel analysis, few methods are available in the literature for the extraction of appropriate endmembers in spectral unmixing. Most approaches have been designed from a spectroscopic viewpoint and, thus, tend to neglect the existing spatial correlation between pixels. This paper presents a new automated method that performs unsupervised pixel purity determination and endmember extraction from multidimensional datasets; this is achieved by using both spatial and spectral information in a combined manner. The method is based on mathematical morphology, a classic image processing technique that can be applied to the spectral domain while being able to keep its spatial characteristics. The proposed methodology is evaluated through a specifically designed framework that uses both simulated and real hyperspectral data.
---
paper_title: Total variation regularization in sparse hyperspectral unmixing
paper_content:
Hyperspectral unmixing has recently been addressed as a sparse regression problem by using predefined spectral libraries instead of image-derived endmembers in the unmixing process. This new approach has attracted much attention, as it sidesteps well known obstacles met in endmember extraction, such as the stopping criteria for the extraction process (represented by the number of endmembers needed to explain the observed scene) and the fact that the scene might not contain pure pixels. It happens, however, that in many applications the spectral libraries contain highly correlated signatures, which limits the success of sparse regression applied to mixtures with a very small number of materials. In this paper, we mitigate this limitation by adding the total variation regularization to the classical sparse regression, thus, exploiting the spatial contextual information present in the hyperspectral images. The effectiveness of the new approach is illustrated in experiments carried out on simulated data sets.
---
paper_title: Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing
paper_content:
Spectral unmixing aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by a remote sensing hyperspectral imaging instrument. In recent work, the linear spectral unmixing problem has been approached in semisupervised fashion as a sparse regression one, under the assumption that the observed image signatures can be expressed as linear combinations of pure spectra, known a priori and available in a library. It happens, however, that sparse unmixing focuses on analyzing the hyperspectral data without incorporating spatial information. In this paper, we include the total variation (TV) regularization to the classical sparse regression formulation, thus exploiting the spatial-contextual information present in the hyperspectral images and developing a new algorithm called sparse unmixing via variable splitting augmented Lagrangian and TV. Our experimental results, conducted with both simulated and real hyperspectral data sets, indicate the potential of including spatial information (through the TV term) on sparse unmixing formulations for improved characterization of mixed pixels in hyperspectral imagery.
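As a rough picture of what adding the TV term means, the objective being minimized couples a per-pixel sparse-regression fit with a penalty on abundance differences between spatially adjacent pixels. The following sketch only evaluates such an objective on a toy abundance map and does not implement the SUnSAL-TV solver itself; the anisotropic TV form and the variable names are illustrative assumptions.

    import numpy as np

    def sparse_tv_objective(A, Y, X, lam_l1, lam_tv, rows, cols):
        # A: (bands, library), Y: (bands, pixels), X: (library, pixels),
        # with pixels stored row-major on a rows x cols spatial grid.
        data_fit = 0.5 * np.linalg.norm(A @ X - Y, "fro") ** 2
        sparsity = lam_l1 * np.abs(X).sum()
        cube = X.reshape(X.shape[0], rows, cols)
        tv = lam_tv * (np.abs(np.diff(cube, axis=1)).sum() +   # vertical neighbors
                       np.abs(np.diff(cube, axis=2)).sum())    # horizontal neighbors
        return data_fit + sparsity + tv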
---
paper_title: Spatial classification using fuzzy membership models
paper_content:
In the usual statistical approach to spatial classification, it is assumed that each pixel belongs to precisely one of a small number of known groups. This framework is extended to include mixed-pixel data; then, only a proportion of each pixel belongs to each group. Two models based on multivariate Gaussian random fields are proposed to model this fuzzy membership process. The problems of predicting the group membership and estimating the parameters are discussed. Some simulations are presented to study the properties of this approach, and an example is given using Landsat remote-sensing data.
---
paper_title: Spatial-spectral unmixing using fuzzy local information
paper_content:
Hyperspectral unmixing estimates the proportions of materials represented within a spectral signature. The overwhelming majority of hyperspectral unmixing algorithms are based entirely on the spectral signatures of each individual pixel and do not incorporate the spatial information found in a hyperspectral data cube. In this work, a spectral unmixing algorithm, the Local Information Proportion estimation (LIP) algorithm, is presented. The proposed LIP algorithm incorporates spatial information while determining the proportions of materials found within a spectral signature. Spatial information is incorporated through the addition of a spatial term that regularizes proportion value estimates based on the weighted proportion values of neighboring pixels. Results are shown in the AVIRIS Indian Pines hyperspectral data set.
---
paper_title: Multiple Spectral–Spatial Classification Approach for Hyperspectral Data
paper_content:
A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region with a corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker-selection procedure, each of them combining the results of a pixelwise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies when compared with previously proposed classification techniques.
---
paper_title: Hyperspectral Image Unmixing via Alternating Projected Subgradients
paper_content:
We consider the problem of factorizing a hyperspectral image into the product of two nonnegative matrices, which represent nonnegative bases for image spectra and mixing coefficients, respectively. This spectral unmixing problem is a nonconvex optimization problem, which is very difficult to solve exactly. We present a simple heuristic for approximately solving this problem based on the idea of alternating projected subgradient descent. Finally, we present the results of applying this method on the 1990 AVIRIS image of Cuprite, Nevada and show that our results are in agreement with similar studies on the same data.
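The idea can be illustrated with a bare-bones alternating projected gradient loop for the nonnegative factorization Y ≈ W H, where W collects candidate endmember spectra and H the mixing coefficients. This is only a sketch under a fixed, hand-picked step size; it is not the authors' exact update scheme, and all parameter values are assumptions.

    import numpy as np

    def unmix_factorize(Y, k, iters=300, step=1e-3, seed=0):
        # Y: (bands, pixels); W: (bands, k) spectra; H: (k, pixels) coefficients.
        rng = np.random.default_rng(seed)
        W = rng.random((Y.shape[0], k))
        H = rng.random((k, Y.shape[1]))
        for _ in range(iters):
            R = W @ H - Y
            W = np.maximum(W - step * (R @ H.T), 0.0)   # gradient step, then project onto W >= 0
            R = W @ H - Y
            H = np.maximum(H - step * (W.T @ R), 0.0)   # same for H
        return W, H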
---
paper_title: Hyperspectral Image Unmixing Using a Multiresolution Sticky HDP
paper_content:
This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model, whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior that encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms, and is therefore more computationally efficient as compared to standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
---
paper_title: C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework
paper_content:
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an l1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso at the individual feature level, with the block-sparsity property of the Group Lasso, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the Hierarchical Lasso (HiLasso), which shows important practical advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (inside the group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for applications such as source identification and separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The underlying presentation of the framework and optimization approach is complemented by experimental examples and theoretical results regarding recovery guarantees.
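The hierarchical penalty described above can be written down in a few lines: a group-Lasso term over predefined blocks of coefficients plus an ordinary Lasso term inside them. The helper below only evaluates that penalty for a given coefficient vector, as a sketch of the structure being promoted; the group indices and weights are illustrative assumptions, not the paper's optimization procedure.

    import numpy as np

    def hilasso_penalty(x, groups, lam_group, lam_l1):
        # groups: list of index lists partitioning x; the L2 term selects whole
        # groups, the L1 term keeps the solution sparse inside the chosen groups.
        group_term = sum(np.linalg.norm(x[np.asarray(g)]) for g in groups)
        return lam_group * group_term + lam_l1 * np.abs(x).sum()

    x = np.array([0.0, 0.8, 0.0, 0.0, 0.3, 0.0])
    print(hilasso_penalty(x, groups=[[0, 1, 2], [3, 4, 5]], lam_group=1.0, lam_l1=0.1))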
---
paper_title: Parallel heterogeneous CBIR system for efficient hyperspectral image retrieval using spectral mixture analysis
paper_content:
The purpose of content-based image retrieval (CBIR) is to retrieve, from real data stored in a database, information that is relevant to a query. In remote sensing applications, the wealth of spectral information provided by latest-generation (hyperspectral) instruments has quickly introduced the need for parallel CBIR systems able to effectively retrieve features of interest from ever-growing data archives. To address this need, this paper develops a new parallel CBIR system that has been specifically designed to be run on heterogeneous networks of computers (HNOCs). These platforms have soon become a standard computing architecture in remote sensing missions due to the distributed nature of data repositories. The proposed heterogeneous system first extracts an image feature vector able to characterize image content with sub-pixel precision using spectral mixture analysis concepts, and then uses the obtained feature as a search reference. The system is validated using a complex hyperspectral image database, and implemented on several networks of workstations and a Beowulf cluster at NASA's Goddard Space Flight Center. Our experimental results indicate that the proposed parallel system can efficiently retrieve hyperspectral images from complex image databases by efficiently adapting to the underlying parallel platform on which it is run, regardless of the heterogeneity in the compute nodes and communication links that form such parallel platform. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Parallel unmixing of remotely sensed hyperspectral images on commodity graphics processing units
paper_content:
Hyperspectral imaging instruments are capable of collecting hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. One of the main problems in the analysis of hyperspectral data cubes is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not enough to separate spectrally distinct materials. Hyperspectral unmixing is one of the most popular techniques to analyze hyperspectral data. It comprises two stages: (i) automatic identification of pure spectral signatures (endmembers) and (ii) estimation of the fractional abundance of each endmember in each pixel. The spectral unmixing process is quite expensive in computational terms, mainly due to the extremely high dimensionality of hyperspectral data cubes. Although this process maps nicely to high performance systems such as clusters of computers, these systems are generally expensive and difficult to adapt to real-time data processing requirements introduced by several applications, such as wildland fire tracking, biological threat detection, monitoring of oil spills, and other types of chemical contamination. In this paper, we develop an implementation of the full hyperspectral unmixing chain on commodity graphics processing units (GPUs). The proposed methodology has been implemented using CUDA (Compute Unified Device Architecture) and tested on three different GPU architectures: NVidia Tesla C1060, NVidia GeForce GTX 275, and NVidia GeForce 9800 GX2, achieving near real-time unmixing performance in some configurations tested when analyzing two different hyperspectral images, collected over the World Trade Center complex in New York City and the Cuprite mining district in Nevada. Copyright © 2011 John Wiley & Sons, Ltd.
---
paper_title: Spectral unmixing
paper_content:
Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, endmember, and abundance estimates are important for identifying the material composition of mixtures.
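Under the linear mixing assumption a pixel spectrum is y = E a + n, with nonnegative abundances a that sum to one. The snippet below simulates one mixed pixel and recovers the abundances with a nonnegative least-squares fit, encouraging the sum-to-one constraint with the usual weighted extra row; the endmember matrix, noise level and weight delta are toy assumptions, and the approach stands in for the many inversion methods surveyed in this reference.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    E = rng.random((100, 4))                    # 4 endmember spectra over 100 bands
    a_true = np.array([0.5, 0.2, 0.3, 0.0])    # abundances on the probability simplex
    y = E @ a_true + 0.005 * rng.standard_normal(100)

    delta = 10.0                                # weight of the sum-to-one pseudo-measurement
    E_aug = np.vstack([E, delta * np.ones((1, 4))])
    y_aug = np.append(y, delta)
    a_hat, _ = nnls(E_aug, y_aug)               # nonnegative least squares
    print(np.round(a_hat, 3))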
---
paper_title: Recent Developments in High Performance Computing for Remote Sensing: A Review
paper_content:
Remote sensing data have become very widespread in recent years, and the exploitation of this technology has gone from developments mainly conducted by government intelligence agencies to those carried out by general users and companies. There is a great deal more to remote sensing data than meets the eye, and extracting that information turns out to be a major computational challenge. For this purpose, high performance computing (HPC) infrastructure such as clusters, distributed networks or specialized hardware devices provide important architectural developments to accelerate the computations related with information extraction in remote sensing. In this paper, we review recent advances in HPC applied to remote sensing problems; in particular, the HPC-based paradigms included in this review comprise multiprocessor systems, large-scale and heterogeneous networks of computers, grid and cloud computing environments, and hardware systems such as field programmable gate arrays (FPGAs) and graphics processing units (GPUs). Combined, these parts deliver a snapshot of the state-of-the-art and most recent developments in those areas, and offer a thoughtful perspective of the potential and emerging challenges of applying HPC paradigms to remote sensing problems.
---
paper_title: High Performance Computing in Remote Sensing
paper_content:
Solutions for Time-Critical Remote Sensing Applications: The recent use of latest-generation sensors in airborne and satellite platforms is producing a nearly continual stream of high-dimensional data, which, in turn, is creating new processing challenges. To address the computational requirements of time-critical applications, researchers have begun incorporating high performance computing (HPC) models in remote sensing missions. High Performance Computing in Remote Sensing is one of the first volumes to explore state-of-the-art HPC techniques in the context of remote sensing problems. It focuses on the computational complexity of algorithms that are designed for parallel computing and processing. A Diverse Collection of Parallel Computing Techniques and Architectures: The book first addresses key computing concepts and developments in remote sensing. It also covers application areas not necessarily related to remote sensing, such as multimedia and video processing. Each subsequent chapter illustrates a specific parallel computing paradigm, including multiprocessor (cluster-based) systems, large-scale and heterogeneous networks of computers, grid computing platforms, and specialized hardware architectures for remotely sensed data analysis and interpretation. An Interdisciplinary Forum to Encourage Novel Ideas: The extensive reviews of current and future developments combined with thoughtful perspectives on the potential challenges of adapting HPC paradigms to remote sensing problems will undoubtedly foster collaboration and development among many fields.
---
paper_title: FPGA Implementation of Abundance Estimation for Spectral Unmixing of Hyperspectral Data Using the Image Space Reconstruction Algorithm
paper_content:
One of the most popular and widely used techniques for analyzing remotely sensed hyperspectral data is spectral unmixing, which relies on two stages: (i) identification of pure spectral signatures (endmembers) in the data, and (ii) estimation of the abundance of each endmember in each (possibly mixed) pixel. Due to the high dimensionality of the hyperspectral data, spectral unmixing is a very time-consuming task. With recent advances in reconfigurable computing, especially using field programmable gate arrays (FPGAs), hyperspectral image processing algorithms can now be accelerated for on-board exploitation using compact hardware components with small size and cost. Although in previous work several efforts have been directed towards FPGA implementation of endmember extraction algorithms, the abundance estimation step has received comparatively much less attention. In this work, we develop a parallel FPGA-based design of the image space reconstruction algorithm (ISRA), a technique for solving linear inverse problems with positive constraints that has been used to estimate the abundance of each endmember in each pixel of a hyperspectral image. It is an iterative algorithm that guarantees convergence (after a certain number of iterations) and positive values in the results of the abundances (an important consideration in unmixing applications). Our system includes a direct memory access (DMA) module and implements a pre-fetching technique to hide the latency of the input/output communications. The method has been implemented on a Virtex-4 XC4VFX60 FPGA (a model that is similar to radiation-hardened FPGAs certified for space operation) and tested using real hyperspectral data sets collected by the Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada and the Jasper Ridge Biological Preserve in California. Experimental results demonstrate that our hardware version can significantly outperform an equivalent software version, thus being able to provide abundance estimation results in near real-time, which makes our reconfigurable system appealing for on-board hyperspectral data processing.
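The ISRA iteration itself is a short multiplicative update that preserves nonnegativity, which is what makes it attractive for a hardware pipeline. The reference version below is a plain software sketch of that update, not the FPGA design; inputs are assumed nonnegative, as is typical for reflectance data, and the demo matrices are hypothetical.

    import numpy as np

    def isra(A, y, iters=200):
        # Estimates x >= 0 with y ~ A @ x via the update x <- x * (A^T y) / (A^T A x).
        x = np.ones(A.shape[1])
        Aty = A.T @ y
        for _ in range(iters):
            x = x * Aty / (A.T @ (A @ x) + 1e-12)   # small epsilon avoids division by zero
        return x

    A = np.random.default_rng(2).random((30, 5))
    y = A @ np.array([0.2, 0.0, 0.5, 0.3, 0.0])
    print(np.round(isra(A, y), 3))                  # approximate abundance estimates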
---
paper_title: FPGA Implementation of the N-FINDR Algorithm for Remotely Sensed Hyperspectral Image Analysis
paper_content:
Hyperspectral remote sensing attempts to identify features in the surface of the Earth using sensors that generally provide large amounts of data. The data are usually collected by a satellite or an airborne instrument and sent to a ground station that processes it. The main bottleneck of this approach is the (often reduced) bandwidth connection between the satellite and the station, which drastically limits the information that can be sent and processed in real time. A possible way to overcome this problem is to include onboard computing resources able to preprocess the data, reducing its size by orders of magnitude. Reconfigurable field-programmable gate arrays (FPGAs) are a promising platform that allows hardware/software codesign and the potential to provide powerful onboard computing capability and flexibility at the same time. Since FPGAs can implement custom hardware solutions, they can reach very high performance levels. Moreover, using run-time reconfiguration, the functionality of the FPGA can be updated at run time as many times as needed to perform different computations. Hence, the FPGA can be reused for several applications reducing the number of computing resources needed. One of the most popular and widely used techniques for analyzing hyperspectral data is linear spectral unmixing, which relies on the identification of pure spectral signatures via a so-called endmember extraction algorithm. In this paper, we present the first FPGA design for N-FINDR, a widely used endmember extraction algorithm in the literature. Our system includes a direct memory access module and implements a prefetching technique to hide the latency of the input/output communications. The proposed method has been implemented on a Virtex-4 XC4VFX60 FPGA (a model that is similar to radiation-hardened FPGAs certified for space operation) and tested using real hyperspectral data collected by NASA's Earth Observing-1 Hyperion (a satellite instrument) and the Airborne Visible Infra-Red Imaging Spectrometer over the Cuprite mining district in Nevada and the Jasper Ridge Biological Preserve in California. Experimental results demonstrate that our hardware version of the N-FINDR algorithm can significantly outperform an equivalent software version and is able to provide accurate results in near real time, which makes our reconfigurable system appealing for onboard hyperspectral data processing.
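Conceptually, N-FINDR keeps a set of p candidate pixels and swaps members in whenever a swap enlarges the volume of the simplex they span in a (p-1)-dimensional reduced space. The toy sketch below follows that greedy scheme on already-reduced data; it omits the dimensionality-reduction step and all of the hardware-oriented engineering discussed in the abstract, and the demo data are hypothetical.

    import numpy as np

    def simplex_volume(E):
        # E: ((p-1) x p) matrix of candidate endmembers in the reduced space.
        return abs(np.linalg.det(np.vstack([np.ones(E.shape[1]), E])))

    def n_findr(X, p, seed=0):
        # X: ((p-1) x num_pixels) reduced pixel spectra.
        rng = np.random.default_rng(seed)
        idx = list(rng.choice(X.shape[1], p, replace=False))
        improved = True
        while improved:
            improved = False
            for j in range(p):                      # try to replace each endmember in turn
                best = simplex_volume(X[:, idx])
                for i in range(X.shape[1]):
                    trial = idx.copy(); trial[j] = i
                    v = simplex_volume(X[:, trial])
                    if v > best:
                        best, idx[j], improved = v, i, True
        return idx                                  # indices of the selected endmember pixels

    X = np.random.default_rng(1).random((2, 40))    # 40 pixels reduced to p-1 = 2 dimensions
    print(n_findr(X, p=3))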
---
| Title: Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Section 1: INTRODUCTION
Description 1: Provide an introduction to hyperspectral cameras and their applications, the concept of hyperspectral unmixing (HU), and an overview of linear and nonlinear mixing models.
Section 2: Linear and Nonlinear Mixing Models
Description 2: Discuss linear mixing and nonlinear mixing models in detail, incidents of light interaction with materials, and the applicability of each model in real scenarios.
Section 3: Brief Overview of Nonlinear Approaches
Description 3: Present an overview of various nonlinear unmixing approaches, including radiative transfer theory, kernel methods, and machine learning strategies.
Section 4: Hyperspectral Unmixing Processing Chain
Description 4: Detail the processing steps typically involved in hyperspectral unmixing including atmospheric correction, data reduction, endmember determination, and inversion.
Section 5: LINEAR MIXTURE MODEL
Description 5: Describe the linear mixture model (LMM), including the mathematical representation and characterization of the spectral unmixing inverse problem.
Section 6: SIGNAL SUBSPACE IDENTIFICATION
Description 6: Explain signal subspace identification, including various methods such as band selection/extraction, projection techniques, and model order inference.
Section 7: GEOMETRICAL BASED APPROACHES TO LINEAR SPECTRAL UNMIXING
Description 7: Discuss geometrical-based approaches divided into two main categories: Pure Pixel (PP) based algorithms and Minimum Volume (MV) based algorithms.
Section 8: STATISTICAL METHODS
Description 8: Explore statistical methods for hyperspectral unmixing, including Bayesian approaches and independent component analysis (ICA).
Section 9: SPARSE REGRESSION BASED UNMIXING
Description 9: Discuss sparse regression-based unmixing, the use of spectral libraries, and learning dictionaries directly from the dataset.
Section 10: SPATIAL-SPECTRAL CONTEXTUAL INFORMATION
Description 10: Address the integration of spatial and spectral information for unmixing purposes and various methods to exploit contextual information in hyperspectral images.
Section 11: SUMMARY
Description 11: Summarize the main points covered in the paper, including the state-of-the-art techniques in hyperspectral unmixing and future directions. |
1 Introduction Automata for XML — A Survey | 10 | ---
paper_title: Typechecking for XML transformers
paper_content:
We study the typechecking problem for XML (eXtensible Markup Language) transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.
---
paper_title: Typing and querying XML documents: some complexity bounds
paper_content:
We study the complexity bound of validating XML documents, viewed as labeled unranked ordered trees, against various typing systems like DTDs, XML schemas, tree automata ... We also consider query evaluation complexities for various fragments of XPath. For both problems, validation and query evaluation, we consider data and combined complexity bounds.
---
paper_title: On Recognizable Sets and Tree Automata
paper_content:
This chapter discusses a few aspects of recognizability in general algebraic structures. It presents the comparison of the two usual meanings of the term recognizable. The first one is algebraic and the second one is algorithmic. There is in addition a third use of the term recognizable in theories of sets of infinite words and of sets of infinite trees. This use corresponds to an extension of the algorithmic sense of recognizability: A word is accepted by an automaton if there exists an infinite computation of the automaton satisfying certain conditions concerning the states occurring infinitely many times. This notion of acceptance is not effective as it concerns infinite objects. It is not algebraic either as the corresponding sets of infinite trees or words are not recognizable with respect to any algebraic structure.
---
paper_title: Querying unranked trees with stepwise tree automata
paper_content:
The problem of selecting nodes in unranked trees is the most basic querying problem for XML. We propose stepwise tree automata for querying unranked trees. Stepwise tree automata can express the same monadic queries as monadic Datalog and monadic second-order logic. We prove this result by reduction to the ranked case, via a new systematic correspondence that relates unranked and ranked queries.
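The reduction to the ranked case rests on encoding an unranked tree as a binary one. A common encoding in this style is the "currying" of a node's child list, sketched below for illustration (the exact encoding used in the paper may differ in details); trees are represented as (label, children) pairs and '@' is a fresh binary symbol.

    def curry(tree):
        # a(t1, ..., tn) becomes @( ... @( @(a, curry(t1)), curry(t2)) ..., curry(tn)),
        # so every node of the encoded tree has arity 0 or 2.
        label, children = tree
        enc = label
        for child in children:
            enc = ("@", enc, curry(child))
        return enc

    doc = ("book", [("title", []), ("chapter", [("section", []), ("section", [])])])
    print(curry(doc))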
---
paper_title: Typechecking for XML transformers
paper_content:
We study the typechecking problem for XML (eXtensible Markup Language) transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.
---
paper_title: Querying unranked trees with stepwise tree automata
paper_content:
The problem of selecting nodes in unranked trees is the most basic querying problem for XML. We propose stepwise tree automata for querying unranked trees. Stepwise tree automata can express the same monadic queries as monadic Datalog and monadic second-order logic. We prove this result by reduction to the ranked case, via a new systematic correspondence that relates unranked and ranked queries.
---
paper_title: Typechecking for Semistructured Data
paper_content:
Semistructured data is used in data exchange applications, like B2B and EAI, and represents data in a flexible format. Every data item has a unique tag (also called label), and data items can be nested. Formally, a semistructured data instance is a tree whose nodes are labeled with tags and leaves are labeled with data values. XML [Con98] is a standard syntax for describing such trees; Fig. 1 shows a tree representing a semistructured data instance and its XML syntax. We will refer interchangeably to semistructured data instances as trees or XML trees.
---
paper_title: Visibly pushdown languages
paper_content:
We propose the class of visibly pushdown languages as embeddings of context-free languages that is rich enough to model program analysis questions and yet is tractable and robust like the class of regular languages. In our definition, the input symbol determines when the pushdown automaton can push or pop, and thus the stack depth at every position. We show that the resulting class VPL of languages is closed under union, intersection, complementation, renaming, concatenation, and Kleene-*, and problems such as inclusion that are undecidable for context-free languages are EXPTIME-complete for visibly pushdown automata. Our framework explains, unifies, and generalizes many of the decision procedures in the program analysis literature, and allows algorithmic verification of recursive programs with respect to many context-free properties including access control properties via stack inspection and correctness of procedures with respect to pre and post conditions. We demonstrate that the class VPL is robust by giving two alternative characterizations: a logical characterization using the monadic second order (MSO) theory over words augmented with a binary matching predicate, and a correspondence to regular tree languages. We also consider visibly pushdown languages of infinite words and show that the closure properties, MSO-characterization and the characterization in terms of regular trees carry over. The main difference with respect to the case of finite words turns out to be determinizability: nondeterministic Büchi visibly pushdown automata are strictly more expressive than deterministic Muller visibly pushdown automata.
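For XML-like streams the "visibly" restriction is natural: an opening tag always pushes and a closing tag always pops, so the stack height is fixed by the input. The fragment below checks only well-matchedness of such an event stream; a full visibly pushdown automaton would additionally carry a finite-state control and stack symbols, which are omitted here, and the event encoding is an assumption.

    def well_matched(events):
        # events: sequence of ('open', tag) / ('close', tag) pairs.
        stack = []
        for kind, tag in events:
            if kind == "open":
                stack.append(tag)                    # calls always push
            elif not stack or stack.pop() != tag:    # returns always pop
                return False
        return not stack

    print(well_matched([("open", "a"), ("open", "b"), ("close", "b"), ("close", "a")]))  # True
    print(well_matched([("open", "a"), ("close", "b")]))                                 # False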
---
paper_title: Deterministic Automata on Unranked Trees
paper_content:
We investigate bottom-up and top-down deterministic automata on unranked trees. We show that for an appropriate definition of bottom-up deterministic automata it is possible to minimize the number of states efficiently and to obtain a unique canonical representative of the accepted tree language. For top-down deterministic automata it is well known that they are less expressive than the non-deterministic ones. By generalizing a corresponding proof from the theory of ranked tree automata we show that it is decidable whether a given regular language of unranked trees can be recognized by a top-down deterministic automaton. The standard deterministic top-down model is slightly weaker than the model we use, where at each node the automaton can scan the sequence of the labels of its successors before deciding its next move.
---
paper_title: Minimizing tree automata for unranked trees
paper_content:
Automata for unranked trees form a foundation for XML schemas, querying and pattern languages. We study the problem of efficiently minimizing such automata. We start with the unranked tree automata (UTAs) that are standard in database theory, assuming bottom-up determinism and that horizontal recursion is represented by deterministic finite automata. We show that minimal UTAs in that class are not unique and that minimization is NP-hard. We then study more recent automata classes that do allow for polynomial time minimization. Among those, we show that bottom-up deterministic stepwise tree automata yield the most succinct representations.
---
paper_title: Tree acceptors and some of their applications
paper_content:
This paper concerns a generalization of finite automata, the "tree acceptors," which have as their inputs finite trees of symbols rather than the usual sequences of symbols. Ordinary finite automata prove to be special cases of tree acceptors, and many of the results of finite automata theory continue to hold in their appropriately generalized forms. The tree acceptors provide new characterizations of the classes of regular sets and of context-free languages. The theory of tree acceptors is applied to a decision problem of mathematical logic. It is shown here that the weak second-order theory of two successors is decidable, thus settling a problem of Büchi. This result is in turn applied to obtain positive solutions to the decision problems for various other theories, e.g., the weak second-order theories of order types built up from the finite types, ω, and η (the type of the rationals) by finitely many applications of the operations of order type addition, multiplication, and converse; and the weak second-order theory of locally free algebras with only unary operations.
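A deterministic bottom-up tree acceptor can be written as a transition table from (symbol, tuple of child states) to a state, with acceptance decided at the root. The sketch below runs such an automaton on trees of Boolean expressions, accepting exactly those that evaluate to true; the state names and the example signature are illustrative choices, not taken from the paper.

    def eval_state(tree, delta):
        symbol, children = tree
        child_states = tuple(eval_state(c, delta) for c in children)
        return delta[(symbol, child_states)]          # KeyError means the run is stuck

    def accepts(tree, delta, accepting):
        try:
            return eval_state(tree, delta) in accepting
        except KeyError:
            return False

    # Transitions over the ranked alphabet {and/2, or/2, 0/0, 1/0}.
    delta = {("0", ()): "q0", ("1", ()): "q1"}
    for a in "01":
        for b in "01":
            delta[("and", ("q" + a, "q" + b))] = "q1" if a == b == "1" else "q0"
            delta[("or",  ("q" + a, "q" + b))] = "q1" if "1" in (a, b) else "q0"

    t = ("or", [("0", []), ("and", [("1", []), ("1", [])])])
    print(accepts(t, delta, {"q1"}))                  # True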
---
paper_title: Monadic datalog and the expressive power of languages for Web information extraction
paper_content:
Research on information extraction from Web pages (wrapping) has seen much activity in recent times (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping.In this paper, we first study monadic datalog as a wrapping language (over ranked or unranked tree structures). Using previous work by Neven and Schwentick, we show that this simple language is equivalent to full monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO has the right expressiveness required for Web information extraction and thus propose MSO as a yardstick for evaluating and comparing wrappers.Using the above result, we study the kernel fragment Elog- of the Elog wrapping language used in the Lixto system (a visual wrapper generator). The striking fact here is that Elog- exactly captures MSO, yet is easier to use. Indeed, programs in this language can be entirely visually specified. We also formally compare Elog to other wrapping languages proposed in the literature.
---
paper_title: Generalized finite automata theory with an application to a decision problem of second-order logic
paper_content:
Many of the important concepts and results of conventional finite automata theory are developed for a generalization in which finite algebras take the place of finite automata. The standard closure theorems are proved for the class of sets “recognizable” by finite algebras, and a generalization of Kleene's regularity theory is presented. The theorems of the generalized theory are then applied to obtain a positive solution to a decision problem of second-order logic.
---
paper_title: On the power of tree-walking automata
paper_content:
Tree-walking automata (TWAs) recently received new attention in the fields of formal languages and databases. To achieve a better understanding of their expressiveness, we characterize them in terms of transitive closure logic formulas in normal form. It is conjectured by Engelfriet and Hoogeboom that TWAs cannot define all regular tree languages, or equivalently, all of monadic second-order logic. We prove this conjecture for a restricted, but powerful, class of TWAs. In particular, we show that 1-bounded TWAs, that is TWAs that are only allowed to traverse every edge of the input tree at most once in every direction, cannot define all regular languages. We then extend this result to a class of TWAs that can simulate first-order logic (FO) and is capable of expressing properties not definable in FO extended with regular path expressions; the latter logic being a valid abstraction of current query languages for XML and semistructured data.
---
paper_title: Translations on a context free grammar
paper_content:
Two schemes for the specification of translations on a context-free grammar are proposed. The first scheme, called a generalized syntax directed translation (GSDT), consists of a context free grammar with a set of semantic rules associated with each production of the grammar. In a GSDT an input word is parsed according to the underlying context free grammar, and at each node of the tree, a finite number of translation strings are computed in terms of the translation strings defined at the descendants of that node. The functional relationship between the length of input and length of output for translations defined by GSDT's is investigated. The second method for the specification of translations is in terms of tree automata - finite automata with output, operating on derivation trees of a context free grammar. It is shown that tree automata provide an exact characterization for those GSDT's with a linear relationship between input and output length.
---
paper_title: Tree Transducers, L Systems, and Two-Way Machines
paper_content:
A relationship between parallel rewriting systems and two-way machines is investigated. Restrictions on the "copying power" of these devices endow them with rich structuring and give insight into the issues of determinism, parallelism, and copying. Among the parallel rewriting systems considered are the top-down tree transducer, the generalized syntax-directed translation scheme, and the ETOL system, and among the two-way machines are the tree-walking automaton, the two-way finite-state transducer, and (generalizations of) the one-way checking stack automaton. The relationship of these devices to macro grammars is also considered. An effort is made to provide a systematic survey of a number of existing results.
---
paper_title: Parallel and two-way automata on directed ordered acyclic graphs
paper_content:
In this paper we study automata which work on directed ordered acyclic graphs, in particular those graphs, called derivation dags (d-dags), which model derivations of phrase-structure grammars. A rather complete characterization of the relative power of the following features of automata on d-dags is obtained: parallel versus sequential, deterministic versus nondeterministic, and finite state versus a (restricted type of) pushdown store. New results concerning trees follow as special cases. Closure properties of classes of d-dag languages definable by various automata are studied for some basic operations. Characterization of general directed ordered acyclic graphs by these automata is also given.
---
paper_title: Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure
paper_content:
String languages recognizable in (deterministic) log-space are characterized either by two-way (deterministic) multi-head automata, or, following Immerman, by first-order logic with (deterministic) transitive closure. Here we elaborate this result, and match the number of heads to the arity of the transitive closure. More precisely, first-order logic with k-ary deterministic transitive closure has the same power as deterministic automata walking on their input with k heads, additionally using a finite set of nested pebbles. This result is valid for strings, ordered trees, and in general for families of graphs having a fixed automaton that can be used to traverse the nodes of each of the graphs in the family. Other examples of such families are grids, toruses, and rectangular mazes. For nondeterministic automata, the logic is restricted to positive occurrences of transitive closure. The special case of k = 1 for trees shows that single-head deterministic tree-walking automata with nested pebbles are characterized by first-order logic with unary deterministic transitive closure. This refines our earlier result that placed these automata between first-order and monadic second-order logic on trees.
---
paper_title: Validating streaming XML documents
paper_content:
This paper investigates the on-line validation of streaming XML documents with respect to a DTD, under memory constraints. We first consider validation using constant memory, formalized by a finite-state automaton (FSA). We examine two flavors of the problem, depending on whether or not the XML document is assumed to be well-formed. The main results of the paper provide conditions on the DTDs under which validation of either flavor can be done using an FSA. For DTDs that cannot be validated by an FSA, we investigate two alternatives. The first relaxes the constant memory requirement by allowing a stack bounded in the depth of the XML document, while maintaining the deterministic, one-pass requirement. The second approach consists in refining the DTD to provide additional information that allows validation by an FSA.
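The "stack bounded in the depth of the document" alternative can be pictured as follows: each open element pushes a frame that records the children seen so far, and on the matching close tag that child sequence is checked against the element's content model. The toy validator below uses Python regular expressions over child-tag strings in place of compiled content-model DFAs; the DTD, tag names and event format are illustrative assumptions, not the paper's construction.

    import re

    dtd = {"book": r"(title)(chapter)+", "title": r"",
           "chapter": r"(section)*", "section": r""}

    def validate_stream(events, root="book"):
        # One pass over ('open', tag)/('close', tag) events; memory is a stack
        # whose height equals the current nesting depth of the document.
        stack = []
        for kind, tag in events:
            if kind == "open":
                if tag not in dtd or (not stack and tag != root):
                    return False
                if stack:
                    stack[-1][1].append(tag)          # record the child under its parent
                stack.append((tag, []))
            else:
                if not stack:
                    return False
                open_tag, children = stack.pop()
                # Joining tag names is unambiguous here thanks to the toy vocabulary.
                if open_tag != tag or not re.fullmatch(dtd[tag], "".join(children)):
                    return False
        return not stack

    ok = [("open", "book"), ("open", "title"), ("close", "title"),
          ("open", "chapter"), ("open", "section"), ("close", "section"),
          ("close", "chapter"), ("close", "book")]
    print(validate_stream(ok))                        # True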
---
paper_title: Visibly pushdown languages
paper_content:
We propose the class of visibly pushdown languages as embeddings of context-free languages that is rich enough to model program analysis questions and yet is tractable and robust like the class of regular languages. In our definition, the input symbol determines when the pushdown automaton can push or pop, and thus the stack depth at every position. We show that the resulting class VPL of languages is closed under union, intersection, complementation, renaming, concatenation, and Kleene-*, and problems such as inclusion that are undecidable for context-free languages are EXPTIME-complete for visibly pushdown automata. Our framework explains, unifies, and generalizes many of the decision procedures in the program analysis literature, and allows algorithmic verification of recursive programs with respect to many context-free properties including access control properties via stack inspection and correctness of procedures with respect to pre and post conditions. We demonstrate that the class VPL is robust by giving two alternative characterizations: a logical characterization using the monadic second order (MSO) theory over words augmented with a binary matching predicate, and a correspondence to regular tree languages. We also consider visibly pushdown languages of infinite words and show that the closure properties, MSO-characterization and the characterization in terms of regular trees carry over. The main difference with respect to the case of finite words turns out to be determinizability: nondeterministic Büchi visibly pushdown automata are strictly more expressive than deterministic Muller visibly pushdown automata.
---
paper_title: Typing and querying XML documents: some complexity bounds
paper_content:
We study the complexity bound of validating XML documents, viewed as labeled unranked ordered trees, against various typing systems like DTDs, XML schemas, tree automata ... We also consider query evaluation complexities for various fragments of XPath. For both problems, validation and query evaluation, we consider data and combined complexity bounds.
---
paper_title: Deciding equivalence of finite tree automata
paper_content:
We show: for every constant m it can be decided in polynomial time whether or not two m-ambiguous finite tree automata are equivalent. In general, inequivalence for finite tree automata is DEXPTIME-complete w.r.t. logspace reductions, and PSPACE-complete w.r.t. logspace reductions, if the automata in question are supposed to accept only finite languages. For finite tree automata with coefficients in a field R we give a polynomial time algorithm for deciding ambiguity-equivalence provided R-operations and R-tests for 0 can be performed in constant time. We apply this algorithm for deciding ambiguity-inequivalence of finite tree automata in randomized polynomial time.
---
paper_title: Efficient incremental validation of XML documents
paper_content:
We discuss incremental validation of XML documents with respect to DTDs and XML schema definitions. We consider insertions and deletions of subtrees, as opposed to leaf nodes only, and we also consider the validation of ID and IDREF attributes. For arbitrary schemas, we give a worst-case n log n time and linear space algorithm, and show that it often is far superior to revalidation from scratch. We present two classes of schemas, which capture most real-life DTDs, and show that they admit a logarithmic time incremental validation algorithm that, in many cases, requires only constant auxiliary space. We then discuss an implementation of these algorithms that is independent of, and can be customized for different storage mechanisms for XML. Finally, we present extensive experimental results showing that our approach is highly efficient and scalable.
---
paper_title: Regular tree languages definable in FO
paper_content:
We consider regular languages of ranked labeled trees. We give an algebraic characterization of the regular languages over such trees that are definable in first-order logic in the language of labeled graphs. These languages are the analog on ranked trees of the “locally threshold testable” languages on strings. We show that this characterization yields a decision procedure for determining whether a regular collection of trees is first-order definable: the procedure is polynomial time in the minimal automaton presenting the regular language.
---
paper_title: Checking potential validity of XML documents
paper_content:
The creation of document-centric XML documents often starts with prepared textual content, into which the editor introduces markup. In such situations, intermediate XML is almost never valid with respect to the DTD/Schema used for the encoding. At the same time, it is important to ensure that at each moment of time, the editor is working with an XML document that can be enriched with further markup to become valid. In this paper we introduce the notion of potential validity of XML documents, which allows us to distinguish between XML documents that are invalid because the encoding is simply incomplete and XML documents that are invalid because some of the DTD rules guiding the structure of the encoding were violated during the markup process. We give a linear-time algorithm for checking potential validity for documents.
---
paper_title: Deterministic Automata on Unranked Trees
paper_content:
We investigate bottom-up and top-down deterministic automata on unranked trees. We show that for an appropriate definition of bottom-up deterministic automata it is possible to minimize the number of states efficiently and to obtain a unique canonical representative of the accepted tree language. For top-down deterministic automata it is well known that they are less expressive than the non-deterministic ones. By generalizing a corresponding proof from the theory of ranked tree automata we show that it is decidable whether a given regular language of unranked trees can be recognized by a top-down deterministic automaton. The standard deterministic top-down model is slightly weaker than the model we use, where at each node the automaton can scan the sequence of the labels of its successors before deciding its next move.
---
paper_title: Minimizing tree automata for unranked trees
paper_content:
Automata for unranked trees form a foundation for XML schemas, querying and pattern languages. We study the problem of efficiently minimizing such automata. We start with the unranked tree automata (UTAs) that are standard in database theory, assuming bottom-up determinism and that horizontal recursion is represented by deterministic finite automata. We show that minimal UTAs in that class are not unique and that minimization is NP-hard. We then study more recent automata classes that do allow for polynomial time minimization. Among those, we show that bottom-up deterministic stepwise tree automata yield the most succinct representations.
---
paper_title: Constructing Finite State Automata for High-Performance XML Web Services
paper_content:
This paper describes a validating XML parsing method based on deterministic finite state automata (DFA). XML parsing and validation are performed by a schema-specific XML parser that encodes the admissible parsing states as a DFA. This DFA is automatically constructed from the XML schemas of XML messages using a code generator. A two-level DFA architecture is used to increase efficiency and to reduce the generated code size. The lower-level DFA efficiently parses syntactically well-formed XML messages. The higher-level DFA validates the messages and produces application events associated with transitions in the DFA. Two example case studies are presented and performance results are given to demonstrate that the approach supports the implementation of high-performance Web services.
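As an aside, the core of such a schema-specific parser is essentially a transition table over element names. The following minimal sketch (with an invented content model book -> title, author+, year? and invented state names) shows how a content model compiled into a DFA checks a child sequence in one left-to-right pass; it is only an illustration of the idea, not the two-level architecture or code generator described in the paper.

    # Hand-built DFA for the invented content model  book -> title, author+, year?
    DFA = {
        ("start", "title"): "after_title",
        ("after_title", "author"): "authors",
        ("authors", "author"): "authors",
        ("authors", "year"): "done",
    }
    ACCEPTING = {"authors", "done"}

    def valid_children(child_tags):
        state = "start"
        for tag in child_tags:
            state = DFA.get((state, tag))
            if state is None:          # no transition defined: the sequence is rejected
                return False
        return state in ACCEPTING

    print(valid_children(["title", "author", "author", "year"]))  # True
    print(valid_children(["author", "title"]))                    # False
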
---
paper_title: Counting in Trees for Free
paper_content:
It is known that MSO logic for ordered unranked trees is undecidable if Presburger constraints are allowed at children of nodes. We show here that a decidable logic is obtained if we use a modal fixpoint logic instead. We present a characterization of this logic by means of deterministic Presburger tree automata and show how it can be used to express numerical document queries. Surprisingly, the complexity of satisfiability for the extended logic is asymptotically the same as for the original fixpoint logic. The non-emptiness problem for Presburger tree automata (PTA) is PSPACE-complete, which is moderate given that it is already PSPACE-hard to test whether the complement of a regular expression is non-empty. We also identify a subclass of PTAs with a tractable non-emptiness problem. Further, deciding whether a tree t satisfies a formula ϕ takes time polynomial in the size of ϕ and linear in the size of t.
---
paper_title: DTDs versus XML schema: a practical study
paper_content:
Among the various proposals answering the shortcomings of Document Type Definitions (DTDs), XML Schema is the most widely used. Although DTDs and XML Schema Definitions (XSDs) differ syntactically, they are still quite related on an abstract level. Indeed, freed from all syntactic sugar, XML Schemas can be seen as an extension of DTDs with a restricted form of specialization. In the present paper, we inspect a number of DTDs and XSDs harvested from the web and try to answer the following questions: (1) which of the extra features/expressiveness of XML Schema not allowed by DTDs are effectively used in practice; and, (2) how sophisticated are the structural properties (i.e. the nature of regular expressions) of the two formalisms. It turns out that at present real-world XSDs only sparingly use the new features introduced by XML Schema: on a structural level the vast majority of them can already be defined by DTDs. Further, we introduce a class of simple regular expressions and obtain that a surprisingly high fraction of the content models belong to this class. The latter result sheds light on the justification of simplifying assumptions that sometimes have to be made in XML research.
---
paper_title: Minimal Ascending and Descending Tree Automata
paper_content:
We propose a generalization of the notion "deterministic" to "l-r-deterministic" for descending tree automata (also called root-to-frontier). The corresponding subclass of recognizable tree languages is characterized by a structural property that we name "homogeneous." Given a descending tree automaton recognizing a homogeneous tree language, it can be left-to-right (l-r) determinized and then minimized. The obtained minimal l-r-deterministic tree automaton is characterized algebraically. We exhibit a formal correspondence between the two evaluation modes on trees (ascending and descending) and the two on words (right-to-left and left-to-right). This is possible by embedding trees into the free monoid of pointed trees. We obtain a unified view of the theories of minimization of deterministic ascending and l-r-deterministic descending tree automata.
---
paper_title: Numerical document queries
paper_content:
A query against a database behind a site like Napster may search, e.g., for all users who have downloaded more jazz titles than pop music titles. In order to express such queries, we extend classical monadic second-order logic by Presburger predicates which pose numerical restrictions on the children (content) of an element node and provide a precise automata-theoretic characterization. While the existential fragment of the resulting logic is decidable, it turns out that satisfiability of the full logic is undecidable. Decidable satisfiability and a querying algorithm even with linear data complexity can be obtained if numerical constraints are only applied to those contents of elements where ordering is irrelevant. Finally, it is sketched how these techniques can be extended also to answer questions like, e.g., whether the total price of the jazz music downloaded so far exceeds a user's budget.
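To make the flavor of such numerical constraints concrete, the sketch below counts the labels of a node's children and evaluates a "more jazz than pop" condition, mirroring the Napster example from the abstract. The tuple representation of trees and the label names are invented for illustration; real Presburger constraints are of course evaluated inside the automaton and logic framework of the paper.

    from collections import Counter

    # A node is (label, [children]); the concrete document below is invented.
    user = ("user", [("jazz", []), ("jazz", []), ("pop", []), ("jazz", [])])

    def child_counts(node):
        _, children = node
        return Counter(label for label, _ in children)

    def more_jazz_than_pop(node):
        # Presburger-style constraint on the multiset of children: #jazz > #pop
        counts = child_counts(node)
        return counts["jazz"] > counts["pop"]

    print(more_jazz_than_pop(user))  # True: 3 jazz downloads vs 1 pop download
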
---
paper_title: Languages, Automata, and Logic
paper_content:
The subject of this chapter is the study of formal languages (mostly languages recognizable by finite automata) in the framework of mathematical logic.
---
paper_title: Cardinality Constraint Automata: A Core Technology for Efficient XML Schema-aware Parsers
paper_content:
---
paper_title: Incremental validation of XML documents
paper_content:
We investigate the incremental validation of XML documents with respect to DTDs, specialized DTDs, and XML Schemas, under updates consisting of element tag renamings, insertions, and deletions. DTDs are modeled as extended context-free grammars. "Specialized DTDs" allow the decoupling of element types from element tags. XML Schemas are abstracted as specialized DTDs with limitations on the type assignment. For DTDs and XML Schemas, we exhibit an O(m log n) incremental validation algorithm using an auxiliary structure of size O(n), where n is the size of the document and m the number of updates. The algorithm does not handle the incremental validation of XML Schema wrt renaming of internal nodes, which is handled by the specialized DTDs incremental validation algorithm. For specialized DTDs, we provide an O(m log^2 n) incremental algorithm, again using an auxiliary structure of size O(n). This is a significant improvement over brute-force re-validation from scratch. We exhibit a restricted class of DTDs called local that arise commonly in practice and for which incremental validation can be done in practically constant time by maintaining only a list of counters. We present implementations of both general incremental validation and local validation on an XML database built on top of a relational database. Our experimentation includes a study of the applicability of local validation in practice, results on the calibration of parameters of the auxiliary data structure, and results on the performance comparison between the general incremental validation technique, the local validation technique, and brute-force validation from scratch.
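The reason local revalidation can be so cheap is that DTD validity is a per-node property: a node is valid when its sequence of child labels matches the content model of its own label, so an update only forces rechecking of the nodes whose child sequence actually changed. The sketch below illustrates that locality with Python regular expressions over child-label strings; the DTD and document are invented, and this is not the paper's O(m log n) auxiliary-structure algorithm nor its counter-based local validation.

    import re

    # Invented DTD: content models as regexes over space-terminated child labels.
    DTD = {
        "db":     re.compile(r"(book )*"),
        "book":   re.compile(r"title (author )+(year )?"),
        "title":  re.compile(r""),
        "author": re.compile(r""),
        "year":   re.compile(r""),
    }

    def check_node(label, child_labels):
        word = "".join(l + " " for l in child_labels)
        return DTD[label].fullmatch(word) is not None

    children_of_book = ["title", "author"]
    assert check_node("book", children_of_book)        # initially valid

    children_of_book.insert(2, "author")                # update: insert one more author subtree
    # Only this node's content model needs rechecking; siblings and ancestors are
    # unaffected as long as their own child sequences did not change.
    print(check_node("book", children_of_book))         # True
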
---
paper_title: XML schema, tree logic and sheaves automata
paper_content:
XML documents, and other forms of semi-structured data, may be roughly described as edge labeled trees; it is therefore natural to use tree automata to reason on them. This idea has already been successfully applied in the context of Document Type Definition (DTD), the simplest standard for defining XML documents validity, but additional work is needed to take into account XML Schema, a more advanced standard, for which regular tree automata are not satisfactory. In this paper, we define a tree logic that directly embeds XML Schema as a plain subset as well as a new class of automata for unranked trees, used to decide this logic, which is well-suited to the processing of XML documents and schemas.
---
paper_title: DTD Inference for Views of XML Data
paper_content:
We study the inference of Data Type Definitions (DTDs) for views of XML data, using an abstraction that focuses on document content structure. The views are defined by a query language that produces a list of documents selected from one or more input sources. The selection conditions involve vertical and horizontal navigation, thus querying explicitly the order present in input documents. We point out several strong limitations in the descriptive ability of current DTDs and the need for extending them with (i) a subtyping mechanism and (ii) a more powerful specification mechanism than regular languages, such as context-free languages. With these extensions, we show that one can always infer tight DTDs that precisely characterize a selection view on sources satisfying given DTDs. We also show important special cases where one can infer a tight DTD without requiring extension (ii). Finally we consider related problems such as verifying conformance of a view definition with a predefined DTD. Extensions to more powerful views that construct complex documents are also briefly discussed.
---
paper_title: Taxonomy of XML schema languages using formal language theory
paper_content:
On the basis of regular tree grammars, we present a formal framework for XML schema languages. This framework helps to describe, compare, and implement such schema languages in a rigorous manner. Our main results are as follows: (1) a simple framework to study three classes of tree languages (local, single-type, and regular); (2) classification and comparison of schema languages (DTD, W3C XML Schema, and RELAX NG) based on these classes; (3) efficient document validation algorithms for these classes; and (4) other grammatical concepts and advanced validation algorithms relevant to an XML model (e.g., binarization, derivative-based validation).
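The practical difference between a local grammar (DTD) and a single-type grammar (XML Schema) is that in the latter the type of a child element is determined by the parent's type together with the child's tag, so the same tag can get different content models in different contexts while typing still works in one deterministic top-down pass. A minimal sketch of single-type validation follows; the grammar, type names and tree encoding are all invented for illustration.

    import re

    # Type of a child = function of (parent type, child tag) -- the single-type restriction.
    CHILD_TYPE = {
        ("Root", "person"): "Person",        ("Root", "company"): "Company",
        ("Person", "name"): "PersonName",    ("Company", "name"): "CompanyName",
        ("PersonName", "first"): "Text",     ("PersonName", "last"): "Text",
        ("CompanyName", "legal"): "Text",
    }
    CONTENT = {   # content model of each type, over the (space-terminated) types of its children
        "Root":        re.compile(r"((Person|Company) )*"),
        "Person":      re.compile(r"PersonName "),
        "Company":     re.compile(r"CompanyName "),
        "PersonName":  re.compile(r"Text Text "),
        "CompanyName": re.compile(r"Text "),
        "Text":        re.compile(r""),
    }

    def validate(node, typ):
        tag, children = node
        child_types = []
        for child_tag, _ in children:
            ct = CHILD_TYPE.get((typ, child_tag))
            if ct is None:
                return False
            child_types.append(ct)
        if not CONTENT[typ].fullmatch("".join(t + " " for t in child_types)):
            return False
        return all(validate(c, t) for c, t in zip(children, child_types))

    doc = ("root", [("person",  [("name", [("first", []), ("last", [])])]),
                    ("company", [("name", [("legal", [])])])])
    print(validate(doc, "Root"))  # True: the two <name> elements receive different types

In a local grammar, the type table could depend only on the child's tag, so the two name elements above could not be given different content models.
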
---
paper_title: Using regular tree automata as XML schemas
paper_content:
We address the problem of tight XML schemas and propose regular tree automata to model XML data. We show that the tree automata model is more powerful than the XML DTDs and is closed under main algebraic operations. We introduce the XML query algebra based on the tree automata model, and discuss the query optimization and query pruning techniques. Finally we show the conversion of tree automata schema into XML DTDs.
---
paper_title: Regular expression pattern matching for XML
paper_content:
We propose regular expression pattern matching as a core feature for programming languages for manipulating XML (and similar tree-structured data formats). We extend conventional pattern-matching facilities with regular expression operators such as repetition (*), alternation (|), etc., that can match arbitrarily long sequences of subtrees, allowing a compact pattern to extract data from the middle of a complex sequence. We show how to check standard notions of exhaustiveness and redundancy for these patterns. Regular expression patterns are intended to be used in languages whose type systems are also based on the regular expression types. To avoid excessive type annotations, we develop a type inference scheme that propagates type constraints to pattern variables from the surrounding context. The type inference algorithm translates types and patterns into regular tree automata and then works in terms of standard closure operations (union, intersection, and difference) on tree automata. The main technical challenge is dealing with the interaction of repetition and alternation patterns with the first-match policy, which gives rise to subtleties concerning both the termination and the precision of the analysis. We address these issues by introducing a data structure representing closure operations lazily.
---
paper_title: Expressiveness of XSDs: from practice to theory, there and back again
paper_content:
On an abstract level, XML Schema increases the limited expressive power of Document Type Definitions (DTDs) by extending them with a recursive typing mechanism. However, an investigation of the XML Schema Definitions (XSDs) occurring in practice reveals that the vast majority of them are structurally equivalent to DTDs. This might be due to the complexity of the XML Schema specification and the difficulty to understand the effect of constraints on typing and validation of schemas. To shed some light on the actual expressive power of XSDs this paper studies the impact of the Element Declarations Consistent (EDC) and the Unique Particle Attribution (UPA) rule. An equivalent formalism based on contextual patterns rather than on recursive types is proposed which might serve as a light-weight front end for XML Schema. Finally, the effect of EDC and UPA on the way XML documents can be typed is discussed. It is argued that a cleaner, more robust, stronger but equally efficient class is obtained by replacing EDC and UPA with the notion of 1-pass preorder typing: schemas that allow to determine the type of an element of a streaming document when its opening tag is met. This notion can be defined in terms of restrained competition regular expressions and there is again an equivalent syntactical formalism based on contextual patterns.
---
paper_title: Which XML schemas admit 1-pass preorder typing?
paper_content:
It is shown that the class of regular tree languages admitting one-pass preorder typing is exactly the class defined by restrained competition tree grammars introduced by Murata et al. [14]. In a streaming context, the former is the largest class of XSDs where every element in a document can be typed when its opening tag is met. The main technical machinery consists of semantical characterizations of restrained competition grammars and their subclasses. In particular, they can be characterized in terms of the context of nodes, closure properties, allowed patterns and guarded DTDs. It is further shown that deciding whether a schema is restrained competition is tractable. Deciding whether a schema is equivalent to a restrained competition tree grammar, or one of its subclasses, is much more difficult: it is complete for EXPTIME. We show that our semantical characterizations allow for easy optimization and minimization algorithms. Finally, we relate the notion of one-pass preorder typing to the existing XML Schema standard.
---
paper_title: Complexity of Decision Problems for Simple Regular Expressions
paper_content:
We study the complexity of the inclusion, equivalence, and intersection problem for simple regular expressions arising in practical XML schemas. These basically consist of the concatenation of factors where each factor is a disjunction of strings possibly extended with '*' or '?'. We obtain lower and upper bounds for various fragments of simple regular expressions. Although we show that inclusion and intersection are already intractable for very weak expressions, we also identify some tractable cases. For equivalence, we only prove an initial tractability result leaving the complexity of more general cases open. The main motivation for this research comes from database theory, or more specifically XML and semi-structured data. In particular, we show that all lower and upper bounds for inclusion and equivalence carry over to the corresponding decision problems for extended context-free grammars and single-type tree grammars, which are abstractions of DTDs and XML Schemas, respectively. For intersection, we show that the complexity only carries over for DTDs.
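To fix intuition, a simple expression in this sense can be represented as a list of factors, each being a set of element names plus an optional '*' or '?' modifier. The sketch below compiles such a list (the list representation and the example content model are invented) into an ordinary regular expression and tests child sequences against it.

    import re

    # title (author | editor)* note?   as a list of (names, modifier) factors (invented encoding).
    simple_expr = [
        (("title",), ""),
        (("author", "editor"), "*"),
        (("note",), "?"),
    ]

    def compile_simple(expr):
        parts = []
        for names, modifier in expr:
            alternatives = "|".join(re.escape(n) + " " for n in names)
            parts.append("(" + alternatives + ")" + modifier)
        return re.compile("".join(parts))

    def matches(expr, child_labels):
        word = "".join(l + " " for l in child_labels)
        return compile_simple(expr).fullmatch(word) is not None

    print(matches(simple_expr, ["title", "author", "editor", "note"]))  # True
    print(matches(simple_expr, ["author", "title"]))                    # False
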
---
paper_title: Validating streaming XML documents
paper_content:
This paper investigates the on-line validation of streaming XML documents with respect to a DTD, under memory constraints. We first consider validation using constant memory, formalized by a finite-state automaton (FSA). We examine two flavors of the problem, depending on whether or not the XML document is assumed to be well-formed. The main results of the paper provide conditions on the DTDs under which validation of either flavor can be done using an FSA. For DTDs that cannot be validated by an FSA, we investigate two alternatives. The first relaxes the constant memory requirement by allowing a stack bounded in the depth of the XML document, while maintaining the deterministic, one-pass requirement. The second approach consists in refining the DTD to provide additional information that allows validation by an FSA.
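The "stack bounded in the depth of the document" alternative has a very direct implementation: keep one content-model automaton state per open element. The sketch below validates a SAX-like event stream against an invented toy DTD in this way; the element names, the DFA encoding and the event format are all assumptions made for the example.

    # Content-model DFAs over child tags for the invented DTD:
    #   db -> book*     book -> title author*     title, author -> empty
    DFAS = {
        "db":     {"trans": {(0, "book"): 0},                    "start": 0, "accept": {0}},
        "book":   {"trans": {(0, "title"): 1, (1, "author"): 1}, "start": 0, "accept": {1}},
        "title":  {"trans": {},                                  "start": 0, "accept": {0}},
        "author": {"trans": {},                                  "start": 0, "accept": {0}},
    }

    def validate_stream(events):
        """events: ('start', tag) / ('end', tag) pairs, as a SAX parser would deliver them."""
        stack = []                                   # one (tag, dfa_state) pair per open element
        for kind, tag in events:
            if kind == "start":
                if stack:                            # feed the new child into the parent's DFA
                    parent_tag, parent_state = stack[-1]
                    parent_state = DFAS[parent_tag]["trans"].get((parent_state, tag))
                    if parent_state is None:
                        return False
                    stack[-1] = (parent_tag, parent_state)
                stack.append((tag, DFAS[tag]["start"]))
            else:                                    # 'end': the element's content must be complete
                closed_tag, closed_state = stack.pop()
                if closed_tag != tag or closed_state not in DFAS[closed_tag]["accept"]:
                    return False
        return not stack

    stream = [("start", "db"), ("start", "book"), ("start", "title"), ("end", "title"),
              ("start", "author"), ("end", "author"), ("end", "book"), ("end", "db")]
    print(validate_stream(stream))  # True

Memory use here is one automaton state per open element, i.e. proportional to the document depth rather than its size; the paper's first alternative asks when even this stack can be dispensed with.
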
---
paper_title: TypEx: A Type Based Approach to XML Stream Querying
paper_content:
We consider the topic of query evaluation over semistructured information streams, and XML data streams in particular. Streaming evaluation methods are necessarily event-driven, which is in tension with high-level query models; in general, the more expressive the query language, the harder it is to translate queries into an event-based implementation with finite resource bounds.
---
paper_title: On validation of XML streams using finite state machines
paper_content:
We study validation of streamed XML documents by means of finite state machines. Previous work has shown that validation is in principle possible by finite state automata, but the construction was prohibitively expensive, giving an exponential-size nondeterministic automaton. Instead, we want to find deterministic automata for validating streamed documents: for them, the complexity of validation is constant per tag. We show that for a reading window of size one and nonrecursive DTDs with one-unambiguous content (i.e. conforming to the current XML standard) there is an algorithm producing a deterministic automaton that validates documents with respect to that DTD. The size of the automaton is at most exponential and we give matching lower bounds. To capture the possible advantages offered by reading windows of size k, we introduce k-unambiguity as a generalization of one-unambiguity, and study the validation against DTDs with k-unambiguous content. We also consider recursive DTDs and give conditions under which they can be validated against by using one-counter automata.
---
paper_title: Querying unranked trees with stepwise tree automata
paper_content:
The problem of selecting nodes in unranked trees is the most basic querying problem for XML. We propose stepwise tree automata for querying unranked trees. Stepwise tree automata can express the same monadic queries as monadic Datalog and monadic second-order logic. We prove this result by reduction to the ranked case, via a new systematic correspondence that relates unranked and ranked queries.
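Stepwise automata exploit the fact that unranked trees can be encoded as binary trees, so ordinary ranked tree-automata machinery applies. The sketch below shows a currying-style encoding in which a node with children t1,...,tn becomes a left-leaning spine of binary '@' nodes; the tuple representation is invented, and first-child/next-sibling is another common encoding with the same effect of reducing unranked trees to ranked ones.

    # Unranked tree: (label, [children]).  Curried binary encoding:
    #   a(t1, ..., tn)  ->  @( ... @(@(a, t1'), t2') ..., tn')   with a now a leaf.
    def curry(tree):
        label, children = tree
        encoded = (label, None, None)               # leaf carrying the original label
        for child in children:
            encoded = ("@", encoded, curry(child))  # extend by one encoded child at a time
        return encoded

    doc = ("list", [("item", []), ("item", [("b", [])])])
    print(curry(doc))
    # ('@', ('@', ('list', None, None), ('item', None, None)),
    #       ('@', ('item', None, None), ('b', None, None)))
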
---
paper_title: Locating Matches of Tree Patterns in Forests
paper_content:
We deal with matching and locating of patterns in forests of variable arity. A pattern consists of a structural and a contextual condition for subtrees of a forest, both of which are given as tree or forest regular languages. We use the notation of constraint systems to uniformly specify both kinds of conditions. In order to implement pattern matching we introduce the class of pushdown forest automata. We identify a special class of contexts such that not only pattern matching but also locating all of a forest’s subtrees matching in context can be performed in a single traversal. We also give a method for computing the reachable states of an automaton in order to minimize the size of transition tables.
---
paper_title: Expressiveness of structured document query languages based on attribute grammars
paper_content:
Structured document databases can be naturally viewed as derivation trees of a context-free grammar. Under this view, the classical formalism of attribute grammars becomes a formalism for structured document query languages. From this perspective, we study the expressive power of BAGs: Boolean-valued attribute grammars with propositional logic formulas as semantic rules, and RAGs: relation-valued attribute grammars with first-order logic formulas as semantic rules. BAGs can express only unary queries; RAGs can express queries of any arity. We first show that the (unary) queries expressible by BAGs are precisely those definable in monadic second-order logic. We then show that the queries expressible by RAGs are precisely those definable by first-order inductions of linear depth, or, equivalently, those computable in linear time on a parallel machine with polynomially many processors. Further, we show that RAGs that only use synthesized attributes are strictly weaker than RAGs that use both synthesized and inherited attributes. We show that RAGs are more expressive than monadic second-order logic for queries of any arity. Finally, we discuss relational attribute grammars in the context of BAGs and RAGs. We show that in the case of BAGs this does not increase the expressive power, while different semantics for relational RAGs capture the complexity classes NP, coNP and UP ∩ coUP.
---
paper_title: Comparing the succinctness of monadic query languages over finite trees
paper_content:
We study the succinctness of monadic second-order logic and a variety of monadic fixed point logics on trees. All these languages are known to have the same expressive power on trees, but some can express the same queries much more succinctly than others. For example, we show that, under some complexity theoretic assumption, monadic second-order logic is non-elementarily more succinct than monadic least fixed point logic, which in turn is non-elementarily more succinct than monadic datalog.
---
paper_title: Attribute Grammars for Unranked Trees as a query language for structured documents
paper_content:
Document specification languages, like for instance XML, model documents using extended context-free grammars. These differ from standard context-free grammars in that they allow arbitrary regular expressions on the right-hand side of productions. To query such documents, we introduce a new form of attribute grammars (extended AGs) that work directly over extended context-free grammars rather than over standard context-free grammars. Viewed as a query language, extended AGs are particularly relevant as they can take into account the inherent order of the children of a node in a document. We show that non-circularity remains decidable in EXPTIME and establish the complexity of the non-emptiness and equivalence problem of extended AGs to be complete for EXPTIME. As an application we show that the Region Algebra expressions can be efficiently translated into extended AGs. This translation drastically improves the known upper bound on the complexity of the emptiness and equivalence test for Region Algebra expressions from non-elementary to EXPTIME. Finally, we characterize the expressiveness of extended AGs in terms of monadic second-order logic.
---
paper_title: The Regularity of Two-Way Nondeterministic Tree Automata Languages
paper_content:
We establish that regularly extended two-way nondeterministic tree automata with unranked alphabets have the same expressive power as regularly extended nondeterministic tree automata with unranked alphabets. We obtain this result by establishing regularly extended versions of a congruence on trees and of a congruence on so-called views. Our motivation for the study of these tree models is the Extensible Markup Language (XML), a metalanguage for defining document grammars. Such grammars have regular sets of right-hand sides for their productions and tree automata provide an alternative and useful modeling tool for them. In particular, we believe that they provide a useful computational model for what we call caterpillar expressions.
---
paper_title: Monadic datalog and the expressive power of languages for Web information extraction
paper_content:
Research on information extraction from Web pages (wrapping) has seen much activity in recent times (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping.In this paper, we first study monadic datalog as a wrapping language (over ranked or unranked tree structures). Using previous work by Neven and Schwentick, we show that this simple language is equivalent to full monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO has the right expressiveness required for Web information extraction and thus propose MSO as a yardstick for evaluating and comparing wrappers.Using the above result, we study the kernel fragment Elog- of the Elog wrapping language used in the Lixto system (a visual wrapper generator). The striking fact here is that Elog- exactly captures MSO, yet is easier to use. Indeed, programs in this language can be entirely visually specified. We also formally compare Elog to other wrapping languages proposed in the literature.
---
paper_title: Satisfiability of XPath Expressions
paper_content:
In this paper, we investigate the complexity of deciding the satisfiability of XPath 2.0 expressions, i.e., whether there is an XML document for which their result is nonempty. Several fragments that allow certain types of expressions are classified as either in PTIME or NP-hard to see which type of expression make this a hard problem. Finally, we establish a link between XPath expressions and partial tree descriptions which are studied in computational linguistics.
---
paper_title: First order paths in ordered trees
paper_content:
We give two sufficient conditions on XPath like languages for having first order expressivity, meaning that every first order definable set of paths in an ordered node-labeled tree is definable in that XPath language. They are phrased in terms of expansions of navigational (sometimes called “Core”) XPath. Adding either complementation, or the more elegant conditional paths is sufficient. A conditional path is an axis relation of the form (one_step_axis::n[F])+, denoting the transitive closure of the relation expressed by one_step_axis::n[F]. As neither is expressible in navigational XPath we also give characterizations in terms of first order logic of the answer sets and the sets of paths navigational XPath can define. The first in terms of a suitable two variable fragment, the second in terms of unions of conjunctive queries.
---
paper_title: Efficient algorithms for processing XPath queries
paper_content:
Our experimental analysis of several popular XPath processors reveals a striking fact: Query evaluation in each of the systems requires time exponential in the size of queries in the worst case. We show that XPath can be processed much more efficiently, and propose main-memory algorithms for this problem with polynomial-time combined query evaluation complexity. Moreover, we show how the main ideas of our algorithm can be profitably integrated into existing XPath processors. Finally, we present two fragments of XPath for which linear-time query processing algorithms exist and another fragment with linear-space/quadratic-time query processing.
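The exponential behaviour of naive processors comes from re-evaluating the same subexpression at the same context node over and over; evaluating each location step once on a whole set of context nodes, with duplicate elimination, keeps every intermediate result no larger than the document and yields polynomial combined complexity. A minimal sketch for child/descendant steps with name tests follows; the tree encoding and query encoding are invented, and this illustrates the set-at-a-time idea rather than the paper's full algorithm.

    # A node is (label, [children]); a query is a list of (axis, name) steps,
    # axis in {'child', 'descendant'}, evaluated left to right on a set of context nodes.
    def children(node):
        return node[1]

    def descendants(node):
        for c in children(node):
            yield c
            yield from descendants(c)

    def eval_steps(roots, steps):
        context = list(roots)
        for axis, name in steps:
            next_context, seen = [], set()
            for node in context:
                candidates = children(node) if axis == "child" else descendants(node)
                for cand in candidates:
                    # id() distinguishes structurally equal but distinct nodes
                    if cand[0] == name and id(cand) not in seen:
                        seen.add(id(cand))
                        next_context.append(cand)
            context = next_context            # never larger than the document
        return context

    doc = ("db", [("book", [("title", []), ("author", [("name", [])])]),
                  ("book", [("author", [("name", [])])])])
    # //author/name : descendant 'author' step, then child 'name' step
    print(len(eval_steps([doc], [("descendant", "author"), ("child", "name")])))  # 2
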
---
paper_title: Containment and Integrity Constraints for XPath Fragments
paper_content:
XPath is a W3C standard that plays a crucial role in several influential query, transformation, and schema standards for XML. Motivated by the larger challenge of XML query optimization, we investigate the problem of containment of XPath expressions under integrity constraints that are in turn formulated with the help of XPath expressions. Our core formalism consists of a fragment of XPath that we call simple and a corresponding class of integrity constraints that we call simple XPath integrity constraints (SXIC). SXIC's can express many database-style constraints, including key and foreign key constraints specified in the XML Schema standard proposal, as well as many constraints implied by DTDs. We identify a subclass of bounded SXIC's under which containment of simple XPath expressions is decidable, but we show that even modest use of unbounded SXIC's makes the problem undecidable. In particular, the addition of (unbounded) constraints implied by DTDs leads to undecidability. We give tight Π^p_2 bounds for the simple XPath containment problem and tight NP bounds for the disjunction-free subfragment, while even identifying a PTIME subcase. We also show that decidability of containment under SXIC's still holds if the expressions contain certain additional features (e.g., wildcard) although the complexity jumps to Π^p_2 even for the disjunction-free subfragment. We know that our results can be extended to some but not all of the XPath features that depend on document order. The decidability of containment of simple XPath expressions in the presence of DTDs only remains open (although we can show that the problem is PSPACE-hard) as well as the problem for full-fledged XPath expressions, even in the absence of integrity constraints.
---
paper_title: Cache-conscious automata for XML filtering
paper_content:
Hardware cache behavior is an important factor in the performance of memory-resident, data-intensive systems such as XML filtering engines. A key data structure in several recent XML filters is the automaton, which is used to represent the long-running XML queries in the main memory. In this paper, we study the cache performance of automaton-based XML filtering through analytical modeling and system measurement. Furthermore, we propose a cache-conscious automaton organization technique, called the hot buffer, to improve the locality of automaton state transitions. Our results show that (1) our cache performance model for XML filtering automata is highly accurate and (2) the hot buffer improves the cache performance as well as the overall performance of automaton-based XML filtering.
---
paper_title: XPath processing in a nutshell
paper_content:
We provide a concise yet complete formal definition of the semantics of XPath 1 and summarize efficient algorithms for processing queries in this language. Our presentation is intended both for the reader who is looking for a short but comprehensive formal account of XPath as well as the software developer in need of material that facilitates the rapid implementation of XPath engines.
---
paper_title: XPath Containment in the Presence of Disjunction, DTDs, and Variables
paper_content:
XPath is a simple language for navigating an XML tree and returning a set of answer nodes. The focus in this paper is on the complexity of the containment problem for various fragments of XPath. In addition to the basic operations (child, descendant, filter, and wildcard), we consider disjunction, DTDs and variables. W.r.t. variables we study two semantics: (1) the value of variables is given by an outer context; (2) the value of variables is defined existentially. We establish an almost complete classification of the complexity of the containment problem w.r.t. these fragments.
---
paper_title: Containment and equivalence for a fragment of XPath
paper_content:
XPath is a language for navigating an XML document and selecting a set of element nodes. XPath expressions are used to query XML data, describe key constraints, express transformations, and reference elements in remote documents. This article studies the containment and equivalence problems for a fragment of the XPath query language, with applications in all these contexts.In particular, we study a class of XPath queries that contain branching, label wildcards and can express descendant relationships between nodes. Prior work has shown that languages that combine any two of these three features have efficient containment algorithms. However, we show that for the combination of features, containment is coNP-complete. We provide a sound and complete algorithm for containment that runs in exponential time, and study parameterized PTIME special cases. While we identify one parameterized class of queries for which containment can be decided efficiently, we also show that even with some bounded parameters, containment remains coNP-complete. In response to these negative results, we describe a sound algorithm that is efficient for all queries, but may return false negatives in some cases.
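The sound-but-incomplete algorithm mentioned at the end of the abstract is in the spirit of the classical homomorphism test for tree patterns: if the pattern of query q can be mapped homomorphically into the pattern of query p (root to root, child edges to child edges, descendant edges to downward paths, labels preserved up to wildcards), then p is contained in q, but the converse can fail. The sketch below implements that test for an invented pattern encoding; it illustrates the general technique, not the specific algorithm of the paper.

    # A pattern node: (label, [(axis, subpattern), ...]) with axis 'child' or 'desc',
    # and label possibly '*' (wildcard).  hom(q, p) == True implies p is contained in q.
    def pattern_nodes(p):
        yield p
        for _, sub in p[1]:
            yield from pattern_nodes(sub)

    def proper_descendants(p):
        for _, sub in p[1]:
            yield from pattern_nodes(sub)

    def hom(q, p):
        q_label, q_edges = q
        p_label, _ = p
        if q_label != "*" and (p_label == "*" or q_label != p_label):
            return False
        for axis, q_sub in q_edges:
            if axis == "child":
                targets = (sub for a, sub in p[1] if a == "child")
            else:
                targets = proper_descendants(p)
            if not any(hom(q_sub, t) for t in targets):
                return False
        return True

    # p = /a/b/c  is contained in  q = /a//c, but not vice versa:
    p = ("a", [("child", ("b", [("child", ("c", []))]))])
    q = ("a", [("desc", ("c", []))])
    print(hom(q, p), hom(p, q))  # True False
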
---
paper_title: The complexity of XPath query evaluation
paper_content:
In this paper, we study the precise complexity of XPath 1.0 query processing. Even though XPath is heavily used through its incorporation into a variety of XML-related standards, the precise cost of evaluating an XPath query is not yet well understood. The first polynomial-time algorithm for XPath processing (with respect to combined complexity) was proposed only recently, and even to this day all major XPath engines take time exponential in the size of the input queries. From the standpoint of theory, the precise complexity of XPath query evaluation is open, and it is thus unknown whether the query evaluation problem can be parallelized. In this work, we show that both the data complexity and the query complexity of XPath 1.0 fall into lower (highly parallelizable) complexity classes, but that the combined complexity is PTIME-hard. Subsequently, we study the sources of this hardness and identify a large and practically important fragment of XPath 1.0 for which the combined complexity is LOGCFL-complete and, therefore, in the highly parallelizable complexity class NC^2.
---
paper_title: XPath queries on streaming data
paper_content:
We present the design and implementation of the XSQ system for querying streaming XML data using XPath 1.0. Using a clean design based on a hierarchical arrangement of pushdown transducers augmented with buffers, XSQ supports features such as multiple predicates, closures, and aggregation. XSQ not only provides high throughput, but is also memory efficient: It buffers only data that must be buffered by any streaming XPath processor. We also present an empirical study of the performance characteristics of XPath features, as embodied by XSQ and several other systems.
---
paper_title: Processing XML Streams with Deterministic Automata
paper_content:
We consider the problem of evaluating a large number of XPath expressions on an XML stream. Our main contribution consists in showing that Deterministic Finite Automata (DFA) can be used effectively for this problem: in our experiments we achieve a throughput of about 5.4MB/s, independent of the number of XPath expressions (up to 1,000,000 in our tests). The major problem we face is that of the size of the DFA. Since the number of states grows exponentially with the number of XPath expressions, it was previously believed that DFAs cannot be used to process large sets of expressions. We make a theoretical analysis of the number of states in the DFA resulting from XPath expressions, and consider both the case when it is constructed eagerly, and when it is constructed lazily. Our analysis indicates that, when the automaton is constructed lazily, and under certain assumptions about the structure of the input XML data, the number of states in the lazy DFA is manageable. We also validate experimentally our findings, on both synthetic and real XML data sets.
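The lazy-DFA idea can be sketched in a few lines for linear paths built from child ('/') and descendant ('//') steps: all paths are merged into one NFA over (query, position) pairs, and a DFA state (a set of NFA states) is materialized only the first time a tag sequence actually reaches it. The query workload, path syntax and event encoding below are invented; real XPath filters and the memory analysis of the paper are of course much richer.

    # Each query is a list of (axis, name) steps, axis '/' (child) or '//' (descendant).
    QUERIES = {
        "q1": [("/", "db"), ("//", "author"), ("/", "name")],
        "q2": [("/", "db"), ("/", "book"), ("/", "title")],
    }

    def nfa_step(states, tag):
        """One transition of the merged NFA; a state is a (query id, position) pair."""
        out = set()
        for q, i in states:
            path = QUERIES[q]
            if i < len(path):
                axis, name = path[i]
                if axis == "//":               # may stay below a '//' step for any number of levels
                    out.add((q, i))
                if name == tag or name == "*":
                    out.add((q, i + 1))
        return frozenset(out)

    def run(events):
        dfa = {}                               # lazily built transition table
        start = frozenset((q, 0) for q in QUERIES)
        stack, matched = [start], set()
        for kind, tag in events:
            if kind == "start":
                key = (stack[-1], tag)
                if key not in dfa:             # build this DFA transition only when first needed
                    dfa[key] = nfa_step(*key)
                stack.append(dfa[key])
                matched |= {q for q, i in stack[-1] if i == len(QUERIES[q])}
            else:
                stack.pop()
        return matched

    stream = [("start", "db"), ("start", "book"), ("start", "author"), ("start", "name"),
              ("end", "name"), ("end", "author"), ("end", "book"), ("end", "db")]
    print(run(stream))  # {'q1'}: /db//author/name matches, /db/book/title does not
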
---
paper_title: Containment for XPath Fragments under DTD Constraints
paper_content:
The containment and equivalence problems for various fragments of XPath have been studied by a number of authors. For some fragments, deciding containment (and even minimisation) has been shown to be in PTIME, while for minor extensions containment has been shown to be coNP-complete. When containment is with respect to trees satisfying a set of constraints (such as a schema or DTD), the problem seems to be more difficult. For example, containment under DTDs is coNP-complete for an XPath fragment denoted XP{[ ]} for which containment is in PTIME. It is also undecidable for a larger class of XPath queries when the constraints are so-called simple XPath integrity constraints (SXICs). In this paper, we show that containment is decidable for an important fragment of XPath, denoted XP{[ ], *, //}, when the constraints are DTDs. We also identify XPath fragments for which containment under DTDs can be decided in PTIME.
---
paper_title: XPath with conditional axis relations
paper_content:
This paper is about the W3C standard node-addressing language for XML documents, called XPath. XPath is still under development. Version 2.0 appeared in 2001 while the theoretical foundations of Version 1.0 (dating from 1998) are still being widely studied. The paper aims at bringing XPath to a “stable fixed point” in its development: a version which is expressively complete, still manageable computationally, with a user-friendly syntax and a natural semantics. We focus on an important axis relation which is not expressible in XPath 1.0 and is very useful in practice: the conditional axis. With it we can express paths specified by for instance “do a child step, while test is true at the resulting node”. We study the effect of adding conditional axis relations to XPath on its expressive power and the complexity of the query evaluation and query equivalence problems. We define an XPath dialect XCPath which is expressively complete, has a linear time query evaluation algorithm and for which query equivalence given a DTD can be decided in exponential time.
---
paper_title: XPath and Modal Logics of Finite DAG‘s
paper_content:
XPath, CTL and the modal logics proposed by Blackburn et al, Palm and Kracht are variable free formalisms to describe and reason about (finite) trees. XPath expressions evaluated at the root of a tree correspond to existential positive modal formulas. The models of XPath expressions are finite ordered trees, or in the presence of XML’s ID/IDREF mechanism graphs. The ID/IDREF mechanism can be seen as a device for naming nodes. Naming devices have been studied in hybrid logic by nominals. We add nominals to the modal logic of Palm and interpret the language on directed acyclic graphs. We give an algorithm which decides the consequence problem of this logic in exponential time. This yields a complexity result for query containment of the corresponding extension of XPath.
---
paper_title: The complexity of first-order and monadic second-order logic revisited
paper_content:
The model-checking problem for a logic L on a class C of structures asks whether a given L-sentence holds in a given structure in C. In this paper, we give super-exponential lower bounds for fixed-parameter tractable model-checking problems for first-order and monadic second-order logic. We show that unless PTIME=NP, the model-checking problem for monadic second-order logic on finite words is not solvable in time f(k)·p(n), for any elementary function f and any polynomial p. Here k denotes the size of the input sentence and n the size of the input word. We prove the same result for first-order logic under a stronger complexity theoretic assumption from parameterized complexity theory. Furthermore, we prove that the model-checking problems for first-order logic on structures of degree 2 and of bounded degree d ≥ 3 are not solvable in time 2^{2^{o(k)}}·p(n) (for degree 2) and 2^{2^{2^{o(k)}}}·p(n) (for degree d), for any polynomial p, again under an assumption from parameterized complexity theory. We match these lower bounds by corresponding upper bounds.
---
paper_title: N-ary Queries by Tree Automata
paper_content:
We investigate n-ary node selection queries in trees by successful runs of tree automata. We show that run-based n-ary queries capture MSO, contribute algorithms for enumerating answers of n-ary queries, and study the complexity of the problem. We investigate the subclass of run-based n-ary queries by unambiguous tree automata.
---
paper_title: Looping caterpillars
paper_content:
There are two main paradigms for querying semistructured data: regular path queries and XPath. The aim of this paper is to provide a synthesis between these two. This synthesis is given by a small addition to tree walk automata and the corresponding caterpillar expressions. These are evaluated on unranked finite sibling-ordered trees. At the expression level we add an operator whose meaning is intersection with the identity relation. This language can express every first-order definable relation and its expressive power is characterized by pebble tree walk automata that cannot inspect pebbles. We also define an expansion of the caterpillar expressions whose expressive power is characterized by ordinary pebble tree walk automata. Combining results from Bloem-Engelfriet and Gottlob-Koch, we also define an XPath-like query language which is complete for all MSO definable binary relations.
---
paper_title: A Formal Model for an Expressive Fragment of XSLT
paper_content:
The aim of this paper is two-fold. First, we want to show that the recent extension of XSL with variables and passing of data values between template rules has increased its expressiveness beyond that of most other current XML query languages. Second, in an attempt to increase the understanding of this already wide-spread but not so transparent language, we provide an essential and powerful fragment with a formal syntax and a precise semantics.
---
paper_title: Typechecking Top-Down Uniform Unranked Tree Transducers
paper_content:
We investigate the typechecking problem for XML queries: statically verifying that every answer to a query conforms to a given output schema, for inputs satisfying a given input schema. As typechecking quickly turns undecidable for query languages capable of testing equality of data values, we return to the limited framework where we abstract XML documents as labeled ordered trees. We focus on simple top-down recursive transformations motivated by XSLT and structural recursion on trees. We parameterize the problem by several restrictions on the transformations (deleting, non-deleting, bounded width) and consider both tree automata and DTDs as output schemas. The complexity of the typechecking problems in this scenario range from PTIME to EXPTIME.
---
paper_title: Syntax-Directed Semantics: Formal Models Based on Tree Transducers
paper_content:
This is a motivated presentation of recent results on tree transducers, applied to the study of general properties of formal models and to providing semantics for context-free languages.
---
paper_title: Typechecking for Semistructured Data
paper_content:
Semistructured data is used in data exchange applications, like B2B and EAI, and represents data in a flexible format. Every data item has a unique tag (also called label), and data items can be nested. Formally, a semistructured data instance is a tree whose nodes are labeled with tags and leaves are labeled with data values. XML [Con98] is a standard syntax for describing such trees; Fig. 1 shows a tree representing a semistructured data instance and its XML syntax. We will refer interchangeably to semistructured data instances as trees or XML trees.
---
paper_title: A comparison of pebble tree transducers with macro tree transducers
paper_content:
The n-pebble tree transducer was recently proposed as a model for XML query languages. The four main results on deterministic transducers are: First, (1) the translation $\tau$ of an n-pebble tree transducer can be realized by a composition of n+1 0-pebble tree transducers. Next, the pebble tree transducer is compared with the macro tree transducer, a well-known model for syntax-directed semantics, with decidable type checking. The 0-pebble tree transducer can be simulated by the macro tree transducer, which, by the first result, implies that (2) $\tau$ can be realized by an (n+1)-fold composition of macro tree transducers. Conversely, every macro tree transducer can be simulated by a composition of 0-pebble tree transducers. Together these simulations prove that (3) the composition closure of n-pebble tree transducers equals that of macro tree transducers (and that of 0-pebble tree transducers). Similar results hold in the nondeterministic case. Finally, (4) the output languages of deterministic n-pebble tree transducers form a hierarchy with respect to the number n of pebbles.
---
paper_title: Typechecking for XML transformers
paper_content:
We study the typechecking problem for XML (eXtensible Markup Language) transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.
---
paper_title: A comparison of pebble tree transducers with macro tree transducers
paper_content:
The n-pebble tree transducer was recently proposed as a model for XML query languages. The four main results on deterministic transducers are: First, (1) the translation $\tau$ of an n-pebble tree transducer can be realized by a composition of n+1 0-pebble tree transducers. Next, the pebble tree transducer is compared with the macro tree transducer, a well-known model for syntax-directed semantics, with decidable type checking. The 0-pebble tree transducer can be simulated by the macro tree transducer, which, by the first result, implies that (2) $\tau$ can be realized by an (n+1)-fold composition of macro tree transducers. Conversely, every macro tree transducer can be simulated by a composition of 0-pebble tree transducers. Together these simulations prove that (3) the composition closure of n-pebble tree transducers equals that of macro tree transducers (and that of 0-pebble tree transducers). Similar results hold in the nondeterministic case. Finally, (4) the output languages of deterministic n-pebble tree transducers form a hierarchy with respect to the number n of pebbles.
---
paper_title: The Macro Tree Transducer Hierarchy Collapses for Functions of Linear Size Increase
paper_content:
Every composition of macro tree transducers that is a function of linear size increase can be realized by just one macro tree transducer. For a given composition it is decidable whether or not it is of linear size increase; if it is, then an equivalent macro tree transducer can be constructed effectively.
---
paper_title: Transformation of XML data using an unranked tree transducer
paper_content:
Transformation of data documents is of special importance if XML is to be used as the universal data interchange format on the Web. Data transformation is used in many tasks that require data to be transferred between existing, independently created Web-oriented applications. To perform such transformation one can use W3C’s XSLT or XQuery. But these languages are devoted to detailed programming of transformation procedures. In this paper we show how data transformation can be specified by means of high-level rule specifications based on uniform unranked tree transducers. We show that our approach is both descriptive and expressive, and we illustrate how it can be used to specify and perform transformations of XML documents.
---
paper_title: Structured document transformations based on XSL
paper_content:
Based on the recursion mechanism of the XML transformation language XSL, the document transformation language DTL is defined. First the instantiation DTL_reg is considered that uses regular expressions as pattern language. This instantiation closely resembles the navigation mechanism of XSL. For DTL_reg the complexity of relevant decision problems such as termination of programs, usefulness of rules and equivalence of selection patterns, is addressed. Next, a much more powerful abstraction of XSL is considered that uses monadic second-order logic formulas as pattern language (DTL_mso). If DTL_mso is restricted to top-down transformations (DTL_dmso), then a computational model can be defined which is a natural generalization to unranked trees of top-down tree transducers with look-ahead. The look-ahead can be realized by a straightforward bottom-up pre-processing pass through the document. The size of the output of an XSL program is at most exponential in the size of the input. By restricting copying in XSL a decidable fragment of DTL_dmso programs is obtained which induces transformations of linear size increase (safe DTL_dmso). It is shown that the emptiness and finiteness problems are decidable for ranges of DTL_dmso programs and that the ranges are closed under intersection with generalized Document Type Definitions (DTDs).
---
paper_title: A High-Level Language for Specifying XML Data Transformations
paper_content:
We propose a descriptive high-level language XDTrans devoted to specifying transformations over XML data. The language is based on the unranked tree automata approach. In contrast to W3C's XQuery or XSLT which require programming skills, our approach uses high-level abstractions reflecting an intuitive understanding of the tree-oriented nature of XML data. XDTrans specifies transformations by means of rules which involve XPath expressions, node variables and non-terminal symbols denoting fragments of a constructed result. We propose syntax and semantics for the language as well as algorithms translating a class of transformations into XSLT.
---
paper_title: The equivalence problem for deterministic MSO tree transducers is decidable
paper_content:
It is decidable for deterministic MSO definable graph-to-string or graph-to-tree transducers whether they are equivalent on a context-free set of graphs.
---
paper_title: XML with data values: typechecking revisited
paper_content:
We investigate the type checking problem for XML queries: statically verifying that every answer to a query conforms to a given output DTD, for inputs satisfying a given input DTD. This problem had been studied by a subset of the authors in a simplified framework that captured the structure of XML documents but ignored data values. We revisit here the type checking problem in the more realistic case when data values are present in documents and tested by queries. In this extended framework, type checking quickly becomes undecidable. However, it remains decidable for large classes of queries and DTDs of practical interest. The main contribution of the present paper is to trace a fairly tight boundary of decidability for type checking with data values. The complexity of type checking in the decidable cases is also considered.
---
paper_title: Typechecking Top-Down Uniform Unranked Tree Transducers
paper_content:
We investigate the typechecking problem for XML queries: statically verifying that every answer to a query conforms to a given output schema, for inputs satisfying a given input schema. As typechecking quickly turns undecidable for query languages capable of testing equality of data values, we return to the limited framework where we abstract XML documents as labeled ordered trees. We focus on simple top-down recursive transformations motivated by XSLT and structural recursion on trees. We parameterize the problem by several restrictions on the transformations (deleting, non-deleting, bounded width) and consider both tree automata and DTDs as output schemas. The complexity of the typechecking problems in this scenario ranges from PTIME to EXPTIME.
---
paper_title: Typechecking XML views of relational databases
paper_content:
Motivated by the need to export relational databases as XML data in the context of the World Wide Web, we investigate the type-checking problem for transformations of relational data into tree data (i.e. XML). The problem consists of statically verifying that the output of every transformation belongs to a given output tree language (specified for XML by a document type definition), for input databases satisfying given integrity constraints. The type-checking problem is parameterized by the class of formulas defining the transformation, the class of output tree languages and the class of integrity constraints. While undecidable in its most general formulation, the type-checking problem has many special cases of practical interest that turn out to be decidable. The main contribution of this paper is to trace a fairly tight boundary of decidability for type-checking in this framework. In the decidable cases, we examine the complexity and show lower and upper bounds. We also exhibit a practically appealing restriction for which type-checking is in PTIME.
---
paper_title: Typechecking for XML transformers
paper_content:
We study the typechecking problem for XML (eXtensible Markup Language) transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.
---
paper_title: The Complexity of Compositions of Deterministic Tree Transducers
paper_content:
Macro tree transducers can simulate most models of tree transducers (e.g., top-down and bottom-up tree transducers, attribute grammars, and pebble tree transducers which, in turn, can simulate all known models of XML transformers). The string languages generated by compositions of macro tree transducers (obtained by reading the leaves of the output trees) form a large class which contains, e.g., the IO hierarchy and the EDT0L control hierarchy. Consider an arbitrary composition $\tau$ of (deterministic) macro tree transducers. How difficult is it, for a given input tree s, to compute the translation t = $\tau$(s)? It is shown that this problem can be solved (on a RAM) in time linear in the sum of the sizes of s and t. Moreover, the problem to determine, for a given t of size n, whether or not there is an input tree s such that t = $\tau$(s) is in DSPACE(n); this means that output languages of compositions of macro tree transducers are deterministic context-sensitive. The involved technique of compressing intermediate results of the composition also gives a new proof of the fact that the finiteness problem for $\tau$'s range is decidable.
---
paper_title: Frontiers of tractability for typechecking simple XML transformations
paper_content:
Typechecking consists of statically verifying whether the output of an XML transformation always conforms to an output type for documents satisfying a given input type. We focus on complete algorithms which always produce the correct answer. We consider top-down XML transformations incorporating XPath expressions and abstract document types by grammars and tree automata. By restricting schema languages and transformations, we identify several practical settings for which typechecking can be done in polynomial time. Moreover, the resulting framework provides a rather complete picture as we show that most scenarios cannot be enlarged without rendering the typechecking problem intractable. So, the present research sheds light on when to use fast complete algorithms and when to resort to sound but incomplete ones.
---
paper_title: Conjunctive queries over trees
paper_content:
We study the complexity and expressive power of conjunctive queries over unranked labeled trees represented using a variety of structure relations such as “child”, “descendant”, and “following” as well as unary relations for node labels. We establish a framework for characterizing structures representing trees for which conjunctive queries can be evaluated efficiently. Then we completely chart the tractability frontier of the problem and establish a dichotomy theorem for our axis relations, that is, we find all subset-maximal sets of axes for which query evaluation is in polynomial time and show that for all other cases, query evaluation is NP-complete. All polynomial-time results are obtained immediately using the proof techniques from our framework. Finally, we study the expressiveness of conjunctive queries over trees and show that for each conjunctive query, there is an equivalent acyclic positive query (i.e., a set of acyclic conjunctive queries), but that in general this query is not of polynomial size.
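For illustration (the concrete query is ours, not taken from the paper), a conjunctive query over trees with node labels $a$ and $b$ and the axes child and descendant could be
$Q(x) \leftarrow Lab_a(x) \wedge Child(x,y) \wedge Lab_b(y) \wedge Descendant(x,z) \wedge Lab_b(z)$,
selecting the $a$-labelled nodes that have a $b$-labelled child and some $b$-labelled descendant; the dichotomy theorem tells, for each set of such axis relations, whether evaluating queries of this form is in polynomial time or NP-complete.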
---
paper_title: Expressiveness of structured document query languages based on attribute grammars
paper_content:
Structured document databases can be naturally viewed as derivation trees of a context-free grammar. Under this view, the classical formalism of attribute grammars becomes a formalism for structured document query languages. From this perspective, we study the expressive power of BAGs: Boolean-valued attribute grammars with propositional logic formulas as semantic rules, and RAGs: relation-valued attribute grammars with first-order logic formulas as semantic rules. BAGs can express only unary queries; RAGs can express queries of any arity. We first show that the (unary) queries expressible by BAGs are precisely those definable in monadic second-order logic. We then show that the queries expressible by RAGs are precisely those definable by first-order inductions of linear depth, or, equivalently, those computable in linear time on a parallel machine with polynomially many processors. Further, we show that RAGs that only use synthesized attributes are strictly weaker than RAGs that use both synthesized and inherited attributes. We show that RAGs are more expressive than monadic second-order logic for queries of any arity. Finally, we discuss relational attribute grammars in the context of BAGs and RAGs. We show that in the case of BAGs this does not increase the expressive power, while different semantics for relational RAGs capture the complexity classes NP, coNP and UP ∩ coUP.
---
paper_title: N-ary Queries by Tree Automata
paper_content:
We investigate n-ary node selection queries in trees by successful runs of tree automata. We show that run-based n-ary queries capture MSO, contribute algorithms for enumerating answers of n-ary queries, and study the complexity of the problem. We investigate the subclass of run-based n-ary queries by unambiguous tree automata.
---
paper_title: On diving in trees
paper_content:
The paper is concerned with queries on tree-structured data. It defines fragments of first-order logic (FO) and FO extended by regular expressions along paths. These fragments have the same expressive power as the full logics themselves. On the other hand, they can be evaluated reasonably efficiently, even if the formula which represents the query is considered as part of the input.
---
paper_title: Finite state machines for strings over infinite alphabets
paper_content:
Motivated by formal models recently proposed in the context of XML, we study automata and logics on strings over infinite alphabets. These are conservative extensions of classical automata and logics defining the regular languages on finite alphabets. Specifically, we consider register and pebble automata, and extensions of first-order logic and monadic second-order logic. For each type of automaton we consider one-way and two-way variants, as well as deterministic, nondeterministic, and alternating control. We investigate the expressiveness and complexity of the automata and their connection to the logics, as well as standard decision problems. Some of our results answer open questions of Kaminski and Francez on register automata.
---
paper_title: Counting in Trees for Free
paper_content:
It is known that MSO logic for ordered unranked trees is undecidable if Presburger constraints are allowed at children of nodes. We show here that a decidable logic is obtained if we use a modal fixpoint logic instead. We present a characterization of this logic by means of deterministic Presburger tree automata and show how it can be used to express numerical document queries. Surprisingly, the complexity of satisfiability for the extended logic is asymptotically the same as for the original fixpoint logic. The non-emptiness for Presburger tree automata (PTA) is pspace-complete, which is moderate given that it is already pspace-hard to test whether the complement of a regular expression is non-empty. We also identify a subclass of PTAs with a tractable non-emptiness problem. Further, to decide whether a tree t satisfies a formula ϕ is polynomial in the size of ϕ and linear in the size of t.
---
paper_title: On the power of walking for querying tree-structured data
paper_content:
XSLT is the prime example of an XML query language based on tree-walking. Indeed, stripped down, XSLT is just a tree-walking tree-transducer equipped with registers and look-ahead. Motivated by this connection, we want to pinpoint the computational power of devices based on tree-walking. We show that in the absence of unique identifiers even very powerful extensions of the tree-walking paradigm are not relationally complete. That is, these extensions do not capture all of first-order logic. In contrast, when unique identifiers are available, we show that various restrictions allow one to capture LOGSPACE, PTIME, PSPACE, and EXPTIME. These complexity classes are defined w.r.t. a Turing machine model working directly on (attributed) trees. When no attributes are present, relational storage does not add power; whether look-ahead adds power is related to the open question whether tree-walking captures the regular tree languages.
---
paper_title: Numerical document queries
paper_content:
A query against a database behind a site like Napster may search, e.g., for all users who have downloaded more jazz titles than pop music titles. In order to express such queries, we extend classical monadic second-order logic by Presburger predicates which pose numerical restrictions on the children (content) of an element node and provide a precise automata-theoretic characterization. While the existential fragment of the resulting logic is decidable, it turns out that satisfiability of the full logic is undecidable. Decidable satisfiability and a querying algorithm even with linear data complexity can be obtained if numerical constraints are only applied to those contents of elements where ordering is irrelevant. Finally, it is sketched how these techniques can be extended also to answer questions like, e.g., whether the total price of the jazz music downloaded so far exceeds a user's budget.
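The Napster-style example from the abstract can be written as a numerical constraint on the children of a user element (illustrative notation, not the paper's syntax): select every node $x$ such that
$Lab_{user}(x) \wedge \#_{jazz}(x) > \#_{pop}(x)$,
where $\#_{\ell}(x)$ counts the children of $x$ labelled $\ell$; this is exactly the kind of Presburger predicate the extended logic adds to MSO.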
---
paper_title: An Algebraic Approach to Data Languages and Timed Languages
paper_content:
Algebra offers an elegant and powerful approach to understanding regular languages and finite automata. Such a framework has been notoriously lacking for timed languages and timed automata. We introduce the notion of monoid recognizability for data languages, which includes timed languages as a special case, in a way that respects the spirit of the classical situation. We study closure properties and hierarchies in this model and prove that emptiness is decidable under natural hypotheses. Our class of recognizable languages properly includes many families of deterministic timed languages that have been proposed until now, and the same holds for non-deterministic versions.
---
paper_title: Finite-memory automata
paper_content:
A model of computation dealing with infinite alphabets is proposed. The model is based on replacing the equality test by unification. It appears to be a natural generalization of the classical Rabin-Scott finite-state automata and possesses many of their properties.
---
paper_title: A Transducer-Based XML Query Processor
paper_content:
The XML Stream Machine (XSM) system is a novel XQuery processing paradigm that is tuned to the efficient processing of sequentially accessed XML data (streams). The system compiles a given XQuery into an XSM, which is an XML stream transducer, i.e., an abstract device that takes as input one or more XML data streams and produces one or more output streams, potentially using internal buffers. We present a systematic way to translate XQueries into efficient XSMs: First the XQuery is translated into a network of XSMs that correspond to the basic operators of the XQuery language and exchange streams. The network is reduced to a single XSM by repeated application of an XSM composition operation that is optimized to reduce the number of tests and actions that the XSM performs as well as the number of intermediate buffers that it uses. Finally, the optimized XSM is compiled into a C program. First empirical results illustrate the performance benefits of the XSM-based processor.
---
paper_title: Schema-based Scheduling of Event Processors and Buffer Minimization for Queries on Structured Data Streams
paper_content:
We introduce an extension of the XQuery language, FluX, that supports event-based query processing and the conscious handling of main memory buffers. Purely event-based queries of this language can be executed on streaming XML data in a very direct way. We then develop an algorithm that allows to efficiently rewrite XQueries into the event-based FluX language. This algorithm uses order constraints from a DTD to schedule event handlers and to thus minimize the amount of buffering required for evaluating a query. We discuss the various technical aspects of query optimization and query evaluation within our framework. This is complemented with an experimental evaluation of our approach.
---
paper_title: Automaton meets query algebra: Towards a unified model for XQuery evaluation over XML data streams
paper_content:
In this work, we address the efficient evaluation of XQuery expressions over continuous XML data streams, which is essential for a broad range of applications including monitoring systems and information dissemination systems. While previous work has shown that automata theory is suited for on-the-fly pattern retrieval over XML data streams, we find that automata-based approaches suffer from being not as flexibly optimizable as algebraic query systems. In fact, they enforce a rigid data-driven paradigm of execution. We thus now propose a unified query model to augment automata-style processing with algebra-based query optimization techniques. The proposed model has been successfully applied in the Raindrop stream processing system. Our experimental study confirms considerable performance gains with both established optimization techniques and our novel query rewrite rules.
---
paper_title: Attribute grammars for scalable query processing on XML streams
paper_content:
We introduce the new notion of XML Stream Attribute Grammars (XSAGs). XSAGs are the first scalable query language for XML streams (running strictly in linear time with bounded memory consumption independent of the size of the stream) that allows for actual data transformations rather than just document filtering. XSAGs are also relatively easy to use for humans. Moreover, the XSAG formalism provides a strong intuition for which queries can or cannot be processed scalably on streams. We introduce XSAGs together with the necessary language-theoretic machinery, study their theoretical properties such as their expressiveness and complexity, and discuss their implementation.
---
| Title: Automata for XML — A Survey
Section 1: XML Processing Tasks
Description 1: This section describes the various processing tasks related to XML, including the different languages tailored to specific tasks like schema, navigation, query, and transformation languages.
Section 2: XML and Trees
Description 2: This section discusses how XML documents are represented as trees, introducing the concept of unranked trees and various methods like tree representation, terms representation, and binary encoding.
Section 3: Automata
Description 3: This section covers the three main categories of automata for XML processing: parallel tree automata, sequential tree automata, and document automata, including their definitions, expressive power, and complexity.
Section 4: Complexity of Algorithmic Problems Related to Tree Automata
Description 4: This section outlines the complexity of fundamental algorithmic problems such as membership, non-emptiness, containment, and equivalence for different types of tree automata.
Section 5: Related Issues
Description 5: This section delves into related topics like the minimization of automata, the decidability of expressibility in first-order logic, and cascading sequences of automata.
Section 6: Schemas
Description 6: This section examines different classes of schema languages for XML, including DTDs, regular tree languages, and XML Schema, focusing on validation and static analysis complexity.
Section 7: Navigation
Description 7: This section explains the distinction between querying and navigation, discussing regular node-selecting queries, XPath, complexity of navigational queries, and binary queries for stream processing of XML data.
Section 8: Transformations
Description 8: This section surveys various tree transformation languages and models like top-down transducers, pebble transducers, macro tree transducers, and logically defined transducers, along with complexity issues related to XML transformations.
Section 9: Queries
Description 9: This section addresses the foundations of general XML querying, higher-arity queries, the handling of data values and arithmetic, and the evaluation of queries on streaming XML data.
Section 10: Conclusion
Description 10: This section summarizes the importance of automata and formal language theory for XML processing, highlighting the robustness of regular tree languages, node-selecting and binary queries, and open questions regarding their utility for general querying. |
A Review On Securing Distributed Systems Using Symmetric Key Cryptography | 7 | ---
paper_title: Distributed Systems: Concepts and Design
paper_content:
Broad and up-to-date coverage of the principles and practice in the fast moving area of Distributed Systems. Distributed Systems provides students of computer science and engineering with the skills they will need to design and maintain software for distributed applications. It will also be invaluable to software engineers and systems designers wishing to understand new and future developments in the field. From mobile phones to the Internet, our lives depend increasingly on distributed systems linking computers and other devices together in a seamless and transparent way. The fifth edition of this best-selling text continues to provide a comprehensive source of material on the principles and practice of distributed computer systems and the exciting new developments based on them, using a wealth of modern case studies to illustrate their design and development. The depth of coverage will enable readers to evaluate existing distributed systems and design new ones.
---
paper_title: Selecting the Advanced Encryption Standard
paper_content:
The USA National Institute of Standards and Technology selected the Advanced Encryption Standard, a new standard symmetric key encryption algorithm, from 15 qualifying algorithms. NIST has also made efforts to update and extend their standard cryptographic modes of operation.
---
paper_title: Achieving Distributed System Information Security
paper_content:
Distributed computing infrastructure's data storage subsystems are usually physically scattered among several nodes and logically shared among several users and (local) administrators. It is therefore necessary to provide users with adequate mechanisms and tools for information and data security management, especially in large scale systems since the complexity of the problem increases with the number of users and the amounts of data. In this paper we propose a solution based on a lightweight cryptography algorithm combining the strong and highly secure asymmetric cryptography technique (RSA) with the symmetric cryptography (AES). We describe a possible implementation of our solution going into details of all the algorithms and the mechanisms specified.
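A minimal Java sketch of such a hybrid scheme is given below; it is illustrative only (the class name, key sizes and the plain ECB/PKCS1 cipher modes are assumptions made for brevity, not the paper's actual implementation, which would also need proper key distribution and an authenticated encryption mode). The bulk data is encrypted with a fast symmetric AES key, and only that short key is protected by the slower asymmetric RSA operation.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;

    public class HybridEncryptionSketch {
        public static void main(String[] args) throws Exception {
            // Symmetric key for the bulk data (AES-128).
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey aesKey = kg.generateKey();

            // Asymmetric key pair (RSA-2048), used only to protect the AES key.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair rsa = kpg.generateKeyPair();

            byte[] data = "data block stored on a remote node".getBytes(StandardCharsets.UTF_8);

            // 1. Encrypt the data with the fast symmetric cipher (ECB only for brevity;
            //    a real deployment would use an authenticated mode such as GCM).
            Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding");
            aes.init(Cipher.ENCRYPT_MODE, aesKey);
            byte[] encryptedData = aes.doFinal(data);

            // 2. Wrap the AES key with the recipient's RSA public key.
            Cipher rsaCipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            rsaCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
            byte[] wrappedKey = rsaCipher.doFinal(aesKey.getEncoded());

            System.out.println("ciphertext bytes: " + encryptedData.length
                    + ", wrapped key bytes: " + wrappedKey.length);
        }
    }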
---
paper_title: The First 10 Years of Advanced Encryption
paper_content:
On 2 October 2000, after a three-year study period in which 15 block ciphers competed, the US National Institute of Standards and Technology (NIST) announced that the block cipher Rijndael would become the Advanced Encryption Standard (AES).
---
paper_title: Achieving Distributed System Information Security
paper_content:
Distributed computing infrastructure's data storage subsystems are usually physically scattered among several nodes and logically shared among several users and (local) administrators. It is therefore necessary to provide users with adequate mechanisms and tools for information and data security management, especially in large scale systems since the complexity of the problem increases with the number of users and the amounts of data. In this paper we propose a solution based on a lightweight cryptography algorithm combining the strong and highly secure asymmetric cryptography technique (RSA) with the symmetric cryptography (AES). We describe a possible implementation of our solution going into details of all the algorithms and the mechanisms specified.
---
paper_title: The First 10 Years of Advanced Encryption
paper_content:
On 2 October 2000, after a three-year study period in which 15 block ciphers competed, the US National Institute of Standards and Technology (NIST) announced that the block cipher Rijndael would become the Advanced Encryption Standard (AES).
---
paper_title: Effect of Security Increment to Symmetric Data Encryption through AES Methodology
paper_content:
The selective application of technological and related procedural safeguards is an important responsibility of every organization in providing adequate security to its electronic data systems. Protection of data during transmission or while in storage may be necessary to maintain the confidentiality and integrity of the information represented by the data. The algorithm uniquely defines the mathematical steps required to transform data into a cryptographic cipher and also to transform the cipher back to the original form. The Data Encryption Standard (DES) uses a 64-bit block size as well as a 64-bit key size, which is vulnerable to brute-force attack. But for both efficiency and security, a larger block size is desirable. The Advanced Encryption Standard (AES), which uses a 128-bit block size as well as a 128-bit key size, was introduced by NIST. In this paper, we show the effect of a security increment through the AES methodology. To do this, we propose an algorithm which is more secure than the Rijndael algorithm (comparing key sizes) but less efficient. The difference in efficiency between Rijndael and our proposed algorithm is negligible. We explain all of this in this paper.
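The block- and key-size comparison in this abstract can be reproduced with the standard Java crypto API; the short sketch below is purely illustrative and is unrelated to the modified algorithm the paper itself proposes.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;

    public class BlockAndKeySizes {
        public static void main(String[] args) throws Exception {
            KeyGenerator des = KeyGenerator.getInstance("DES");
            KeyGenerator aes = KeyGenerator.getInstance("AES");
            aes.init(128);

            // Encoded DES keys are 8 bytes (64 bits, 56 of them effective);
            // AES-128 keys are 16 bytes (128 bits).
            System.out.println("DES key bytes:   " + des.generateKey().getEncoded().length);
            System.out.println("AES key bytes:   " + aes.generateKey().getEncoded().length);

            // Block sizes: 64 bits (8 bytes) for DES, 128 bits (16 bytes) for AES.
            System.out.println("DES block bytes: " + Cipher.getInstance("DES/ECB/PKCS5Padding").getBlockSize());
            System.out.println("AES block bytes: " + Cipher.getInstance("AES/ECB/PKCS5Padding").getBlockSize());
        }
    }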
---
paper_title: Achieving Distributed System Information Security
paper_content:
Distributed computing infrastructure's data storage subsystems are usually physically scattered among several nodes and logically shared among several users and (local) administrators. It is therefore necessary to provide users with adequate mechanisms and tools for information and data security management, especially in large scale systems since the complexity of the problem increases with the number of users and the amounts of data. In this paper we propose a solution based on a lightweight cryptography algorithm combining the strong and highly secure asymmetric cryptography technique (RSA) with the symmetric cryptography (AES). We describe a possible implementation of our solution going into details of all the algorithms and the mechanisms specified.
---
paper_title: Selecting the Advanced Encryption Standard
paper_content:
The USA National Institute of Standards and Technology selected the Advanced Encryption Standard, a new standard symmetric key encryption algorithm, from 15 qualifying algorithms. NIST has also made efforts to update and extend their standard cryptographic modes of operation.
---
paper_title: The First 10 Years of Advanced Encryption
paper_content:
On 2 October 2000, after a three-year study period in which 15 block ciphers competed, the US National Institute of Standards and Technology (NIST) announced that the block cipher Rijndael would become the Advanced Encryption Standard (AES).
---
| Title: A Review On Securing Distributed Systems Using Symmetric Key Cryptography
Section 1: Introduction
Description 1: Introduce the rapid growth of distributed systems and emphasize the importance of securing them, explaining various components of software security.
Section 2: Symmetric Key Cryptography
Description 2: Discuss the concept of symmetric key cryptography, its processes, and different types of algorithms used within this approach.
Section 3: Data Encryption Standard
Description 3: Describe the Data Encryption Standard (DES) algorithm, its structure, and its historical context and applications.
Section 4: Advanced Encryption Standard
Description 4: Detail the Advanced Encryption Standard (AES) algorithm, including its design principles and benefits over DES.
Section 5: Existing Work
Description 5: Review previous work and studies comparing and combining symmetric and asymmetric cryptographic techniques, including specific implementations and their outcomes.
Section 6: DES vs. AES - The Final Comparison
Description 6: Provide a comparative analysis of DES and AES algorithms, highlighting their strengths, weaknesses, and performance differences in various environments.
Section 7: Conclusion
Description 7: Summarize the findings from the review and stress the advantages of AES over DES in securing distributed systems. |
An Overview of Online Exhibitions | 16 | ---
paper_title: Multimedia kiosks in retailing
paper_content:
Reviews the very wide potential for the application of multimedia kiosks. It is important that all retailing organizations understand the scope of these applications since kiosks have the potential for eroding the traditional boundaries between retailing, banking, education and training and the provision of information and advice, both to the general public, and also within organizations to employees. Potentially, applications in many of these previously distinct areas could be interlinked. Currently, multimedia kiosks are being tested in a number of different applications. Kiosks can be viewed as a medium through which it is possible to inform, educate, train, persuade or perform information‐based transactions. Their potential attraction in all of these roles consists of their relative novelty and the range of different media that can be used to reinforce the message. The future for multimedia kiosks remains unclear. The present tests in a wide range of different application areas should permit the ident...
---
paper_title: The Museum Wearable: real-time sensor-driven understanding of visitors' interests for personalized visually-augmented museum experiences
paper_content:
This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor's interests gathered from his/her physical path in the museum and length of stops. The wearable is made up of a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called private-eye) attached to conventional headphones. Using custom-built infrared location sensors distributed in the museum space, and statistical mathematical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device will enrich and personalize the museum visit as a visual and auditory storyteller that is able to adapt its story to the audience's interests and guide the public through the path of the exhibit.
---
paper_title: Designing Web Graphics.3
paper_content:
From the Publisher: This full-color guide will teach you the most successful methods for designing and preparing graphics for the World Wide Web. Completely updated and expanded to include the latest on file formats, file sizes, compression methods, cross-platform web color, and browser-specific techniques, Designing Web Graphics.2 is the definitive graphics guide for all web publishers. Step-by-step instruction in a conversational and easy-to-read style from one of the leaders in the field will help you understand the best methods and techniques for preparing graphics and media for any web site.
---
paper_title: Social Classification and Folksonomy in Art Museums: early data from the steve.museum tagger prototype
paper_content:
The collections of art museums have been assembled over hundreds of years and described, organized and classified according to traditions of art historical research and discourse. Art museums, in their role as curators and interpreters of the cultural record, have developed standards for the description of works of art (such as the Categories for the Description of Works of Art, CDWA) that emphasize the physical nature of art as artefact, the authorial role of the creator, the temporal and cultural context of creation and ownership, and the scholarly significance of the work over time. Collections managers have recorded conservation, exhibition, loan and publication history, along with significant volumes of internal documentation of acquisition and storage, that support the custody and care of artefacts of significant cultural value. But the systems of documentation and classification that support the professional discourse of art history and the management of museum collections have failed to represent the interests, perspectives or passions of those who visit [use?] museum collections, both on-site and online. As museums move to reflect the breadth of their audiences and the diversity of their perspectives, so must museum documentation change to reflect concerns other than the traditionally art historical and museological. Social tagging offers a direct way for museums to learn what museum-goers see in works of art, what they judge as significant and where they find or make meaning. Within the steve collaboration(http://www.steve.museum), a group of art museums is collectively exploring the role of social tagging and studying the resulting folksonomy (Bearman and Trant, 2005; Chun, Cherry, Hiwiller, Trant, and Wyman, 2006; Trant and Wyman, 2006). Analysis of terms collected in the prototype steve tagger suggests that social tagging of art museum objects can in fact augment museum documentation with unique access points not found in traditional cataloguing. Terms collected through social tagging tools are being compared to museum documentation, to establish the actual contributions made by naive users to the accessibility of art museum collections and to see if social classification provides a way to bridge the semantic gap between art historians and art museums’ publics.
---
| Title: An Overview of Online Exhibitions
Section 1: INTRODUCTION
Description 1: Provide an introduction to the impact of technology on heritage information delivery and the need for heritage exhibitions.
Section 2: EXHIBITIONS
Description 2: Define exhibitions and describe the three major types: industrial, regional, and universal.
Section 3: The Need for Exhibitions
Description 3: Discuss the primary objectives and reasons for exhibitions, including communication and marketing goals.
Section 4: Physical Exhibitions
Description 4: Describe traditional physical exhibitions, their factors, and the importance of visitor interaction.
Section 5: Limitations of Physical Exhibitions
Description 5: Explain the limitations of physical exhibitions, such as size, location, mobility, and cost issues.
Section 6: The Shift from Physical to Online Exhibitions
Description 6: Describe the transition from physical to online exhibitions and the advent of interactive multimedia kiosks.
Section 7: Online vs Physical Exhibitions
Description 7: Compare the advantages of online exhibitions over physical ones, such as cost-effectiveness and accessibility.
Section 8: Benefits of Online Exhibitions
Description 8: Enumerate the benefits of online exhibitions for institutions, teachers, and students, including enhanced learning and broader access.
Section 9: Drawbacks of Online Exhibitions
Description 9: Discuss the potential drawbacks of online exhibitions, focusing on visitor experience and educational impact.
Section 10: Virtual Exhibitions
Description 10: Define virtual exhibitions and distinguish them from other online exhibitions, providing examples.
Section 11: Web-based Exhibitions as Educational Tools
Description 11: Explore how web-based exhibitions serve as educational tools, facilitating active learning and international collaboration.
Section 12: DESIGN AND DEVELOPMENT OF USER INTERFACES
Description 12: Detail the considerations and best practices for designing user interfaces for online exhibitions.
Section 13: METADATA IN ONLINE EXHIBITIONS
Description 13: Explain the role of metadata in categorizing and accessing online exhibition content.
Section 14: AUTHORING TOOLS
Description 14: Describe the tools used to create and manage online exhibitions, including software like CONTENTdm and Multi MIMSY 2000.
Section 15: REVIEW OF ONLINE EXHIBITIONS
Description 15: Review various online exhibitions across topics like art, history, and science, highlighting their unique features.
Section 16: FUTURE TRENDS IN ONLINE EXHIBITIONS
Description 16: Discuss future trends in online exhibitions, including the use of multimedia systems, virtual reality, and Web 2.0 tools.
Section 17: CONCLUSION
Description 17: Summarize the impact of technological advancements on online exhibitions and their potential for future development.
|
A Survey on Context-aware systems | 12 | ---
paper_title: User interactions with everyday applications as context for just-in-time information access
paper_content:
Our central claim is that user interactions with everyday productivity applications (e.g., word processors, Web browsers, etc.) provide rich contextual information that can be leveraged to support just-in-time access to task-relevant information. We discuss the requirements for such systems, and develop a general architecture for systems of this type. As evidence for our claim, we present Watson, a system which gathers contextual information in the form of the text of the document the user is manipulating in order to proactively retrieve documents from distributed information repositories. We close by describing the results of several experiments with Watson, which show it consistently provides useful information to its users.
---
paper_title: The active badge location system
paper_content:
A novel system for the location of people in an office environment is described. Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors. The paper also examines alternative location techniques, system design issues and applications, particularly relating to telephone call routing. Location systems raise concerns about the privacy of an individual and these issues are also addressed.
---
paper_title: Context-awareness on mobile devices - the hydrogen approach
paper_content:
Information about the user's environment offers new opportunities and exposes new challenges in terms of time-aware, location-aware, device-aware and personalized applications. Such applications constantly need to monitor the environment - called context - to allow the application to react according to this context. Context-awareness is especially interesting in mobile scenarios where the context of the application is highly dynamic and allows the application to deal with the constraints of mobile devices in terms of presentation and interaction abilities and communication restrictions. Current context-aware applications often realize sensing of context information in an ad hoc manner. The application programmer needs to deal with the supply of the context information including the sensing of the environment, its interpretation and its disposal for further processing in addition to the primary purpose of the application. The close interweavement of device-specific context handling with the application obstructs its reuse with other hardware configurations. Recently, architectures providing support for context-aware applications have been developed. Up to now, such architectures have not been tailored to the special requirements of mobile devices, particularly the limitations of network connections, limited computing power and the characteristics of mobile users. This paper proposes an architecture and a software framework - the hydrogen context framework - which support context-awareness while considering these constraints. It is extensible to consider all kinds of context information and comprises a layered architecture. To prove the feasibility, the framework has been implemented to run on mobile devices. A context-aware postbox is realized to demonstrate the capabilities of the framework.
---
paper_title: Context-Aware Computing : The CyberDesk Project
paper_content:
The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk system in a desktop setting and are currently using it to build an intelligent home environment.
---
paper_title: Towards situated computing
paper_content:
Situated computing concerns the ability of computing devices to detect, interpret and respond to aspects of the user's local environment. In this paper, we use our recent prototyping experience to identify a number of challenging issues that must be resolved in building wearable computers that support situated applications. The paper is organized into three areas: Sensing the local environment, interpreting sensor data, and realizing the value of situated applications. We conclude that while it is feasible to develop interesting prototypes, there remain many difficulties to overcome before robust systems may be widely deployed.
---
paper_title: Placing search in context: the concept revisited
paper_content:
Keyword-based search engines are in widespread use today as a popular means for Web-based information retrieval. Although such systems seem deceptively simple, a considerable amount of skill is required in order to satisfy non-trivial information needs. This paper presents a new conceptual paradigm for performing search in context, that largely automates the search process, providing even non-professional users with highly relevant results. This paradigm is implemented in practice in the IntelliZap system, where search is initiated from a text query marked by the user in a document she views, and is guided by the text surrounding the marked query in that document (“the context”). The context-driven information retrieval process involves semantic keyword extraction and clustering to automatically generate new, augmented queries. The latter are submitted to a host of general and domain-specific search engines. Search results are then semantically reranked, using context. Experimental results testify that using context to guide search, effectively offers even inexperienced users an advanced search tool on the Web.
---
paper_title: Disseminating active map information to mobile hosts
paper_content:
The article describes an active map service (AMS) that supports context-aware computing by providing clients with information about located-objects and how those objects change over time. The authors focus on the communication issues of disseminating information from an active map server to its clients, and in particular, address how to deal with various overload situations that can occur. Simple unicast callbacks to interested clients work well enough if only a few located-objects are moving at any given time and only a few clients wish to know about any given move. However, if many people are moving about in the same region and many clients are interested in their motion, then the AMS may experience overload due to the quadratic nature of the communications involved. This overload affects both the server as well as any slow communications links being used. Mobile distributed computing enables users to interact with many different mobile and stationary computers over the course of the day. Navigating a mobile environment can be aided by active maps that describe the location and characteristics of objects within some region as they change over time.
---
paper_title: An Infrastructure Approach to Context-Aware Computing
paper_content:
The Context Toolkit (Dey, Abowd, and Salber, 2001 [this special issue]) is only one of many possible architectures for supporting context-aware applications. In this essay, we look at the tradeoffs involved with a service infrastructure approach to context-aware computing. We describe the advantages that a service infrastructure for context awareness has over other approaches, outline some of the core technical challenges that must be addressed before such an infrastructure can be built, and point out promising research directions for overcoming these challenges.
---
paper_title: A framework for developing mobile, context-aware applications
paper_content:
The emergence of truly ubiquitous computing, enabled by the availability of mobile, heterogeneous devices that supply context information, is currently hampered by the lack of programming support for the design and development of context-aware applications. We have developed a framework which significantly eases the development of mobile, context-aware applications. The framework allows developers to fuse data from disparate sensors, represent application context, and reason efficiently about context, without the need to write complex code. An event-based communication paradigm designed specifically for ad-hoc wireless environments is incorporated, which supports loose coupling between sensors, actuators and application components.
---
paper_title: Managing Context Information in Mobile Devices
paper_content:
We present a uniform mobile terminal software framework that provides systematic methods for acquiring and processing useful context information from a user's surroundings and giving it to applications. The framework simplifies the development of context-aware mobile applications by managing raw context information gained from multiple sources and enabling higher-level context abstractions.
---
paper_title: Context-awareness on mobile devices - the hydrogen approach
paper_content:
Information about the user's environment offers new opportunities and exposes new challenges in terms of time-aware, location-aware, device-aware and personalized applications. Such applications constantly need to monitor the environment - called context - to allow the application to react according to this context. Context-awareness is especially interesting in mobile scenarios where the context of the application is highly dynamic and allows the application to deal with the constraints of mobile devices in terms of presentation and interaction abilities and communication restrictions. Current context-aware applications often realize sensing of context information in an ad hoc manner. The application programmer needs to deal with the supply of the context information including the sensing of the environment, its interpretation and its disposal for further processing in addition to the primary purpose of the application. The close interweavement of device-specific context handling with the application obstructs its reuse with other hardware configurations. Recently, architectures providing support for context-aware applications have been developed. Up to now, such architectures have not been tailored to the special requirements of mobile devices, particularly the limitations of network connections, limited computing power and the characteristics of mobile users. This paper proposes an architecture and a software framework - the hydrogen context framework - which support context-awareness while considering these constraints. It is extensible to consider all kinds of context information and comprises a layered architecture. To prove the feasibility, the framework has been implemented to run on mobile devices. A context-aware postbox is realized to demonstrate the capabilities of the framework.
---
paper_title: CASS – a middleware for mobile context-aware applications
paper_content:
Among the difficulties faced by designers of mobile context-aware applications is the increased burden of having to deal with context and also the processing and memory constraints of small mobile computers. Although progress has been made in the area of frameworks and toolkits for context-awareness, there is still a need for middleware that supports higher-level context abstractions and is both flexible and extensible in its treatment of
---
paper_title: Structuring Context Aware Applications: Five-Layer Model and Example Case
paper_content:
Structuring context aware applications is important for modularising their implementation. The modularization is necessary for re-use and sharing of software and hardware components. To this end, a five-layer model for structuring context aware applications is presented. The layers are Physical, Data, Semantic, Inference and Application. (An example case involving a context aware mobile terminal in various user roles has been devised and implemented and this case is then mapped onto the five-layer model giving an example of its use. NOT DISCUSSED IN THIS POSITION PAPER)
---
paper_title: The context toolkit: aiding the development of context-enabled applications
paper_content:
Context-enabled applications are just emerging and promise richer interaction by taking environmental context into account. However, they are difficult to build due to their distributed nature and the use of unconventional sensors. The concepts of toolkits and widget libraries in graphical user interfaces have been tremendously successful, allowing programmers to leverage off existing building blocks to build interactive systems more easily. We introduce the concept of context widgets that mediate between the environment and the application in the same way graphical widgets mediate between the user and the application. We illustrate the concept of context widgets with the beginnings of a widget library we have developed for sensing presence, identity and activity of people and things. We assess the success of our approach with two example context-enabled applications we have built and an existing application to which we have added context-sensing capabilities.
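The widget analogy can be made concrete with a small sketch (hypothetical names and interfaces chosen for illustration; this is not the actual Context Toolkit API): a context widget hides the sensor-specific details and simply notifies interested application components when the piece of context it encapsulates changes.

    // Illustrative only; type and method names are invented, not the Context Toolkit's.
    import java.util.ArrayList;
    import java.util.List;

    interface ContextListener {
        void contextChanged(String attribute, Object newValue);
    }

    abstract class ContextWidget {
        private final List<ContextListener> listeners = new ArrayList<>();

        public void addListener(ContextListener l) {
            listeners.add(l);
        }

        // Called by sensor-specific subclass code whenever fresh data arrives.
        protected void notifyListeners(String attribute, Object value) {
            for (ContextListener l : listeners) {
                l.contextChanged(attribute, value);
            }
        }
    }

    // A widget sensing presence at a place, analogous to the presence/identity
    // widgets mentioned in the abstract; the actual sensing is stubbed out.
    class PresenceWidget extends ContextWidget {
        public void onBadgeSighting(String userId, String location) {
            notifyListeners("presence", userId + "@" + location);
        }
    }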
---
paper_title: An Intelligent Broker Architecture for Context-Aware Systems
paper_content:
Context-aware computing is an emerging paradigm to free everyday users from manually configuring and instructing computer systems. As the general trend of computing is progressing towards an open and dynamic infrastructure, building context-aware systems can be difficult and costly. In order to build successful context-aware systems, we must develop an architecture to reduce the difficulty and cost of building these systems. This PhD dissertation proposal describes a research plan to develop a broker-centric agent architecture that aims to relieve capability-limited agents of the burden of acquiring and reasoning about contexts, and to protect the privacy of users in a context-aware environment. The implementation of the Context Broker Architecture will explore Web Ontology Language for modeling contexts and privacy policies, Jess for building a hybrid reasoning mechanism and JADE/FIPA for realizing broker behaviors and agent communications.
---
paper_title: Managing Context Information in Mobile Devices
paper_content:
We present a uniform mobile terminal software framework that provides systematic methods for acquiring and processing useful context information from a user's surroundings and giving it to applications. The framework simplifies the development of context-aware mobile applications by managing raw context information gained from multiple sources and enabling higher-level context abstractions.
---
paper_title: An ontology for mobile device sensor-based context awareness
paper_content:
In mobile computing, the efficient utilisation of the information gained from the sensors embedded in the devices is difficult. Instead of using raw measurement data in an application-specific manner, as is currently customary, higher-abstraction-level semantic descriptions of the situation, context, can be used to develop mobile applications that are more usable. This article introduces an ontology of context constituents, which are derived from a set of sensors embedded in a mobile device. In other words, a semantic interface to the sensor data is provided. The ontology promotes the rapid development of mobile applications, more efficient use of resources, as well as reuse and sharing of information between communicating entities. A few mobile applications are presented to illustrate the possibilities of using the ontology.
---
paper_title: The anatomy of a context-aware application
paper_content:
We describe a sensor-driven, or sentient, platform for context-aware computing that enables applications to follow mobile users as they move around a building. The platform is particularly suitable for richly equipped, networked environments. The only item a user is required to carry is a small sensor tag, which identifies them to the system and locates them accurately in three dimensions. The platform builds a dynamic model of the environment using these location sensors and resource information gathered by telemetry software, and presents it in a form suitable for application programmers. Use of the platform is illustrated through a practical example, which allows a user's current working desktop to follow them as they move around the environment.
---
paper_title: E-graffiti: evaluating real-world use of a context-aware system
paper_content:
Much of the previous research in context-aware computing has sought to find a workable definition of context and to develop systems that could detect and interpret contextual characteristics of a user's environment. However, less time has been spent studying the usability of these types of systems. This was the goal of our project. E-graffiti is a context-aware application that detects the user's location on a college campus and displays text notes to the user based on their location. Additionally, it allows them to create notes that they can associate with a specific location. We released E-graffiti to 57 students who were using laptops that could access the campus wireless network. Their use of E-graffiti was logged in a remote database and they were also required to fill out a questionnaire towards the end of the semester. The lessons learned from the evaluation of E-graffiti point to themes other designers of ubiquitous and context-aware applications may need to address in designing their own systems. Some of the issues that emerged in the evaluation stage included difficulties with a misleading conceptual model, lack of use due to the reliance on explicit user input, the need for a highly relevant contextual focus, and the potential benefits of rapid, ongoing prototype development in tandem with user evaluation.
---
paper_title: Context-aware mobile communication in hospitals
paper_content:
A collaborative handheld system extends the instant messaging paradigm by adding context-awareness to support the intensive and distributed nature of information management within a hospital setting.
---
paper_title: Issues for context services for pervasive computing
paper_content:
Context-aware computing has the potential to greatly alleviate the human attention bottleneck. To facilitate the development of context-aware applications, we envision a context service that provides standardized support for applications. It supports both synchronous queries and asynchronous event notifications. It also addresses privacy concerns by providing controls that allow people to limit the release of their context information.
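To illustrate the query/notification split described here, the following is a minimal sketch of such a service; the class and method names are invented for illustration and are not the paper's API, and the privacy controls the paper discusses are omitted.

```python
# Minimal sketch of a context service offering synchronous queries and asynchronous
# event notifications. Class and method names are invented for illustration; the
# privacy controls discussed in the paper are omitted here.
from collections import defaultdict

class ContextService:
    def __init__(self):
        self._context = {}                     # e.g. {"alice.location": "room 214"}
        self._subscribers = defaultdict(list)  # context key -> list of callbacks

    def query(self, key):
        """Synchronous query: return the current value of a context item."""
        return self._context.get(key)

    def subscribe(self, key, callback):
        """Asynchronous notification: invoke callback whenever the item changes."""
        self._subscribers[key].append(callback)

    def update(self, key, value):
        """Called by sensors/providers; stores the value and notifies subscribers."""
        self._context[key] = value
        for cb in self._subscribers[key]:
            cb(key, value)

svc = ContextService()
svc.subscribe("alice.location", lambda k, v: print(f"notified: {k} -> {v}"))
svc.update("alice.location", "room 214")  # triggers the notification
print(svc.query("alice.location"))        # -> room 214
```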
---
paper_title: A policy language for a pervasive computing environment
paper_content:
We describe a policy language designed for pervasive computing applications that is based on deontic concepts and grounded in a semantic language. The pervasive computing environments under consideration are those in which people and devices are mobile and use various wireless networking technologies to discover and access services and devices in their vicinity. Such pervasive environments lend themselves to policy-based security due to their extremely dynamic nature. Using policies allows the security functionality to be modified without changing the implementation of the entities involved. However, along with being extremely dynamic, these environments also tend to span several domains and be made up of entities of varied capabilities. A policy language for environments of this sort needs to be very expressive but lightweight and easily extensible. We demonstrate the feasibility of our policy language in pervasive environments through a prototype used as part of a secure pervasive system.
---
| Title: A Survey on Context-aware Systems
Section 1: Introduction
Description 1: Introduce the concept of context-aware systems, their historical background, and significance in enhancing the usability and effectiveness of computing devices through environmental context.
Section 2: Architecture
Description 2: Discuss various architectural designs for context-aware systems, detailing approaches like direct sensor access, middleware-based, and context-server-based systems.
Section 3: Context Models
Description 3: Explain the need for context models to define and store context in a machine-processable form and discuss the essential attributes and desirable properties of flexible and efficient context ontologies.
Section 4: Location-aware Systems
Description 4: Describe the widespread application of location-aware systems, focusing on various infrastructures available for collecting position data and highlighting detailed examples.
Section 5: Context-aware Systems
Description 5: Discuss systems that use multiple types of context data beyond location, combining them to create high-level context objects for more adaptive and user-friendly systems.
Section 6: Context-aware Frameworks
Description 6: Introduce various context-aware frameworks that simplify the development of context-aware applications, highlighting the design decisions and features of each framework.
Section 7: Resource Discovery
Description 7: Cover the mechanisms for discovering and managing sensors in a distributed network, ensuring that appropriate sensors can be dynamically found and utilized.
Section 8: Sensing
Description 8: Discuss approaches to managing different data sources and separating applications from context acquisition concerns through components like widgets and sensor nodes.
Section 9: Context Processing
Description 9: Explain the methods for processing raw context data, including aggregation and interpretation, to provide high-level context abstractions useful for application developers.
Section 10: Historical Context Data
Description 10: Address the importance of maintaining historical context data for trend analysis and future context prediction, along with the memory and storage considerations involved.
Section 11: Security and Privacy
Description 11: Highlight the necessity of protecting sensitive context information and overview the security and privacy mechanisms applied in context-aware systems, including context ownership and access control.
Section 12: Conclusion and Future Work
Description 12: Summarize the survey findings, stressing the importance of standardization in context encoding and communication protocols, and suggest future research directions focusing on service-oriented architectures and web services. |
Example-based machine translation: a review and commentary | 12 | ---
paper_title: Review Article: Example-based Machine Translation
paper_content:
In the last ten years there has been a significant amount of research in Machine Translation within a "new" paradigm of empirical approaches, often labelled collectively as "Example-based" approaches. The first manifestation of this approach caused some surprise and hostility among observers more used to different ways of working, but the techniques were quickly adopted and adapted by many researchers, often creating hybrid systems. This paper reviews the various research efforts within this paradigm reported to date, and attempts a categorisation of different manifestations of the general approach.
---
paper_title: Shake-and-Bake Translation
paper_content:
This chapter presents a novel approach to Machine Translation (MT), called Shake-and-Bake translation, which is based on the idea of grammars as systems of constraints. Furthermore, in this approach translation equivalence is stipulated among bags of basic expressions of the languages concerned as presented in monolingual lexicalist grammars. Equivalence is constrained by equating variables in the logical forms of equivalent signs, providing a clear semantic basis for the notion of translation equivalence, and by any other constraints the bilingual lexicographer sees fit to include. Finally, the approach allows for the target language expressions to combine freely during generation, providing trivial accounts of phenomena for which structure-based approaches postulate complex transfer, noncompositional grammars, or undecidable logical inferences.
---
paper_title: Translating with Examples: A New Approach to Machine Translation
paper_content:
This paper proposes Example-Based Machine Translation (EBMT). EBMT retrieves similar examples (pairs of source texts and their translations) from the database, adapting the examples to translate a new source text. This paper compares the various costs of EBMT and conventional Rule-Based Machine Translation (RBMT). It explains EBMT's new features which RBMT lacks, and describes its configuration and the basic computing mechanism. In order to demonstrate EBMT's feasibility, the translation of Japanese noun phrases of the form "N1 no N2" to English noun phrases is explained in detail. Translation of other parts of Japanese sentences, including "da" sentences, aspect, and idiomatic expressions, as well as the integration of EBMT with RBMT are discussed.
---
paper_title: Non-Hybrid Example-Based Machine Translation Architectures
paper_content:
A general definition of rationalist and empiricist natural language processing is attempted. A classification of empiricist machine translation systems is given based on the rationalist/empiricist distinction. Examples of approaches falling into the two different strategies are discussed. Research results are reported from attempts to break new ground in what is referred to as "pure" or non-hybrid example-based machine translation.
---
paper_title: Learning Translation Templates From Bilingual Text
paper_content:
This paper proposes a two-phase example-based machine translation methodology which develops translation templates from examples and then translates using template matching. This method improves translation quality and facilitates customization of machine translation systems. This paper focuses on the automatic learning of translation templates. A translation template is a bilingual pair of sentences in which corresponding units (words and phrases) are coupled and replaced with variables. Correspondence between units is determined by using a bilingual dictionary and by analyzing the syntactic structure of the sentences. Syntactic ambiguity and ambiguity in correspondence between units are simultaneously resolved. All of the translation templates generated from a bilingual corpus are grouped by their source language part, and then further refined to resolve conflicts among templates whose source language parts are the same but whose target language parts are different. By using the proposed method, not only transfer rules but also knowledge for lexical selection is effectively extracted from a bilingual corpus.
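To make the template idea concrete, here is a minimal, hypothetical sketch of applying one learned template whose coupled units have been replaced by variables; the template, lexicon and example sentence are invented for illustration and this is not the paper's implementation.

```python
# Hypothetical sketch of applying one learned translation template in which coupled
# units have been replaced by shared variables. Template, lexicon and example sentence
# are invented for illustration; this is not the paper's implementation.
import re

# Illustrative template: "X1 ni X2 o okuru" <-> "send X2 to X1"
template = {
    "source": r"^(?P<X1>\w+) ni (?P<X2>\w+) o okuru$",
    "target": "send {X2} to {X1}",
}

# Bilingual lexicon used to translate the words bound to each variable.
lexicon = {"tomodachi": "a friend", "tegami": "a letter"}

def translate(sentence):
    """Match the source pattern, translate the bound units, fill the target pattern."""
    m = re.match(template["source"], sentence)
    if m is None:
        return None  # no template covers this input
    bindings = {var: lexicon.get(word, word) for var, word in m.groupdict().items()}
    return template["target"].format(**bindings)

print(translate("tomodachi ni tegami o okuru"))  # -> send a letter to a friend
```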
---
paper_title: Robust Large-Scale EBMT with Marker-Based Segmentation
paper_content:
Previous work on marker-based EBMT [Gough & Way, 2003, Way & Gough, 2004] suffered from problems such as data-sparseness and disparity between the training and test data. We have developed a large-scale robust EBMT system. In a comparison with the systems listed in [Somers, 2003], ours is the third largest EBMT system and certainly the largest English-French EBMT system. Previous work used the on-line MT system Logomedia to translate source language material as a means of populating the system's database where bitexts were unavailable. We derive our sententially aligned strings from a Sun Translation Memory (TM) and limit the integration of Logomedia to the derivation of our word-level lexicon. We also use Logomedia to provide a baseline comparison for our system and observe that we outperform Logomedia and previous marker-based EBMT systems in a number of tests.
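Marker-based segmentation chunks a sentence at a small closed class of "marker" words (determiners, prepositions, conjunctions and similar). The toy sketch below illustrates only the general idea; the marker set and the single rule used are simplified inventions, not the cited systems' segmenter.

```python
# Toy sketch of marker-based chunking: a sentence is segmented at a small closed class
# of "marker" words. The marker set and the single rule used here are simplified
# inventions, not the cited systems' segmenter.
MARKERS = {"the", "a", "an", "in", "on", "to", "of", "with", "and", "that", "his", "her"}

def marker_chunks(sentence):
    """Start a new chunk at every marker word; each chunk keeps its leading marker."""
    chunks, current = [], []
    for word in sentence.lower().split():
        if word in MARKERS and current:
            chunks.append(current)
            current = [word]
        else:
            current.append(word)
    if current:
        chunks.append(current)
    return [" ".join(c) for c in chunks]

print(marker_chunks("The owner sent the letter to his friend in Paris"))
# -> ['the owner sent', 'the letter', 'to', 'his friend', 'in paris']
```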
---
paper_title: Combining Dictionary-Based and Example-Based Methods for Natural Language Analysis
paper_content:
We propose combining dictionary-based and example-based natural language (NL) processing techniques in a framework that we believe will provide substantive enhancements to NL analysis systems. The centerpiece of this framework is a relatively large-scale lexical knowledge base that we have constructed automatically from an online version of Longman's Dictionary of Contemporary English (LDOCE), and that is currently used in our NL analysis system to direct phrasal attachments. After discussing the effective use of example-based processing in hybrid NL systems, we compare recent dictionary-based and example-based work, and identify the aspects of this work that are included in the proposed framework. We then describe the methods employed in automatically creating our lexical knowledge base from LDOCE, and its current and planned use as a large-scale example base in our NL analysis system. This knowledge base is structured as a highly interconnected network of words linked by semantic relations such as is_a, has_part, location_of, typical_object, and is_for. We claim that within the proposed hybrid framework, it provides a uniquely rich source of information for use during NL analysis.
---
paper_title: Example-Based Controlled Translation
paper_content:
The first research on integrating controlled language data in an Example-Based Machine Translation (EBMT) system was published in [Gough & Way, 2003]. We improve on their sub-sentential alignment algorithm to populate the system's databases with more than six times as many potentially useful fragments. Together with two simple novel improvements—correcting mistranslations in the lexicon, and allowing multiple translations in the lexicon—translation quality improves considerably when target language translations are constrained. We also develop the first EBMT system which attempts to filter the source language data using controlled language specifications. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms Logomedia in a number of tests. Finally, despite conflicting results from different automatic evaluation metrics, we observe a preference for controlling the source data rather than the target translations.
---
paper_title: Controlled Translation in an Example-based Environment: What do Automatic Evaluation Metrics Tell Us?
paper_content:
This paper presents an extended, harmonised account of our previous work on integrating controlled language data in an Example-based Machine Translation system. Gough and Way in MT Summit pp. 133–140 (2003) focused on controlling the output text in a novel manner, while Gough and Way (9th Workshop of the EAMT, 2004a, pp. 73–81) sought to constrain the input strings according to controlled language specifications. Our original sub-sentential alignment algorithm could deal only with 1:1 matches, but subsequent refinements enabled n:m alignments to be captured. A direct consequence was that we were able to populate the system's databases with more than six times as many potentially useful fragments. Together with two simple novel improvements—correcting a small number of mistranslations in the lexicon, and allowing multiple translations in the lexicon—translation quality improves considerably. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms the rule-based on-line system Logomedia on a range of automatic evaluation metrics, and that the 'best' translation candidate is consistently highly ranked by our system. Finally, we note in a number of tests that the BLEU metric gives objectively different results than other automatic evaluation metrics and a manual evaluation. Despite these conflicting results, we observe a preference for controlling the source data rather than the target translations.
---
paper_title: Comparing example-based and statistical machine translation
paper_content:
In previous work (Gough and Way 2004), we showed that our Example-Based Machine Translation (EBMT) system improved with respect to both coverage and quality when seeded with increasing amounts of training data, so that it significantly outperformed the on-line MT system Logomedia according to a wide variety of automatic evaluation metrics. While it is perhaps unsurprising that system performance is correlated with the amount of training data, we address in this paper the question of whether a large-scale, robust EBMT system such as ours can outperform a Statistical Machine Translation (SMT) system. We obtained a large English-French translation memory from Sun Microsystems from which we randomly extracted a near 4K test set. The remaining data was split into three training sets, of roughly 50K, 100K and 200K sentence-pairs in order to measure the effect of increasing the size of the training data on the performance of the two systems. Our main observation is that contrary to perceived wisdom in the field, there appears to be little substance to the claim that SMT systems are guaranteed to outperform EBMT systems when confronted with 'enough' training data. Our tests on a 4.8 million word bitext indicate that while SMT appears to outperform our system for French-English on a number of metrics, for English-French, on all but one automatic evaluation metric, the performance of our EBMT system is superior to the baseline SMT model.
---
| Title: Example-based machine translation: a review and commentary
Section 1: General introduction
Description 1: Provide an overview of the evolution and contrast among rule-based, statistical, and example-based machine translation models.
Section 2: Contents of the collection
Description 2: Discuss the aims of the collection, the editors' perspective on defining EBMT, and the structure of the compiled articles.
Section 3: Foundations of EBMT
Description 3: Summarize foundational articles discussing the distinctive features of EBMT and comparisons with RBMT and SMT.
Section 4: Run-time approaches to EBMT
Description 4: Describe various EBMT systems that make direct use of bilingual data during the translation process.
Section 5: Template-driven EBMT
Description 5: Outline methods of building and using templates from bilingual corpora in EBMT systems.
Section 6: EBMT and derivation trees
Description 6: Detail approaches that use more structured templates derived from bilingual corpora.
Section 7: General comments and observations
Description 7: Address the overall coherence of EBMT systems, defining EBMT, and its relation to RBMT and SMT.
Section 8: Defining EBMT
Description 8: Explore attempts to define EBMT, its core processes, and how it differs from SMT and RBMT.
Section 9: Methods and problems in EBMT
Description 9: Discuss specific example-based methods, handling of templates, semantic disambiguation, and issues encountered.
Section 10: Complexity and expandability
Description 10: Evaluate the complexity of EBMT systems, scalability issues, and the potential for expansion.
Section 11: Hospitality and hybridization
Description 11: Examine the integration of various techniques within EBMT and its status as a hybrid MT approach.
Section 12: Evaluation and commercialization
Description 12: Assess the performance of EBMT systems, current evaluation methods, and prospects for commercialization. |
Privacy implications of accelerometer data: a review of possible inferences | 12 | ---
paper_title: SemaDroid: A Privacy-Aware Sensor Management Framework for Smartphones
paper_content:
While mobile sensing applications are booming, the sensor management mechanisms in current smartphone operating systems are left behind -- they are incomprehensive and coarse-grained, exposing a huge attack surface for malicious or aggressive third party apps to steal user's private information through mobile sensors. In this paper, we propose a privacy-aware sensor management framework, called SemaDroid, which extends the existing sensor management framework on Android to provide comprehensive and fine-grained access control over onboard sensors. SemaDroid allows the user to monitor the sensor usage of installed apps, and to control the disclosure of sensing information while not affecting the app's usability. Furthermore, SemaDroid supports context-aware and quality-of-sensing based access control policies. The enforcement and update of the policies are in real-time. Detailed design and implementation of SemaDroid on Android are presented to show that SemaDroid works compatible with the existing Android security framework. Demonstrations are also given to show the capability of SemaDroid on sensor management and on defeating emerging sensor-based attacks. Finally, we show the high efficiency and security of SemaDroid.
---
paper_title: Exploring privacy concerns about personal sensing
paper_content:
More and more personal devices such as mobile phones and multimedia players use embedded sensing. This means that people are wearing and carrying devices capable of sensing details about them such as their activity, location, and environment. In this paper, we explore privacy concerns about such personal sensing through interviews with 24 participants who took part in a three month study that used personal sensing to detect their physical activities. Our results show that concerns often depended on what was being recorded, the context in which participants worked and lived and thus would be sensed, and the value they perceived would be provided. We suggest ways in which personal sensing can be made more privacy-sensitive to address these concerns.
---
paper_title: Mobile Device Identification via Sensor Fingerprinting
paper_content:
We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone. Such a fingerprint can be used to de-anonymize mobile devices as they connect to web sites, and as a second factor in identifying legitimate users to a remote server. We present two implementations: one based on analyzing the frequency response of the speakerphone-microphone system, and another based on analyzing device-specific accelerometer calibration errors. Our accelerometer-based fingerprint is especially interesting because the accelerometer is accessible via JavaScript running in a mobile web browser without requesting any permissions or notifying the user. We present the results of the most extensive sensor fingerprinting experiment done to date, which measured sensor properties from over 10,000 mobile devices. We show that the entropy from sensor fingerprinting is sufficient to uniquely identify a device among thousands of devices, with low probability of collision.
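As a rough illustration of the calibration-error idea, here is a simplified sketch that assumes the device can be rested with an axis pointing up and then down, so the true readings are +g and -g; the paper's actual fingerprinting procedure is more involved.

```python
# Simplified sketch of an accelerometer calibration-error "fingerprint". It assumes the
# device is rested with one axis pointing up and then down, so the true values are +g
# and -g; the paper's actual estimation procedure is more involved.
import numpy as np

G = 9.81  # m/s^2

def axis_fingerprint(readings_up, readings_down):
    """Estimate per-axis offset and gain from still measurements taken at +g and -g."""
    up = float(np.mean(readings_up))
    down = float(np.mean(readings_down))
    offset = (up + down) / 2.0       # an ideal accelerometer would give 0
    gain = (up - down) / (2.0 * G)   # an ideal accelerometer would give 1
    return offset, gain

# Fake still measurements for one axis of a slightly miscalibrated device.
rng = np.random.default_rng(0)
up_samples = 9.93 + 0.02 * rng.standard_normal(200)     # true value +g
down_samples = -9.76 + 0.02 * rng.standard_normal(200)  # true value -g
print(axis_fingerprint(up_samples, down_samples))  # device-specific (offset, gain) pair
```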
---
paper_title: Sensor Guardian: prevent privacy inference on Android sensors
paper_content:
Privacy inference attacks based on sensor data are an emerging and severe threat on smart devices, in which malicious applications leverage data from innocuous sensors to infer sensitive information about the user, e.g., utilizing accelerometers to infer the user's keystrokes. In this paper, we present Sensor Guardian, a privacy protection system that mitigates this threat on Android by hooking and controlling applications' access to sensors. Sensor Guardian inserts hooks into applications by statically instrumenting their APK (short for Android Package Kit) files and enforces control policies in these hooks at runtime. Our evaluation shows that Sensor Guardian can effectively and efficiently mitigate the privacy inference threat on Android sensors, with negligible overhead during both static instrumentation and runtime control.
---
paper_title: Doorjamb: unobtrusive room-level tracking of people in homes using doorway sensors
paper_content:
Indoor tracking systems will be an essential part of the home of the future, enabling location-aware and individually-tailored services. However, today there are no tracking solutions that are practical for "every day" use in the home. In this paper, we introduce the Doorjamb tracking system that uses ultrasonic range finders mounted above each doorway, pointed downward to sense people as they walk through the doorway. The system differentiates people by measuring their heights, infers their walking direction using signal processing, and identifies their room locations based on the sequence of doorways through which they pass. Doorjamb provides room-level tracking without requiring any user participation, wearable devices, privacy-intrusive sensors, or high-cost sensors. We create a proof-of-concept implementation and empirically evaluate Doorjamb with experiments that include over 3000 manually-recorded doorway crossings. Results indicate that the system can perform room-level tracking with 90% accuracy on average.
---
paper_title: Actitracker: A Smartphone-Based Activity Recognition System for Improving Health and Well-Being
paper_content:
Actitracker is a smartphone-based activity-monitoring service to help people ensure they receive sufficient activity to maintain proper health. This free service allowed people to set personal activity goals and monitor their progress toward these goals. Actitracker uses machine learning methods to recognize a user's activities. It initially employs a "universal" model generated from labeled activity data from a panel of users, but will automatically shift to a much more accurate personalized model once a user completes a simple training phase. Detailed activity reports and statistics are maintained and provided to the user. Actitracker is a research-based system that began in 2011, before fitness trackers like Fitbit were popular, and was deployed for public use from 2012 until 2015, during which period it had 1,000 registered users. This paper describes the Actitracker system, its use of machine learning, and user experiences. While activity recognition has now entered the mainstream, this paper provides insights into applied activity recognition, something that commercial companies rarely share.
---
paper_title: uWave: Accelerometer-based personalized gesture recognition and its applications
paper_content:
The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures or physical manipulation of the devices. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. Unlike statistical methods, uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures and physical manipulations. We evaluate uWave using a large gesture library with over 4000 samples collected from eight users over an elongated period of time for a gesture vocabulary with eight gesture patterns identified by a Nokia study. It shows that uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. Our evaluation data set is the largest and most extensive in published studies, to the best of our knowledge. We also present applications of uWave in gesture-based user authentication and interaction with three-dimensional mobile user interfaces using user created gestures.
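The abstract does not name the matching algorithm, but single-template gesture matching of this kind is commonly implemented with dynamic time warping (DTW); the following numpy sketch (not the authors' code) shows the basic template comparison.

```python
# Minimal dynamic time warping (DTW) sketch for matching a new 3-axis acceleration
# trace against one stored template per gesture (illustrative; not the authors' code).
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences of 3-axis samples, shapes (n, 3) and (m, 3)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(trace, templates):
    """Return the gesture label whose single stored template is closest under DTW."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))

# Toy usage: one stored example per gesture, and a noisy query trace.
templates = {
    "circle": np.array([[0, 1, 0], [1, 0, 0], [0, -1, 0], [-1, 0, 0]], dtype=float),
    "flick": np.array([[0, 0, 0], [2, 0, 0], [4, 0, 0]], dtype=float),
}
query = np.array([[0.1, 0.9, 0.0], [1.1, 0.1, 0.0], [0.0, -1.0, 0.1], [-0.9, 0.0, 0.0]])
print(recognize(query, templates))  # -> circle
```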
---
paper_title: Detection of Activities by Wireless Sensors for Daily Life Surveillance: Eating and Drinking
paper_content:
This paper introduces a two-stage approach to the detection of people eating and/or drinking for the purposes of daily life surveillance. With the sole use of a wearable accelerometer sensor attached to a person's wrist, this two-stage approach consists of feature extraction followed by classification. In the first stage, based on the limb's three-dimensional kinematic movement model and the Extended Kalman Filter (EKF), real-time arm movement features described by Euler angles are extracted from the raw accelerometer measurement data. In the latter stage, the Hierarchical Temporal Memory (HTM) network is adopted to classify the extracted features of the eating/drinking activities based on the space- and time-varying property of the features, making use of the powerful modelling capability of the HTM network on dynamic signals that vary with both space and time. The proposed approach is tested on real eating and drinking activities using three-dimensional accelerometers. Experimental results show that the EKF and HTM based two-stage approach can perform the activity detection successfully with very high accuracy.
---
paper_title: Driving Behavior and Traffic Safety: An Acceleration-Based Safety Evaluation Procedure for Smartphones
paper_content:
Traffic safety and the energy efficiency of vehicles are strictly related to driver behavior. The scientific literature has investigated some specific dynamic parameters that, among others, can be used as a measure of unsafe or aggressive driving style, such as the longitudinal and lateral acceleration of the vehicle. Moreover, the use of modern mobile devices (smartphones and tablets), and their internal sensors (GPS receivers, three-axis accelerometers), allows road users to receive real-time information and feedback that can be useful to increase driver awareness and promote safety. This paper focuses on the development of a prototype mobile application that evaluates how safely drivers are behaving on the road by measuring accelerations (longitudinal and lateral) and warning users when it would be advisable to correct their driving style. Aggressiveness is evaluated by plotting the vehicle's acceleration on a purpose-designed g-g diagram, where longitudinal and lateral acceleration is displayed inside areas of "Good Driving Style". Several experimental tests were carried out with different drivers and cars in order to estimate the system accuracy and the usability of the application. This work is part of the wider research project M2M, Mobile to Mobility: Information and communication technology systems for road traffic safety (PON National Operational Program for Research and Competitiveness 2007-2013), which is based on the use of mobile sensor computing systems to give real-time information in order to reduce risks and make the transportation system safer and more comfortable.
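The underlying check can be reduced to testing whether a measured acceleration sample falls outside a safe region of the g-g plane; a bare-bones sketch follows, in which the circular region and its radius are invented for illustration (the application above designs its own "Good Driving Style" areas).

```python
# Bare-bones g-g check: warn when the combined longitudinal/lateral acceleration leaves
# an assumed safe region. The circular region and its radius are invented for
# illustration; the application above designs its own "Good Driving Style" areas.
import math

G = 9.81
SAFE_RADIUS_G = 0.3  # assumed comfort/safety limit on combined acceleration, in g

def driving_event(a_long, a_lat):
    """Classify one (longitudinal, lateral) acceleration sample, both in m/s^2."""
    combined_g = math.hypot(a_long, a_lat) / G
    return "warn" if combined_g > SAFE_RADIUS_G else "ok"

print(driving_event(1.0, 0.8))  # gentle manoeuvre            -> ok
print(driving_event(3.5, 2.0))  # hard braking while turning  -> warn
```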
---
paper_title: The Validity of a New Consumer-Targeted Wrist Device in Sleep Measurement: An Overnight Comparison Against Polysomnography in Children and Adolescents
paper_content:
STUDY OBJECTIVES: The validity of consumer-targeted wrist-worn sleep measurement systems has been little studied in children and adolescents. We examined the validity of a new fitness tracker (PFT) manufactured by Polar Electro Oy and the previously validated Actiwatch 2 (AW2) from Philips Respironics against polysomnography (PSG) in children and adolescents. METHODS: Seventeen children (age 11.0 ± 0.8 years) and 17 adolescents (age 17.8 ± 1.8 years) wore the PFT and AW2 concurrently with an ambulatory PSG in their own home for 1 night. We compared sleep onset, offset, sleep interval (time from sleep onset to offset), actual sleep time (time scored as sleep between sleep onset and offset), and wake after sleep onset (WASO) between accelerometers and PSG. Sensitivity, specificity, and accuracy were calculated from the epoch-by-epoch data. RESULTS: Both devices performed adequately against PSG, with excellent sensitivity for both age groups (> 0.91). In terms of specificity, the PFT was adequate in both groups (> 0.77), and AW2 adequate in children (0.68) and poor in adolescents (0.58). In the younger group, the PFT underestimated actual sleep time by 29.9 minutes and AW2 underestimated actual sleep time by 43.6 minutes. Both overestimated WASO, PFT by 24.4 minutes and AW2 by 20.9 minutes. In the older group, both devices underestimated actual sleep time (PFT by 20.6 minutes and AW2 by 26.8 minutes) and overestimated WASO (PFT by 12.5 minutes and AW2 by 14.3 minutes). Both devices were accurate in defining sleep onset. CONCLUSIONS: This study suggests that this consumer-targeted wrist-worn device performs as well as, or even better than, the previously validated AW2 against PSG in children and adolescents. Both devices underestimated sleep but to a lesser extent than seen in many previous validation studies on research-targeted accelerometers.
---
paper_title: Using mobile phone sensors to detect driving behavior
paper_content:
In India, an increasing number of vehicles on the roads has, in the recent past, led to an increase in the number of road accidents. There have been alarming statistics regarding the number of accidents per day in India: at least 142,000 people died due to road accidents in India in the year 2011. Bad driving, lax traffic control, and poor road conditions are the main reasons for this. In this work, we present a mobile phone application that uses a combination of in-built sensors (GPS, microphone and accelerometer) to detect driving behavior along with road and traffic conditions. This application will prove helpful in detecting bad driving as well as road and traffic conditions in order to assist a willing individual to change his or her driving behavior. Law enforcement agencies may also use this data in analyzing the ground realities behind the increasing number of road accidents.
---
paper_title: Estimating load carriage from a body-worn accelerometer
paper_content:
Heavy loads increase the risk of musculoskeletal injury for foot soldiers and first responders. Continuous monitoring of load carriage in the field has proven difficult. We propose an algorithm for estimating load from a single body-worn accelerometer. The algorithm utilizes three different methods for characterizing torso movement dynamics, and maps the extracted dynamics features to load estimates using two machine learning multivariate regression techniques. The algorithm is applied, using leave-one-subject-out cross-validation, to two field collections of soldiers and civilians walking with varying loads. Rapid, accurate estimates of load are obtained, demonstrating robustness to changes in equipment configuration, walking conditions, and walking speeds. On soldier data with loads ranging from 45 to 89 lbs, load estimates result in mean absolute error (MAE) of 6.64 lbs and correlation of r = 0.81. On combined soldier and civilian data, with loads ranging from 0 to 89 lbs, results are MAE = 9.57 lbs and r = 0.91.
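A toy version of the feature-to-load regression step is sketched below; the synthetic features, data and ordinary-least-squares model are placeholders, not the paper's torso-dynamics features or its multivariate regression techniques.

```python
# Toy version of the feature-to-load regression step. The synthetic features, data and
# ordinary-least-squares model are placeholders, not the paper's torso-dynamics
# features or its multivariate regression techniques.
import numpy as np

rng = np.random.default_rng(1)

# Pretend training data: one row of movement-dynamics features per recorded walk,
# with the known carried load (in lbs) as the regression target.
loads = rng.uniform(0, 90, size=50)
features = np.column_stack([
    9.8 + 0.010 * loads + 0.10 * rng.standard_normal(50),  # e.g. mean acceleration magnitude
    1.0 - 0.004 * loads + 0.05 * rng.standard_normal(50),  # e.g. stride regularity
    2.0 - 0.008 * loads + 0.10 * rng.standard_normal(50),  # e.g. vertical bounce amplitude
])

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(loads)), features])
w, *_ = np.linalg.lstsq(X, loads, rcond=None)

# Estimate the load for a new walk whose features were generated for a 60 lb load.
new_walk = np.array([1.0, 9.8 + 0.010 * 60, 1.0 - 0.004 * 60, 2.0 - 0.008 * 60])
print(round(float(new_walk @ w), 1))
```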
---
paper_title: Smart Elderly Home Monitoring System with an Android Phone
paper_content:
Falls, heart attacks and strokes are among the leading causes of hospitalization for elderly and chronically ill individuals. The chances of surviving a fall, heart attack or stroke are much greater if the senior gets help within an hour. In this project, a smart elderly home monitoring system (SEHMS) is designed and developed. An Android-based smartphone with a 3-axial accelerometer is used as the telehealth device, which can detect a fall of the carrier. The smartphone is then connected to the monitoring system by using the TCP/IP networking method via Wi-Fi. A graphical user interface (GUI) is developed as the monitoring system, which exhibits the information gathered from the system. In addition, the concept of a remote panic button has been tested and implemented in this project using the same Android-based smartphone. With the developed system, elderly and chronically ill patients could stay independently in their own home with care facilities, secure in the knowledge that they are being monitored.
---
paper_title: Activity classification using a single wrist-worn accelerometer
paper_content:
Automatic identification of human activity has opened the possibility of providing personalised services in different domains, i.e. healthcare, security and sport. With advancements in sensor technology, automatic activity recognition can be done in an unobtrusive and non-intrusive way. The placement of the sensor and its wearability are among the vital keys to successful activity recognition for free-living subjects. Experiments were carried out to investigate the use of a single wrist-worn accelerometer for automatic activity classification. The performances of two classification algorithms, namely Decision Tree C4.5 and Artificial Neural Network, were compared using four different sets of features to classify five daily living activities. The results revealed that Decision Tree C4.5 outperformed the Neural Network regardless of the feature set used. The best classification result was achieved using the set containing the most popular and accurate features, i.e. mean, minimum, energy, sample differences, etc. The best accuracy of 94.13% was achieved using only a wrist-worn accelerometer, showing the possibility of automatic activity classification with no movement constraint, discomfort or stigmatisation caused by the sensor.
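Per-window features of the kind listed above (mean, minimum, energy, sample differences) can be computed along the following lines; the sampling rate, window length and exact feature definitions are assumptions rather than the study's specification.

```python
# Sketch of per-window feature extraction of the kind listed above. The sampling rate,
# window length and exact feature definitions are assumptions, not the study's
# specification.
import numpy as np

def window_features(window):
    """window: array of shape (n_samples, 3) holding x/y/z acceleration."""
    feats = []
    for axis in range(3):
        x = window[:, axis]
        feats += [
            float(x.mean()),                     # mean
            float(x.min()),                      # minimum
            float(np.sum(x ** 2)) / len(x),      # energy
            float(np.mean(np.abs(np.diff(x)))),  # mean absolute sample difference
        ]
    return np.array(feats)

fs = 50              # assumed sampling rate (Hz)
window_len = 2 * fs  # assumed 2-second windows
signal = np.random.default_rng(0).standard_normal((10 * fs, 3))  # fake 10 s recording
windows = [signal[i:i + window_len]
           for i in range(0, len(signal) - window_len + 1, window_len)]
X = np.vstack([window_features(w) for w in windows])
print(X.shape)  # (5, 12): five windows, twelve features, ready for C4.5 or a neural net
```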
---
paper_title: Feature Selection and Activity Recognition System Using a Single Triaxial Accelerometer
paper_content:
Activity recognition is required in various applications such as medical monitoring and rehabilitation. Previously developed activity recognition systems utilizing triaxial accelerometers have provided mixed results, with subject-to-subject variability. This paper presents an accurate activity recognition system utilizing a body-worn wireless accelerometer, to be used in the real-life application of patient monitoring. The algorithm utilizes data from a single, waist-mounted triaxial accelerometer to classify gait events into six daily living activities and transitional events. The accelerometer can be worn at any location around the circumference of the waist, thereby reducing user training. Feature selection is performed using Relief-F and sequential forward floating search (SFFS) from a range of previously published features, as well as new features introduced in this paper. Relevant and robust features that are insensitive to the positioning of the accelerometer around the waist are selected. SFFS selected almost half the number of features in comparison to Relief-F and provided higher accuracy than Relief-F. Activity classification is performed using Naive Bayes and k-nearest neighbor (k-NN) and the results are compared. Activity recognition results on seven subjects with leave-one-person-out error estimates show an overall accuracy of about 98% for both classifiers. Accuracy for each of the individual activities is also more than 95%.
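Once robust features have been selected, the k-NN step itself is straightforward; the sketch below runs over synthetic feature vectors and is illustrative only (it is not the paper's seven-subject data or its Relief-F/SFFS feature selection).

```python
# Illustrative k-NN activity classification over already-extracted feature vectors,
# using synthetic data; the paper evaluates Naive Bayes and k-NN on features chosen by
# Relief-F and SFFS from seven real subjects.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training windows closest to feature vector x."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

rng = np.random.default_rng(2)
# Two fake activity clusters in a 2-D feature space (e.g. signal-magnitude area vs.
# dominant frequency).
walking = rng.normal([2.0, 1.8], 0.2, size=(20, 2))
sitting = rng.normal([0.3, 0.1], 0.2, size=(20, 2))
train_X = np.vstack([walking, sitting])
train_y = ["walking"] * 20 + ["sitting"] * 20

print(knn_predict(train_X, train_y, np.array([1.9, 1.7])))  # -> walking
print(knn_predict(train_X, train_y, np.array([0.2, 0.2])))  # -> sitting
```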
---
paper_title: Mobile phone based drunk driving detection
paper_content:
Drunk driving, or officially Driving Under the Influence (DUI) of alcohol, is a major cause of traffic accidents throughout the world. In this paper, we propose a highly efficient system aimed at the early detection of, and alerts about, dangerous vehicle maneuvers typically related to drunk driving. The entire solution requires only a mobile phone placed in the vehicle with an accelerometer and orientation sensor. A program installed on the mobile phone computes accelerations based on sensor readings, and compares them with typical drunk driving patterns extracted from real driving tests. Once any evidence of drunk driving is present, the mobile phone will automatically alert the driver or call the police for help well before an accident actually happens. We implemented the detection system on an Android G1 phone and tested it with different kinds of driving behaviors. The results show that the system achieves high accuracy and energy efficiency.
---
paper_title: Physical activity monitoring by use of accelerometer-based body-worn sensors in older adults: A systematic literature review of current knowledge and applications
paper_content:
Objectives: To systematically review the literature on physical activity variables derived from body-worn sensors during long term monitoring in healthy and in-care older adults. Methods: Using pre-designed inclusion and exclusion criteria, a PubMed search strategy was designed to trace relevant reports of studies. Last search date was March 8, 2011. Study selection: Studies that included persons with mean or median age of >65 years, used accelerometer-based body-worn sensors with a monitoring length of >24 h, and reported values on physical activity in the samples assessed. Results: 1403 abstracts were revealed and 134 full-text papers included in the final review. A variety of variables derived from activity counts or recognition of performed activities were reported in healthy older adults as well as in in-care older adults. Three variables were possible to compare across studies: level of Energy Expenditure in kcal per day and activity recognition in terms of total time in walking and total activity. However, physical activity measured by these variables demonstrated large variation between studies and did not distinguish activity between healthy and in-care samples. Conclusion: There is a rich variety in methods used for data collection and analysis as well as in reported variables. Different aspects of physical activity can be described, but the variety makes it challenging to compare across studies. There is an urgent need for developing consensus on activity monitoring protocols and which variables to report.
---
paper_title: Accelerometer Data Collection and Processing Criteria to Assess Physical Activity and Other Outcomes: A Systematic Review and Practical Considerations
paper_content:
BACKGROUND: Accelerometers are widely used to measure sedentary time, physical activity, physical activity energy expenditure (PAEE), and sleep-related behaviors, with the ActiGraph being the most frequently used brand by researchers. However, data collection and processing criteria have evolved in a myriad of ways out of the need to answer unique research questions; as a result there is no consensus. OBJECTIVES: The purpose of this review was to: (1) compile and classify existing studies assessing sedentary time, physical activity, energy expenditure, or sleep using the ActiGraph GT3X/+ through data collection and processing criteria to improve data comparability and (2) review data collection and processing criteria when using GT3X/+ and provide age-specific practical considerations based on the validation/calibration studies identified. METHODS: Two independent researchers conducted the search in PubMed and Web of Science. We included all original studies in which the GT3X/+ was used in laboratory, controlled, or free-living conditions published from 1 January 2010 to 31 December 2015. RESULTS: The present systematic review provides key information about the following data collection and processing criteria: placement, sampling frequency, filter, epoch length, non-wear-time, what constitutes a valid day and a valid week, cut-points for sedentary time and physical activity intensity classification, and algorithms to estimate PAEE and sleep-related behaviors. The information is organized by age group, since criteria are usually age-specific. CONCLUSION: This review will help researchers and practitioners to make better decisions before (i.e., device placement and sampling frequency) and after (i.e., data processing criteria) data collection using the GT3X/+ accelerometer, in order to obtain more valid and comparable data. PROSPERO REGISTRATION NUMBER: CRD42016039991.
---
paper_title: Mining for motivation: using a single wearable accelerometer to detect people's interests
paper_content:
This paper presents a novel investigation of how motion as measured with just a single wearable accelerometer is informative of people's interests and motivation during crowded social events. We collected accelerometer readings on a large number of people (32 and 46 people in two crowded social events involving up to hundreds of people). In our experiments, we demonstrate how people's movements are informative of their particular interests: during talks, their interests in particular topics, and during networking events, their interest to participate successfully to make new contacts and foster existing ones. To our knowledge, using a single body worn accelerometer to measure and automatically infer these aspects of social behaviour has never been attempted before. Our experiments show that despite the challenge of the proposed task, useful automated predictions are possible and demonstrate the potential for further research in this area.
---
paper_title: Activity Recognition Using a Single Accelerometer Placed at the Wrist or Ankle
paper_content:
Purpose: Large physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purpose of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities. Methods: Participants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about frequency and intensity of motion extracted from analysis of the raw signal were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated. Results: With 12.8-s windows, the proposed strategy showed high classification accuracies for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4 s) windows only minimally decreased performance of the algorithm on the wrist, to 84.2%. Conclusions: A classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency.
---
paper_title: Validity of 10 electronic pedometers for measuring steps, distance, and energy cost.
paper_content:
CROUTER, S. E., P. L. SCHNEIDER, M. KARABULUT, and D. R. BASSETT, JR. Validity of 10 Electronic Pedometers for Measuring Steps, Distance, and Energy Cost. Med. Sci. Sports Exerc., Vol. 35, No. 8, pp. 1455–1460, 2003. Purpose: This study examined the effects of walking speed on the accuracy and
---
paper_title: puffMarker: a multi-sensor approach for pinpointing the timing of first lapse in smoking cessation
paper_content:
Recent researches have demonstrated the feasibility of detecting smoking from wearable sensors, but their performance on real-life smoking lapse detection is unknown. In this paper, we propose a new model and evaluate its performance on 61 newly abstinent smokers for detecting a first lapse. We use two wearable sensors --- breathing pattern from respiration and arm movements from 6-axis inertial sensors worn on wrists. In 10-fold cross-validation on 40 hours of training data from 6 daily smokers, our model achieves a recall rate of 96.9%, for a false positive rate of 1.1%. When our model is applied to 3 days of post-quit data from 32 lapsers, it correctly pinpoints the timing of first lapse in 28 participants. Only 2 false episodes are detected on 20 abstinent days of these participants. When tested on 84 abstinent days from 28 abstainers, the false episode per day is limited to 1/6.
---
paper_title: A Triaxial Accelerometer-Based Physical-Activity Recognition via Augmented-Signal Features and a Hierarchical Recognizer
paper_content:
Physical-activity recognition via wearable sensors can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this paper, we present an accelerometer sensor-based approach for human-activity recognition. Our proposed recognition method uses a hierarchical scheme. At the lower level, the state to which an activity belongs, i.e., static, transition, or dynamic, is recognized by means of statistical signal features and artificial-neural nets (ANNs). The upper level recognition uses the autoregressive (AR) modeling of the acceleration signals, thus, incorporating the derived AR-coefficients along with the signal-magnitude area and tilt angle to form an augmented-feature vector. The resulting feature vector is further processed by the linear-discriminant analysis and ANNs to recognize a particular human activity. Our proposed activity-recognition method recognizes three states and 15 activities with an average accuracy of 97.9% using only a single triaxial accelerometer attached to the subject's chest.
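Two of the features named above, signal-magnitude area and tilt angle, have simple and commonly used forms; the sketch below uses those common definitions and omits the AR-coefficient, LDA and neural-network stages of the recognizer.

```python
# Sketch of two of the features named above, using their commonly seen definitions:
# signal-magnitude area (SMA) and tilt angle. The AR-coefficient, LDA and neural-network
# stages of the recognizer are omitted.
import numpy as np

def signal_magnitude_area(window):
    """SMA over a window of shape (n, 3): per-sample |x|+|y|+|z|, averaged over the window."""
    return float(np.mean(np.sum(np.abs(window), axis=1)))

def tilt_angle(window, gravity_axis=2):
    """Mean angle (degrees) between the chosen sensor axis and the measured acceleration."""
    g = window.mean(axis=0)
    cos_theta = g[gravity_axis] / (np.linalg.norm(g) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

fs = 50  # assumed sampling rate (Hz)
lying_flat = np.tile([0.0, 0.0, 9.8], (fs, 1)) \
    + 0.05 * np.random.default_rng(0).standard_normal((fs, 3))
print(signal_magnitude_area(lying_flat))  # dominated by gravity, close to 9.8
print(tilt_angle(lying_flat))             # close to 0 degrees for a flat, still device
```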
---
paper_title: ACComplice: Location inference using accelerometers on smartphones
paper_content:
The security and privacy risks posed by smartphone sensors such as microphones and cameras have been well documented. However, the importance of accelerometers have been largely ignored. We show that accelerometer readings can be used to infer the trajectory and starting point of an individual who is driving. This raises concerns for two main reasons. First, unauthorized access to an individual's location is a serious invasion of privacy and security. Second, current smartphone operating systems allow any application to observe accelerometer readings without requiring special privileges. We demonstrate that accelerometers can be used to locate a device owner to within a 200 meter radius of the true location. Our results are comparable to the typical accuracy for handheld global positioning systems.
---
paper_title: We Can Track You if You Take the Metro: Tracking Metro Riders Using Accelerometers on Smartphones
paper_content:
Motion sensors, especially accelerometers, on smartphones have been discovered to be a powerful side channel for spying on users' privacy. In this paper, we reveal a new accelerometer-based side-channel attack which is particularly serious: malware on smartphones can easily exploit the accelerometers to trace metro riders stealthily. We first address the challenge of automatically filtering out metro-related data from a mass of miscellaneous accelerometer readings, and then propose a basic attack which leverages an ensemble interval classifier built from supervised learning to infer the riding trajectory of the user. As the supervised learning requires the attacker to collect labeled training data for each station interval, this attack confronts a scalability problem in big cities with a huge metro network. We thus further present an improved attack using semi-supervised learning, which only requires the attacker to collect labeled data for a very small number of distinctive station intervals. We conduct real experiments on a large self-built dataset, which contains more than 120 h of data collected from six metro lines of three major cities. The results show that the inference accuracy could reach 89% and 94% if the user takes the metro for four and six stations, respectively. We finally discuss possible countermeasures against the proposed attack.
---
paper_title: On the Anonymity of Home/Work Location Pairs
paper_content:
Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census tract and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.
---
paper_title: Whose move is it anyway? Authenticating smart wearable devices using unique head movement patterns
paper_content:
In this paper, we present the design, implementation and evaluation of a user authentication system, Headbanger, for smart head-worn devices, through monitoring the user's unique head-movement patterns in response to an external audio stimulus. Compared to today's solutions, which primarily rely on indirect authentication mechanisms via the user's smartphone, thus cumbersome and susceptible to adversary intrusions, the proposed head-movement based authentication provides an accurate, robust, light-weight and convenient solution. Through extensive experimental evaluation with 95 participants, we show that our mechanism can accurately authenticate users with an average true acceptance rate of 95.57% while keeping the average false acceptance rate of 4.43%. We also show that even simple head-movement patterns are robust against imitation attacks. Finally, we demonstrate our authentication algorithm is rather light-weight: the overall processing latency on Google Glass is around 1.9 seconds.
---
paper_title: Authentication of Smartphone Users Based on the Way They Walk Using k-NN Algorithm
paper_content:
Accelerometer-based biometric gait recognition offers a convenient way to authenticate users on their mobile devices. Modern smartphones contain in-built accelerometers which can be used as sensors to acquire the necessary data while the subjects are walking. Hence, no additional costs for special sensors are imposed to the user. In this publication we extract several features from the gait data and use the k-Nearest Neighbour algorithm for classification. We show that this algorithm yields a better biometric performance than the machine learning algorithms we previously used for classification, namely Hidden Markov Models and Support Vector Machines. We implemented the presented method on a smartphone and demonstrate that it is efficient enough to be applied in practice.
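Authentication differs from identification in that the decision is a binary accept/reject against one enrolled user; a schematic sketch of such a scoring step follows, in which the features, distance measure and threshold are illustrative choices rather than the paper's tuned parameters.

```python
# Schematic accept/reject step for gait-based authentication. The features, distance
# measure and threshold are illustrative choices, not the paper's tuned parameters.
import numpy as np

def authenticate(probe_features, enrolled_features, threshold=0.5, k=3):
    """Accept if the probe is close enough to the enrolled user's k nearest gait templates."""
    dists = np.sort(np.linalg.norm(enrolled_features - probe_features, axis=1))
    return float(np.mean(dists[:k])) <= threshold

rng = np.random.default_rng(3)
enrolled = rng.normal([1.2, 0.8, 2.1], 0.1, size=(15, 3))  # genuine user's gait-cycle features
genuine_probe = np.array([1.25, 0.78, 2.05])
impostor_probe = np.array([0.5, 1.5, 1.2])

print(authenticate(genuine_probe, enrolled))   # True  (accept)
print(authenticate(impostor_probe, enrolled))  # False (reject)
```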
---
paper_title: Context-Aware Active Authentication Using Smartphone Accelerometer Measurements
paper_content:
While body movement patterns recorded by a smartphone accelerometer are now well understood to be discriminative enough to separate users, little work has been done to address the question of if or how the position in which the phone is held affects user authentication. In this work, we show through a combination of supervised learning methods and statistical tests, that there are certain users for whom exploitation of information of how a phone is held drastically improves classification performance. We propose a two-stage authentication framework that identifies the location of the phone before performing authentication, and show its benefits based on a dataset of 30 users. Our work represents a first step towards bridging the gap between accelerometer-based authentication systems analyzed from the context of a laboratory environment and a real accelerometer-based authentication system in the wild where phone positioning cannot be assumed.
---
paper_title: Evaluating the Privacy Risk of Location-Based Services
paper_content:
In modern mobile networks, users increasingly share their location with third-parties in return for location-based services. Previous works show that operators of location-based services may identify users based on the shared location information even if users make use of pseudonyms. In this paper, we push the understanding of the privacy risk further. We evaluate the ability of location-based services to identify users and their points of interests based on different sets of location information. We consider real life scenarios of users sharing location information with location-based services and quantify the privacy risk by experimenting with real-world mobility traces.
---
paper_title: Cell phone-based biometric identification
paper_content:
Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions: all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.
---
paper_title: TouchLogger: Inferring keystrokes on touch screen from smartphone motion
paper_content:
Attacks that use side channels, such as sound and electromagnetic emanation, to infer keystrokes on physical keyboards are ineffective on smartphones without physical keyboards. We describe a new side channel, motion, on touch screen smartphones with only soft keyboards. Since typing on different locations on the screen causes different vibrations, motion data can be used to infer the keys being typed. To demonstrate this attack, we developed TouchLogger, an Android application that extracts features from device orientation data to infer keystrokes. TouchLogger correctly inferred more than 70% of the keys typed on a number-only soft keyboard on a smartphone. We hope to raise the awareness of motion as a significant side channel that may leak confidential data.
---
paper_title: Deep-Spying: Spying using Smartwatch and Deep Learning
paper_content:
Wearable technologies are today on the rise, becoming more common and broadly available to mainstream users. In fact, wristband and armband devices such as smartwatches and fitness trackers have already taken an important place in the consumer electronics market and are becoming ubiquitous. By their very nature of being wearable, however, these devices provide a new pervasive attack surface threatening users' privacy, among others. In the meantime, advances in machine learning are providing unprecedented possibilities to process complex data efficiently, allowing patterns to emerge from high-dimensional, unavoidably noisy data. The goal of this work is to raise awareness about the potential risks related to motion sensors built into wearable devices and to demonstrate abuse opportunities leveraged by advanced neural network architectures. The LSTM-based implementation presented in this research can perform touchlogging and keylogging on 12-key keypads with above-average accuracy even when confronted with raw unprocessed data, demonstrating that deep neural networks make keystroke inference attacks based on motion sensors easier to achieve by removing the need for non-trivial pre-processing pipelines and carefully engineered feature extraction strategies. Our results suggest that the complete technological ecosystem of a user can be compromised when a wearable wristband device is worn.
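A rough sketch of an LSTM keystroke classifier of the kind described above, mapping raw motion windows to one of 12 keypad keys; the window length, channel count, and layer sizes are assumptions rather than the paper's architecture.
```python
from tensorflow import keras

WINDOW = 100   # assumed: ~1 s of 100 Hz accelerometer + gyroscope samples per tap
CHANNELS = 6   # 3-axis accelerometer + 3-axis gyroscope
N_KEYS = 12    # 12-key keypad

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, CHANNELS)),
    keras.layers.LSTM(64),                              # summarize the motion window
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(N_KEYS, activation="softmax"),   # one class per key
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_windows, WINDOW, CHANNELS) raw sensor windows around each tap,
# y: integer key labels in [0, 11] (assumed to be available)
# model.fit(X, y, epochs=20, batch_size=64, validation_split=0.2)
```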
---
paper_title: TapLogger: inferring user inputs on smartphone touchscreens using on-board motion sensors
paper_content:
Today's smartphones are shipped with various embedded motion sensors, such as the accelerometer, gyroscope, and orientation sensors. These motion sensors are useful in supporting the mobile UI innovation and motion-based commands. However, they also bring potential risks of leaking user's private information as they allow third party applications to monitor the motion changes of smartphones. In this paper, we study the feasibility of inferring a user's tap inputs to a smartphone with its integrated motion sensors. Specifically, we utilize an installed trojan application to stealthily monitor the movement and gesture changes of a smartphone using its on-board motion sensors. When the user is interacting with the trojan application, it learns the motion change patterns of tap events. Later, when the user is performing sensitive inputs, such as entering passwords on the touchscreen, the trojan application applies the learnt pattern to infer the occurrence of tap events on the touchscreen as well as the tapped positions on the touchscreen. For demonstration, we present the design and implementation of TapLogger, a trojan application for the Android platform, which stealthily logs the password of screen lock and the numbers entered during a phone call (e.g., credit card and PIN numbers). Statistical results are presented to show the feasibility of such inferences and attacks.
---
paper_title: Two Novel Defenses against Motion-Based Keystroke Inference Attacks
paper_content:
Nowadays smartphones come embedded with multiple motion sensors, such as an accelerometer, a gyroscope and an orientation sensor. With these sensors, apps can gather more information and therefore provide end users with more functionality. However, these sensors also introduce the potential risk of leaking a user's private information because apps can access these sensors without requiring security permissions. By monitoring a device's motion, a malicious app may be able to infer sensitive information about the owner of the device. For example, related work has shown that sensitive information entered by a user on a device's touchscreen, such as numerical PINs or passwords, can be inferred from accelerometer and gyroscope data.
In this paper, we study these motion-based keystroke inference attacks to determine what information they need to succeed. Based on this study, we propose two novel approaches to defend against keystroke inference attacks: 1) Reducing sensor data accuracy; 2) Random keyboard layout generation. We present the design and the implementation of these two defences on the Android platform and show how they significantly reduce the accuracy of keystroke inference attacks. We also conduct multiple user studies to evaluate the usability and feasibility of these two defences. Finally, we determine the impact of the defences on apps that have legitimate reasons to access motion sensors and show that the impact is negligible.
---
paper_title: ACCessory: password inference using accelerometers on smartphones
paper_content:
We show that accelerometer readings are a powerful side channel that can be used to extract entire sequences of entered text on a smart-phone touchscreen keyboard. This possibility is a concern for two main reasons. First, unauthorized access to one's keystrokes is a serious invasion of privacy as consumers increasingly use smartphones for sensitive transactions. Second, unlike many other sensors found on smartphones, the accelerometer does not require special privileges to access on current smartphone OSes. We show that accelerometer measurements can be used to extract 6-character passwords in as few as 4.5 trials (median).
---
paper_title: The Validity of a New Consumer-Targeted Wrist Device in Sleep Measurement: An Overnight Comparison Against Polysomnography in Children and Adolescents
paper_content:
STUDY OBJECTIVES: The validity of consumer-targeted wrist-worn sleep measurement systems has been little studied in children and adolescents. We examined the validity of a new fitness tracker (PFT) manufactured by Polar Electro Oy and the previously validated Actiwatch 2 (AW2) from Philips Respironics against polysomnography (PSG) in children and adolescents.
METHODS: Seventeen children (age 11.0 ± 0.8 years) and 17 adolescents (age 17.8 ± 1.8 years) wore the PFT and AW2 concurrently with an ambulatory PSG in their own home for 1 night. We compared sleep onset, offset, sleep interval (time from sleep on to offset), actual sleep time (time scored as sleep between sleep on and offset), and wake after sleep onset (WASO) between accelerometers and PSG. Sensitivity, specificity, and accuracy were calculated from the epoch-by-epoch data.
RESULTS: Both devices performed adequately against PSG, with excellent sensitivity for both age groups (> 0.91). In terms of specificity, the PFT was adequate in both groups (> 0.77), and AW2 adequate in children (0.68) and poor in adolescents (0.58). In the younger group, the PFT underestimated actual sleep time by 29.9 minutes and AW2 underestimated actual sleep time by 43.6 minutes. Both overestimated WASO, PFT by 24.4 minutes and AW2 by 20.9 minutes. In the older group, both devices underestimated actual sleep time (PFT by 20.6 minutes and AW2 by 26.8 minutes) and overestimated WASO (PFT by 12.5 minutes and AW2 by 14.3 minutes). Both devices were accurate in defining sleep onset.
CONCLUSIONS: This study suggests that this consumer-targeted wrist-worn device performs as well as, or even better than, the previously validated AW2 against PSG in children and adolescents. Both devices underestimated sleep but to a lesser extent than seen in many previous validation studies on research-targeted accelerometers.
---
paper_title: Estimating Carrier’s Height by Accelerometer Signals of a Smartphone
paper_content:
The aim of this study is to estimate the height of the carrier (owner) of a smartphone by using signals from a single three-axis accelerometer. We found that the accelerometer signals collected while a carrier goes up the stairs contain features that are correlated with the carrier's height as strongly as r = 0.801. Apart from potentially useful applications of this result, a privacy issue may arise in smartphone use, since approximate height can be acquired in this way. Although height is generally not regarded as sensitive information, anyone who considers their height private should be protected against this kind of leakage.
---
paper_title: Supervised learning in voice type discrimination using neck-skin vibration signals: Preliminary results on single vowels
paper_content:
Discrimination between normal and pathological voice is a critical component in laryngeal pathology diagnosis and vocal rehabilitative treatment. In the present study, a portable miniature glottal notch accelerometer (GNA) device with supervised machine learning techniques was proposed to discriminate between three human voice types: normal, breathy, and pressed voice. Fourteen native American English speakers who were wearing a GNA device produced five different English single vowels in each of the three voice types. Acoustic features of the GNA signals were extracted using spectral analysis. Preliminary assessments of feature discrepancy among different voice types were made to present physical clues of discrimination. The linear discriminant analysis technique was applied to reduce the dimensionality of the raw-feature vector of the GNA signals. Maximization of between-class distance and minimization of within-class distance were synchronously achieved. The voice types were then classified using severa...
---
paper_title: A Novel, Open Access Method to Assess Sleep Duration Using a Wrist-Worn Accelerometer
paper_content:
Wrist-worn accelerometers are increasingly being used for the assessment of physical activity in population studies, but little is known about their value for sleep assessment. We developed a novel method of assessing sleep duration using data from 4,094 Whitehall II Study (United Kingdom, 2012-2013) participants aged 60-83 who wore the accelerometer for 9 consecutive days, filled in a sleep log and reported sleep duration via questionnaire. Our sleep detection algorithm defined (nocturnal) sleep as a period of sustained inactivity, itself detected as the absence of change in arm angle greater than 5 degrees for 5 minutes or more, during a period recorded as sleep by the participant in their sleep log. The resulting estimate of sleep duration had a moderate (but similar to previous findings) agreement with questionnaire based measures for time in bed, defined as the difference between sleep onset and waking time (kappa = 0.32, 95%CI:0.29,0.34) and total sleep duration (kappa = 0.39, 0.36,0.42). This estimate was lower for time in bed for women, depressed participants, those reporting more insomnia symptoms, and on weekend days. No such group differences were found for total sleep duration. Our algorithm was validated against data from a polysomnography study on 28 persons which found a longer time window and lower angle threshold to have better sensitivity to wakefulness, while the reverse was true for sensitivity to sleep. The novelty of our method is the use of a generic algorithm that will allow comparison between studies rather than a "count" based, device specific method.
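A minimal sketch of the sustained-inactivity rule described above, assuming 5 Hz wrist accelerometer data summarized in 5-second blocks; the exact arm-angle formula and the omission of the sleep-log gating step are simplifying assumptions.
```python
import numpy as np

def arm_angle(acc):
    """acc: (n, 3) gravity-dominated accelerometer samples (x, y, z).
    Returns the arm elevation angle in degrees for each sample."""
    x, y, z = acc[:, 0], acc[:, 1], acc[:, 2]
    return np.degrees(np.arctan2(z, np.sqrt(x**2 + y**2)))

def sustained_inactivity(acc, fs=5, block_s=5, max_change_deg=5, min_duration_min=5):
    """Flag 5-s blocks belonging to bouts where successive block-median arm
    angles change by <= 5 degrees for at least 5 consecutive minutes."""
    block = fs * block_s
    n_blocks = len(acc) // block
    angles = np.array([np.median(arm_angle(acc[i * block:(i + 1) * block]))
                       for i in range(n_blocks)])
    still = np.abs(np.diff(angles, prepend=angles[0])) <= max_change_deg

    min_blocks = (min_duration_min * 60) // block_s
    sleep = np.zeros(n_blocks, dtype=bool)
    i = 0
    while i < n_blocks:
        if still[i]:
            j = i
            while j < n_blocks and still[j]:
                j += 1
            if j - i >= min_blocks:      # bout long enough to count as sleep
                sleep[i:j] = True
            i = j
        else:
            i += 1
    return sleep
```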
---
paper_title: A physiological sound sensing system using accelerometer based on flip-chip piezoelectric technology and asymmetrically gapped cantilever
paper_content:
This paper focuses on the sensing of physiological sound on the human body using accelerometers. Physiological sound sensing places demanding requirements on the sensitivity/noise performance of accelerometers, since physiological sounds are usually very weak. In this paper, a piezoelectric accelerometer based on an asymmetrically gapped cantilever structure, which exhibits significantly improved sensitivity, is presented. Furthermore, in order to reduce the package size of the accelerometer, a flip-chip piezoelectric technology is proposed. The accelerometer has a resonant frequency of 1580 Hz, which is much higher than the heart sound frequency range, and a quality factor of 9.2. Using a coherent scaling method, the scaled noise level of the accelerometer between 10 Hz and 400 Hz is 1 µV/√Hz. Preliminary test results show that the signal-to-noise ratio of the heart sound signal measured by the designed accelerometer is about four times higher than that measured by a high-end stethoscope.
---
paper_title: The Impact of Daily Sleep Duration on Health: A Review of the Literature
paper_content:
A healthy amount of sleep is paramount to leading a healthy and productive lifestyle. Although chronic sleep loss is common in today's society, many people are unaware of the potential adverse health effects of habitual sleep restriction. Under strict experimental conditions, short-term restriction of sleep results in a variety of adverse physiologic effects, including hypertension, activation of the sympathetic nervous system, impairment of glucose control, and increased inflammation. A variety of epidemiologic studies have also suggested an association between self-reported sleep duration and long-term health. Individuals who report both an increased (>8 h/d) or reduced (<7 h/d) sleep duration are at modestly increased risk of all-cause mortality, cardiovascular disease, and developing symptomatic diabetes. Although the data are not definitive, these studies suggest that sleep should not be considered a luxury, but an important component of a healthful lifestyle.
---
paper_title: Cross-sectional Relationship of Pedometer-Determined Ambulatory Activity to Indicators of Health
paper_content:
Objective: To describe the cross-sectional relationship between an objective measure of walking (pedometer-determined steps/day) and general indicators of health, a prior diagnosis of one or more components of the metabolic syndrome, and self-reported occupational activity in a generally sedentary working population.
Research Methods and Procedures: Steps/day were compared with previous diagnosis of one or more components of the metabolic syndrome (by self-administered questionnaire) and with general health indicators including BMI, waist circumference, resting heart rate, and blood pressure in 182 subjects in Prince Edward Island, Canada. Study participants were volunteer employees recruited from five workplaces where, in general, the job types were moderately or highly sedentary.
Results: Steps/day were 7230 ± SD 3447 for women (n = 153) and 8265 ± 2849 (n = 21) for men. Pedometer-determined steps/day were associated inversely with BMI (r = −0.4005, p < 0.0001) in all participants and waist circumference in females only (r = −0.4303, p < 0.0001). There was a low correlation between steps/day and diastolic blood pressure in the whole sample (r = −0.2140, p = 0.0383). Participants who reported a prior diagnosis of one or more components of the metabolic syndrome (hypertension, hypercholesterolemia, heart disease, or type 2 diabetes) took fewer steps/day than healthy participants (p = 0.0254). Pedometer-determined steps/day were positively associated with self-reported occupational activity (p = 0.0002).
Discussion: Fewer steps/day are associated with increased BMI, waist circumference, diastolic blood pressure, and components of the metabolic syndrome. Low occupational activity is a contributing factor to low total ambulatory activity.
---
paper_title: Identifying user traits by mining smart phone accelerometer data
paper_content:
Smart phones are quite sophisticated and increasingly incorporate diverse and powerful sensors. One such sensor is the tri-axial accelerometer, which measures acceleration in all three spatial dimensions. The accelerometer was initially included for screen rotation and advanced game play, but can support other applications. In prior work we showed how the accelerometer could be used to identify and/or authenticate a smart phone user [11]. In this paper we extend that prior work to identify user traits such as sex, height, and weight, by building predictive models from labeled accelerometer data using supervised learning methods. The identification of such traits is often referred to as "soft biometrics" because these traits are not sufficiently distinctive or invariant to uniquely identify an individual---but they can be used in conjunction with other information for identification purposes. While our work can be used for biometric identification, our primary goal is to learn as much as possible about the smart phone user. This mined knowledge can then be used for a number of purposes, such as marketing or making an application more intelligent (e.g., a fitness app could consider a user's weight when calculating calories burned).
---
paper_title: Daily Patterns of Accelerometer Activity Predict Changes in Sleep, Cognition, and Mortality in Older Men
paper_content:
Background: There is growing interest in the area of "wearable tech" and its relationship to health. A common element of many of these devices is a triaxial accelerometer that can yield continuous information on gross motor activity levels; how such data might predict changes in health is less clear.
Methods: We examined accelerometry data from 2,976 older men who were part of the Osteoporotic Fractures in Men (MrOS) study. Using a shape-naive technique, functional principal component analysis, we examined the patterns of motor activity over the course of 4-7 days and determined whether these patterns were associated with changes in polysomnographic-determined sleep and cognitive function (Trail Making Test-Part B [Trails B], Modified Mini-Mental State Examination [3MS]), as well as mortality over 6.5-8 years of follow-up.
Results: In comparing baseline to 6.5 years later, multivariate modeling indicated that low daytime activity at baseline was associated with worsening of sleep efficiency (p < .05), more wake after sleep onset (p < .05), and a decrease in cognition (Trails B; p < .001), as well as a 1.6-fold higher rate of all-cause mortality (hazard ratio = 1.64 [1.34-2.00]). Earlier wake and bed times were associated with a decrease in cognition (3MS; p < .05). Having a late afternoon peak in activity was associated with a 1.4-fold higher rate of all-cause mortality (hazard ratio = 1.46 [1.21-1.77]). Those having a longer duration of their daytime activity with a bimodal activity pattern also had over a 1.4-fold higher rate of cardiovascular-related mortality (hazard ratio = 1.42 [1.02-1.98]).
Conclusions: Patterns of daily activity may be useful as predictive biomarkers for changes in clinically relevant outcomes, including mortality and changes in sleep and cognition in older men.
---
paper_title: Estimating sleep efficiency in 10‐ to‐ 13‐year‐olds using a waist‐worn accelerometer
paper_content:
Objective: In field settings, wrist- and waist-worn accelerometers are typically used to assess sleep characteristics and movement behaviors, respectively. There has been a shift in movement behavior studies to wear accelerometers 24 h/d. Sleep characteristics could be assessed in these studies if sleep algorithms were available for waist-worn accelerometers. The objective of this study was to develop and provide validity data for an algorithm/sleep likelihood score cut-off to estimate sleep efficiency in children using the waist-worn Actical accelerometer.
Design: Cross-sectional study.
Participants: Fifty healthy children aged 10-13 years.
Measurements: Children wore an Actical on their waist and an Actiwatch 2 on their nondominant wrist for 8 nights at home in their normal sleep environment. Participants were randomized into algorithm/sleep likelihood score "development" and "test" groups (n=25 per group). Within the development group, we assessed sleep efficiency with the Actical using the same algorithm that the Actiwatch 2 uses and selected the sleep likelihood score cut-off value that was the most accurate at predicting sleep efficiency at the nightly level compared with the Actiwatch 2. We applied this algorithm and cut-off value to the test group.
Results: Mean (SD) sleep efficiency estimates for the test group from the Actical and Actiwatch 2 were 89.0% (3.9%) and 88.7% (3.1%), respectively. Bland-Altman plots and absolute difference scores revealed considerable agreement between devices for both nightly and weekly estimates of sleep efficiency.
Conclusion: A waist-worn Actical accelerometer can accurately predict sleep efficiency in field settings among healthy 10- to 13-year-olds.
---
paper_title: Investigating gender recognition in smartphones using accelerometer and gyroscope sensor readings
paper_content:
This paper presents an approach for gender recognition using behavioral biometrics in smartphones. Specifically, this work investigates gender recognition using gait data acquired from the inbuilt accelerometer and gyroscope sensors of a smartphone. The proposed approach involves computation of the curvature of the gait signals. In order to capture the local variations of the estimated curvatures, we employed histogram features of multi-level local pattern (MLP) and local binary pattern (LBP). In this work, support vector machine (SVM) and bootstrap aggregating (bagging) classifiers are employed for identification of gender based on the extracted features. Performance evaluation of the proposed approach on a database of 252 gait recordings collected from 42 subjects yielded promising results. Our experimental results also show that MLP performs better than LBP for feature extraction, while bagging outperforms SVM for classification.
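An illustrative sketch of one-dimensional local-binary-pattern histogram features feeding an SVM, in the spirit of the pipeline above; the neighbourhood size and the omission of the curvature and multi-level (MLP) steps are simplifying assumptions.
```python
import numpy as np
from sklearn.svm import SVC

def lbp_1d(signal, radius=4):
    """Binary-encode each sample against its 2*radius neighbours and
    return a normalized histogram of the resulting codes."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neighbours = np.concatenate([signal[i - radius:i], signal[i + 1:i + 1 + radius]])
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def gender_classifier(gait_signals, labels):
    """gait_signals: list of 1-D gait magnitude signals; labels: 0/1 gender."""
    X = np.array([lbp_1d(s) for s in gait_signals])
    return SVC(kernel="rbf").fit(X, labels)
```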
---
paper_title: Adolescent patterns of physical activity differences by gender, day, and time of day.
paper_content:
BACKGROUND: More information about the physical activity of adolescents is needed. This study used objective measurement to investigate differences in activity patterns related to gender, body mass index (BMI), day, and time of day.
METHODS: Eighth-grade adolescents (37 boys, 44 girls) wore the Manufacturing Technologies Inc. (MTI) accelerometer for 4 days and kept a previous-day physical activity recall diary in the fall of 2002. Minutes per hour in sedentary, light, and moderate/vigorous activity, as recorded by the MTI, and in nine activity categories, as recorded by the diary, were calculated for three time periods (6:00 am to 2:59 pm, 3:00 pm to 6:59 pm, 7:00 pm to midnight) on each day (Thursday through Sunday).
RESULTS: Doubly multivariate analysis of variance revealed significant gender by day by time differences in sedentary (p = 0.005) and moderate/vigorous (p < 0.001) activity, but no significant BMI interactions. Except on Sunday, boys were less sedentary and more active than girls during the late afternoon period. Significant gender by category (p < 0.001) and day by category (p < 0.001) interactions were also found in the log data. Boys spent more time engaged in TV/electronics and sports, while girls spent more time in personal care. Three activity categories (sports, social interaction, active transportation) stayed at consistent levels across days, while others varied widely by day of the week.
CONCLUSIONS: Except on Sunday, consistent gender differences were found in activity levels, especially for the late afternoon period. Significant increases in sitting, TV/electronic games, and chores were seen for weekend days. Results support strategies to reduce sitting and electronic recreation, which may increase physical activity.
---
paper_title: Female gait patterns in shoes with different heel heights
paper_content:
The gait patterns of twelve women were compared during walking in flat shoes and in shoes with high heels. In addition, four anthropometric measurements were recorded. Results showed that the wearing of high heels caused a significant decrease in step length and stride length, whereas stride width and foot angle changed only minimally.
---
paper_title: Age group detection using smartphone motion sensors
paper_content:
Side-channel attacks that reveal sensitive user data through motion sensors (such as the accelerometer, gyroscope, and orientation sensors) have emerged as a new trend in smartphone security. Recent studies have examined the feasibility of inferring a user's tap input from motion sensor readings and shown that some user secrets can be deduced through different side-channel attacks. More precisely, in this kind of attack, malware processes the outputs of these sensors to exfiltrate a victim's private information such as PINs, passwords or unlock patterns. In this paper, we describe a new side-channel attack on smartphones that aims to predict the age interval of the user. Unlike previous work, our attack does not directly recover a target user's secret; rather, its sole purpose is to determine whether the user is a child or an adult. The main idea behind our study relies on the key observation that children and adults differ in how they hold and touch smartphones. We show that there is an apparent correlation between the motion sensor readings and these characteristics, which forms the basis of our attack strategy. To demonstrate the efficiency of the proposed attack, we have developed an Android application named BalloonLogger that evaluates accelerometer data and performs child/adult detection with a success rate of 92.5%. To the best of our knowledge, this work is the first to point out such a security breach.
---
paper_title: Automatic Stress Detection in Working Environments from Smartphones' Accelerometer Data: A First Step
paper_content:
Increased workload across many organizations and the consequent increase in occupational stress are negatively affecting the health of the workforce. Measuring stress and other human psychological dynamics is difficult due to the subjective nature of self-reporting and variability between and within individuals. With the advent of smartphones, it is now possible to monitor diverse aspects of human behavior, including objectively measured behavior related to psychological state and consequently stress. We used data from the smartphone's built-in accelerometer to detect behavior that correlates with subjects' stress levels. The accelerometer was chosen because it raises fewer privacy concerns (e.g., in comparison to location, video, or audio recording), and because its low power consumption makes it suitable for embedding in smaller wearable devices, such as fitness trackers. About 30 subjects from two different organizations were provided with smartphones. The study lasted for eight weeks and was conducted in real working environments, with no constraints whatsoever placed upon smartphone usage. The subjects reported their perceived stress levels three times during their working hours. Using a combination of statistical models to classify self-reported stress levels, we achieved a maximum overall accuracy of 71% for user-specific models and an accuracy of 60% for similar-user models, relying solely on data from a single accelerometer.
---
paper_title: Emotion recognition based on customized smart bracelet with built-in accelerometer
paper_content:
BACKGROUND: Recently, emotion recognition has become a hot topic in human-computer interaction. If computers could understand human emotions, they could interact better with their users. This paper proposes a novel method to recognize human emotions (neutral, happy, and angry) using a smart bracelet with built-in accelerometer.
METHODS: In this study, a total of 123 participants were instructed to wear a customized smart bracelet with built-in accelerometer that can track and record their movements. Firstly, participants walked two minutes as normal, which served as walking behaviors in a neutral emotion condition. Participants then watched emotional film clips to elicit emotions (happy and angry). The time interval between watching two clips was more than four hours. After watching film clips, they walked for one minute, which served as walking behaviors in a happy or angry emotion condition. We collected raw data from the bracelet and extracted a few features from raw data. Based on these features, we built classification models for classifying three types of emotions (neutral, happy, and angry).
RESULTS AND DISCUSSION: For two-category classification, the classification accuracy can reach 91.3% (neutral vs. angry), 88.5% (neutral vs. happy), and 88.5% (happy vs. angry), respectively; while, for the differentiation among three types of emotions (neutral, happy, and angry), the accuracy can reach 81.2%.
CONCLUSIONS: Using wearable devices, we found it is possible to recognize human emotions (neutral, happy, and angry) with fair accuracy. Results of this study may be useful to improve the performance of human-computer interaction.
---
paper_title: Smartphone accelerometer data used for detecting human emotions
paper_content:
The paper outlines work on the classification of emotions using smartphone accelerometer data. Such classification can be used, in conjunction with other methods of emotion detection, to adapt services to the user's emotional state. The data is collected from individuals who have been carrying their phone in a pocket while walking. An Android app was developed in order to monitor the smartphone accelerometer of the individuals who participated in the study and occasionally requested them to judge and submit their emotional state. This way, data is collected from a natural environment rather than a laboratory setting. The recorded data is then processed and used to train different classifiers to be compared. The machine learning algorithms decision tree, support vector machine and multilayer perceptron are used for this purpose. Emotions are classified in two dimensions: pleasantness and arousal (activation). While the recognition rate for the arousal dimension is promising at 75%, pleasantness is harder to predict, with a recognition rate of 51%. These findings indicate that by only analyzing accelerometer data recorded from a smartphone, it is possible to make predictions of a person's state of activation.
---
paper_title: Be Active and Become Happy: An Ecological Momentary Assessment of Physical Activity and Mood
paper_content:
The positive effects of physical activity on mood are well documented in cross-sectional studies. To date there have been only a few studies analyzing within-subject covariance between physical activity and mood in everyday life. This study aims to close this gap using an ambulatory assessment of mood and physical activity. Thirteen participants completed a standardized diary over a 10-week period, resulting in 1,860 measurement points. Valence, energetic arousal, and calmness are the three subscales of mood that were assessed. Participants rated their mood promptly after self-selected activities. A multilevel analysis indicates that the three dimensions of mood were positively affected by episodes of physical activity, such as walking or gardening—valence: t(12) = 5.6, p < .001; energetic arousal: t(12) = 2.4, p = .033; calmness: t(12) = 2.8, p = .015. Moreover, the association is affected by the individual baseline mood level, with the greatest effect seen when mood is depressed.
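A minimal sketch of the within-subject multilevel analysis described above, fitting a random-intercept model per participant with statsmodels; the file name and column names are hypothetical.
```python
import pandas as pd
import statsmodels.formula.api as smf

# diary: one row per diary entry with hypothetical columns
#   subject, active (1 if the preceding episode was physically active, else 0),
#   valence, energetic_arousal, calmness
diary = pd.read_csv("diary_entries.csv")

# Random-intercept multilevel model: does activity predict valence within
# subjects, allowing each subject their own baseline mood level?
model = smf.mixedlm("valence ~ active", data=diary, groups=diary["subject"])
result = model.fit()
print(result.summary())
```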
---
paper_title: Mining for motivation: using a single wearable accelerometer to detect people's interests
paper_content:
This paper presents a novel investigation of how motion as measured with just a single wearable accelerometer is informative of people's interests and motivation during crowded social events. We collected accelerometer readings on a large number of people (32 and 46 people in two crowded social events involving up to hundreds of people). In our experiments, we demonstrate how people's movements are informative of their particular interests: during talks, their interests in particular topics, and during networking events, their interest to participate successfully to make new contacts and foster existing ones. To our knowledge, using a single body worn accelerometer to measure and automatically infer these aspects of social behaviour has never been attempted before. Our experiments show that despite the challenge of the proposed task, useful automated predictions are possible and demonstrate the potential for further research in this area.
---
paper_title: Personality and physical activity: A systematic review and meta-analysis.
paper_content:
Whether personality determines physical activity or its outcomes is relevant for theory and public health but has been understudied. We estimated the population correlations between Big-Five personality factors and physical activity and examined whether they varied according to sample characteristics and study features. Database searches were conducted according to PRISMA guidelines, for articles published in the English language prior to November 1st, 2013. Sixty-four studies including a total of 88,400 participants yielded effects (k) for Extraversion (88), Neuroticism (82), Conscientiousness (69), Openness (51) and Agreeableness (52). Significant mean r was found for Extraversion (r = .1076), Neuroticism (r = −.0710), Conscientiousness (r = .1037) and Openness (r = .0344), but not Agreeableness (r = .0020). Effects were moderately heterogeneous (I² range = 44–65%) and varied by sample characteristics (e.g., age, gender, or clinical status) and/or study features (e.g., measure quality or item format). This analysis expands results of previous reviews and provides new support for a relationship between physical activity and Openness. Future studies should use better measures of physical activity and prospective designs, adjust for statistical artifacts, and consider advances in the conceptualization of personality.
---
paper_title: Personality and Actigraphy-Measured Physical Activity in Older Adults
paper_content:
Most studies on personality and physical activity have relied on self-report measures. This study examined the relation between Five Factor Model personality traits and objective physical activity in older adults. Sixty-nine participants (mean age = 80.2 years; SD = 7.1) wore the ActiGraph monitor for 7 days and completed the NEO Personality Inventory-3 First Half. Extraversion, Agreeableness, and Conscientiousness were associated with more moderate physical activity and more steps per day, whereas Neuroticism was inversely related to these physical activity measures (βs > .20). The associations for Neuroticism and Conscientiousness were attenuated by approximately 20-40% when accounting for disease burden and body mass index but were essentially unchanged for Extraversion and Agreeableness. These findings confirm self-report evidence that personality traits are associated with physical activity levels in older adults.
---
paper_title: Detection of Activities by Wireless Sensors for Daily Life Surveillance: Eating and Drinking
paper_content:
This paper introduces a two-stage approach to detecting when people are eating and/or drinking, for the purpose of daily-life surveillance. Using only a wearable accelerometer attached to a person's wrist, the two-stage approach consists of feature extraction followed by classification. In the first stage, based on a three-dimensional kinematic model of the limb and an Extended Kalman Filter (EKF), real-time arm movement features described by Euler angles are extracted from the raw accelerometer measurements. In the second stage, a Hierarchical Temporal Memory (HTM) network is adopted to classify the extracted features of the eating/drinking activities, exploiting the HTM network's powerful capability for modelling dynamic signals that vary in both space and time. The proposed approach is tested on real eating and drinking activities using three-dimensional accelerometers. Experimental results show that the EKF- and HTM-based two-stage approach can detect these activities with very high accuracy.
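For intuition, the sketch below computes a static roll/pitch (Euler-angle) estimate from a single gravity-dominated accelerometer sample; the paper itself tracks these angles with an EKF over a limb kinematics model, so this is only a simplified stand-in for the orientation-feature step, and the axis convention is an assumption.
```python
import numpy as np

def roll_pitch(acc):
    """Static tilt estimate from one gravity-dominated accelerometer sample.

    acc: (ax, ay, az) in m/s^2. Returns (roll, pitch) in degrees.
    This filter-free estimate ignores linear acceleration; the paper instead
    tracks Euler angles with an EKF over a limb kinematics model.
    """
    ax, ay, az = acc
    roll = np.degrees(np.arctan2(ay, az))
    pitch = np.degrees(np.arctan2(-ax, np.sqrt(ay**2 + az**2)))
    return roll, pitch

print(roll_pitch((0.0, 0.0, 9.81)))   # device flat: (0.0, 0.0)
print(roll_pitch((-9.81, 0.0, 0.0)))  # pitch ~ +90 degrees (sign depends on axis convention)
```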
---
paper_title: Driving Behavior and Traffic Safety: An Acceleration-Based Safety Evaluation Procedure for Smartphones
paper_content:
Traffic safety and the energy efficiency of vehicles are closely related to driver behavior. The scientific literature has investigated specific dynamic parameters that, among others, can be used as a measure of unsafe or aggressive driving style, such as the longitudinal and lateral acceleration of the vehicle. Moreover, modern mobile devices (smartphones and tablets) and their internal sensors (GPS receivers, three-axis accelerometers) allow road users to receive real-time information and feedback that can increase driver awareness and promote safety. This paper focuses on the development of a prototype mobile application that evaluates how safely drivers are behaving on the road by measuring longitudinal and lateral accelerations and warning users when it would be advisable to correct their driving style. Aggressiveness is evaluated by plotting the vehicle's acceleration on a specially designed g-g diagram, where longitudinal and lateral acceleration are displayed against regions of "Good Driving Style". Several experimental tests were carried out with different drivers and cars in order to estimate the accuracy of the system and the usability of the application. This work is part of the wider research project M2M, Mobile to Mobility: Information and communication technology systems for road traffic safety (PON National Operational Program for Research and Competitiveness 2007-2013), which is based on the use of mobile sensor computing systems to provide real-time information, reduce risks, and make the transportation system safer and more comfortable.
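A minimal sketch of the g-g-diagram check described above; the elliptical "Good Driving Style" envelope and its numeric bounds are illustrative assumptions, not the calibrated limits used in the paper.
```python
import numpy as np

G = 9.81
MAX_LONG_G = 0.30   # assumed comfortable longitudinal limit (fraction of g)
MAX_LAT_G = 0.25    # assumed comfortable lateral limit (fraction of g)

def aggressive(a_long, a_lat):
    """Return True where (longitudinal, lateral) acceleration falls outside
    an elliptical 'Good Driving Style' region of the g-g diagram."""
    ellipse = (a_long / (MAX_LONG_G * G))**2 + (a_lat / (MAX_LAT_G * G))**2
    return ellipse > 1.0

# a_long, a_lat: vehicle-frame accelerations (m/s^2), e.g. obtained by
# rotating phone accelerometer readings into the vehicle frame.
a_long = np.array([0.5, 3.5, -1.0])
a_lat = np.array([0.2, 1.0, 3.0])
print(aggressive(a_long, a_lat))   # [False  True  True]
```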
---
paper_title: Feature Selection and Activity Recognition System Using a Single Triaxial Accelerometer
paper_content:
Activity recognition is required in various applications such as medical monitoring and rehabilitation. Previously developed activity recognition systems utilizing triaxial accelerometers have provided mixed results, with subject-to-subject variability. This paper presents an accurate activity recognition system utilizing a body-worn wireless accelerometer, to be used in the real-life application of patient monitoring. The algorithm utilizes data from a single, waist-mounted triaxial accelerometer to classify gait events into six daily living activities and transitional events. The accelerometer can be worn at any location around the circumference of the waist, thereby reducing user training. Feature selection is performed using Relief-F and sequential forward floating search (SFFS) from a range of previously published features, as well as new features introduced in this paper. Relevant and robust features that are insensitive to the positioning of the accelerometer around the waist are selected. SFFS selected almost half the number of features in comparison to Relief-F and provided higher accuracy than Relief-F. Activity classification is performed using Naive Bayes and k-nearest neighbor (k-NN) and the results are compared. Activity recognition results on seven subjects with leave-one-person-out error estimates show an overall accuracy of about 98% for both classifiers. Accuracy for each of the individual activities is also more than 95%.
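A sketch of the selection-then-classification step, using scikit-learn's plain forward selector as a stand-in for SFFS (the floating variant is not in scikit-learn); the feature matrix, labels, and the number of selected features are assumed.
```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def select_and_classify(X, y, n_features=10):
    """X: (n_windows, n_features) accelerometer features; y: activity labels."""
    knn = KNeighborsClassifier(n_neighbors=5)
    selector = SequentialFeatureSelector(knn, n_features_to_select=n_features,
                                         direction="forward", cv=5)
    pipe_knn = make_pipeline(selector, knn)          # selection + k-NN
    pipe_nb = make_pipeline(selector, GaussianNB())  # selection + Naive Bayes
    return {name: cross_val_score(pipe, X, y, cv=5).mean()
            for name, pipe in [("k-NN", pipe_knn), ("Naive Bayes", pipe_nb)]}
```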
---
paper_title: Mobile phone based drunk driving detection
paper_content:
Drunk driving, or officially Driving Under the Influence (DUI) of alcohol, is a major cause of traffic accidents throughout the world. In this paper, we propose a highly efficient system aimed at early detection and alerting of dangerous vehicle maneuvers typically related to drunk driving. The entire solution requires only a mobile phone placed in the vehicle and equipped with an accelerometer and orientation sensor. A program installed on the mobile phone computes accelerations based on sensor readings, and compares them with typical drunk driving patterns extracted from real driving tests. Once any evidence of drunk driving is present, the mobile phone will automatically alert the driver or call the police for help well before an accident actually happens. We implemented the detection system on an Android G1 phone and tested it with different kinds of driving behavior. The results show that the system achieves high accuracy and energy efficiency.
---
paper_title: Detection of posture and motion by accelerometry: a validation study in ambulatory monitoring
paper_content:
The suitable placement of a small number of calibrated piezoresistive accelerometer devices may suffice to assess postures and motions reliably. This finding, which was obtained in a previous investigation, led to the further development of this methodology and to an extension from the laboratory to conditions of daily life. The intention was to validate the accelerometric assessment against behavior observation and to examine the retest reliability. Twenty-four participants were recorded, according to a standard protocol consisting of nine postures/motions (repeated once) which served as reference patterns. The recordings were continued outside the laboratory. A participant observer classified the postures and motions. Four sensor placements (sternum, wrist, thigh, and lower leg) were used. The findings indicated that the detection of posture and motion based on accelerometry is highly reliable. The correlation between behavior observation and kinematic analysis was satisfactory, although some participants showed discrepancies regarding specific motions.
---
paper_title: We Can Track You if You Take the Metro: Tracking Metro Riders Using Accelerometers on Smartphones
paper_content:
Motion sensors, especially accelerometers, on smartphones have been discovered to be a powerful side channel for spying on users’ privacy. In this paper, we reveal a new accelerometer-based side-channel attack which is particularly serious: malware on smartphones can easily exploit the accelerometers to trace metro riders stealthily. We first address the challenge of automatically filtering out metro-related data from a mass of miscellaneous accelerometer readings, and then propose a basic attack which leverages an ensemble interval classifier built from supervised learning to infer the riding trajectory of the user. As the supervised learning requires the attacker to collect labeled training data for each station interval, this attack confronts the scalability problem in big cities with a huge metro network. We thus further present an improved attack using semi-supervised learning, which only requires the attacker to collect labeled data for a very small number of distinctive station intervals. We conduct real experiments on a large self-built dataset, which contains more than 120 h of data collected from six metro lines of three major cities. The results show that the inferring accuracy could reach 89% and 94% if the user takes the metro for four and six stations, respectively. We finally discuss possible countermeasures against the proposed attack.
---
paper_title: Accelerometer's position free human activity recognition using a hierarchical recognition model
paper_content:
Monitoring of physical activities is a growing field with potential applications such as lifecare and healthcare. Accelerometry shows promise in providing an inexpensive but effective means of long-term activity monitoring of elderly patients. However, even for the same physical activity the output of any body-worn Triaxial Accelerometer (TA) varies at different positions on a subject's body, resulting in a high within-class variance. Thus almost all existing TA-based human activity recognition systems require firm attachment of the TA to a specific body part, making them impractical for long-term activity monitoring during unsupervised free living. Therefore, we present a novel hierarchical recognition model that can recognize human activities independent of the TA's position on the body. The proposed model minimizes the high within-class variance significantly and allows subjects to carry the TA freely in any pocket without attaching it firmly to a body part. We validated our model using six daily physical activities: resting (sit/stand), walking, walk-upstairs, walk-downstairs, running, and cycling. Activity data is collected from the four most probable body positions of the TA: chest pocket, front trousers pocket, rear trousers pocket, and inner jacket pocket. The average accuracy of about 95% illustrates the effectiveness of the proposed method.
---
paper_title: Activity Recognition Using a Single Accelerometer Placed at the Wrist or Ankle
paper_content:
Purpose: Large physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purpose of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities.
Methods: Participants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about frequency and intensity of motion extracted from analysis of the raw signal were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated.
Results: With 12.8-s windows, the proposed strategy showed high classification accuracy for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4-s) windows only minimally decreased performance on the wrist, to 84.2%.
Conclusions: A classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency.
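A minimal sketch of the windowed feature extraction and SVM classification described above; the 12.8-s window and the four activity classes follow the abstract, while the sampling rate, the specific features, and the SVM settings are assumptions.
```python
import numpy as np
from sklearn.svm import SVC

FS = 80                      # assumed sampling rate (Hz)
WINDOW = int(12.8 * FS)      # 12.8-s windows, as in the abstract

def window_features(acc):
    """acc: (WINDOW, 3) raw accelerometer window. Returns intensity- and
    frequency-domain features of the signal magnitude."""
    mag = np.linalg.norm(acc, axis=1)
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / FS)
    dominant = freqs[np.argmax(spectrum)]          # dominant motion frequency
    return np.array([mag.mean(), mag.std(), np.percentile(mag, 90),
                     dominant, spectrum.max() / (spectrum.sum() + 1e-9)])

def train_activity_classifier(windows, labels):
    """windows: list of (WINDOW, 3) arrays; labels from
    {"ambulation", "cycling", "sedentary", "other"} (assumed available)."""
    X = np.array([window_features(w) for w in windows])
    return SVC(kernel="rbf", C=10).fit(X, labels)
```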
---
paper_title: Cell phone-based biometric identification
paper_content:
Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions; all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.
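The two tasks described above map directly onto multiclass versus binary classification. The sketch below assumes the aggregated time-series examples have already been turned into fixed-length feature vectors, and it uses a random forest purely as a stand-in for the unspecified "standard classification algorithms".

```python
# Sketch of the two task framings: identification (which enrolled user?) and
# authentication (is this the claimed user or not?). Feature extraction is
# assumed to have produced X (examples x features) and user labels y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_identification(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """Multiclass model over all enrolled users."""
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def fit_authentication(X: np.ndarray, y: np.ndarray, claimed_user) -> RandomForestClassifier:
    """Binary model: samples from claimed_user vs. samples from everyone else."""
    is_claimed = (y == claimed_user).astype(int)
    return RandomForestClassifier(n_estimators=100).fit(X, is_claimed)
```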
---
paper_title: ACCessory: password inference using accelerometers on smartphones
paper_content:
We show that accelerometer readings are a powerful side channel that can be used to extract entire sequences of entered text on a smart-phone touchscreen keyboard. This possibility is a concern for two main reasons. First, unauthorized access to one's keystrokes is a serious invasion of privacy as consumers increasingly use smartphones for sensitive transactions. Second, unlike many other sensors found on smartphones, the accelerometer does not require special privileges to access on current smartphone OSes. We show that accelerometer measurements can be used to extract 6-character passwords in as few as 4.5 trials (median).
---
paper_title: puffMarker: a multi-sensor approach for pinpointing the timing of first lapse in smoking cessation
paper_content:
Recent research has demonstrated the feasibility of detecting smoking from wearable sensors, but performance on real-life smoking lapse detection is unknown. In this paper, we propose a new model and evaluate its performance on 61 newly abstinent smokers for detecting a first lapse. We use two wearable sensors --- breathing pattern from respiration and arm movements from 6-axis inertial sensors worn on wrists. In 10-fold cross-validation on 40 hours of training data from 6 daily smokers, our model achieves a recall rate of 96.9%, for a false positive rate of 1.1%. When our model is applied to 3 days of post-quit data from 32 lapsers, it correctly pinpoints the timing of first lapse in 28 participants. Only 2 false episodes are detected on 20 abstinent days of these participants. When tested on 84 abstinent days from 28 abstainers, the false-episode rate is limited to 1/6 per day (about one false episode per six abstinent days).
---
paper_title: A Triaxial Accelerometer-Based Physical-Activity Recognition via Augmented-Signal Features and a Hierarchical Recognizer
paper_content:
Physical-activity recognition via wearable sensors can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this paper, we present an accelerometer sensor-based approach for human-activity recognition. Our proposed recognition method uses a hierarchical scheme. At the lower level, the state to which an activity belongs, i.e., static, transition, or dynamic, is recognized by means of statistical signal features and artificial-neural nets (ANNs). The upper level recognition uses the autoregressive (AR) modeling of the acceleration signals, thus, incorporating the derived AR-coefficients along with the signal-magnitude area and tilt angle to form an augmented-feature vector. The resulting feature vector is further processed by the linear-discriminant analysis and ANNs to recognize a particular human activity. Our proposed activity-recognition method recognizes three states and 15 activities with an average accuracy of 97.9% using only a single triaxial accelerometer attached to the subject's chest.
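Two of the augmented features named above, the signal-magnitude area (SMA) and the tilt angle, have simple closed forms; the sketch below shows one common way to compute them. The AR-coefficient, linear-discriminant, and neural-network stages are omitted, and the paper's exact definitions may differ slightly from these.

```python
# Sketch of two common accelerometer features referenced above; the definitions
# are typical of the literature, not necessarily identical to the paper's.
import numpy as np

def signal_magnitude_area(acc: np.ndarray) -> float:
    """Discrete SMA: mean over the window of |ax| + |ay| + |az|.
    acc: (n_samples, 3) tri-axial window, in g."""
    return float(np.abs(acc).sum(axis=1).mean())

def tilt_angle(acc: np.ndarray) -> float:
    """Angle (degrees) between the mean acceleration vector (gravity estimate)
    and the sensor's z axis; a cue for distinguishing static postures."""
    g = acc.mean(axis=0)
    cos_theta = g[2] / np.linalg.norm(g)
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))
```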
---
| Title: Privacy implications of accelerometer data: a review of possible inferences
Section 1: INTRODUCTION
Description 1: Provide an overview of accelerometers, their common applications in mobile devices, and the potential privacy concerns associated with them.
Section 2: POSSIBLE INFERENCES
Description 2: Introduce and summarize the different types of sensitive information that can be inferred from accelerometer data, categorized into sub-sections.
Section 2.1: Activity and Behavior Tracking
Description 2.1: Discuss how accelerometer data can be used to track various activities and behaviors of a user, including physical activities, sleep patterns, gestures, and more.
Section 2.2: Location Tracking
Description 2.2: Explain how accelerometer data can be utilized for tracking a user's location and deducing travel trajectories, even without GPS.
Section 2.3: User Identification
Description 2.3: Describe methods of identifying users based on their movement patterns and other unique biometric features captured through accelerometers.
Section 2.4: Keystroke Logging
Description 2.4: Detail how accelerometer data can be used to infer text inputs from touchscreen and keyboard interactions, posing privacy risks.
Section 2.5: Inference of Health Parameters and Body Features
Description 2.5: Elaborate on how health and body-related information can be derived from accelerometer data, including weight, height, and various health conditions.
Section 2.6: Inference of Demographics
Description 2.6: Discuss how demographic variables such as age and gender can be estimated using accelerometer data.
Section 2.7: Mood and Emotion Recognition
Description 2.7: Explore the potential of using accelerometer data to infer a user's mood and emotional states.
Section 2.8: Inference of Personality Traits
Description 2.8: Discuss methods for deducing personality traits from body movements and physical activity patterns measured by accelerometers.
Section 3: DISCUSSION AND IMPLICATIONS
Description 3: Reflect on the findings and discuss the broader implications for user privacy, potential threats, and technical and legal protection measures.
Section 4: CONCLUSION
Description 4: Summarize the key points of the paper and highlight the importance of reconsidering the privacy implications of accelerometer data in mobile devices. |
Using Scrum in Global Software Development: A Systematic Literature Review | 10 | ---
paper_title: Incorporating social software into distributed agile development environments
paper_content:
The use of social software applications, such as wikis and blogs, has emerged as a practical and economical option to consider as global teams may use them to organize, track, publish their work, and then, share knowledge. We intend to push further the application of social software principles and technologies into collaborative development environments for agile and distributed projects. As a first step, in this paper we first present a survey of social software, as well as tools and environments for collaborative development. Then, we present some opportunities and challenges of incorporating social software aspects in agile distributed development.
---
| Title: Using Scrum in Global Software Development: A Systematic Literature Review
Section 1: INTRODUCTION
Description 1: Introduce the topic of the paper, including the motivation, research gap, and objectives of the review.
Section 2: BACKGROUND AND MOTIVATION
Description 2: Provide an overview of Scrum methodology, its relevance to GSD, and the specific motivation for conducting this review.
Section 3: RESEARCH METHOD
Description 3: Describe the systematic literature review methodology, including the development of review protocol, identification and selection of primary studies, and data extraction and synthesis.
Section 4: Data Sources and Search Strategies
Description 4: Detail the databases searched, search strategies used, and initial results of the search.
Section 5: Managing Studies and Inclusion Decisions
Description 5: Explain the process for managing studies, inclusion and exclusion criteria, and the final selection of studies.
Section 6: Final Selection
Description 6: Describe the criteria for final selection of papers and the quality assessment process.
Section 7: Data Extraction and Synthesis
Description 7: Provide an overview of the data extraction process and how data was synthesized to address the research questions.
Section 8: Findings about Research Questions
Description 8: Present the main findings of the review, organized according to the research questions identified.
Section 9: DISCUSSION
Description 9: Discuss the implications of the findings, draw conclusions, and identify the need for further empirical research on the topic.
Section 10: LIMITATION
Description 10: Discuss the limitations of the study and how they may have impacted the findings.
Section 11: CONCLUSIONS AND FUTURE RESEARCH
Description 11: Summarize the main conclusions of the review and propose directions for future research. |
The Relationships between Symmetry and Attractiveness and Mating Relevant Decisions and Behavior: A Review | 16 | ---
paper_title: Human pheromones and sexual attraction
paper_content:
Olfactory communication is very common amongst animals, and since the discovery of an accessory olfactory system in humans, possible human olfactory communication has gained considerable scientific interest. The importance of the human sense of smell has by far been underestimated in the past. Humans and other primates have been regarded as primarily 'optical animals' with highly developed powers of vision but a relatively undeveloped sense of smell. In recent years this assumption has undergone major revision. Several studies indicate that humans indeed seem to use olfactory communication and are even able to produce and perceive certain pheromones; recent studies have found that pheromones may play an important role in the behavioural and reproduction biology of humans. In this article we review the present evidence of the effect of human pheromones and discuss the role of olfactory cues in human sexual behaviour.
---
paper_title: Female judgment of male attractiveness and desirability for relationships: Role of waist-to-hip ratio and financial status.
paper_content:
Two studies were conducted to examine the role of male body shape (as defined by waist-to-hip ratio [WHR]) in female mate choice. In Study 1, college-age women judged normal-weight male figures with WHR in the typical male range as most attractive, healthy, and possessing many positive personal qualities. In Study 2, 18-69-year-old women rated normal-weight male figures with differing WHRs and purported income for casual (having coffee) to most-committed (marriage) relationships. All women, regardless of their age, education level, or family income, rated figures with WHRs in the typical male range and higher financial status more favorably. These findings are explained within an evolutionary mate selection context.
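Several abstracts in this reference list rely on the waist-to-hip ratio without restating its definition; for reference, the standard anthropometric measure is simply

$$ \mathrm{WHR} = \frac{C_{\text{waist}}}{C_{\text{hip}}}, $$

where C_waist and C_hip are the waist and hip circumferences. The 0.7 value cited later in this list (in "The mystery of female beauty") is an example of the low, gynoid ratios these studies describe as attractive in women, while the "typical male range" referred to above is higher, closer to unity.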
---
paper_title: Adaptive significance of female physical attractiveness: Role of waist-to-hip ratio.
paper_content:
Evidence is presented showing that body fat distribution as measured by waist-to-hip ratio (WHR) is correlated with youthfulness, reproductive endocrinologic status, and long-term health risk in women. Three studies show that men judge women with low WHR as attractive. Study 1 documents that minor changes in WHRs of Miss America winners and Playboy playmates have occurred over the past 30-60 years. Study 2 shows that college-age men find female figures with low WHR more attractive, healthier, and of greater reproductive value than figures with a higher WHR. In Study 3, 25- to 85-year-old men were found to prefer female figures with lower WHR and assign them higher ratings of attractiveness and reproductive potential. It is suggested that WHR represents an important bodily feature associated with physical attractiveness as well as with health and reproductive potential. A hypothesis is proposed to explain how WHR influences female attractiveness and its role in mate selection. Evolutionary theories of human mate selection contend that both men and women select mating partners who enable them to enhance reproductive success. Differential reproductive conditions and physiological constraints in men and women, however, induce different gender-specific sexual and reproductive strategies. In general, a woman can increase her reproductive success by choosing a high-status man who controls resources and, hence, can provide material security to successfully raise
---
paper_title: Does Women's Hair Signal Reproductive Potential?
paper_content:
This study explores the possibility that women's hair signals their reproductive potential. Evolutionary psychology and related approaches are considered as rationales for the belief that women's hair is a signal for mate selection and attraction. A sample of women were approached in public places and surveyed as to their age, hair quality, marital status, hair length, children, and overall health. A significant correlation between hair length and age indicated that younger women tend to have longer hair than older women. Hair quality was correlated with women's health. Consistent with the principle of intersexual selection, the results of this study indicate that hair length and quality can act as a cue to a woman's youth and health and, as such, signify reproductive potential. Future directions for research on women's hair are discussed.
---
paper_title: Vocal and visual attractiveness are related in women
paper_content:
Abstract We investigated the relation between visual and vocal attractiveness in women as judged by men. We recorded 34 women speaking four vowels and measured the peak frequency, the first five harmonic frequencies, the first five formant frequencies and formant dispersion. The women were also photographed (head shot), several body measures were taken and their ages were recorded. The voices were played to male judges who were asked to assess the women's age and vocal attractiveness from the recording. The men were then asked to assess the attractiveness of the photographs. Men were in strong agreement on which was an attractive voice and face; and women with attractive faces had attractive voices. Higher-frequency voices were assessed as being more attractive and as belonging to younger women (the lowest frequency produced is a good indicator of age in women in general). Larger women had lower voices and were judged as having less attractive faces and voices. Taller women had narrower formant dispersion as predicted. The results imply that different measures of attractiveness are in agreement and signal similar qualities, such as female age, body size and possibly hormonal profile.
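The abstract above measures formant dispersion without restating the formula; the commonly used definition is the average spacing between adjacent formant frequencies,

$$ D_f = \frac{1}{n-1}\sum_{i=1}^{n-1}\left(F_{i+1}-F_{i}\right) = \frac{F_{n}-F_{1}}{n-1}, $$

where F_i is the i-th formant frequency and, with the first five formants measured here, n = 5. Narrower dispersion is associated with a longer vocal tract, which is consistent with the reported link to the speakers' height.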
---
paper_title: Waist-to-hip ratio and attractiveness: Replication and extension
paper_content:
Abstract In several experiments Singh (Journal of Personality and Social Psychology, 65, 292–307, 1993a; Human Nature, 4, 297–321, 1993b; Personality and Individual Differences, 16, 123–132, 1994; Singh & Luis, Human Nature, 6, 51–65, 1995) presented evidence that the attractiveness of the female figure is related to the waist-to-hip ratio and the apparent overall body weight. The present paper presents a replication and extension of the Singh studies. Both female and male figures are considered. In addition to different aspects of physical attractiveness, the so-called Big Five factors of personality are considered. Both similarities and differences between the Singh studies and the present study are discussed.
---
paper_title: Age preferences in mates reflect sex differences in human reproductive strategies.
paper_content:
Social psychologists have held that when selecting mates women prefer older men and men prefer younger women. In a form of economic exchange based on conventional sex role norms, both men and women seek potential partners who are similar to themselves. Behavioral psychologists have proposed an evolutionary model to explain age preferences. The social exchange model does not explain what cross-cultural research shows, which is that men and women in other cultures differ in ways that are parallel to gender differences in US society. The evolutionary model submits that men and women pursue distinct reproductive strategies. It also theorizes a more complicated relationship between gender and age preferences than the social exchange model. Two behavioral psychologists have conducted 6 studies to determine whether the evolutionary model holds true. The hypothesis states that during the early years men prefer women who are only somewhat younger than they are, but as they age they prefer women who are considerably younger than they are. On the other hand, young women prefer men who are slightly older than they are, and this preference does not change much with age. The psychologists examined age preferences in personal advertisements from newspapers in Arizona, West Germany, the Netherlands, and India, and in singles advertisements by financially successful US women and men in the Washington, D.C., area. They found the results consistently fit well with the evolutionary model. They also studied marriage statistics from Seattle, Washington, and Phoenix, Arizona, which also supported the hypothesis. They conducted a cross-generational analysis using 1923 marriage statistics from Phoenix, which indicated consistency across generations. A study of marriages that took place between 1913 and 1939 on the small island of Poro in the Philippines also supported the theory. Thus, psychologists should expand previous models of age preferences to incorporate the life history position.
---
paper_title: Cross-cultural consensus for waist–hip ratio and women's attractiveness
paper_content:
In women of reproductive age, a gynoid body fat distribution as measured by the size of waist–hip ratio (WHR) is a reliable indicator of their sex hormone profile, greater success in pregnancy and less risk for major diseases. According to evolutionary mate selection theory, such indicators of health and fertility should be judged as attractive. Previous research has confirmed this prediction. In this current research, we use the same stimulus for diverse racial groups (Bakossiland, Cameroon, Africa; Komodo Island, Indonesia; Samoa; and New Zealand) to examine the universality of relationships between WHR and attractiveness. As WHR is positively correlated with body mass index (BMI), we controlled BMI by using photographs of women who have gone through micrograft surgery for cosmetic reasons. Results show that in each culture participants selected women with low WHR as attractive, regardless of increases or decreases in BMI. This cross-cultural consensus suggests that the link between WHR and female attractiveness is due to adaptation shaped by the selection process.
---
paper_title: Evolution and Social Cognition: Contrast Effects as a Function of Sex, Dominance, and Physical Attractiveness
paper_content:
Previous research indicates that males, compared with females, evaluate their relationships less favorably after exposure to physically attractive members of the other sex. An evolutionary model predicts a converse effect after exposure to opposite-sex individuals high in dominance, which should lead females to evaluate their current relationships less favorably than males. Women and men rated their current relationships after being exposed to opposite-sex targets varying in both dominance and physical attractiveness. Consistent with earlier research, males exposed to physically attractive, as compared with average, targets rated their current relationships less favorably. Males' relationship evaluations were not directly influenced by the targets' dominance, although the effect of physical attractiveness was significant only for men exposed to women low in dominance. However, females' evaluations of their relationships were unaffected by exposure to physically attractive males but were lower after exposure to males high in dominance.
---
paper_title: The Evolution of Human Sexuality
paper_content:
---
paper_title: Evolutionary Theory and African American Self-Perception: Sex Differences in Body-Esteem Predictors of Self-Perceived Physical and Sexual Attractiveness, and Self-Esteem
paper_content:
Evolutionary biological theory has been shown to be relevant to an understanding of how individuals assess others' physical and sexual attractiveness. This research used the Body-Esteem Scale and multiple regression to determine if this theory is also relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem for a sample of 91 African Americans. The hypotheses that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness for women. This research indicates that evolutionary biological theory can provide relevant insights.
---
paper_title: "Their ideas of beauty are, on the whole, the same as ours": Consistency and variability in the cross-cultural perception of female physical attractiveness.
paper_content:
The consistency of physical attractiveness ratings across cultural groups was examined. In Study 1, recently arrived native Asian and Hispanic students and White Americans rated the attractiveness of Asian, Hispanic, Black, and White photographed women. The mean correlation between groups in attractiveness ratings was r =.93. Asians, Hispanics, and Whites were equally influenced by many facial features, but Asians were less influenced by some sexual maturity and expressive features. In Study 2, Taiwanese attractiveness ratings correlated with prior Asian, Hispanic, and American ratings, mean r =.91. Supporting Study 1, the Taiwanese also were less positively influenced by certain sexual maturity and expressive features. Exposure to Western media did not influence attractiveness ratings in either study. In Study 3, Black and White American men rated the attractiveness of Black female facial photos and body types. Mean facial attractiveness ratings were highly correlated (r = .94), but as predicted Blacks and Whites varied in judging bodies
---
paper_title: A cross-cultural investigation of the role of foot size in physical attractiveness
paper_content:
Disparate cultural practices suggest that small foot size may contribute to female attractiveness. Two hypotheses potentially explain such a pattern. Sexual dimorphism in foot size may lead observers to view small feet as feminine and large feet as masculine. Alternately, because small female feet index both youth and nulliparity, evolution may have favored a male preference for this attribute in order to maximize returns on male reproductive investment. Whereas the observational hypothesis predicts symmetrical polarizing preferences, with small feet being preferred in women and large feet being preferred in men, the evolutionary hypothesis predicts asymmetrical preferences, with the average phenotype being preferred in men. Using line drawings that varied only in regard to relative foot size, we examined judgments of attractiveness in nine cultures. Small foot size was generally preferred for females, while average foot size was preferred for males. These results provide preliminary support for the hypothesis that humans possess an evolved preference for small feet in females.
---
paper_title: Dominance and heterosexual attraction
paper_content:
Four experiments examined the relation between behavioral expressions of dominance and the heterosexual attractiveness of males and females. Predictions concerning the relation between dominance and heterosexual attraction were derived from a consideration of sex role norms and from the comparative biological literature. All four experiments indicated an interaction between dominance and sex of target. Dominance behavior increased the attractiveness of males, but had no effect on the attractiveness of females. The third study indicated that the effect did not depend on the sex of the rater or on the sex of those with whom the dominant target interacted. The fourth study showed that the effect was specific to dominance as an independent variable and did not occur for related constructs (aggressive or domineering). This study also found that manipulated dominance enhanced only a male's sexual attractiveness and not his general likability. The results were discussed in terms of potential biological and cultural causal mechanisms. Concepts that refer to an individual's relative position in a social hierarchy occupy prominent positions in current models of personality and social behavior (Edelman & Omark, 1973; Hogan, 1979, 1982; Strayer). Analysis of personality descriptions in different language groups indicates that dominance-submission is a universal lexical feature of human languages. The research reported here concerns the relation between behavioral expressions of dominance and the sexual attractiveness of males and females. Specific relations between dominance and attraction are predicted both by sociobiological theories that emphasize evolutionarily determined behavior tendencies and by sociocultural theories that emphasize socialization practices and sex role expectations.
---
paper_title: Height and reproductive success in a cohort of British men
paper_content:
Two recent studies have shown a relationship between male height and number of offspring in contemporary developed-world populations. One of them argues as a result that directional selection for male tallness is both positive and unconstrained. This paper uses data from a large and socially representative national cohort of men who were born in Britain in March 1958. Taller men were less likely to be childless than shorter ones. They did not have a greater mean number of children. If anything, the pattern was the reverse, since men from higher socioeconomic groups tended to be taller and also to have smaller families. However, clear evidence was found that men who were taller than average were more likely to find a long-term partner, and also more likely to have several different long-term partners. This confirms the finding that tall men are considered more attractive and suggests that, in a noncontracepting environment, they would have more children. There is also evidence of stabilizing selection, since extremely tall men had an excess of health problems and an increased likelihood of childlessness. The conclusion is that male tallness has been selected for in recent human evolution but has been constrained by developmental factors and stabilizing selection on the extremely tall.
---
paper_title: What do women want? Facialmetric assessment of multiple motives in the perception of male facial physical attractiveness
paper_content:
The multiple motive hypothesis of physical attractiveness suggests that women are attracted to men whose appearances elicit their nurturant feelings, who appear to possess sexual maturity and dominance characteristics, who seem sociable, approachable, and of high social status. Those multiple motives may cause people to be attracted to individuals who display an optimal combination of neotenous, mature, and expressive facial features, plus desirable grooming attributes. Three quasi-experiments demonstrated that men who possessed the neotenous features of large eyes, the mature features of prominent cheekbones and a large chin, the expressive feature of a big smile, and high-status clothing were seen as more attractive than other men. Further supporting the multiple motive hypothesis, the 2nd and 3rd studies indicated that impressions of attractiveness had strong relations with selections of men to date and to marry but had a curvilinear relation with perceptions of a baby face vs. a mature face.
---
paper_title: Is thin really beautiful and good? Relationship between waist-to-hip ratio (WHR) and female attractiveness
paper_content:
Abstract Two studies were conducted to determine the relative role played by overall body fat and body fat distribution as indicated by the measure of waist-to-hip ratio (WHR) in determining female perceived attractiveness and associated personality attributes. Contrary to popular belief, thin female figures were neither perceived most attractive nor assigned many desirable personality traits, except youthfulness. The measure of body fat distribution, the WHR, was found to be the critical variable associated with attractiveness. Normal weight female figures with low WHR were judged to be most attractive and were assigned many desirable qualities.
---
paper_title: Mating context and menstrual phase affect women's preferences for male voice pitch
paper_content:
Abstract Fundamental frequency (F0) is the vocal acoustic parameter closest to what we perceive as pitch. Men speak at a lower F0 than do women, even controlling for body size. Although the developmental and anatomical reasons for this sex difference are known, the evolutionary reasons are not. By examining fertility-related variation in women's preferences for men's voices, the present study tests the hypothesis that female choice for good genes influenced the evolution of male voice pitch (VP). Unlike previous correlational studies that did not consider the effects of menstrual phase and mating context on women's preferences for male VP, the present study includes these variables and utilizes experimental pitch (P) manipulations. Results indicate that low VP is preferred mainly in short-term mating contexts rather than in long-term, committed ones, and this mating context effect is greatest when women are in the fertile phase of their ovulatory cycles. Moreover, lower male F0 correlated with higher self-reported mating success. These findings are consistent with the hypothesis that an association between low male VP and heritable fitness led to the evolution of the observed patterns in women's P preferences and men's mating success and that these patterns influenced the evolution of low VP in men. However, alternative explanations are considered.
---
paper_title: Reproductive strategies and disease susceptibility: an evolutionary viewpoint.
paper_content:
Arguments about which constitutes the 'weaker sex' notwithstanding, sex differences in mortality and disease susceptibility have been noted in many species of animals, including humans. In this article, Marlene Zuk examines the possible reasons for these differences, relating them to reproductive strategies, and suggests how they may have resulted from selective pressures.
---
paper_title: The associations between obesity, adipose tissue distribution and disease.
paper_content:
Recent research has shown the marked differences in association with disease between obesity localized to the abdominal respectively to the gluteal-femoral regions. In this review systematic analyses were performed of the associations between obesity (body mass index, BMI) or abdominal obesity (increased waist-over-hip circumference ratio, WHR) on the one hand, and a number of disease end points, and their risk factors, as well as other factors on the other, WHR was associated with cardiovascular disease, premature death, stroke, non-insulin-dependent diabetes mellitus and female carcinomas. In contrast, BMI tended to be negatively correlated to cardiovascular disease, premature death, and stroke, but positively to diabetes. The established risk factors for these end points were found to correlate to WHR, while this was often not the case with BMI. BMI was positively correlated only to insulin, triglycerides and blood pressure. Together with diabetes mellitus, this seems to constitute a metabolic group of conditions which are thus associated with BMI. Androgens (in women), and perhaps cortisol, seem to be positively, and progesterone negatively correlated to WHR. The WHR was also positively associated with sick leave, several psychological maladjustments, psychosomatic and psychiatric disease. Attempts were made to interpret these findings. In a first alternative an elevation of FFA concentration, produced from abdominal adipose tissue, was considered to be the trigger factor for the pathologic aberrations associated with abdominal distribution of body fat. When obesity is added, the metabolic aberrations may be exaggerated. In a second alternative adrenal cortex hyperactivity was tested as the cause. When combined with the FFA hypothesis, this might explain many but not all of the findings. It seems possible to produce an almost identical syndrome in primates by defined experimental stress. Women with high WHR were found to have a number of symptoms of poor coping to stress. It was therefore suggested that part of the background to this syndrome might be a hypothalamic arousal syndrome developing with stress. It was concluded that obesity and abdominal distribution of adipose tissue constitute two separate entities with different pathogenesis, clinical consequences and probably treatment.
---
paper_title: Parasites, Bright Males, and the Immunocompetence Handicap
paper_content:
It has been argued that females should be able to choose parasite-resistant mates on the basis of the quality of male secondary sexual characters and that such signals must be costly handicaps in order to evolve. To a large extent, handicap hypotheses have relied on energetic explanations for these costs. Here, we have presented a phenomenological model, operating on an intraspecific level, which views the cost of secondary sexual development from an endocrinological perspective. The primary androgenic hormone, testosterone, has a dualistic effect; it stimulates development of characteristics used in sexual selection while reducing immunocompetence. This "double-edged sword" creates a physiological trade-off that influences and is influenced by parasite burden. We propose a negative-feedback loop between signal intensity and parasite burden by suggesting that testosterone-dependent signal intensity is a plastic response. This response is modified in accordance with the competing demands of the potential costs of parasite infection versus that of increased reproductive success afforded by exaggerated signals. We clarify how this trade-off is intimately involved in the evolution of secondary sexual characteristics and how it may explain some of the equivocal empirical results that have surfaced in attempts to quantify parasite's effect on sexual selection.
---
paper_title: The mystery of female beauty
paper_content:
Evolutionary psychology suggests that a woman's sexual attractiveness might be based on cues of reproductive potential. It has been proposed that a major determinant of physical attractiveness is the ratio between her waist and hip measurements (the waist-to-hip ratio, or WHR): for example, a woman with a curvaceous body and a WHR of 0.7 is considered to be optimally attractive [1-3], presumably because this WHR is the result of a fat distribution that maximizes reproductive potential [4]. It follows that the preference for a curvaceous body shape in women should be universal among men and not be culturally based, because natural selection presumably favours cues indicative of the most fertile body shape.
---
paper_title: Female health, attractiveness, and desirability for relationships: Role of breast asymmetry and waist-to-hip ratio
paper_content:
Abstract Fluctuating asymmetry (FA) of ordinarily bilaterally symmetrical traits in humans has been proposed to indicate developmental anomaly. Recent research has shown that individuals with minimal FA are judged to be attractive, and preferred as sexual partners (Thornhill and Gangestad 1993). Waist-to-hip ratios (WHR) have been also shown to reflect health and reproductive capability of woman and those with low WHRs are judged more attractive and healthy (Singh 1993a,b). The present study examines the relative roles of WHR and FA in female breasts in judgments of female attractiveness, health, and desirability for short-term and long-term relationships. Male college students were asked to judge attractiveness of female figures that differed in WHR (high and low) and breast symmetry (none, low, or high). In the first test, paired comparison method was used in which each figure was paired one at a time with all other figures. In the second test, subjects examined all figures simultaneously, estimated their age, and rated each figure for attractiveness, feminine looks, health, overall degree of body symmetry, and willingness to engage in short- and long-term relationship. Results from both tests show that figures with low WHRs were judged to be more attractive than figures with high WHRs, regardless of their degree of breast asymmetry. The figure with low WHR and symmetrical breasts was judged to be most attractive and youngest of all other figures. It appears that men use both WHR and breast asymmetry in judging attractiveness and being willing to develop romantic relationships. It is proposed that WHR and breast asymmetry may signal different aspects of overall female mate quality.
---
paper_title: Adaptive preferences for leg length in a potential partner
paper_content:
It has been shown that height is one of the morphological traits that influence a person's attractiveness. To date, few studies have addressed the relationship between different components of height and physical attractiveness. Here, we study how leg length influences attractiveness in men and women. Stimuli consisted of seven different pictures of a man and seven pictures of a woman in which the ratio between leg length and height was varied from the average phenotype by elongating and shortening the legs. One hundred men and 118 women were asked to assess the attractiveness of the silhouettes using a seven-point scale. We found that male and female pictures with shorter than average legs were perceived as less attractive by both sexes. Although longer legs appeared to be more attractive, this was true only for the slight (5%) leg length increase; excessively long legs decreased body attractiveness for both sexes. Because leg length conveys biological quality, we hypothesize that such preferences reflect the workings of evolved mate-selection mechanisms. Short and/or excessively long legs might indicate maladaptive biological conditions such as genetic diseases, health problems, or weak immune responses to adverse environmental factors acting during childhood and adolescence.
---
paper_title: Body weight, waist-to-hip ratio, breasts, and hips: Role in judgments of female attractiveness and desirability for relationships
paper_content:
Abstract Morphological features such as overall body fat, body fat distribution, as measured by waist-to-hip ratio, breast size, and hip width have been proposed to influence female attractiveness and desirability. To determine how the variations of these morphological features interact and affect the judgment of female age, attractiveness, and desirability for romantic relationships, two studies were conducted. In Study 1, college-age men rated female figures differing in body weight, waist-to-hip ratio, and breast size for age, attractiveness, health, and desirability for short- and long-term relationships. Female figures with slender bodies, low waist-to-hip ratios, and large breasts were rated as most attractive, feminine looking, healthy, and desirable for casual and long-term romantic relationships. In Study 2, female figures with similar body weight and waist-to-hip ratios but differing hip widths and breast sizes were rated for the same attributes as in Study 1. Female figures with large breasts and narrow hips were rated as most youthful, attractive, and desirable for casual and long-term romantic relationships. It seems that larger body size, a high waist-to-hip ratio, and larger hips make the female figure appear older, unattractive, and less desirable for engaging in romantic relationships. Discussion focuses on the functional significance of interactions among various morphological features in determining female attractiveness.
---
paper_title: Pupillometry: A sexual selection approach
paper_content:
We attempted to clarify prior reported discrepancies between males judging females and females judging males in the attraction value of pupil size. Our hypothesis was that attraction values of pupil size will be described by an interaction effect, such that males will be most attracted by large pupils in females and females by medium size pupils in males. The rationale for the hypothesis was that the reproductive strategies of males are best served by unequivocal female sexual interest and arousal, whereas the strategies of females will predispose them to favor more moderate sexual attentions. As expected, the relationship of attraction to pupil size was positive and linear for males viewing females. Females, however, rather than showing the predicted inverted U function, showed consistent preferences for either medium or large pupils in males. Further investigation revealed that females attracted by large pupils also reported preferences for proverbial bad boys as dating partners. Analogous findings in the literature on female romantic partner preferences are discussed.
---
paper_title: Men's voices and women's choices
paper_content:
I investigated the relationship between male human vocal characteristics and female judgements about the speaker. Thirty-four males were recorded uttering five vowels and measures were taken, from power spectra, of the first five harmonic frequencies, overall peak frequency and formant frequencies (emphasized resonance frequencies within the vowel). Male body measures were also taken (age, weight, height, and hip and shoulder width) and the men were asked whether they had chest hair. The recordings were then played to female judges, who were asked to rate the males' attractiveness, age, weight and height, and to estimate the muscularity of the speaker and whether he had a hairy chest. Men with voices in which there were closely spaced, low-frequency harmonics were judged as being more attractive, older and heavier, more likely to have a hairy chest and of a more muscular body type. There was no relationship between any vocal and body characteristic. The judges' estimates were incorrect except for weight. They showed extremely strong agreement on all judgements. The results imply that there could be sexual selection through female choice for male vocal characteristics, deeper voices being preferred. However, the function of the preference is unclear given that the estimates were generally incorrect.
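One of the spectral measures described above, the overall peak frequency, is straightforward to read off a power spectrum; the sketch below is a generic illustration (the windowing choice is an assumption, and harmonic or formant estimation, e.g. via linear predictive coding, needs more care and is omitted).

```python
# Generic sketch: peak frequency of a recorded vowel from its power spectrum.
import numpy as np

def peak_frequency(signal: np.ndarray, fs: float) -> float:
    """Frequency (Hz) of the largest peak in the power spectrum of `signal`
    sampled at `fs` Hz."""
    windowed = (signal - signal.mean()) * np.hanning(signal.size)
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float(freqs[np.argmax(power)])
```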
---
paper_title: Women's height, reproductive success and the evolution of sexual dimorphism in modern humans
paper_content:
Recent studies have shown that, in contemporary populations, tall men have greater reproductive success than shorter men. This appears to be due to their greater ability to attract mates. To our knowledge, no comparable results have yet been reported for women. This study used data from Britain's National Child Development Study to examine the life histories of a nationally representative group of women. Height was weakly but significantly related to reproductive success. The relationship was U-shaped, with deficits at the extremes of height. This pattern was largely due to poor health among extremely tall and extremely short women. However, the maximum reproductive success was found below the mean height for women. Thus, selection appears to be sexually disruptive in this population, favouring tall men and short women. Over evolutionary time, such a situation tends to maintain sexual dimorphism. Men do not use stature as a positive mate-choice criterion as women do. It is argued that there is good evolutionary reason for this, because men are orientated towards cues of fertility, and female height, being positively related to age of sexual maturity, is not such a cue.
---
paper_title: Hairstyle as an adaptive means of displaying phenotypic quality
paper_content:
Although facial features that are considered beautiful have been investigated across cultures using the framework of sexual selection theory, the effects of head hair on esthetic evaluations have rarely been examined from an evolutionary perspective. In the present study the effects of six hair-styles (short, medium-length, long, disheveled, knot [hair bun], unkempt) on female facial attractiveness were examined in four dimensions (femininity, youth, health, sexiness) relative to faces without visible head hair (“basic face”). Three evolutionary hypotheses were tested (covering hypothesis, healthy mate theory, and good genes model); only the good genes model was supported by our data. According to this theory, individuals who can afford the high costs of long hair are those who have good phenotypic and genetic quality. In accordance with this hypothesis, we found that only long and medium-length hair had a significant positive effect on ratings of women’s attractiveness; the other hairstyles did not influence the evaluation of their physical beauty. Furthermore, these two hairstyles caused a much larger change in the dimension of health than in the rest of the dimensions. Finally, male raters considered the longer-haired female subjects’ health status better, especially if the subjects were less attractive women. The possible relationships between facial attractiveness and hair are discussed, and alternative explanations are presented.
---
paper_title: Ethnic and gender consensus for the effect of waist-to-hip ratio on judgment of women's attractiveness
paper_content:
The western consensus is that obese women are considered attractive by Afro-Americans and by many societies from nonwestern developing countries. This belief rests mainly on results of nonstandardized surveys dealing only with body weight and size, ignoring body fat distribution. The anatomical distribution of female body fat as measured by the ratio of waist to hip circumference (WHR) is related to reproductive age, fertility, and risk for various major diseases and thus might play a role in judgment of attractiveness. Previous research (Singh 1993a, 1993b) has shown that in the United States Caucasian men and women judge female figures with feminine WHRs as attractive and healthy. To investigate whether young Indonesian and Afro-American men and women rate such figures similarly, female figures representing three body sizes (underweight, normal weight, and overweight) and four WHRs (two feminine and two masculine) were used. Results show that neither Indonesian nor Afro-American subjects judge overweight figures as attractive and healthy regardless of the size of WHR. They judged normal weight figures with feminine WHRs as most attractive, healthy, and youthful. The consensus on women’s attractiveness among Indonesian, Afro-American, and U.S. Caucasian male and female subjects suggests that various cultural groups have similar criteria for judging the ideal woman’s shape.
---
paper_title: Evolutionary Theory and Self-perception: Sex Differences in Body Esteem Predictors of Self-perceived Physical and Sexual Attractiveness and Self-Esteem
paper_content:
Responses to the Body Esteem Scale (Franzoi & Shields, 1984) and multiple regression were used to determine if evolutionary biological theory is relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem and to determine if physical and sexual attractiveness are the same construct. It was hypothesized that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory. These hypotheses were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness. Physical and sexual attractiveness are not the same constructs. This research indicates that evolutionary biological theory can provide relevant insights.
---
paper_title: Fluctuating asymmetry: An epigenetic measure of stress
paper_content:
(1) Fluctuating asymmetry (FA) is a useful trait for monitoring stress in the laboratory and in natural environments. (2) Both genomic and environmental changes can increase FA which represents a deterioration in developmental homeostasis apparent in adult morphology. Genetic perturbations include intense directional selection and certain specific genes. Environmental perturbations include temperature extremes in particular, protein deprivation, audiogenic stress, and exposure to pollutants. (3) There is a negative association between FA and heterozygosity in a range of taxa especially fish, a result consistent with FA being a measure of fitness. (4) Scattered reports on non-experimental populations are consistent with experiments under controlled laboratory conditions. FA tends to increase as habitats become ecologically marginal; this includes exposure to environmental toxicants. (5) In our own species, FA of an increasing range of traits has been related to both environmental and genomic stress. (6) Domestication increases FA of the strength of homologous long bones of vertebrate species due to a relaxation of natural selection. (7) FA levels are paralleled by the incidence of skeletal abnormalities in stressful environments. (8) Increased FA is a reflection of poorer developmental homeostasis at the molecular, chromosomal and epigenetic levels.
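The asymmetry measurements referred to in this and the following abstracts are usually computed from paired left and right values of a bilateral trait; two widely used indices (not spelled out in the abstracts themselves) are

$$ \mathrm{FA}_1 = \lvert R - L \rvert, \qquad \mathrm{FA}_2 = \frac{\lvert R - L \rvert}{(R + L)/2}, $$

where R and L are the right- and left-side measurements and FA_2 corrects for trait size. Asymmetry counts as "fluctuating" (rather than directional) when the signed differences R - L are centered on zero across the sample.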
---
paper_title: Fluctuating Asymmetry: Measurement, Analysis, Patterns
paper_content:
With these words Darwin opened a brief paragraph citing observations antithetical to his supposition: anecdotal reports of the inheritance of characters missing from one side of the body. His initial hunch, however, has stood the test of time: Genetic studies have confirmed that where only small, random deviations from bilateral symmetry exist, the deviations in a particular direction have little or no measurable heritability (17, 47, 51, 65a, 73a, 74, 91). The genetic basis of bilateral symmetry thus appears to differ fundamentally from that of virtually all other morphological features. Why, then, all the recent (and not so recent) interest in such minor, nondirectional deviations from bilateral symmetry [fluctuating asymmetry (FA); 60 cited in 99]? Four reasons. First, FA relates in a curious way to what is perhaps the major unsolved general problem in modern biology: the orderly expression of genotypes as complex, three-dimensional phenotypes. As was emphasized in a flurry of activity in the mid to late 1950s, and many times since, FA provides an appealing measure of 'developmental noise,' or minor environmentally induced departures from some ideal developmental program (101). Its appeal derives from an a priori knowledge of the ideal: perfect bilateral symmetry. For unilateral characters, the ideal is rarely known (but see 1, 2, and 59 for one possible approach). A second reason for interest in
---
paper_title: Patterns of fluctuating asymmetry in avian feather ornaments: implications for models of sexual selection
paper_content:
Extravagant secondary sexual characters, i.e. sexual ornaments, are exaggerated, often bilaterally symmetrical traits of great intricacy of design. The full expression of such traits is likely to be very costly and close to the limits of production. Any kind of environmental stress is therefore more likely to affect the expression of ornaments than that of any other morphological trait not subjected to strong directional selection. One measure of the ability of individuals to produce extravagant sexual traits is their degree of fluctuating asymmetry. This occurs when symmetry is the normal state and there is no tendency for the trait on one side of the body to have larger character values than that on the other. The degree of fluctuating asymmetry has been shown to reflect the ability of individuals to cope with a wide array of environmental stress (review in Parsons (1990)). We predicted that sexual ornaments should show a larger degree of fluctuating asymmetry than other morphological traits or than homologous traits in non-ornamented species. If ornaments honestly indicate the quality of individuals, high quality individuals should develop little asymmetry and large traits. Thus, we predicted a general negative relation between the degree of asymmetry and the size of ornaments. This should not be the case for other traits or for homologous traits in conspecific females or in either sex of monomorphic species. We tested these predictions on elaborate feather ornaments in birds, as these have been shown to be used as cues during female choice. We made pairwise comparisons between males and females of ornamented species and between males of ornamented and of non-ornamented, confamilial species. Sexual ornaments showed both a larger absolute and relative degree of fluctuating asymmetry than did wing length or did traits homologous to the feather ornament in females and in males of non-ornamented species. The degree of fluctuating asymmetry for tail ornaments generally showed a negative relation with the size of the ornament, whereas that was not the case for wing length or for traits homologous to the feather ornament in females and in males of non-ornamented species. The large degree of fluctuating asymmetry in ornaments and the negative relation between ornament size and degree of asymmetry suggest that fluctuating asymmetry in ornaments reliably reflects male phenotypic quality.
---
paper_title: Human (Homo sapiens) facial attractiveness and sexual selection: the role of symmetry and averageness.
paper_content:
We hypothesized from the parasite theory of sexual selection that men (Homo sapiens) would prefer averageness and symmetry in women's faces, that women would prefer averageness and symmetry in men's faces, and that women would prefer largeness (not averageness) of the secondary sexual traits of men's faces. We generated computer images of men's and women's faces and of composites of the faces of each sex, and then had men and women rate opposite-sex faces for 4 variables (attractive, dominant, sexy, and healthy). Symmetry, averageness, and the sizes of facial features were measured on the computerized faces. The hypotheses were supported, with the exception of the hypothesized effects of averageness of female and male faces on attractiveness ratings. This is the first study to show that facial symmetry has a positive influence on facial attractiveness ratings. Although adult facial attractiveness ratings are replicable, even cross-culturally (see reviews and discussions in Jones & Hill, 1993, and Langlois & Roggman, 1990), there has been considerable controversy around attempts to identify in research the facial features that actually cause faces to be judged attractive or unattractive. As discussed by Langlois and Roggman, studies of individual facial features (e.g., nose size) often have yielded inconsistent results between studies. Faces created by combining individual faces into composites have been shown to be more attractive than the individual faces, which is felt to be a preference for average facial features (Langlois & Roggman, 1990; Symons, 1979). Averageness of faces can be calculated metrically or constructed photogrammetrically. Galton (1879) constructed composites of individual pictures with the photographic method of simply projecting them one over the other on a negative. According to Galton, this method "enables us to obtain with mechanical precision a generalized picture; one that represents no man in particular, but portrays an imaginary figure possessing the average features of any given group of men" (1879, p. 341). Indeed, Treu (1914) had the impression that these composites are "singularly beautiful" (p. 441). However, as Alley and Cunningham (1991; see also Benson & Perrett, 1991) pointed out, composites are also more symmetrical and rather free of
---
paper_title: Study of genetic variance in the fluctuating asymmetry of anthropometrical traits
paper_content:
Summary: We have studied the fluctuating asymmetry (FA) of 8 bilateral morphometric traits in two-parent families, comprising 216 families with one newborn baby, and 60 families with two children (age range 5–18 years). Heritability was assessed by: (1) multiple regression analyses of the children's measurements on the mother's and father's measurements; (2) midparent-child regressions; and (3) sibling correlations. The extent of genetic determination of individual FA measurements was generally low, albeit statistically significant in some cases. However, even these correlations were inconsistent between samples and relatives. However, the mean FA values for all 8 studied traits showed positive and significant correlation between parents and children in two samples and in total. Additive genetic variance, calculated from multiple regression analyses and midparent-child correlations, was estimated to be between 0·25–0·30. Three multiple regressions (two for the separate group and one for the total sample) yi...
---
paper_title: Facial attractiveness, developmental stability, and fluctuating asymmetry
paper_content:
Abstract Despite robust cross-cultural reliability of human facial attractiveness ratings, research on facial attractiveness has only superficially addressed the connection between facial attractiveness and the history of sexual selection in Homo sapiens . There are reasons to believe that developmental stability and phenotypic quality are related. Recent studies of nonhuman animals indicate that developmental stability, measured as fluctuating asymmetry in generally bilateral symmetrical traits, is predictive of performance in sexual selection: Relatively symmetrical males are advantaged under sexual selection. This pattern is suggested by our study of facial attractiveness and fluctuating asymmetry in seven bilateral body traits in a student population. Overall, facial attractiveness negatively correlated with fluctuating asymmetry; the relation for men, but not for women, was statistically reliable. Possible confounding factors were controlled for in the analysis.
---
paper_title: Attractiveness of facial averageness and symmetry in non-Western cultures: In search of biologically based standards of beauty
paper_content:
Averageness and symmetry are attractive in Western faces and are good candidates for biologically based standards of beauty. A hallmark of such standards is that they are shared across cultures. We examined whether facial averageness and symmetry are attractive in non-Western cultures. Increasing the averageness of individual faces, by warping those faces towards an averaged composite of the same race and sex, increased the attractiveness of both Chinese (experiment 1) and Japanese (experiment 2) faces, for Chinese and Japanese participants, respectively. Decreasing averageness by moving the faces away from an average shape decreased attractiveness. We also manipulated the symmetry of Japanese faces by blending each original face with its mirror image to create perfectly symmetric versions. Japanese raters preferred the perfectly symmetric versions to the original faces (experiment 2). These findings show that preferences for facial averageness and symmetry are not restricted to Western cultures, consistent with the view that they are biologically based. Interestingly, it made little difference whether averageness was manipulated by using own-race or other-race averaged composites and there was no preference for own-race averaged composites over other-race or mixed-race composites (experiment 1). We discuss the implications of these results for understanding what makes average faces attractive. We also discuss some limitations of our studies, and consider other lines of converging evidence that may help determine whether preferences for average and symmetric faces are biologically based.
---
paper_title: Symmetry, averageness, and feature size in the facial attractiveness of women.
paper_content:
Female facial attractiveness was investigated by comparing the ratings made by male judges with the metric characteristics of female faces. Three kinds of facial characteristics were considered: facial symmetry, averageness, and size of individual features. The results suggested that female face attractiveness is greater when the face is symmetrical, is close to the average, and has certain features (e.g., large eyes, prominent cheekbones, thick lips, thin eyebrows, and a small nose and chin). Nevertheless, the detrimental effect of asymmetry appears to result solely from the fact that an asymmetrical face is a face that deviates from the norm. In addition, a factor analysis indicated that averageness best accounts for female attractiveness, but certain specific features can also be enhancing.
---
paper_title: Facial Asymmetry and Attractiveness Judgment in Developmental Perspective
paper_content:
This study examined the role of facial symmetry in the judgment of physical attractiveness. Four experiments investigated people's preference for either somewhat asymmetrical portraits or their symmetrical chimeric composites when presented simultaneously. Experiment 1 found a higher selection rate for symmetrical faces with neutral expression for portraits of old people, and Experiment 2 indicated this may be because symmetry serves as cue for youth in old age. In Experiments 3 and 4 participants examined portraits with emotional expressions. Experiment 3 found a higher selection rate for asymmetrical faces, and Experiment 4 indicated this may be because observers perceived them as more genuine and natural. This study suggests that the low degree of facial asymmetry found in normal people does not affect attractiveness ratings (except for old age), probably because observers are not tuned to perceive it.
---
paper_title: Symmetry, sexual dimorphism in facial proportions and male facial attractiveness
paper_content:
Facial symmetry has been proposed as a marker of developmental stability that may be important in human mate choice. Several studies have demonstrated positive relationships between facial symmetry and attractiveness. It was recently proposed that symmetry is not a primary cue to facial attractiveness, as symmetrical faces remain attractive even when presented as half faces (with no cues to symmetry). Facial sexual dimorphisms ('masculinity') have been suggested as a possible cue that may covary with symmetry in men following data on trait size/symmetry relationships in other species. Here, we use real and computer graphic male faces in order to demonstrate that (i) symmetric faces are more attractive, but not reliably more masculine than less symmetric faces and (ii) that symmetric faces possess characteristics that are attractive independent of symmetry, but that these characteristics remain at present undefined.
---
paper_title: Facial symmetry and judgements of attractiveness, health and personality
paper_content:
Bilateral symmetry of physical traits is thought to reflect an individual’s phenotypic quality, especially their ability to resist environmental perturbations during development. Therefore, facial symmetry may signal the ability of an individual to cope with the challenges of their environment. Studies concerning the relationship between symmetry and attractiveness lead to the conclusion that preferences for symmetric faces may have some adaptive value. We hypothesized that if symmetry is indeed indicative of an individual’s overall quality, faces high in symmetry should receive higher ratings of attractiveness and health, but also be perceived as demonstrating certain positive personality attributes. College students’ attributions of a set of 20 female faces varying in facial symmetry were recorded. As predicted, faces high in symmetry received significantly higher ratings of attractiveness, health, and certain personality attributes (i.e., sociable, intelligent, lively, self-confident, balanced). Faces low in symmetry were rated as being more anxious. These differences were not caused by an attractiveness stereotype. The present results lend further support to the notions that (i) facial symmetry is perceived as being attractive, presumably reflecting health certification
---
paper_title: Evidence against perceptual bias views for symmetry preferences in human faces
paper_content:
Symmetrical human faces are attractive. Two explanations have been proposed to account for symmetry preferences: (i) the evolutionary advantage view, which posits that symmetry advertises mate quality and (ii) the perceptual bias view, which posits that symmetry preferences are a consequence of greater ease of processing symmetrical images in the visual system. Here, we show that symmetry preferences are greater when face images are upright than when inverted. This is evidence against a simple perceptual bias view, which suggests symmetry preference should be constant across orientation about a vertical axis. We also show that symmetry is preferred even in familiar faces, a finding that is unexpected by perceptual bias views positing that symmetry is only attractive because it represents a familiar prototype of that particular class of stimuli.
---
paper_title: Facial attractiveness signals different aspects of “quality” in women and men
paper_content:
We explored the relationships between facial attractiveness and several variables thought to be related to genotypic and phenotypic quality in humans (namely fluctuating asymmetry (FA), body mass index (BMI), health, age). To help resolve some controversy around previous studies, we used consistent measurement and statistical methods and relatively large samples of both female (n = 94) and male (n = 95) subjects (to be evaluated and measured), and female (n = 226) and male (n = 153) viewers (to rate attractiveness). We measured the asymmetry of 22 traits from three trait families (eight facial, nine body and five fingerprint traits) and constructed composite asymmetry indices of traits showing significant repeatability. Facial attractiveness was negatively related to an overall asymmetry index in both females and males, with almost identical slopes. Female facial attractiveness was best predicted by BMI and past health problems, whereas male facial attractiveness was best predicted by the socioeconomic status (SES) of their rearing environment. Composite FA indices accounted for a small (< 4%) but usually significant percentage of the variation in facial attractiveness in both sexes, when factors related to asymmetry were controlled statistically. We conclude that, although facial attractiveness is negatively related to developmental instability (as measured by asymmetry), attractiveness also signals different aspects of "quality" in the two sexes, independent of FA. © 2001 Elsevier Science Inc. All rights reserved.
---
paper_title: Fluctuating asymmetry and sexual selection.
paper_content:
Behavioral ecologists are being attracted to the study of within-individual morphological variability, manifested in random deviations from bilateral symmetry, as a means of ascertaining the stress susceptibility of developmental regulatory mechanisms. Several early successes indicate that incorporating measures of symmetry into sexual-selection studies may help link individual sexual success to a basic component of viability - developmental stability.
---
paper_title: Fluctuating asymmetry: a biological monitor of environmental and genomic stress
paper_content:
Increased fluctuating asymmetry (FA) of morphological traits occurs under environmental and genomic stress. Such conditions will therefore lead to a reduction in developmental homeostasis. Based upon temperature extreme experiments, relatively severe stress is needed to increase FA under field conditions. Increasing asymmetry tends, therefore, to occur in stressed marginal habitats. Genetic perturbations implying genomic stress include certain specific genes, directional selection, inbreeding, and chromosome balance alterations. It is for these reasons that transgenic organisms may show increased FA. As there is evidence that the effects of genomic and environmental stress are cumulative, organisms in a state of genomic stress may provide sensitive biological monitors of environmental stress.
---
paper_title: Fluctuating asymmetry, metabolic rate and sexual selection in human males
paper_content:
Abstract Fluctuating asymmetry (FA), a measure of developmental stability, may be important in human sexual selection, that is in mate choice and male-male competition. It is shown that in males (but not in females) resting metabolic rate (RMR) is positively related to FA. This is explained in terms of the balanced energy equation of males. Sexual selection for large body size (resulting from male-male competition) and low FA (a consequence of mate choice) results in a stress on the provision of energy for growth and the maintenance of symmetry. Viewed in this way, sexual selection is similar to any other stress such as overcrowding and starvation. High-quality males are better able to withstand the stress of sexual selection than low-quality males. The former have “energy-thrifty genotypes” and are able to allocate more energy to growth and reducing FA than the latter.
---
paper_title: Fluctuating asymmetry as an indicator of stress: Implications for conservation biology
paper_content:
Abstract Extinction can be attributed broadly to environmental or genetic stress. The ability to detect such stresses before they seriously affect a population can enhance the effectiveness of conservation programs. Recent studies have shown that within-individual morphological variability may provide a valuable early indicator of environmental and genetic stress.
---
paper_title: Is symmetry a visual cue to attractiveness in the human female body
paper_content:
Small deviations from bilateral symmetry (a phenomenon called fluctuating asymmetry [FA]) are believed to arise due to an organism's inability to implement a developmental program when challenged by developmental stress. FA thus provides an index of an organism's exposure to adverse environmental effects and its ability to resist these effects. If one wishes to choose an individual with good health and fertility, FA could be used as an index of a potential partner's suitability. To explore whether this theory can be applied to human female bodies (excluding heads), we used a specially developed software package to create images with perfect symmetry. We then compared the relative attractiveness of the normal (asymmetric image) with the symmetric image. When male and female observers rated the images for attractiveness on a scale of 1 to 10, there was no significant difference in attractiveness between the symmetric and asymmetric images. However, in a two-alternative forced-choice experiment, the symmetric image was significantly more popular. The evidence suggests a role for symmetry in the perception of the attractiveness of the human female body.
---
paper_title: Human hips, breasts and buttocks: Is fat deceptive?
paper_content:
Abstract In humans, reproductive-age females, unlike other ages and classes of individuals, deposit fat preferentially on the breasts, hips, and buttocks. This suggests that such fat deposition is a deceptive sexual signal, mimicking other signals of high reproductive value and potential.
---
paper_title: Evolutionary Theory and African American Self-Perception: Sex Differences in Body-Esteem Predictors of Self-Perceived Physical and Sexual Attractiveness, and Self-Esteem
paper_content:
Evolutionary biological theory has been shown to be relevant to an understanding of how individuals assess others' physical and sexual attractiveness. This research used the Body-Esteem Scale and multiple regression to determine if this theory is also relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem for a sample of 91 African Americans. The hypotheses that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness for women. This research indicates that evolutionary biological theory can provide relevant insig...
---
paper_title: Female health, attractiveness, and desirability for relationships: Role of breast asymmetry and waist-to-hip ratio
paper_content:
Abstract Fluctuating asymmetry (FA) of ordinarily bilaterally symmetrical traits in humans has been proposed to indicate developmental anomaly. Recent research has shown that individuals with minimal FA are judged to be attractive, and preferred as sexual partners (Thornhill and Gangestad 1993). Waist-to-hip ratios (WHR) have been also shown to reflect health and reproductive capability of woman and those with low WHRs are judged more attractive and healthy (Singh 1993a,b). The present study examines the relative roles of WHR and FA in female breasts in judgments of female attractiveness, health, and desirability for short-term and long-term relationships. Male college students were asked to judge attractiveness of female figures that differed in WHR (high and low) and breast symmetry (none, low, or high). In the first test, paired comparison method was used in which each figure was paired one at a time with all other figures. In the second test, subjects examined all figures simultaneously, estimated their age, and rated each figure for attractiveness, feminine looks, health, overall degree of body symmetry, and willingness to engage in short- and long-term relationship. Results from both tests show that figures with low WHRs were judged to be more attractive than figures with high WHRs, regardless of their degree of breast asymmetry. The figure with low WHR and symmetrical breasts was judged to be most attractive and youngest of all other figures. It appears that men use both WHR and breast asymmetry in judging attractiveness and being willing to develop romantic relationships. It is proposed that WHR and breast asymmetry may signal different aspects of overall female mate quality.
---
paper_title: Evolutionary Theory and Self-perception: Sex Differences in Body Esteem Predictors of Self-perceived Physical and Sexual Attractiveness and Self-Esteem
paper_content:
Responses to the body esteem scale (Franzoi & Shields, 1984) and multiple regression were used to determine if evolutionary biological theory is relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem and to determine if physical and sexual attractiveness are the same construct. It was hypothesized that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory. These hypotheses were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness. Physical and sexual attractiveness are not the same constructs. This research indicates that evolutionary biological theory can provide relev...
---
paper_title: FLUCTUATING ASYMMETRY AND SEXUAL SELECTION
paper_content:
Fluctuating asymmetry occurs when an individual is unable to undergo identical development on both sides of a bilaterally symmetrical trait. Fluctuating asymmetry measures the sensitivity of development to a wide array of genetic and environmental stresses. We propose that fluctuating asymmetry is used in many signalling contexts for assessment of an individual's ability to cope with its environment. We hypothesize that fluctuating asymmetry is used in sexual selection, both in fighting and mate choice, and in competition for access to resources. Evidence is reviewed showing that the patterns of fluctuating asymmetry in secondary sexual characters differ from those seen in other morphological traits. Secondary sexual characters show much higher levels of fluctuating asymmetry. Also, there is often a negative relationship between fluctuating asymmetry and the absolute size of ornaments, whereas the relationship is typically U-shaped in other morphological traits. The common negative relationship between fluctuating asymmetry and ornament size suggests that many ornaments reliably reflect individual quality.
---
paper_title: Ratings of voice attractiveness predict sexual behavior and body configuration
paper_content:
Abstract We investigated the relationship between ratings of voice attractiveness and sexually dimorphic differences in shoulder-to-hip ratios (SHR) and waist-to-hip ratios (WHR), as well as different features of sexual behavior. Opposite-sex voice attractiveness ratings were positively correlated with SHR in males and negatively correlated with WHR in females. For both sexes, ratings of opposite-sex voice attractiveness also predicted reported age of first sexual intercourse, number of sexual partners, number of extra-pair copulation (EPC) partners, and number of partners that they had intercourse with that were involved in another relationship (i.e., were themselves chosen as an EPC partner). Coupled with previous findings showing a relationship between voice attractiveness and bilateral symmetry, these results provide additional evidence that the sound of a person's voice may serve as an important multidimensional fitness indicator.
---
paper_title: Vocal and visual attractiveness are related in women
paper_content:
Abstract We investigated the relation between visual and vocal attractiveness in women as judged by men. We recorded 34 women speaking four vowels and measured the peak frequency, the first five harmonic frequencies, the first five formant frequencies and formant dispersion. The women were also photographed (head shot), several body measures were taken and their ages were recorded. The voices were played to male judges who were asked to assess the women's age and vocal attractiveness from the recording. The men were then asked to assess the attractiveness of the photographs. Men were in strong agreement on which was an attractive voice and face; and women with attractive faces had attractive voices. Higher-frequency voices were assessed as being more attractive and as belonging to younger women (the lowest frequency produced is a good indicator of age in women in general). Larger women had lower voices and were judged as having less attractive faces and voices. Taller women had narrower formant dispersion as predicted. The results imply that different measures of attractiveness are in agreement and signal similar qualities, such as female age, body size and possibly hormonal profile. Copyright 2003 Published by Elsevier Science Ltd on behalf of The Association for the Study of Animal Behaviour.
---
paper_title: The voice and face of woman: one ornament that signals quality?
paper_content:
Abstract The attractiveness of women's faces, voices, bodies, and odors appears to be interrelated, suggesting that they reflect a common trait such as femininity. We invoked novel approaches to test the interrelationships between female vocal and facial attractiveness and femininity. In Study 1, we examined the relationship between facial-metric femininity and voice pitch in two female populations. In both populations, facial-metric femininity correlated positively with pitch of voice. In Study 2, we constructed facial averages from two populations of women with low- and high-pitched voices and determined men's preferences for resulting prototypes. Men preferred averaged faces of women from both populations with higher pitched voices to those with lower pitched voices. In Study 3, we tested whether the findings from Study 2 also extended to the natural faces that made up the prototypes. Indeed, men and women preferred real faces of women with high-pitched voices to those with low-pitched voices. Because multiple cues to femininity are related, and feminine women may have greater reproductive fitness than do relatively masculine women, male preferences for multiple cues to femininity are potentially adaptive.
---
paper_title: Mating context and menstrual phase affect women's preferences for male voice pitch
paper_content:
Abstract Fundamental frequency (F0) is the vocal acoustic parameter closest to what we perceive as pitch. Men speak at a lower F0 than do women, even controlling for body size. Although the developmental and anatomical reasons for this sex difference are known, the evolutionary reasons are not. By examining fertility-related variation in women's preferences for men's voices, the present study tests the hypothesis that female choice for good genes influenced the evolution of male voice pitch (VP). Unlike previous correlational studies that did not consider the effects of menstrual phase and mating context on women's preferences for male VP, the present study includes these variables and utilizes experimental pitch (P) manipulations. Results indicate that low VP is preferred mainly in short-term mating contexts rather than in long-term, committed ones, and this mating context effect is greatest when women are in the fertile phase of their ovulatory cycles. Moreover, lower male F0 correlated with higher self-reported mating success. These findings are consistent with the hypothesis that an association between low male VP and heritable fitness led to the evolution of the observed patterns in women's P preferences and men's mating success and that these patterns influenced the evolution of low VP in men. However, alternative explanations are considered.
---
paper_title: Androgen Stimulation and Laryngeal Development
paper_content:
Androgen-induced changes in laryngeal growth patterns were studied using a sheep animal model. Forty-eight lambs were divided into eight treatment groups. Lambs in seven of the groups were castrated at birth, while lambs in the eighth group served as an intact (noncastrated) control. Six groups were then treated with varying doses of testosterone and dihydrotestosterone, while the seventh served as a castrated, nontreated control. All animals were killed and gross dissections of the larynges were performed. Thirty-four linear and angular measurements were obtained from each larynx. The mean superior thyroid horn separation showed the most dramatic androgen-induced effect (p = 0.023). Laryngeal anterior-posterior diameter, superior thyroid horn height, posterior thyroid cartilage width, thyroid cartilage angle, and vocal process to arytenoid base distances all demonstrated positive dose-response relationships. Hypoandrogenic levels appeared to have an inhibitory effect upon laryngeal growth when compared to castrated controls.
---
paper_title: Men's voices and women's choices
paper_content:
I investigated the relationship between male human vocal characteristics and female judgements about the speaker. Thirty-four males were recorded uttering five vowels and measures were taken, from power spectrums, of the first five harmonic frequencies, overall peak frequency and formant frequencies (emphasized, resonance, frequencies within the vowel). Male body measures were also taken (age, weight, height, and hip and shoulder width) and the men were asked whether they had chest hair. The recordings were then played to female judges, who were asked to rate the males' attractiveness, age, weight and height, and to estimate the muscularity of the speaker and whether he had a hairy chest. Men with voices in which there were closely spaced, low-frequency harmonics were judged as being more attractive, older and heavier, more likely to have a hairy chest and of a more muscular body type. There was no relationship between any vocal and body characteristic. The judges' estimates were incorrect except for weight. They showed extremely strong agreement on all judgements. The results imply that there could be sexual selection through female choice for male vocal characteristics, deeper voices being preferred. However, the function of the preference is unclear given that the estimates were generally incorrect.
---
paper_title: Menstrual cycle variation in women's preferences for the scent of symmetrical men
paper_content:
Evidence suggests that female sexual preferences change across the menstrual cycle. Women's extra-pair copulations tend to occur in their most fertile period, whereas their intra-pair copulations tend to be more evenly spread out across the cycle. This pattern is consistent with women preferentially seeking men who evidence phenotypic markers of genetic benefits just before and during ovulation. This study examined whether women's olfactory preferences for men's scent would tend to favour the scent of more symmetrical men, most notably during the women's fertile period. College women sniffed and rated the attractiveness of the scent of 41 T-shirts worn over a period of two nights by different men. Results indicated that normally cycling (non-pill using) women near the peak fertility of their cycle tended to prefer the scent of shirts worn by symmetrical men. Normally ovulating women at low fertility within their cycle, and women using a contraceptive pill, showed no significant preference for either symmetrical or asymmetrical men's scent. A separate analysis revealed that, within the set of normally cycling women, individual women's preference for symmetry correlated with their probability of conception, given the actuarial value associated with the day of the cycle they reported at the time they smelled the shirts. Potential sexual selection processes and proximate mechanisms accounting for these findings are discussed.
---
paper_title: The Scent of Symmetry: A Human Sex Pheromone that Signals Fitness?
paper_content:
Abstract A previous study by the authors showed that the body scent of men who have greater body bilateral symmetry is rated as more attractive by normally ovulating (non-pill-using) women during the period of highest fertility based on day within the menstrual cycle. Women in low-fertility phases of the cycle and women using hormone-based contraceptives do not show this pattern. The current study replicated these findings with a larger sample and statistically controlled for men's hygiene and other factors that were not controlled in the first study. The current study also examined women's scent attractiveness to men and found no evidence that men prefer the scent of symmetric women. We propose that the scent of symmetry is an honest signal of phenotypic and genetic quality in the human male, and chemical candidates are discussed. In both sexes, facial attractiveness (as judged from photos) appears to predict body scent attractiveness to the opposite sex. Women's preference for the scent associated with men's facial attractiveness is greatest when their fertility is highest across the menstrual cycle. The results overall suggest that women have an evolved preference for sires with good genes.
---
paper_title: Differential use of sensory information in sexual behavior as a function of gender
paper_content:
Olfactory information is critical to mammalian sexual behavior. Based on parental investment theory the relative importance of olfaction compared with vision, touch, and hearing should be different for human males and females. In particular, because of its link to immunological profile and offspring viability, odor should be a more important determinant of sexual choice and arousal for females than for males. To test this hypothesis a questionnaire was developed and administered to 332 adults (166 males, 166 females). Subjects used a 1–7 scale to indicate how much they agreed with a series of statements concerning the importance of olfactory, visual, auditory, and tactile information for their sexual responsivity. The data reveal that males rated visual and olfactory information as being equally important for selecting a lover, while females considered olfactory information to be the single most important variable in mate choice. Additionally, when considering sexual activity, females singled out body odor from all other sensory experiences as most able to negatively affect desire, while males regarded odors as much more neutral stimuli for sexual arousal. The present results support recent findings in mice and humans concerning the relation of female preferences in body odor and major histocompatibility complex (MHC) compatibility and can be explained by an evolutionary analysis of sex differences in reproductive strategies. This work represents the first direct examination of the role of different forms of sensory information in human sexual behavior.
---
paper_title: Human body odour, symmetry and attractiveness
paper_content:
Several studies have found body and facial symmetry as well as attractiveness to be human mate choice criteria. These characteristics are presumed to signal developmental stability. Human body odour has been shown to influence female mate choice depending on the immune system, but the question of whether smell could signal general mate quality, as do other cues, was not addressed in previous studies. We compared ratings of body odour, attractiveness, and measurements of facial and body asymmetry of 16 male and 19 female subjects. Subjects wore a T-shirt for three consecutive nights under controlled conditions. Opposite-sex raters judged the odour of the T-shirts and another group evaluated portraits of the subjects for attractiveness. We measured seven bilateral traits of the subject's body to assess body asymmetry. Facial asymmetry was examined by distance measurements of portrait photographs. The results showed a significant positive correlation between facial attractiveness and sexiness of body odour for female subjects. We found positive relationships between body odour and attractiveness and negative ones between smell and body asymmetry for males only if female odour raters were in the most fertile phase of their menstrual cycle. The outcomes are discussed in the light of different male and female reproductive strategies.
---
paper_title: Major histocompatibility complex genes, symmetry, and body scent attractiveness in men and women
paper_content:
Previous research indicates that the scent of developmental stability (low fluctuating asymmetry, FA) is attractive to women who are fertile (at high-conception risk points in their menstrual cycles), but not to other women or men. Prior research also indicates that the scent of dissimilarity in major histocompatibility complex (MHC) genes may play a role in human mate choice. We studied the scent attractiveness to the opposite sex of t-shirts worn for 2 nights' sleep. Our results indicate that the two olfactory systems are independent. We repeated previous results from studies of the scent of symmetry. We repeated previous results from MHC research in part; men, but not women, showed a preference for t-shirts with the scent of MHC dissimilarity. Women's scent ratings of t-shirts were uncorrelated with the wearer's MHC dissimilarity and allele frequency, but positively correlated with the wearer's MHC heterozygosity. Fertile women did not exhibit any MHC trait preferences. Women's preference for the scent of men who were heterozygous for MHC alleles may be stronger in women who are at infertile cycle points. Men preferred the scent of common MHC alleles, which may function to avoid mates with rare alleles that exhibit gestational drive. Men also preferred the scent of women at fertile cycle points. The scent of facially attractive women, but not men, was preferred. Neither FA nor facial attractiveness in either sex correlated with MHC dissimilarity to others, MHC heterozygosity, or MHC allelic rarity. Copyright 2003.
---
paper_title: Olfactory information processing during the course of the menstrual cycle
paper_content:
Abstract In the present study we examined whether olfactory information processing depends on the phase of the menstrual cycle. Five female subjects were investigated during three phases (follicular, ovulatory, luteal) of their menstrual cycle. In each session chemosensory (olfactory) event-related potentials (CSERP) were recorded and olfactory thresholds and the hedonic tone of the test stimulus (citral) were determined. Threshold values were correlated with the salivary cortisol level. The results show that olfactory perception changes during the menstrual cycle. After the first stimulus presentations in a recording session, odors were perceived as more complex or novel during the ovulatory period (enhanced amplitude of P3-1). With continued stimulation, odor processing became faster (reduced latency of N1, P2 and P3-2) around ovulation and slower during the follicular phase. Moreover, odors were described more differentially during the ovulatory period. Olfactory sensitivity was correlated positively with the peripheral cortisol level.
---
paper_title: Evolutionary Theory Led to Evidence for a Male Sex Pheromone That Signals Symmetry
paper_content:
In several studies, we asked the question, "Do women find the scent of symmetrical men more pleasant and sexier, particularly on the days of their menstrual cycles when they could potentially conceive an offspring?" (Gangestad & Thornhill, 1998; Thornhill & Gangestad, 1999b; Thornhill et al., in press). As this question appears to concern a small, rather circumscribed piece of human psychology, why did we ask it? Why was it interesting? And what impact, if any, has it had on addressing larger questions of significance? In short, the question arose rather straightforwardly from modern evolutionary theory. We asked the question not simply because we were interested in women's scent preferences in particular (though we certainly are) but rather we asked it precisely because an answer to the question, embedded within a much larger corpus of theory and evidence, potentially speaks to very broad issues concerning not only human sexuality and male-female relations but also wide-ranging topics such as the development and expression of a wide variety of physical and behavioral traits, same-sex interactions, maternal-fetal interactions, and conflicts within the human genome. We would like to think that the research and what it has led to already illustrate the generative power of well-established, broad theory. Theory is often thought of as generative in a deductive fashion, allowing researchers to derive predictions about phenomena of interest. It certainly is that. However, theory is generative through induction as well, and particularly when it is extremely broad, touching on phenomena not only of core interest but perhaps even outside of one's own particular science. New facts that must be assimilated into a theoretical network may have perturbing effects throughout its reach and may dictate altered understandings of phenomena within its purview.
---
paper_title: Gender differences in beliefs about the causes of male and female sexual desire
paper_content:
Little is known about the beliefs men and women have about the causes of sexual desire, despite the interpersonal and individual significance of those beliefs. Participants in this study received a definition of sexual desire and answered a set of free-response questions exploring their beliefs about the causal antecedents of male and female sexual desire. The results indicated that more women than men view female (and male) sexual desire as caused by external factors. In addition, both men and women believe that male and female sexual desire have different causes: intraindividual and erotic environmental factors are believed to cause male sexual desire, but interpersonal and romantic environmental factors are believed to cause female sexual desire. Although both men and women view physical attractiveness and overall personality as sexually desirable male and female characteristics, women, but not men, view femininity as a sexually desirable female characteristic, and men, but not women, view social and financial power or status as a sexually desirable male attribute.
---
paper_title: Fluctuating asymmetry and psychometric intelligence
paper_content:
Little is known about the genetic nature of human psychometric intelligence (IQ), but it is widely assumed that IQ's heritability is at loci for intelligence per se. We present evidence consistent with a hypothesis that interindividual IQ differences are partly due to heritable vulnerabilities to environmental sources of developmental stress, an indirect genetic mechanism for the heritability of IQ. Using fluctuating asymmetry (FA) of the body (the asymmetry resulting from errors in the development of normally symmetrical bilateral traits under stressful conditions), we estimated the relative developmental instability of 112 undergraduates and administered to them Cattell's culture fair intelligence test (CFIT). A subsequent replication on 128 students was performed. In both samples, FA correlated negatively and significantly with CFIT scores. We propose two non–mutually exclusive physiological explanations for this correlation. First, external body FA may correlate negatively with the developmental integrity of the brain. Second, individual energy budget allocations and/or low metabolic efficiency in high–FA individuals may lower IQ scores. We review the data on IQ in light of our findings and conclude that improving developmental environmental quality may increase average IQ in future generations.
---
paper_title: Human intelligence, fluctuating asymmetry and the peacock’s tail: General intelligence (g) as an honest signal of fitness
paper_content:
Assuming that general intelligence (g) is an honest signal of fitness, we expected g to be related to developmental quality as indexed by Fluctuating Asymmetry (i.e. non-pathological variation in the size of right and left body features). In a population sample of 44 men and 37 women, we assessed the relationship between Fluctuating Asymmetry (FA) and g, and, as a control, to the Big Five personality dimensions for which no theoretical relationship with FA was expected. We found a relation between g and FA in men and women, but only a marginally significant relation with Openness in women only. We conclude that about 20% of the variance in g is explained by FA, and discuss the implications for the evolution of g and human mate choice.
---
paper_title: Facial asymmetry as an indicator of psychological, emotional, and physiological distress
paper_content:
Fluctuating asymmetry (FA) is deviation from bilateral symmetry in morphological traits with asymmetry values that are normally distributed with a mean of 0. FA is produced by genetic or environmental perturbations of developmental design and may play a role in human sexual selection. K. Grammer and R. Thornhill (1994) found that facial FA negatively covaries with observer ratings of attractiveness, dominance, sexiness, and health. Using self-reports, observer ratings, daily diary reports, and psychophysiological measures, the authors assessed the relationship between facial FA and health in 2 samples of undergraduates (N = 101). Results partially replicate and extend those of K. Grammer and R. Thornhill (1994) and suggest that facial FA may signal psychological, emotional, and physiological distress. Discussion integrates the authors' findings with previous research on FA and suggests future research needed to clarify the role of FA in human sexual selection.
---
paper_title: The evolutionary psychology of extrapair sex: The role of fluctuating asymmetry
paper_content:
Abstract This study explored evolutionary hypotheses concerning extrapair sex (or EPCs: extrapair copulations). Based on recent notions about sexual selection, we predicted that (a) men's number of EPCs would correlate negatively with their fluctuating asymmetry, a measure of the extent to which developmental design is imprecisely expressed, and (b) men's number of times having been an EPC partner of a woman would negatively correlate with their fluctuating asymmetry. In a sample of college heterosexual couples, both hypotheses were supported. In addition, men's physical attractiveness independently predicted how often they had been an EPC partner. Women's anxious attachment style positively covaried with their number of EPC partners, whereas their avoidant attachment style negatively covaried with their number of EPC partners.
---
paper_title: Fluctuating asymmetry and human male life-history traits in rural Belize
paper_content:
Fluctuating asymmetry (FA), used as a measure of phenotypic quality, has proven to be a useful predictor of human life–history variation, but nothing is known about its effects in humans living in higher fecundity and mortality conditions, typical before industrialization and the demographic transition. In this research, I analyse data on male life histories for a relatively isolated population in rural Belize. Some of the 56 subjects practise subsistence–level slash–and–burn farming, and others are involved in the cash economy. Fecundity levels are quite high in this population, with men over the age of 40 averaging more than eight children. Low FA successfully predicted lower morbidity and more offspring fathered, and was marginally associated with a lower age at first reproduction and more lifetime sex partners. These results indicate that FA may be important in predicting human performance in fecundity and morbidity in pre–demographic transition conditions.
---
paper_title: Fluctuating asymmetry and intelligence
paper_content:
Abstract The general factor of mental ability ( g ) may reflect general biological fitness. If so, g -loaded measures such as Raven's progressive matrices should be related to morphological measures of fitness such as fluctuating asymmetry (FA: left–right asymmetry of a set of typically left–right symmetrical body traits such as finger lengths). This prediction of a negative correlation between FA and IQ was confirmed in two independent samples, with correlations of − 0.41 and − 0.29, respectively. Head size also predicted Raven's scores but this relationship appeared to be mediated by FA. It is concluded that g along with correlated variables such as head size are in large part a reflection of a more general fitness factor influencing the growth and maintenance of all bodily systems, with brain function being an especially sensitive indicator of this fitness factor.
---
paper_title: FACIAL ATTRACTIVENESS AND PHYSICAL HEALTH
paper_content:
Abstract Previous research has documented that more facially attractive people are perceived by others to be physically healthier. Using self-reports, observer ratings, daily diary methodology, and psychophysiological assessments, this study provides limited empirical evidence that more facially attractive people ( N = 100) may be physically healthier than unattractive people. Discussion suggests the value of an evolutionary psychological perspective for understanding the relationship between facial attractiveness and physical health.
---
paper_title: Human female orgasm and mate fluctuating asymmetry
paper_content:
Abstract Human, Homo sapiens, female orgasm is not necessary for conception; hence it seems reasonable to hypothesize that orgasm is an adaptation for manipulating the outcome of sperm competition resulting from facultative polyandry. If heritable differences in male viability existed in the evolutionary past, selection could have favoured female adaptations (e.g. orgasm) that biased sperm competition in favour of males possessing heritable fitness indicators. Accumulating evidence suggests that low fluctuating asymmetry is a sexually selected male feature in a variety of species, including humans, possibly because it is a marker of genetic quality. Based on these notions, the proportion of a woman's copulations associated with orgasm is predicted to be associated with her partner's fluctuating asymmetry. A questionnaire study of 86 sexually active heterosexual couples supported this prediction. Women with partners possessing low fluctuating asymmetry and their partners reported significantly more copulatory female orgasms than were reported by women with partners possessing high fluctuating asymmetry and their partners, even with many potential confounding variables controlled. The findings are used to examine hypotheses for female orgasm other than selective sperm retention.
---
paper_title: Human sperm competition: ejaculate manipulation by females and a function for the female orgasm
paper_content:
Behavioural ecologists view monogamy as a subtle mixture of conflict and cooperation between the sexes. In part, conflict and cooperation is cryptic, taking place within the female's reproductive tract. In this paper the cryptic interaction for humans was analysed using data from both a nationwide survey and counts of sperm inseminated into, and ejected by, females. On average, 35% of sperm were ejected by the female within 30 min of insemination. The occurrence and timing of female orgasm in relation to copulation and male ejaculation influenced the number of sperm retained at both the current and next copulation. Orgasms that climaxed at any time between 1 min before the male ejaculated up to 45 min after led to a high level of sperm retention. Lack of climax or a climax more than 1 min before the male ejaculated led to a low level of sperm retention. Sperm from one copulation appeared to hinder the retention of sperm at the next copulation for up to 8 days. The efficiency of the block declined with time after copulation but was fixed at its current level by an inter-copulatory orgasm which thus reduced sperm retention at the next copulation. Inter-copulatory orgasms are either spontaneous (= nocturnal) or induced by self-masturbation or stimulation by a partner. It is argued that orgasms generate a blow-suck mechanism that takes the contents of the upper vagina into the cervix. These contents include sperm and seminal fluid if present; acidic vaginal fluids if not. Inter-copulatory orgasms will therefore lower the pH of the cervical mucus and either kill or reduce the mobility of any sperm that attempt to penetrate from reservoirs in the cervical crypts. Intercopulatory orgasms may also serve an antibiotic function. Copulatory and inter-copulatory orgasms endow females with considerable flexibility in their manipulation of inseminates. The data suggest that, in purely monandrous situations, females reduced the number of sperm retained, perhaps as a strategy to enhance conception. During periods of infidelity, however, females changed their orgasm pattern. The changes would have been cryptic to the male partners and would numerically have favoured the sperm from the extra-pair male, presumably raising his chances of success in sperm competition with the female's partner.
---
paper_title: Facial attractiveness, symmetry and cues of good genes
paper_content:
Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.
---
paper_title: Developmental Stability, Ejaculate Size, and Sperm Quality in Men
paper_content:
Abstract There is accumulating evidence that women prefer symmetric men. This preference would be adaptive if symmetry was correlated with a fitness trait such as fertility. We show that, in a sample of 53 men from an infertility clinic, a measure of overall absolute fluctuating asymmetry (FA) in digits 2 to 5 was negatively related to sperm number per ejaculate, sperm speed, and sperm migration, and overall relative FA was negatively related to sperm number and sperm speed. Subjects who had few or no sperm in their ejaculates (azoospermics) tended to have high FA, particularly when obstructive azoospermia was the likely diagnosis. Controlling for weight, height, and age left sperm number, sperm speed, and sperm migration significantly related to both absolute and relative FA. The association between low digit FA and large ejaculates and high sperm quality may arise because (1) generalized developmental stability of the body is related to fertility or (2) Hox genes control differentiation of digits and the urinogenital system in vertebrates, and FA of the former is closely linked to developmental stability of the latter.
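To make the asymmetry measures mentioned in this abstract concrete, the short Python sketch below computes absolute FA (the unsigned left-right difference per trait) and relative FA (that difference scaled by mean trait size), summed into composite indices. This follows a common convention rather than the cited study's exact protocol, and the trait names and measurement values are invented for illustration.

```python
# Illustrative sketch only: composite fluctuating-asymmetry (FA) indices of the kind
# referred to in the abstract above. The computation and the measurements are
# assumptions for illustration, not the cited study's actual protocol or data.

def absolute_fa(left: float, right: float) -> float:
    """Unsigned left-right difference for one bilateral trait."""
    return abs(left - right)

def relative_fa(left: float, right: float) -> float:
    """Unsigned difference scaled by mean trait size, so traits of different sizes are comparable."""
    return abs(left - right) / ((left + right) / 2.0)

def composite_fa(traits: dict[str, tuple[float, float]]) -> tuple[float, float]:
    """Sum absolute and relative FA across all measured traits."""
    total_abs = sum(absolute_fa(l, r) for l, r in traits.values())
    total_rel = sum(relative_fa(l, r) for l, r in traits.values())
    return total_abs, total_rel

if __name__ == "__main__":
    # Hypothetical left/right lengths (mm) of digits 2-5 for one subject.
    digits = {
        "2D": (68.1, 68.9),
        "3D": (77.4, 77.0),
        "4D": (71.2, 70.5),
        "5D": (58.3, 58.6),
    }
    abs_fa, rel_fa = composite_fa(digits)
    print(f"composite absolute FA: {abs_fa:.2f} mm")
    print(f"composite relative FA: {rel_fa:.4f}")
```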
---
paper_title: Human Fluctuating Asymmetry and Sexual Behavior
paper_content:
This report presents evidence that sexual selection may favor developmental stability (i.e., the absence of fluctuating asymmetry) in humans. Subtle, heritable asymmetries in seven nonfacial human body traits correlated negatively with number of self-reported, lifetime sex partners and correlated positively with self-reported age at first copulation in a college student sample. These relationships remained statistically significant when age, marital status, body height, ethnicity, physical anomalies associated with early prenatal development, and physical attractiveness were statistically controlled. The strength of the relationships did not differ significantly as a function of sex. It is unlikely that the relationships are generated by false self-reporting of sexual experience.
---
paper_title: Bilateral symmetry and sexual selection: a meta-analysis.
paper_content:
A considerable body of primary research has accumulated over the last 10 yr testing the relationship between developmental instability in the form of fluctuating asymmetry and performance of individuals in mating success itself or sexual attractiveness. This research comprises 146 samples from 65 studies of 42 species of four major taxa. We present the results of a meta-analysis of these studies, which demonstrates that there is indeed an overall significant, moderate negative relationship: for studies, the overall mean Pearson's r or effect size = -.42, P <.0005; for species, the overall mean r = -.34, .01 < P < .025. Based on calculated fail-safe numbers, the effect-size estimates are highly robust against any publication or reporting bias that may exist. There is considerable evidence that the magnitude of the negative correlation between fluctuating asymmetry and success related to sexual selection is greater for males than for females, when a secondary sexual trait rather than an ordinary trait is studied, with experimentation compared with observation, and for traits not involved with mobility compared with traits affecting mobility. There is also limited evidence that higher taxa may differ in effect size and that intensity of sexual selection negatively correlates with effect size.
---
paper_title: Study of genetic variance in the fluctuating asymmetry of anthropometrical traits
paper_content:
Summary: We have studied the fluctuating asymmetry (FA) of 8 bilateral morphometric traits in two-parent families, comprising 216 families with one newborn baby, and 60 families with two children (age range 5–18 years). Heritability was assessed by: (1) multiple regression analyses of the children's measurements on the mother's and father's measurements; (2) midparent-child regressions; and (3) sibling correlations. The extent of genetic determination of individual FA measurements was generally low, albeit statistically significant in some cases, and even these correlations were inconsistent between samples and relatives. However, the mean FA values for all 8 studied traits showed positive and significant correlation between parents and children in two samples and in total. Additive genetic variance, calculated from multiple regression analyses and midparent-child correlations, was estimated to be between 0.25–0.30. Three multiple regressions (two for the separate group and one for the total sample) yi...
---
paper_title: Human Fluctuating Asymmetry and Sexual Behavior
paper_content:
This report presents evidence that sexual selection may favor developmental stability (i.e., the absence of fluctuating asymmetry) in humans. Subtle, heritable asymmetries in seven nonfacial human body traits correlated negatively with number of self-reported, lifetime sex partners and correlated positively with self-reported age at first copulation in a college student sample. These relationships remained statistically significant when age, marital status, body height, ethnicity, physical anomalies associated with early prenatal development, and physical attractiveness were statistically controlled. The strength of the relationships did not differ significantly as a function of sex. It is unlikely that the relationships are generated by false self-reporting of sexual experience.
---
paper_title: The evolutionary psychology of extrapair sex: The role of fluctuating asymmetry
paper_content:
Abstract This study explored evolutionary hypotheses concerning extrapair sex (or EPCs: extrapair copulations). Based on recent notions about sexual selection, we predicted that (a) men's number of EPCs would correlate negatively with their fluctuating asymmetry, a measure of the extent to which developmental design is imprecisely expressed, and (b) men's number of times having been an EPC partner of a woman would negatively correlate with their fluctuating asymmetry. In a sample of college heterosexual couples, both hypotheses were supported. In addition, men's physical attractiveness independently predicted how often they had been an EPC partner. Women's anxious attachment style positively covaried with their number of EPC partners, whereas their avoidant attachment style negatively covaried with their number of EPC partners.
---
paper_title: Facial Asymmetry and Attractiveness Judgment in Developmental Perspective
paper_content:
This study examined the role of facial symmetry in the judgment of physical attractiveness. Four experiments investigated people's preference for either somewhat asymmetrical portraits or their symmetrical chimeric composites when presented simultaneously. Experiment 1 found a higher selection rate for symmetrical faces with neutral expression for portraits of old people, and Experiment 2 indicated this may be because symmetry serves as a cue for youth in old age. In Experiments 3 and 4, participants examined portraits with emotional expressions. Experiment 3 found a higher selection rate for asymmetrical faces, and Experiment 4 indicated this may be because observers perceived them as more genuine and natural. This study suggests that the low degree of facial asymmetry found in normal people does not affect attractiveness ratings (except for old age), probably because observers are not tuned to perceive it.
---
paper_title: The evolution of monogamy and concealed ovulation in humans
paper_content:
Abstract Homo sapiens is unique among primates in that it is the only group-living species in which monogamy is the major mating system and the only species in which females do not reveal their ovulation by estrus. These unique traits can be explained in terms of natural selective and cultural selective theory applied at the level of the individual. The relative amount of parental investment by the sexes is an important correlate with the type of mating systems found in animal species. In human history the evolution of increased hunting abilities, bipedality, relatively altricial young and the concomitant dependency on the male for food by the female are viewed as the factors allowing and favoring increased male parental investment. The same scenario can explain the evolution of extensive female parental care in humans. Monogamy is the result of the male's investment being increased to approximate equality with that of the female and the male's attempt to insure his paternity. The incipient hunting activities of Australopithecus and small cranium would indicate polygyny at this stage of hominid evolution. The increase in hunting activities and brain size of Homo erectus may have favored monogamy. Polygyny may have reappeared to some extent with the evolution of Homo sapiens sapiens 40,000 B.P., but perhaps not until 15,000–11,000 B.P. did disparity in resources controlled by males allow some males to exceed the polygyny threshold. The loss of estrus in the female is regarded not as a precondition to pair-bonding, but as a means for increasing the likelihood of successful cuckoldry of the male after monogamy has been established. The human social setting of monogamous pairs in close proximity greatly reduces the costs of infidelity.
---
paper_title: Extramarital sex: A review of the research literature
paper_content:
Abstract Research literature in the field of extramarital sex (EMS) is reviewed. Initially, definitional problems which reveal the need for increased rigor in specifying the sexual behavior under consideration, the dyad or group outside of which the behavior occurs, and the consensual or non‐consensual nature of the behavior are discussed. Twelve surveys of EMS are examined and the limitations of incidence rate figures discussed. Empirical studies have attempted to identify key variables which discriminate between EMS and non‐EMS samples. The findings of this research are summarized according to four variable categories: social background characteristics, characteristics of the marriage, personal readiness characteristics, and sex and gender differences. Characteristics of the marriage and personal readiness characteristics are found to be of prime importance in understanding EMS, although sex and gender differences frequently qualify major empirical relationships. The attitude continuum of extramarital s...
---
paper_title: From vigilance to violence: Tactics of mate retention in American undergraduates
paper_content:
Although the attraction and selection of mates are central to human reproduction, the retention of acquired mates is often necessary to actualize the promise of reproductive effort. Three empirical studies used act frequency methods to identify, assess the reported performance frequencies of, and evaluate the perceived effectiveness of 19 tactics and 104 acts of human mate guarding and retention. In Study 1 (N = 105), a hierarchical taxonomy of tactics was developed from a pool of nominated acts. We then assessed the reported performance frequencies of 19 retention tactics and 104 acts and tested three hypotheses derived from evolutionary models in an undergraduate sample (N = 102). Study 2 (N = 46) provided an independent test of these hypotheses by assessing the perceived effectiveness of each tactic. Discussion draws implications for sexual poaching, susceptibility to pair-bond defection, and the power of act frequency methods for preserving the proximate specificity and systemic complexity inherent in human mating processes.
---
paper_title: Menstrual cycle shifts in women's self-perception and motivation: A daily report method
paper_content:
Few systematic studies have examined differences in women's behaviour across the entire menstrual cycle. Previous research indicates that women's sexual motivation and mating behaviour increase near ovulation, as do sexual desire and selection of provocative clothing. We investigated whether self-perception varies across the menstrual cycle, i.e., whether women consider themselves to be more attractive near ovulation. As a hypothesized by-product of increased desire and a reflection of short-term sexual strategy selection, we tested changes in clothing style, use of cosmetics, and purchasing behaviour across the menstrual cycle. We adopted the daily report method of estimated fertility (Haselton & Gangestad, 2006; Schwarz & Hassebrauck, 2008) over 35 days with 25 college-age women who were not taking any kind of hormonal contraceptives. Women reported feeling more attractive and desirable, and reported increased sexual interest and appearance-related styling, on the days near ovulation (i.e., when conception likelihood was high) compared with the other days of the menstrual cycle. However, no significant differences between high and low fertile phases were found for purchasing behaviour. We discuss our results with reference to the evolutionary psychology of female fertility and mating-related behaviour.
---
paper_title: Second to fourth digit ratio in elite musicians : Evidence for musical ability as an honest signal of male fitness
paper_content:
Abstract Prenatal testosterone may facilitate musical ability. The ratio of the length of the second and fourth digit (2D:4D) is probably determined in utero and is negatively related to adult testosterone concentrations and sperm numbers per ejaculate. Therefore, 2D:4D may be a marker for prenatal testosterone levels. We tested the association between 2D:4D and musical ability by measuring the ratio in 70 musicians (54 men and 16 women) recruited from a British symphony orchestra. The men had significantly lower 2D:4D ratios (indicating high testosterone) than controls (n = 86). The mean 2D:4D of women did not differ significantly from controls (n = 78). Rankings of musical ability within the orchestra were associated with male 2D:4D (high rank = low 2D:4D). Differences in 2D:4D ratio were not found among instrument groups, suggesting that 2D:4D was not related to mechanical advantages in playing particular instruments. Concert audiences showed evidence of a female-biased sex ratio in seats close to the orchestra. This preliminary study supports the thesis that music is a sexually selected trait in men that indicates fertilizing capacity and perhaps good genes. However, the association between low 2D:4D ratio and orchestra membership and high status within the orchestra may result from testosterone-mediated competitive ability. Further tests of the association between 2D:4D and musical ability per se are necessary.
---
paper_title: The ratio of 2nd to 4th digit length: a predictor of sperm numbers and concentrations of testosterone, luteinizing hormone and oestrogen.
paper_content:
The differentiation of the urinogenital system and the appendicular skeleton in vertebrates is under the control of Hox genes. The common control of digit and gonad differentiation raises the possibility that patterns of digit formation may relate to spermatogenesis and hormonal concentrations. This work was concerned with the ratio between the length of the 2nd and 4th digit (2D:4D) in humans. We showed that (i) 2D:4D in right and left hands has a sexually dimorphic pattern; in males mean 2D:4D = 0.98, i.e. the 4th digit tended to be longer than the 2nd and in females mean 2D:4D = 1.00, i.e. the 2nd and 4th digits tended to be of equal length. The dimorphism is present from at least age 2 years and 2D:4D is probably established in utero; (ii) high 2D:4D ratio in right hands was associated with germ cell failure in men (P = 0.04); (iii) sperm number was negatively related to 2D:4D in the right hand (P = 0.004); (iv) in men testosterone concentrations were negatively related to right hand 2D:4D and in women and men LH (right hand), oestrogen (right and left hands) and prolactin (right hand) concentrations were positively correlated with 2D:4D ratio and (v) 2D:4D ratio in right hands remained positively related to luteinizing hormone and oestrogen after controlling for sex, age, height and weight.
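For concreteness, the following minimal Python sketch computes the 2D:4D ratio from finger-length measurements; the values are hypothetical and only illustrate the arithmetic described in the abstract.

```python
# Illustrative sketch only: the 2D:4D digit ratio described in the abstract above is
# simply the length of the 2nd digit divided by the length of the 4th digit.
# The measurement values below are invented for illustration.

def digit_ratio(second_digit_mm: float, fourth_digit_mm: float) -> float:
    """2D:4D = length of the index finger / length of the ring finger."""
    return second_digit_mm / fourth_digit_mm

# Hypothetical right-hand measurements (mm) for one subject.
ratio = digit_ratio(69.5, 71.0)
print(f"right-hand 2D:4D = {ratio:.3f}")  # values below ~1.0 match the male-typical pattern reported above
```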
---
paper_title: Hormonal response to competition in human males
paper_content:
Changes in testosterone (T) and cortisol (C) were evaluated in males competing in a non-athletic laboratory reaction time task. Subjects were randomly assigned to “win” or “lose” by adjusting feedback regarding their task performance. Further, subjects were randomly assigned to either a Close Contest condition (where one person barely “defeated” his opponent), or a Decisive condition (in which the victory was clear). Throughout competition, samples of saliva were taken and assayed later for T and C. Post-competition mood and attributions were also measured. Winners had higher overall T levels than losers, with no significant difference between Close Contest or Decisive Victory conditions. In contrast, C levels did not differ between winners and losers nor did Condition (Close or Decisive) have any effect. Mood was depressed in Decisive losers compared to all other groups. The results indicate that the perception of winning or losing, regardless of actual performance or merit on the task, differentially influenced T (but not C) levels, and that such hormonal changes are not simply general arousal effects but are related to mood and status change.
---
paper_title: Derogation of Competitors
paper_content:
Verbal signals are sometimes used to manipulate the impressions that people form about oneself and others. For the goal of self-enhancement, one can manipulate impressions either by elevating oneself or by derogating others. Five hypotheses about derogation of same-sex competitors were generated from an evolutionary model of human-mate competition. These hypotheses focused on sex differences in the importance that humans attach to external resources, rank, achievements, physical prowess, reproductive value and fidelity. Four studies were conducted to test these hypotheses. In a preliminary study (N = 80), subjects nominated intrasexual derogation tactics they had previously observed. Study 1 (N = 120) examined estimates of the likelihood that men and women would perform each tactic. Study 2 (N = 101) identified the perceived effectiveness of each derogation tactic for men and women. Study 3 (N = 100) used act reports based on self-recording and observer-recording to identify the likelihood of specific per...
---
paper_title: Digit ratio (2D:4D) and physical fitness in males and females: Evidence for effects of prenatal androgens on sexually selected traits
paper_content:
It has been suggested that male achievement in sports and athletics is correlated with a putative measure of prenatal testosterone, the 2nd to 4th digit ratio (2D:4D). It is not known whether this association also extends to females, or whether the association results from an effect of testosterone on behavior (such as exercise frequency) or on physical fitness. Here, we report for the first time data from two studies which consider associations between 2D:4D and physical fitness in females in addition to males: Study I—in a sample of teenage boys (n = 114) and girls (n = 175), their ‘physical education grade’ was negatively associated with 2D:4D of the right hand (boys), and right and left hand (girls), and Study II—among a sample of young men (n = 102) and women (n = 77), a composite measure of physical fitness was negatively related to right hand 2D:4D in men and left hand 2D:4D in women. We conclude that 2D:4D is negatively related to physical fitness in both men and women. In Study II, there was evidence that the relationship between physical fitness and 2D:4D in men was mediated through an association with exercise frequency. Thus, 2D:4D in males may be a negative correlate of frequent exercise which then relates to achievement in sports and athletics.
---
paper_title: Second to fourth digit ratio and facial asymmetry
paper_content:
Several studies have reported a positive association between degree of facial symmetry and attractiveness ratings, although the actual causes of the development of facial asymmetries remains to be confirmed. The current study hypothesizes that early hormone levels may play a crucial role in the development of facial asymmetries. Recent evidence suggests that the relative length of the second to fourth finger (2D:4D) is negatively related to prenatal testosterone and positively related to prenatal estrogen and may thus serve as a window to the prenatal hormonal environment. We measured 2D:4D in a sample of male and female college students and analysed their faces for horizontal asymmetries. 2D:4D was significantly negatively related to facial asymmetry in males, whereas in females facial asymmetry was significantly positively related to 2D:4D. We suggest that digit ratio may thus be considered as a pointer to an individual's developmental instability and stress through its association with prenatal sexual steroids.
---
paper_title: Facial attractiveness, symmetry and cues of good genes
paper_content:
Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.
---
paper_title: Symmetry and Performance in Middle Distance Runners
paper_content:
Deviations from perfect symmetry in paired traits such as ear size and nostril width may indicate developmental instability and/or short-term fluctuations in hormones. In both cases symmetry is thought to be optimal and to indicate high phenotypic quality. The purpose of this work was to determine the relationship between symmetry and performance in middle-distance runners. Fifty male subjects participated in this study. Deviations from perfect bilateral symmetry were measured in seven traits: ear size, nostril width, 2nd to 5th digit length and wrist width. After measurements were made the subjects were ranked for athletic ability and they reported their best 800 metre and 1500 metre times. Symmetric subjects had higher rankings for athletic ability (nostrils, p < 0.001 and ears, p < 0.001), lower best 800 metre times (nostrils, p < 0.05 and ears, p < 0.01) and lower best 1500 metre times (3rd digit, p < 0.01 and ears, p < 0.05) than asymmetric subjects. This conclusion remained essentially the same after Bonferroni adjustment for multiple tests and when experience and age were controlled for in multiple regression tests. We conclude that symmetry in traits such as nostrils and ears indicates good running ability. It may therefore be useful in predicting the future potential of young athletes.
---
paper_title: Evolutionary Theory and African American Self-Perception: Sex Differences in Body-Esteem Predictors of Self-Perceived Physical and Sexual Attractiveness, and Self-Esteem
paper_content:
Evolutionary biological theory has been shown to be relevant to an understanding of how individuals assess others' physical and sexual attractiveness. This research used the Body-Esteem Scale and multiple regression to determine if this theory is also relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem for a sample of 91 African Americans. The hypotheses that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness for women. This research indicates that evolutionary biological theory can provide relevant insig...
---
paper_title: Second to fourth digit ratios and individual differences in women's self-perceived attractiveness, self-esteem, and body-esteem
paper_content:
Abstract This research examined the relationship between women’s digit ratios (2D:4D) and self-perceived attractiveness, self-esteem and body-esteem. Sixty women rated their physical and sexual attractiveness, self-esteem, and body-esteem. Digit ratios (2D:4D) were determined from tracings of their hands. Based on prior research, it was hypothesized that women with high 2D:4D ratios would find themselves more physically attractive and possibly exhibit higher self-esteem and body-esteem. The results obtained supported the hypothesis for attractiveness. Women with high 2D:4D ratios rated their physical attractiveness higher. These results are discussed in terms of evolutionary theory and prior research.
---
paper_title: The ratio of 2nd to 4th digit length and male homosexuality
paper_content:
Abstract Sexual orientation may be influenced by prenatal levels of testosterone and oestrogen. There is evidence that the ratio of the length of 2nd and 4th digits (2D:4D) is negatively related to prenatal testosterone and positively to oestrogen. We report that (a) 2D:4D was lower in a sample of 88 homosexual men than in 88 sex- and age-matched controls recruited without regard to sexual orientation, (b) within the homosexual sample, there was a significant positive relationship between mean 2D:4D ratio and exclusive homosexuality, (c) overall, there was a decrease in 2D:4D from controls to homosexual men to bisexual men and (d) fraternal birth order, a positive predictor of male homosexuality, was not associated with 2D:4D in a sample of 240 Caucasian men recruited without regard to sexual orientation and 45 homosexual men. Further work is needed to confirm the relationships between 2D:4D and sexual orientation. However, these and other recent data tend to support an association between male homosexuality and high fetal testosterone. Very high testosterone levels may be associated with a sexual preference for both men and women.
---
paper_title: Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans
paper_content:
Abstract Fetal and adult testosterone may be important in establishing and maintaining sex-dependent abilities associated with male physical competitiveness. There is evidence that the ratio of the length of the 2nd and 4th digits (2D:4D) is a negative correlate of prenatal and adult testosterone. We use ability in sports, and particularly ability in football, as a proxy for male physical competitiveness. Compared to males with high 2D:4D ratio, men with low ratio reported higher attainment in a range of sports and had higher mental rotation scores (a measure of visual–spatial ability). Professional football players had lower 2D:4D ratios than controls. Football players in 1st team squads had lower 2D:4D than reserves or youth team players. Men who had represented their country had lower ratios than those who had not, and there was a significant (one-tailed) negative association between 2D:4D and number of international appearances after the effect of country was removed. We suggest that prenatal and adult testosterone promotes the development and maintenance of traits which are useful in sports and athletics disciplines and in male:male fighting.
---
paper_title: Length of index and ring fingers differentially influence sexual attractiveness of men’s and women’s hands
paper_content:
Humans show intra- and intersexual variation in second (2D) relative to fourth (4D) finger length, men having smaller 2D:4D ratio, possibly because of differential exposure to sex hormones during fetal life. The relations between 2D:4D and phenotypic traits including fitness components reported by several studies may originate from the organizational effects that sex hormones have on diverse organs and their concomitant effect on 2D:4D. Evolutionary theory posits that sexual preferences are adaptations whereby choosy individuals obtain direct or genetic indirect benefits by choosing a particular mate. Since sex hormones influence both fitness and 2D:4D, hand sexual attractiveness should depend on 2D:4D, a hypothesis tested only in one correlational study so far. We first presented hand computer images to undergraduates and found that opposite-sex hands with long 2D and 4D were considered more sexually attractive. When we experimentally manipulated hand images by increasing or decreasing 2D and/or 4D length, women preferred opposite-sex hands that had been masculinized by elongating 4D, whereas men avoided masculinized opposite-sex right hands with shortened 2D. Hence, consensus exists about which hands are attractive among different opposite-sex judges. Finger length may signal desirable sex hormone-dependent traits or genetic quality of potential mates. Psychological mechanisms mediating hand attractiveness judgments may thus reflect adaptations functioning to provide direct or indirect benefits to choosy individuals. Because the genetic mechanisms that link digit development to sex hormones may be mediated by Hox genes which are conserved in vertebrates, present results have broad implications for sexual selection studies also in nonhuman taxa.
---
paper_title: Ovulatory shifts in human female ornamentation: Near ovulation, women dress to impress
paper_content:
Humans differ from many other primates in the apparent absence of obvious advertisements of fertility within the ovulatory cycle. However, recent studies demonstrate increases in women's sexual motivation near ovulation, raising the question of whether human ovulation could be marked by observable changes in overt behavior. Using a sample of 30 partnered women photographed at high and low fertility cycle phases, we show that readily-observable behaviors – self-grooming and ornamentation through attractive choice of dress – increase during the fertile phase of the ovulatory cycle. At above-chance levels, 42 judges selected photographs of women in their fertile (59.5%) rather than luteal phase (40.5%) as “trying to look more attractive.” Moreover, the closer women were to ovulation when photographed in the fertile window, the more frequently their fertile photograph was chosen. Although an emerging literature indicates a variety of changes in women across the cycle, the ornamentation effect is striking in both its magnitude and its status as an overt behavioral difference that can be easily observed by others. It may help explain the previously documented finding that men's mate retention efforts increase as their partners approach ovulation.
---
paper_title: Evolutionary Theory and Self-perception: Sex Differences in Body Esteem Predictors of Self-perceived Physical and Sexual Attractiveness and Self-Esteem
paper_content:
Responses to the body esteem scale (Franzoi & Shields, 1984) and multiple regression were used to determine if evolutionary biological theory is relevant to an understanding of self-perceived physical and sexual attractiveness and selfesteem and to determine if physical and sexual attractiveness are the same construct. It was hypothesized that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory. These hypotheses were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness. Physical and sexual attractiveness are not the same constructs. This research indicates that evolutionary bological theory can provide relev...
---
paper_title: Digit Ratio: A Pointer to Fertility, Behavior, and Health
paper_content:
The main message of this book is simple and John Manning does an excellent job of delivering it clearly. The ratio of second and fourth (finger) digit lengths (2D:4D) appears to be affected by foetal testosterone exposure, may be fixed before birth, and so may be a reliable indicator of early prenatal conditions and subsequent health factors.
---
paper_title: Facial symmetry and the big-five personality factors
paper_content:
The present study investigated possible associations between facial symmetry and actual personality as assessed by the 'big-five' personality factors: neuroticism (N), extraversion (E), openness (O), agreeableness (A), and conscientiousness (C). Digital photographs were taken of male and female faces, and volunteers also completed the NEO-FFI personality inventory. Facial images were analysed for horizontal symmetry by means of digital image processing. Following previous reports, we predicted that facial symmetry should be negatively related to neuroticism but positively related to extraversion, openness, agreeableness, and conscientiousness. In general, our data on actual personality confirmed previous reports on perceptions of personality for neuroticism and extraversion. Neuroticism was found to be negatively but not significantly related to facial symmetry whereas extraversion was positively associated. In contrast to previous data, we found significant negative associations between facial symmetry and openness and agreeableness. Conscientiousness was non-significantly related to facial symmetry. The strongest associations with facial symmetry were found for extraversion and openness. Our results suggest that behavioural perceptions of an individual may reflect an individual's actual personality, and facial symmetry is a correlate of personality. However, because of some inconsistencies between this and previous studies we suggest that (1) the associations between facial symmetry and personality traits require further investigation, and (2) future studies should strive for methodological consistency to make results comparable.
---
paper_title: The effect of facial symmetry on perceptions of personality and attractiveness
paper_content:
Abstract The present study examined the effect of facial symmetry on perceptions of personality and physical attractiveness. Digital photographs of female targets were manipulated into symmetrical and asymmetrical versions and then presented to undergraduate raters along with the “normal” photograph, which was not manipulated for symmetry. Based on the hypothesis that facial symmetry is used as an indicator of health, we predicted that the asymmetrical version of the faces would be perceived as more Neurotic, but less Extraverted, Open, Agreeable, Conscientious, and attractive, relative to the other versions. As predicted, the asymmetrical faces were rated as significantly more Neurotic, less Agreeable and less Conscientious than the normal versions. However, facial symmetry did not affect ratings of Openness and Extraversion, nor did it affect ratings of attractiveness.
---
paper_title: Inferences of competence from faces predict election outcomes.
paper_content:
We show that inferences of competence based solely on facial appearance predicted the outcomes of U.S. congressional elections better than chance (e.g., 68.8% of the Senate races in 2004) and also were linearly related to the margin of victory. These inferences were specific to competence and occurred within a 1-second exposure to the faces of the candidates. The findings suggest that rapid, unreflective trait inferences can contribute to voting choices, which are widely assumed to be based primarily on rational and deliberative considerations.
---
paper_title: Personality and Mate Preferences: Five Factors In Mate Selection and Marital Satisfaction
paper_content:
ABSTRACT Although personality characteristics figure prominently in what people want in a mate, little is known about precisely which personality characteristics are most important, whether men and women differ in their personality preferences, whether individual women or men differ in what they want, and whether individuals actually get what they want. To explore these issues, two parallel studies were conducted, one using a sample of dating couples (N= 118) and one using a sample of married couples (N= 216). The five-factor model, operationalized in adjectival form, was used to assess personality characteristics via three data sources—self-report, partner report, and independent interviewer reports. Participants evaluated on a parallel 40-item instrument their preferences for the ideal personality characteristics of their mates. Results were consistent across both studies. Women expressed a greater preference than men for a wide array of socially desirable personality traits. Individuals differed in which characteristics they desired, preferring mates who were similar to themselves and actually obtaining mates who embodied what they desired. Finally, the personality characteristics of one's partner significantly predicted marital and sexual dissatisfaction, most notably when the partner was lower on Agreeableness, Emotional Stability, and Intellect-Openness than desired.
---
paper_title: Consensus in interpersonal perception: acquaintance and the big five.
paper_content:
Consensus refers to the extent to which judges agree in their ratings of a common target. Consensus has been an important area of research in social and personality psychology. In this article, generalizability theory is used to develop a percentage of total variance measure of consensus. This measure is used to review the level of consensus across 32 studies by considering the role of acquaintance level and trait dimension. The review indicates that consensus correlations ranged from zero to about .3, with higher levels of consensus for ratings of Extraversion. The studies do not provide evidence that consensus increases with increasing acquaintance, a counterintuitive result that can be accounted for by a theoretical model (D.A. Kenny, 1991, in press). Problems in the interpretation of longitudinal research are reviewed.
---
paper_title: Facing faces: Studies on the cognitive aspects of physiognomy.
paper_content:
Physiognomy, the art of reading personality traits from faces, dates back to ancient Greece, and is still very popular. The present studies examine several aspects and consequences of the process of reading traits from faces. Using faces with neutral expressions, it is demonstrated that personality information conveyed in faces changes the interpretation of verbal information. Moreover, it is shown that physiognomic information has a consistent effect on decisions, and creates overconfidence in judgments. It is argued, however, that the process of "reading from faces" is just one side of the coin, the other side of which is "reading into faces." Consistent with the latter, information about personality changes the perception of facial features and, accordingly, the perceived similarity between faces. The implications of both processes and questions regarding their automaticity are discussed. There are some people whose faces bear the stamp of such artless vulgarity and baseness of character, such an animal limitation of intelligence, that one wonders how they can appear in public with such a countenance, instead of wearing a mask. (Schopenhauer, 1942, p. 63)
---
paper_title: Facial symmetry is positively associated with self-reported extraversion
paper_content:
Abstract Fink et al. (2005) reported significant associations between facial symmetry and scores on some of the “big five” personality dimensions derived from self-report data. In particular, they identified a positive association between facial symmetry and extraversion, but negative associations between facial symmetry and both agreeableness and openness. Fink et al. (2005) used a measure of facial symmetry based on analysis of the central region of each face. In the present study we attempted to replicate these findings with a much larger sample (N = 294) and using a landmark-based measure of facial symmetry that includes peripheral regions of the face. In both sexes, we found a significant positive association between self-reported extraversion and facial symmetry but were unable to replicate any of the other previously reported associations. Nevertheless, the positive association between symmetry and extraversion provides further support for the idea that facial appearance could predict personality and therefore makes it possible for some personality attributions to be “data driven”, i.e. driven by properties of the target.
---
paper_title: Male facial attractiveness: evidence for hormone-mediated adaptive design
paper_content:
Abstract Experimenters examining male facial attractiveness have concluded that the attractive male face is (1) an average male face, (2) a masculinized male face, or (3) a feminized male face. Others have proposed that symmetry, hormone markers, and the menstrual phase of the observer are important variables that influence male attractiveness. This study was designed to resolve these issues by examining the facial preferences of 42 female volunteers at two different phases of their menstrual cycle. Preferences were measured using a 40-s QuickTime movie (1200 frames) that was designed to systematically modify a facial image from an extreme male to an extreme female configuration. The results indicate that females exhibit (1) a preference for a male face on the masculine side of average, (2) a shift toward a more masculine male face preference during the high-risk phase of their menstrual cycle, and (3) no shift in other facial preferences. An examination of individual differences revealed that women who scored low on a "masculinity" test (1) showed a larger menstrual shift, (2) had lower self-esteem, and (3) differed in their choice of male faces for dominance and short-term mates. The results are interpreted as support for a hormonal theory of facial attractiveness whereby perceived beauty depends on an interaction between displayed hormone markers and the hormonal state of the viewer.
---
paper_title: Facial asymmetry as an indicator of psychological, emotional, and physiological distress
paper_content:
Fluctuating asymmetry (FA) is deviation from bilateral symmetry in morphological traits with asymmetry values that are normally distributed with a mean of 0. FA is produced by genetic or environmental perturbations of developmental design and may play a role in human sexual selection. K. Grammer and R. Thornhill (1994) found that facial FA negatively covaries with observer ratings of attractiveness, dominance, sexiness, and health. Using self-reports, observer ratings, daily diary reports, and psychophysiological measures, the authors assessed the relationship between facial FA and health in 2 samples of undergraduates (N = 101). Results partially replicate and extend those of K. Grammer and R. Thornhill (1994) and suggest that facial FA may signal psychological, emotional, and physiological distress. Discussion integrates the authors' findings with previous research on FA and suggests future research needed to clarify the role of FA in human sexual selection.
---
paper_title: Facial attractiveness, symmetry and cues of good genes
paper_content:
Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.
---
paper_title: Psychopathy and developmental instability
paper_content:
Abstract Psychopaths are manipulative, impulsive, and callous individuals with long histories of antisocial behavior. Two models have guided the study of psychopathy. One suggests that psychopathy is a psychopathology, i.e., the outcome of defective or perturbed development. A second suggests that psychopathy is a life-history strategy of social defection and aggression that was reproductively viable in the environment of evolutionary adaptedness (EEA). These two models make different predictions with regard to the presence of signs of perturbations or instability in the development of psychopaths. In Study 1, we obtained data on prenatal, perinatal, and neonatal signs of developmental perturbations from the clinical files of 643 nonpsychopathic and 157 psychopathic male offenders. In Study 2, we measured fluctuating asymmetry (FA, a concurrent sign of past developmental perturbations) in 15 psychopathic male offenders, 25 nonpsychopathic male offenders, and 31 male nonoffenders. Psychopathic offenders scored lower than nonpsychopathic offenders on obstetrical problems and FA; both psychopathic and nonpsychopathic offenders scored higher than nonoffenders on FA. The five offenders from Study 2 meeting the most stringent criteria for psychopathy were similar to nonoffenders with regard to FA and had the lowest asymmetry scores among offenders. These results provide no support for psychopathological models of psychopathy and partial support for life-history strategy models.
---
paper_title: Evolutionary Theory and African American Self-Perception: Sex Differences in Body-Esteem Predictors of Self-Perceived Physical and Sexual Attractiveness, and Self-Esteem
paper_content:
Evolutionary biological theory has been shown to be relevant to an understanding of how individuals assess others' physical and sexual attractiveness. This research used the Body-Esteem Scale and multiple regression to determine if this theory is also relevant to an understanding of self-perceived physical and sexual attractiveness and self-esteem for a sample of 91 African Americans. The hypotheses that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness for women. This research indicates that evolutionary biological theory can provide relevant insig...
---
paper_title: Dermatoglyphic fluctuating asymmetry and atypical handedness in schizophrenia
paper_content:
Atypical handedness and dermatoglyphic abnormalities are hypothesized to reflect a neurodevelopmental disturbance in schizophrenia. Developmental instability, indexed by dermatoglyphic fluctuating asymmetry (FA), reflects the degree to which an individual's ontogenetic program is maintained and provides a useful framework in which to consider atypical handedness in schizophrenia. Thirty patients diagnosed with schizophrenia were compared with 37 matched healthy controls on levels of dermatoglyphic FA, a demonstration task determining hand preference and a test of relative hand skill. Multivariate analyses established that patients demonstrated greater FA and more atypical hand skill compared with controls. In patients, but not in controls, there was a strong positive association between a measure of FA and a measure of atypical hand skill, suggesting that these markers of neurodevelopmental disturbance are related in schizophrenia. On a measure of hand preference, patients were more likely than controls to be classified as mixed handed than either right or left handed. Results from the present study support the conjecture of greater developmental instability in schizophrenia affecting neurodevelopmental processes, including those conferring manual dominance.
---
paper_title: Invisible Men: Evolutionary Theory and Attractiveness and Personality Evaluations of 10 African American Male Facial Shapes
paper_content:
The entire gamut of facial shapes has not been included in prior research investigating the perception of African American men’s facial attractiveness. The 10 facial shapes identified for African American men (elliptic, oval, reversed oval, round, rectangular, quadratic, rhombic, trapezium, inverted trapezium, and pentagonal) were examined in this research. Based on evolutionary theory and prior research, the reversed oval, rectangular, trapezium, and inverted trapezium faces were hypothesized to be rated as most attractive, most dominant, most mature, most masculine, strongest, and most socially competent. Smaller, round, or oval faces were hypothesized to be perceived as warmest. The results obtained were consistent with these hypotheses. These findings are discussed in terms of evolutionary psychological adaptations and prior research.
---
paper_title: Evolutionary Theory and Self-perception: Sex Differences in Body Esteem Predictors of Self-perceived Physical and Sexual Attractiveness and Self-Esteem
paper_content:
Responses to the body esteem scale (Franzoi & Shields, 1984) and multiple regression were used to determine if evolutionary biological theory is relevant to an understanding of self-perceived physical and sexual attractiveness and selfesteem and to determine if physical and sexual attractiveness are the same construct. It was hypothesized that regression models of physical and sexual attractiveness would differ within and across sex groups and that models of self-esteem would differ across sex groups in accordance with evolutionary theory. These hypotheses were supported. Attributes of the body related to fecundity and successful mothering characteristics predicted for women and attributes of the body related to strength and dominance predicted for men. In addition, attributes of the body dealing with sexual maturity were stronger predictors of sexual attractiveness. Physical and sexual attractiveness are not the same constructs. This research indicates that evolutionary bological theory can provide relev...
---
paper_title: The Scent of Symmetry: A Human Sex Pheromone that Signals Fitness?
paper_content:
Abstract A previous study by the authors showed that the body scent of men who have greater body bilateral symmetry is rated as more attractive by normally ovulating (non-pill-using) women during the period of highest fertility based on day within the menstrual cycle. Women in low-fertility phases of the cycle and women using hormone-based contraceptives do not show this pattern. The current study replicated these findings with a larger sample and statistically controlled for men's hygiene and other factors that were not controlled in the first study. The current study also examined women's scent attractiveness to men and found no evidence that men prefer the scent of symmetric women. We propose that the scent of symmetry is an honest signal of phenotypic and genetic quality in the human male, and chemical candidates are discussed. In both sexes, facial attractiveness (as judged from photos) appears to predict body scent attractiveness to the opposite sex. Women's preference for the scent associated with men's facial attractiveness is greatest when their fertility is highest across the menstrual cycle. The results overall suggest that women have an evolved preference for sires with good genes.
---
paper_title: Facial attractiveness, symmetry and cues of good genes
paper_content:
Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.
---
paper_title: Human body odour, symmetry and attractiveness
paper_content:
Several studies have found body and facial symmetry as well as attractiveness to be human mate choice criteria. These characteristics are presumed to signal developmental stability. Human body odour has been shown to influence female mate choice depending on the immune system, but the question of whether smell could signal general mate quality, as do other cues, was not addressed in previous studies. We compared ratings of body odour, attractiveness, and measurements of facial and body asymmetry of 16 male and 19 female subjects. Subjects wore a T-shirt for three consecutive nights under controlled conditions. Opposite-sex raters judged the odour of the T-shirts and another group evaluated portraits of the subjects for attractiveness. We measured seven bilateral traits of the subject's body to assess body asymmetry. Facial asymmetry was examined by distance measurements of portrait photographs. The results showed a significant positive correlation between facial attractiveness and sexiness of body odour for female subjects. We found positive relationships between body odour and attractiveness and negative ones between smell and body asymmetry for males only if female odour raters were in the most fertile phase of their menstrual cycle. The outcomes are discussed in the light of different male and female reproductive strategies.
---
paper_title: Facial symmetry detection ability changes across the menstrual cycle
paper_content:
The effects of menstrual cycle phase and hormones on women's visual ability to detect symmetry and visual preference for symmetry were examined. Participants completed tests of symmetry detection and preference for male facial symmetry at two of three menstrual cycle phases (menses, periovulatory, and luteal). Women were better at detecting facial symmetry during the menses than luteal phase of their cycle. A trend indicated the opposite pattern for dot symmetry detection. Similarly, change in salivary progesterone levels across the cycle was negatively related to change in facial symmetry detection scores. However, there was no clear evidence of a greater preference for facial symmetry at any cycle phase, despite an overall preference for facial symmetry across phases. These findings suggest a menses phase advantage and a low progesterone advantage in women's ability to detect facial symmetry. The results are discussed in the context of hormonal, evolutionary mate selection, and functional neurocognitive theories.
---
| Title: The Relationships between Symmetry and Attractiveness and Mating Relevant Decisions and Behavior: A Review
Section 1: Introduction
Description 1: This section provides an overview of the evolutionary basis for physical and sexual attractiveness, highlighting various facial and bodily traits that signal reproductive fitness and health.
Section 2: Developmental Stability
Description 2: This section defines developmental stability and explains its relevance to mate selection and attractiveness, emphasizing the role of symmetry.
Section 3: Facial Symmetry and Facial Attractiveness
Description 3: This section reviews the research on how facial symmetry influences perceptions of facial attractiveness and its evolutionary significance.
Section 4: Body Symmetry and Attractiveness
Description 4: This section discusses the relationship between overall body symmetry, specific body parts, and attractiveness.
Section 5: Breast Symmetry
Description 5: This section explores the importance of breast symmetry in assessments of female attractiveness and reproductive potential.
Section 6: Vocal Quality
Description 6: This section examines how vocal quality, pitch, and symmetry are associated with attractiveness and developmental stability.
Section 7: Pheromones
Description 7: This section investigates the role of pheromones in attractiveness and their association with symmetry.
Section 8: Mate Protection
Description 8: This section discusses how symmetry influences perceptions of a male's ability to protect his mate and offspring.
Section 9: Human Female Orgasm
Description 9: This section details the relationship between male symmetry and female orgasmic response during copulation.
Section 10: Sperm Quality
Description 10: This section explains the connection between symmetry and sperm quality, linking developmental stability with reproductive success.
Section 11: Sexual Behavior
Description 11: This section discusses the impact of symmetry on sexual behavior, including lifetime number of sexual partners and age of first intercourse.
Section 12: Infidelity
Description 12: This section reviews how symmetry influences the likelihood of engaging in extrapair copulations (EPCs) and infidelity.
Section 13: Athletic and Talent Displays
Description 13: This section examines the connection between symmetry and abilities in athletics, music, and dancing as displays of genetic quality and attractiveness.
Section 14: Personality
Description 14: This section explores how symmetry is related to personality traits and how these traits influence mate selection.
Section 15: Emotional and Psychological Health
Description 15: This section discusses the link between symmetry, psychological health, and emotional stability, highlighting its importance in attractiveness.
Section 16: Conclusions
Description 16: This section summarizes the key findings on the relationship between symmetry and attractiveness, and suggests directions for future research. |
Code Design for Short Blocks: A Survey | 5 | ---
paper_title: List decoding of polar codes
paper_content:
We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n^2) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.
---
paper_title: Soft decision decoding of linear block codes based on ordered statistics
paper_content:
Soft decision decoding of linear block codes has been investigated for many years and several decoding schemes based on the reordering of the received symbols according to their reliability have been proposed. In this paper, we derive the ordered statistics of the noise after ordering and develop a simple algorithm based on these ordered statistics. This algorithm consists of successive reprocessing stages. For each stage, the error performance can be evaluated. For short codes of lengths up to 64, the optimum bit error performance is achieved in two stages of reprocessing, with at most a computation complexity of O(K^2) constructed codewords. For longer codes three or more reprocessing stages are required to achieve optimum decoding. The proposed algorithm applies to any linear block code, does not require any data storage and is well suited for parallel processing. Furthermore, the maximum number of computations required at each reprocessing stage is fixed, which prevents the overflow problem at low SNR.
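To make the reprocessing idea above concrete, the following Python sketch implements a stripped-down order-ℓ ordered statistics decoder. It is an illustrative reconstruction under our own assumptions (function names `osd_decode` and `gf2_rank`, a correlation metric, a full-row-rank generator matrix), not the authors' reference implementation, and it omits all complexity optimizations.

```python
import itertools
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2)."""
    A = A.copy().astype(np.uint8)
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def osd_decode(llr, G, order=1):
    """Order-`order` OSD sketch. llr: channel LLRs (positive favours bit 0),
    G: k x n binary generator matrix with full row rank. Returns a codeword."""
    G = np.asarray(G, dtype=np.uint8) % 2
    llr = np.asarray(llr, dtype=float)
    k, n = G.shape
    # 1) sort positions from most to least reliable
    perm = list(np.argsort(-np.abs(llr)))
    # 2) greedily keep the k most reliable linearly independent positions
    basis = []
    for idx in perm:
        if gf2_rank(G[:, basis + [idx]]) == len(basis) + 1:
            basis.append(idx)
        if len(basis) == k:
            break
    new_perm = np.array(basis + [i for i in perm if i not in basis])
    # 3) reduce the permuted generator to systematic form on those k columns
    Gs = G[:, new_perm].copy()
    for col in range(k):
        pivot = next(r for r in range(col, k) if Gs[r, col])
        Gs[[col, pivot]] = Gs[[pivot, col]]
        for r in range(k):
            if r != col and Gs[r, col]:
                Gs[r] ^= Gs[col]
    # 4) hard-decide the k most reliable independent bits, then reprocess:
    #    flip up to `order` of them, re-encode, keep the best candidate
    hard = (llr[new_perm[:k]] < 0).astype(np.uint8)
    best_cw, best_metric = None, -np.inf
    for w in range(order + 1):
        for flips in itertools.combinations(range(k), w):
            info = hard.copy()
            info[list(flips)] ^= 1
            cw = (info.astype(int) @ Gs.astype(int)) % 2
            metric = np.sum((1 - 2 * cw.astype(float)) * llr[new_perm])
            if metric > best_metric:
                best_cw, best_metric = cw, metric
    out = np.empty(n, dtype=np.uint8)
    out[new_perm] = best_cw          # map back to the original bit order
    return out
```

The number of re-encoded candidates grows as the sum of binomial coefficients up to the chosen order, which is what the quadratic count for two-stage reprocessing in the abstract refers to.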
---
paper_title: Reliability Options for Data Communications in the Future Deep-Space Missions
paper_content:
Availability of higher capacity for both uplinks and downlinks is expected in the future deep-space missions on Mars, thus enabling a large range of services that could eventually support human remote operations. The provisioning for deep-space links offering data rate up to several megabits per second will be a crucial element to allow new services for the space domain along with the common telecommand and telemetry services with enhanced communication capabilities. On the other hand, also the geometry proper of this scenario with orbiting and landed elements sharing only partial visibility among them and towards Earth provides another challenge. This paper surveys the reliability options that are available in the Consultative Committee for Space Data Systems (CCSDS) Protocol Stack for application in the deep-space missions. In particular, the solutions implemented from the physical up to the application layer are illustrated in terms of channel coding and Automatic Retransmission reQuest (ARQ) schemes. Finally, advanced reliability strategies possibly applicable in next-generation deep-space missions are explored as well.
---
paper_title: Error Free Coding
paper_content:
An adaptive transform coding technique followed by a DPCM technique is employed to code multispectral data. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a folding over in quantization, is implemented for data with incomplete knowledge of the probability density function. Controlled redundancy is inserted periodically as fixed-length codewords into the bit string of data packed with variable-length codes, to facilitate fast retrieval and to detect errors. Preliminary results on several sets of data from the ERTS-1 data frame and the ERIM aircraft data frame showed that an error-free reconstruction of the data can be achieved with four bits per picture element or less.
---
paper_title: Interleaver design for short block length turbo codes
paper_content:
The performance of a turbo code with short block length depends critically on the interleaver design. There are two major criteria in the design of an interleaver: the distance spectrum of the code and the correlation between the information input data and the soft output of each decoder corresponding to its parity bits. This paper describes a new interleaver design based on these two criteria. Simulation results compare the new interleaver design with different existing interleavers.
---
paper_title: Non-Binary Protograph-Based LDPC Codes: Enumerators, Analysis, and Designs
paper_content:
This paper provides a comprehensive analysis of nonbinary low-density parity check (LDPC) codes built out of protographs. We consider both random and constrained edge-weight labeling, and refer to the former as the unconstrained nonbinary protograph-based LDPC codes (U-NBPB codes) and to the latter as the constrained nonbinary protograph-based LDPC codes (C-NBPB codes). Equipped with combinatorial definitions extended to the nonbinary domain, ensemble enumerators of codewords, trapping sets, stopping sets, and pseudocodewords are calculated. The exact enumerators are presented in the finite-length regime, and the corresponding growth rates are calculated in the asymptotic regime. An EXIT chart tool for computing the iterative decoding thresholds of protograph-based LDPC codes is presented, followed by several examples of finite-length U-NBPB and C-NBPB codes with high performance. Throughout this paper, we provide accompanying examples, which demonstrate the advantage of nonbinary protograph-based LDPC codes over their binary counterparts and over random constructions. The results presented in this paper advance the analytical toolbox of nonbinary graph-based codes.
---
paper_title: Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC
paper_content:
Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing as a function of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism which explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of the individual code structure has the effect of increasing the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold of the underlying ensemble. For this reason we call this phenomenon “threshold saturation”. This gives an entirely new way of approaching capacity. One significant advantage of such a construction is that one can create capacity-approaching ensembles with an error correcting radius which is increasing in the blocklength. Our proof makes use of the area theorem of the BP-EXIT curve and the connection between the MAP and BP threshold recently pointed out by Measson, Montanari, Richardson, and Urbanke. Although we prove the connection between the MAP and the BP threshold only for a very specific ensemble and only for the binary erasure channel, empirically the same statement holds for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar collapse of thresholds occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms as well as to new techniques for analysis.
---
paper_title: Information Theory and Reliable Communication
paper_content:
Communication Systems and Information Theory. A Measure of Information. Coding for Discrete Sources. Discrete Memoryless Channels and Capacity. The Noisy-Channel Coding Theorem. Techniques for Coding and Decoding. Memoryless Channels with Discrete Time. Waveform Channels. Source Coding with a Fidelity Criterion. Index.
---
paper_title: Short Protograph-Based LDPC Codes
paper_content:
In this paper we design protograph-based LDPC codes with short block sizes. Mainly we consider rate 1/2 codes with input block sizes 64, 128, and 256 bits. To simplify the encoder and decoder implementations for high data rate transmission, the structure of the codes is based on protographs and circulants. These codes are designed for short block sizes based on maximizing the minimum distance and stopping set size subject to a constraint on the maximum variable node degree. In particular, we consider codes with variable node degrees between 3 and 5. Increasing the node degree leads to larger minimum distances, at the expense of smaller girth. Therefore, there is a trade-off between undetected error rate performance (improved by increasing minimum distance) and the degree of sub-optimality of the iterative decoders typically used (which are adversely affected by graph loops). Various LDPC codes are compared and simulation results are provided.
---
paper_title: Multiple-Bases Belief-Propagation Decoding of High-Density Cyclic Codes
paper_content:
We introduce a new method for decoding short and moderate-length linear block codes with dense parity check matrix representations of cyclic form. This approach is termed multiple-bases belief-propagation. The proposed iterative scheme makes use of the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance when compared to standard belief-propagation decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable for multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the multiple-bases belief propagation decoding performance can be shown to closely follow that of the maximum-likelihood decoder.
---
paper_title: List decoding of polar codes
paper_content:
We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n^2) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.
---
paper_title: Soft decision decoding of linear block codes based on ordered statistics
paper_content:
Soft decision decoding of linear block codes has been investigated for many years and several decoding schemes based on the reordering of the received symbols according to their reliability have been proposed. In this paper, we derive the ordered statistics of the noise after ordering and develop a simple algorithm based on these ordered statistics. This algorithm consists of successive reprocessing stages. For each stage, the error performance can be evaluated. For short codes of lengths up to 64, the optimum bit error performance is achieved in two stages of reprocessing, with at most a computation complexity of O(K^2) constructed codewords. For longer codes three or more reprocessing stages are required to achieve optimum decoding. The proposed algorithm applies to any linear block code, does not require any data storage and is well suited for parallel processing. Furthermore, the maximum number of computations required at each reprocessing stage is fixed, which prevents the overflow problem at low SNR.
---
paper_title: Advanced channel coding for space mission telecommand links
paper_content:
We investigate and compare different options for updating the error correcting code currently used in space mission telecommand links. Taking as a reference the solutions that have recently emerged as the most promising ones, based on Low-Density Parity-Check codes, we explore the behavior of alternative schemes, based on parallel concatenated turbo codes and soft-decision decoded BCH codes. Our analysis shows that these further options can offer similar or even better performance.
---
paper_title: Multiple-Bases Belief-Propagation for Decoding of Short Block Codes
paper_content:
A novel soft-decoding method for algebraic block codes is presented. The algorithm is designed for soft-decision decoding and is based on belief-propagation (BP) decoding using multiple bases of the dual code. Compared to other approaches for high-performance BP decoding, this method is conceptually simple and does not change at each stage of the decoding process. With its multiple BP decoders the proposed scheme achieves the performance of a standard BP algorithm with a significantly lower number of iterations per decoder realization. By this means the data delay introduced by decoding is reduced. Moreover, a significant improvement in decoding performance is achieved while keeping the data delay small. It is shown that for selected codes the proposed scheme approaches near maximum likelihood (ML) performance for very small data processing delays.
---
paper_title: Two decoding algorithms for tailbiting codes
paper_content:
The paper presents two efficient Viterbi decoding-based suboptimal algorithms for tailbiting codes. The first algorithm, the wrap-around Viterbi algorithm (WAVA), falls into the circular decoding category. It processes the tailbiting trellis iteratively, explores the initial state of the transmitted sequence through continuous Viterbi decoding, and improves the decoding decision with iterations. A sufficient condition for the decision to be optimal is derived. For long tailbiting codes, the WAVA gives essentially optimal performance with about one round of Viterbi trial. For short- and medium-length tailbiting codes, simulations show that the WAVA achieves closer-to-optimum performance with fewer decoding stages compared with the other suboptimal circular decoding algorithms. The second algorithm, the bidirectional Viterbi algorithm (BVA), employs two wrap-around Viterbi decoders to process the tailbiting trellis from both ends in opposite directions. The surviving paths from the two decoders are combined to form composite paths once the decoders meet in the middle of the trellis. The composite paths at each stage thereafter serve as candidates for decision update. The bidirectional process improves the error performance and shortens the decoding latency of unidirectional decoding with additional storage and computation requirements. Simulation results show that both proposed algorithms effectively achieve practically optimum performance for tailbiting codes of any length.
---
paper_title: Error Free Coding
paper_content:
An adaptive transform coding technique followed by a DPCM technique is employed to code multispectral data. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a folding over in quantization, is implemented for data with incomplete knowledge of the probability density function. Controlled redundancy is inserted periodically as fixed-length codewords into the bit string of data packed with variable-length codes, to facilitate fast retrieval and to detect errors. Preliminary results on several sets of data from the ERTS-1 data frame and the ERIM aircraft data frame showed that an error-free reconstruction of the data can be achieved with four bits per picture element or less.
---
paper_title: Optimal and near-optimal encoders for short and moderate-length tail-biting trellises
paper_content:
The results of an extensive search for short and moderate length polynomial convolutional encoders for time-invariant tail-biting representations of block codes at rates R=1/4, 1/3, 1/2, and 2/3 are reported. The tail-biting representations found are typically as good as the best known block codes.
---
paper_title: An improved sphere-packing bound for finite-length codes over symmetric memoryless channels
paper_content:
We present an improved sphere-packing (ISP) bound for finite-length error-correcting codes whose transmission takes place over symmetric memoryless channels, and the codes are decoded by an arbitrary list decoder. Some applications of the ISP bound are also exemplified. Its tightness under maximum-likelihood (ML) decoding is studied by comparing the ISP bound to previously reported upper and lower bounds on the ML decoding error probability, and also to computer simulations of iteratively decoded turbo-like codes.
---
paper_title: Fundamentals of Convolutional Coding
paper_content:
Convolutional codes, among the main error control codes, are routinely used in applications for mobile telephony, satellite communications, and voice-band modems. Written by two leading authorities in coding and information theory, this book brings you a clear and comprehensive discussion of the basic principles underlying convolutional coding. This book can be used as a textbook for graduate-level electrical engineering students. It will be of key interest to researchers and engineers of wireless and mobile communication, satellite communication, and data communication.
---
paper_title: Channel Coding Rate in the Finite Blocklength Regime
paper_content:
This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ? isclosely approximated by C - ?(V/n) Q-1(?) where C is the capacity, V is a characteristic of the channel referred to as channel dispersion , and Q is the complementary Gaussian cumulative distribution function.
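As a worked illustration of the rate expression quoted above, the short snippet below evaluates the normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε) for the binary symmetric channel, whose capacity and dispersion have simple closed forms. The function names are ours, and the (1/2n)·log₂(n) correction term is an assumption borrowed from the usual refinement of the approximation for the BSC.

```python
import math

def q_inv(eps):
    """Inverse of the Gaussian tail function Q(x) = P(N(0,1) > x), by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:   # Q(mid) too large -> mid too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bsc_normal_approximation(p, n, eps):
    """Approximate maximal rate (bits/channel use) of a BSC(p) at blocklength n
    and block error probability eps."""
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy function
    C = 1 - h2                                            # capacity
    V = p * (1 - p) * (math.log2((1 - p) / p)) ** 2       # channel dispersion
    return C - math.sqrt(V / n) * q_inv(eps) + 0.5 * math.log2(n) / n

# e.g. crossover probability 0.11, blocklength 128, target block error rate 1e-3
print(bsc_normal_approximation(0.11, 128, 1e-3))
```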
---
paper_title: Short Turbo Codes over High Order Fields
paper_content:
Two classes of turbo codes constructed on high-order finite fields are introduced. The codes are derived from a particular protograph sub-ensemble of the (2,3) regular low-density parity-check (LDPC) code ensemble. The first construction results in a parallel concatenation of two non-binary, time-variant accumulators. The second construction consists of the serial concatenation of a non-binary time-variant differentiator with a non-binary time-variant accumulator, and provides a highly structured flexible encoding scheme for (2,4) LDPC codes. A cycle graph representation is also provided. The proposed codes can be decoded efficiently either as LDPC codes (via belief propagation decoding on their bipartite graphs) or as turbo codes (via the forward-backward algorithm applied to the component code trellises) by means of the fast Fourier transform. The proposed codes provide remarkable coding gains (more than 1 dB at a codeword error rate 10^-4) over binary LDPC and turbo codes in the moderate-short block length regime.
---
paper_title: Accumulate repeat accumulate codes
paper_content:
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known unstructured irregular LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators any desired high rate codes close to code rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
---
paper_title: On Dualizing Trellis-Based APP Decoding Algorithms
paper_content:
The trellis of a finite Abelian group code is locally (i.e., trellis section by trellis section) related to the trellis of the corresponding dual group code which allows one to express the basic operations of the a posteriori probability (APP) decoding algorithm (defined on a single trellis section of the primal trellis) in terms of the corresponding dual trellis section. Using this local approach, any algorithm employing the same type of operations as the APP algorithm can, thus, be dualized, even if the global dual code does not exist (e.g., nongroup codes represented by a group trellis). Given this, the complexity advantage of the dual approach for high-rate codes can be generalized to a broader class of APP decoding algorithms, including suboptimum algorithms approximating the true APP, which may be more attractive in practical applications due to their reduced complexity. Moreover, the local approach opens the way for mixed approaches where the operations of the APP algorithm are not exclusively performed on the primal or dual trellis. This is inevitable if the code does not possess a trellis consisting solely of group trellis sections as, e.g., for certain terminated group or ring codes. The complexity reduction offered by applying dualization is evaluated. As examples, we give a dual implementation of a suboptimum APP decoding algorithm for tailbiting convolutional codes, as well as dual implementations of APP algorithms of the sliding-window type. Moreover, we evaluate their performance for decoding usual tailbiting codes or convolutional codes, respectively, as well as their performance as component decoders in iteratively decoded parallel concatenated schemes.
---
paper_title: Codes on high-order fields for the CCSDS next generation uplink
paper_content:
Recently, short binary and non-binary iteratively-decodable codes have been proposed within the Next Generation Uplink (NGU) working group (WG) of the Consultative Committee for Space Data Systems (CCSDS). The NGU WG targets the design of an enhanced uplink mainly for telecommand with the aim of updating the current uplink standard that employs a short (63, 56) BCH code to protect the telecommand messages. The proposed non-binary turbo/LDPC codes attain large coding gains over the standardized BCH codes, but also over their binary turbo/LDPC counterparts (up to 1.5 dB of coding gain for information blocks of 64 bits). This paper overviews the proposed non-binary code construction, illustrating the potential of non-binary turbo/LDPC codes in the short block length regime. The impact of the proposed solution at system level is investigated, together with its integration in the CCSDS protocol stack.
---
paper_title: Lower Bounds to Error Probability for Coding on Discrete Memoryless Channels. I
paper_content:
New lower bounds are presented for the minimum error probability that can be achieved through the use of block coding on noisy discrete memoryless channels. Like previous upper bounds, these lower bounds decrease exponentially with the block length N . The coefficient of N in the exponent is a convex function of the rate. From a certain rate of transmission up to channel capacity, the exponents of the upper and lower bounds coincide. Below this particular rate, the exponents of the upper and lower bounds differ, although they approach the same limit as the rate approaches zero. Examples are given and various incidental results and techniques relating to coding theory are developed. The paper is presented in two parts: the first, appearing here, summarizes the major results and treats the case of high transmission rates in detail; the second, to appear in the subsequent issue, treats the case of low transmission rates.
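For reference, the lower bound discussed in this entry is usually quoted through the sphere-packing exponent. The LaTeX below is our transcription in Gallager-style notation (rates in nats, single-letter form), stated as a sketch rather than as the paper's exact theorem, whose statement carries additional finite-length correction terms folded here into the o(1).

```latex
P_{e}(N,R) \;\ge\; \exp\!\bigl(-N\,[\,E_{\mathrm{sp}}(R) + o(1)\,]\bigr),
\qquad
E_{\mathrm{sp}}(R) \;=\; \sup_{\rho \ge 0}\,\bigl[\,E_{0}(\rho) - \rho R\,\bigr],
\qquad
E_{0}(\rho) \;=\; \max_{Q}\; -\ln \sum_{y}\Bigl(\sum_{x} Q(x)\,P(y\mid x)^{\frac{1}{1+\rho}}\Bigr)^{1+\rho}.
```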
---
paper_title: Low-density parity-check (LDPC) codes constructed from protographs
paper_content:
We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
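To illustrate the copy-and-permute construction behind the protograph description above, the sketch below lifts a small base matrix into a full quasi-cyclic parity-check matrix using circulant permutation matrices. The base matrix, shift values, and function name are made up for illustration; real designs choose them to optimize girth and minimum distance, and a protograph may also carry parallel edges, which this simple 0/1 version does not handle.

```python
import numpy as np

def lift_protograph(base, shifts, Z):
    """Expand a 0/1 protograph base matrix into a QC-LDPC parity-check matrix.
    base[i][j]  : 1 if check node i is connected to variable node j
    shifts[i][j]: circulant shift used when base[i][j] == 1
    Z           : lifting (circulant) size
    """
    m, n = len(base), len(base[0])
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i][j]:
                # circulant permutation matrix = identity with cyclically shifted columns
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

# toy example: a 2 x 4 base matrix lifted by Z = 8 (all values illustrative)
base   = [[1, 1, 1, 0],
          [0, 1, 1, 1]]
shifts = [[0, 3, 5, 0],
          [0, 1, 6, 2]]
H = lift_protograph(base, shifts, Z=8)
print(H.shape)   # (16, 32): a nominally rate-1/2 QC-LDPC parity-check matrix
```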
---
paper_title: Fast Decoding Algorithm for LDPC over GF(2^q)
paper_content:
In this paper, we present a modification of Belief Propagation that enables us to decode LDPC codes defined on high-order Galois fields with a complexity that scales as p log(p), p being the field order. With this low-complexity algorithm, we are able to decode GF(2^q) LDPC codes up to a field order value of 256. We show by simulation that ultra-sparse regular LDPC codes in GF(64) and GF(256) exhibit very good performance.
---
paper_title: Box and match techniques applied to soft-decision decoding
paper_content:
In this paper, we improve the ordered statistics decoding algorithm by using matching techniques. This allows us: to reduce the worst case complexity of decoding (the error performance being fixed) or to improve the error performance (for the same complexity); to reduce the ratio between average complexity and worst case complexity; to achieve practically optimal decoding of rate-1/2 codes of lengths up to 128 (rate-1/2 codes are a traditional benchmark; for coding rates different from 1/2, decoding is easier); to achieve near-optimal decoding of a rate-1/2 code of length 192, which could never be performed before.
---
paper_title: Performance Bounds for Erasure, List, and Decision Feedback Schemes With Linear Block Codes
paper_content:
A message independence property and some new performance upper bounds are derived in this work for erasure, list, and decision-feedback schemes with linear block codes transmitted over memoryless symmetric channels. Similar to the classical work of Forney, this work is focused on the derivation of some Gallager-type bounds on the achievable tradeoffs for these coding schemes, where the main novelty is the suitability of the bounds for both random and structured linear block codes (or code ensembles). The bounds are applicable to finite-length codes and to the asymptotic case of infinite block length, and they are applied to low-density parity-check code ensembles.
---
paper_title: Bounded angle iterative decoding of LDPC codes
paper_content:
A modification to the usual iterative decoding algorithm for LDPC codes, called bounded angle iterative (BA-I) decoding, is introduced. The modified decoder erases codewords detected during iterations that fall outside a maximum decoding angle with respect to the received observation. The new algorithm is applicable in scenarios that demand a very low undetected error rate but require short LDPC codes that are too vulnerable to undetected errors when the usual iterative decoding algorithm is used. BA-I decoding provides a means of reducing the maximum undetected error rate for short LDPC codes significantly, by incorporating a simple extra condition into the iterative decoder structure without redesigning the code. The reduction in undetected error rate comes at a price of increasing the threshold signal-to-noise ratio (SNR) required for achieving a good overall error rate, but this increase in channel threshold can be minimized by allowing the decoder's maximum decoding angle to vary with SNR.
---
paper_title: Exponential error bounds for erasure, list, and decision feedback schemes
paper_content:
By an extension of Gallager's bounding methods, exponential error bounds applicable to coding schemes involving erasures, variable-size lists, and decision feedback are obtained. The bounds are everywhere the tightest known.
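The erasure option analysed in this reference is usually associated with Forney's likelihood-ratio threshold test; a compact statement of the rule, in our paraphrase rather than the paper's exact notation, is:

```latex
\text{accept } \hat{m} = m \quad\text{iff}\quad
\frac{P(\mathbf{y}\mid \mathbf{x}_m)}{\sum_{m' \ne m} P(\mathbf{y}\mid \mathbf{x}_{m'})} \;\ge\; e^{nT},
\qquad\text{otherwise declare an erasure},
```

where the threshold T ≥ 0 trades the undetected-error exponent against the erasure exponent.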
---
paper_title: Reliability-output Decoding of Tail-biting Convolutional Codes
paper_content:
We present extensions to Raghavan and Baum's reliability-output Viterbi algorithm (ROVA) to accommodate tail-biting convolutional codes. These tail-biting reliability-output algorithms compute the exact word-error probability of the decoded codeword after first calculating the posterior probability of the decoded tail-biting codeword's starting state. One approach employs a state-estimation algorithm that selects the maximum a posteriori state based on the posterior distribution of the starting states. Another approach is an approximation to the exact tail-biting ROVA that estimates the word-error probability. A comparison of the computational complexity of each approach is discussed in detail. The presented reliability-output algorithms apply to both feedforward and feedback tail-biting convolutional encoders. These tail-biting reliability-output algorithms are suitable for use in reliability-based retransmission schemes with short blocklengths, in which terminated convolutional codes would introduce rate loss.
---
| Title: Code Design for Short Blocks: A Survey
Section 1: Introduction
Description 1: Overview of the research on capacity-approaching error correcting codes with a focus on short and medium-length linear block codes and their applications in modern communication systems.
Section 2: A Case Study
Description 2: Provide an exemplary comparison of short codes, focusing on codes with block length n = 128 and code dimension k = 64, and discuss their performance metrics.
Section 3: The Elephant in the Room: Complexity
Description 3: Discuss the complexity aspects of decoding algorithms used for short block codes and provide qualitative remarks on their algorithmic complexity.
Section 4: Error Detection
Description 4: Outline the error detection capabilities of different decoding algorithms and the importance of low undetected error rates in critical applications.
Section 5: Conclusions
Description 5: Summarize the recent efforts in the design and analysis of efficient error-correcting codes for short block lengths, and highlight trade-offs between coding gain and decoding complexity. |
Verifying linearizability: A comparative survey | 10 | ---
paper_title: Formal verification of an array-based nonblocking queue
paper_content:
We describe an array-based nonblocking implementation of a concurrent bounded queue, due to Shann, Huang and Chen (2000), and explain how we detected errors in the algorithm while attempting a formal verification. We explain how we first corrected the errors, and then modified the algorithm to obtain nonblocking behaviour in the boundary cases. Both the corrected and modified versions of the algorithm were verified using the PVS theorem prover. We describe the verification of the modified algorithm, which subsumes the proof of the corrected version.
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: A lazy concurrent list-based set algorithm
paper_content:
List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the JavaTM Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. ::: ::: We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. ::: ::: Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked).
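Since this algorithm recurs in several of the verification papers collected here, a compact Python rendering of its core steps may help. It is a simplified sketch following the description above (per-node locks, a logical `marked` flag, validation before each update), not the authors' Java implementation; class and method names are ours, numeric keys and sentinel nodes are assumed, and memory reclamation is ignored.

```python
import threading

class _Node:
    def __init__(self, key):
        self.key = key
        self.next = None
        self.marked = False              # logical deletion flag
        self.lock = threading.Lock()

class LazyListSet:
    """Sketch of a lazy list-based set: optimistic traversal, lock + validate."""
    def __init__(self):
        self.head = _Node(float('-inf'))         # sentinel nodes
        self.head.next = _Node(float('inf'))

    def _validate(self, pred, curr):
        return (not pred.marked) and (not curr.marked) and pred.next is curr

    def add(self, key):
        while True:
            pred, curr = self.head, self.head.next
            while curr.key < key:
                pred, curr = curr, curr.next
            with pred.lock, curr.lock:
                if self._validate(pred, curr):
                    if curr.key == key:
                        return False             # already present
                    node = _Node(key)
                    node.next = curr
                    pred.next = node             # insertion takes effect here
                    return True
            # validation failed: another thread interfered, retry

    def remove(self, key):
        while True:
            pred, curr = self.head, self.head.next
            while curr.key < key:
                pred, curr = curr, curr.next
            with pred.lock, curr.lock:
                if self._validate(pred, curr):
                    if curr.key != key:
                        return False
                    curr.marked = True           # logical removal
                    pred.next = curr.next        # physical unlink
                    return True

    def contains(self, key):
        # wait-free membership test: a single traversal, no locks
        curr = self.head
        while curr.key < key:
            curr = curr.next
        return curr.key == key and not curr.marked
```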
---
paper_title: Linearizability: a correctness condition for concurrent objects
paper_content:
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
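The definition above suggests a direct, if exponential, way to test small complete histories for linearizability: enumerate all orderings that respect real-time precedence and replay each against a sequential specification. The sketch below does exactly that; the function name `is_linearizable`, the history encoding, and the tiny queue specification are ours, and the approach is only practical for toy histories with no pending operations.

```python
from itertools import permutations

def is_linearizable(history, spec_init, spec_apply):
    """Brute-force check of a complete history against a sequential spec.
    history: list of operations (start, end, op, arg, result), start/end from a global clock.
    spec_init: initial abstract state; spec_apply(state, op, arg) -> (state', result)."""
    def precedes(a, b):                 # a's response happens before b's invocation
        return a[1] < b[0]

    for order in permutations(history):
        # the chosen total order must not reverse any real-time precedence
        ok = all(not precedes(order[j], order[i])
                 for i in range(len(order)) for j in range(i + 1, len(order)))
        if not ok:
            continue
        # replay the candidate order against the sequential specification
        state, legal = spec_init, True
        for (_, _, op, arg, res) in order:
            state, expected = spec_apply(state, op, arg)
            if expected != res:
                legal = False
                break
        if legal:
            return True
    return False

# a sequential FIFO queue spec for illustration
def queue_apply(state, op, arg):
    if op == 'enq':
        return state + [arg], None
    q = list(state)                     # 'deq'
    return q[1:], (q[0] if q else None)

# two overlapping operations, enq(1) || deq() -> 1, form a linearizable history
h = [(0, 4, 'enq', 1, None), (1, 5, 'deq', None, 1)]
print(is_linearizable(h, [], queue_apply))   # True
```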
---
paper_title: Modelling and verifying non-blocking algorithms that use dynamically allocated memory
paper_content:
This thesis presents techniques for the machine-assisted verification of an important class of concurrent algorithms, called non-blocking algorithms. The notion of linearizability is used as a correctness condition for concurrent implementations of sequential datatypes and the use of forward simulation relations as a proof method for showing linearizability is described. A detailed case study is presented: the attempted verification of a concurrent double-ended queue implementation, the Snark algorithm, using the theorem proving system PVS. This case study allows the exploration of the difficult problem of verifying an algorithm that uses low-level pointer operations over a dynamic data-structure in the presence of concurrent access by multiple processes. During the verification attempt, a previously undetected bug was found in the Snark algorithm. Two possible corrections to this algorithm are presented and their merits discussed. The verification of one of these corrections would require the use of a backward simulation relation. The thesis concludes by describing the reason for this extension to the verification methodology and the use of a hierarchical proof structure to simplify verifications that require backward simulations.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Linearizability: a correctness condition for concurrent objects
paper_content:
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: How to prove algorithms linearisable
paper_content:
Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Proving Linearizability of Multiset with Local Proof Obligations
paper_content:
Linearizability is a key correctness criterion for concurrent software. In our previous work, we introduced local proof obligations, which, by showing a refinement between an abstract specification and its implementation, imply linearizability of the implementation. The refinement is shown via a thread local backward simulation, which reduces the complexity of a backward simulation to an execution of two symbolic threads. In this paper, we present a correctness proof by applying those proof obligations to a lock-based implementation of a multiset. It is interesting for two reasons: First, one of its operations inserts two elements non-atomically. To show that it linearizes, we have to find one point, where the multiset is changed instantaneously, which is a counter-intuitive task. Second, another operation has non-fixed linearization points, i.e. the linearization points cannot be statically fixed, because the operation’s linearization may depend on other processes’ execution. This is a typical case to use backward simulation, where we could apply our thread local variant of it. All proofs were mechanized in the theorem prover KIV.
---
paper_title: Automatically proving linearizability
paper_content:
This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations. The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms.
---
paper_title: Formal verification of an array-based nonblocking queue
paper_content:
We describe an array-based nonblocking implementation of a concurrent bounded queue, due to Shann, Huang and Chen (2000), and explain how we detected errors in the algorithm while attempting a formal verification. We explain how we first corrected the errors, and then modified the algorithm to obtain nonblocking behaviour in the boundary cases. Both the corrected and modified versions of the algorithm were verified using the PVS theorem prover. We describe the verification of the modified algorithm, which subsumes the proof of the corrected version.
---
paper_title: Shared Memory Consistency Models: A Tutorial
paper_content:
The memory consistency model of a system affects performance, programmability, and portability. We aim to describe memory consistency models in a way that most computer professionals would understand. This is important if the performance-enhancing features being incorporated by system designers are to be correctly and widely used by programmers. Our focus is consistency models proposed for hardware-based shared memory systems. Most of these models emphasize the system optimizations they support, and we retain this system-centric emphasis. We also describe an alternative, programmer-centric view of relaxed consistency models that describes them in terms of program behavior, not system optimizations.
---
paper_title: Aspect-Oriented linearizability proofs
paper_content:
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. ::: ::: In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
---
paper_title: The existence of refinement mappings
paper_content:
Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S1 to a higher-level one S2 is a mapping from S1's state space to S2's state space. It maps steps of S1's state machine to steps of S2's state machine and maps behaviors allowed by S1 to behaviors allowed by S2. We show that, under reasonable assumptions about the specification, if S1 implements S2, then by adding auxiliary variables to S1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method.
---
paper_title: How to prove algorithms linearisable
paper_content:
Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
---
paper_title: Concurrency of operations on B-trees
paper_content:
Concurrent operations on B-trees pose the problem of insuring that each operation can be carried out without interfering with other operations being performed simultaneously by other users. This problem can become critical if these structures are being used to support access paths, like indexes, to data base systems. In this case, serializing access to one of these indexes can create an unacceptable bottleneck for the entire system. Thus, there is a need for locking protocols that can assure integrity for each access while at the same time providing a maximum possible degree of concurrency. Another feature required from these protocols is that they be deadlock free, since the cost to resolve a deadlock may be high. ::: ::: Recently, there has been some questioning on whether B-tree structures can support concurrent operations. In this paper, we examine the problem of concurrent access to B-trees. We present a deadlock free solution which can be tuned to specific requirements. An analysis is presented which allows the selection of parameters so as to satisfy these requirements. ::: ::: The solution presented here uses simple locking protocols. Thus, we conclude that B-trees can be used advantageously in a multi-user environment.
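The hand-over-hand ("lock coupling") discipline underlying these protocols also appears, in list form, in several of the verified set implementations discussed in this survey. Below is a minimal Python sketch of an insert that holds at most two node locks at a time; sentinel nodes and numeric keys are assumed, and the class and field names are ours rather than from this reference.

```python
import threading

class _Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt
        self.lock = threading.Lock()

class CoupledListSet:
    """Sorted linked-list set protected by lock coupling: a thread always
    acquires the successor's lock before releasing the predecessor's."""
    def __init__(self):
        self.head = _Node(float('-inf'), _Node(float('inf')))   # sentinels

    def add(self, key):
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        try:
            while curr.key < key:
                nxt = curr.next
                nxt.lock.acquire()        # couple: take the next lock first...
                pred.lock.release()       # ...then drop the oldest one
                pred, curr = curr, nxt
            if curr.key == key:
                return False              # already in the set
            pred.next = _Node(key, curr)  # safe: both neighbours are locked
            return True
        finally:
            pred.lock.release()
            curr.lock.release()
```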
---
paper_title: Data Refinement: Model-Oriented Proof Methods and their Comparison
paper_content:
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
---
paper_title: Modeling in Event-B: System and Software Engineering
paper_content:
A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B. Based on the idea of refinement, the author's systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.
---
paper_title: Verifying Concurrent Data Structures by Simulation
paper_content:
We describe an approach to verifying concurrent data structures based on simulation between two Input/Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.
---
paper_title: High performance dynamic lock-free hash tables and list-based sets
paper_content:
Lock-free (non-blocking) shared data structures promise more robust performance and reliability than conventional lock-based implementations. However, all prior lock-free algorithms for sets and hash tables suffer from serious drawbacks that prevent or limit their use in practice. These drawbacks include size inflexibility, dependence on atomic primitives not supported on any current processor architecture, and dependence on highly-inefficient or blocking memory management techniques.Building on the results of prior researchers, this paper presents the first CAS-based lock-free list-based set algorithm that is compatible with all lock-free memory management methods. We use it as a building block of an algorithm for lock-free hash tables. In addition to being lock-free, the new algorithm is dynamic, linearizable, and space-efficient.Our experimental results show that the new algorithm outperforms the best known lock-free as well as lock-based hash table implementations by significant margins, and indicate that it is the algorithm of choice for implementing shared hash tables.
---
paper_title: Proving linearizability via non-atomic refinement
paper_content:
Linearizability is a correctness criterion for concurrent objects. In this paper, we prove linearizability of a concurrent lock-free stack implementation by showing the implementation to be a nonatomic refinement of an abstract stack. To this end, we develop a generalisation of non-atomic refinement allowing one to refine a single (Z) operation into a CSP process. Besides this extension, the definition furthermore embodies a termination condition which permits one to prove starvation freedom for the concurrent processes.
---
paper_title: Comparison under abstraction for verifying linearizability
paper_content:
Linearizability is one of the main correctness criteria for implementations of concurrent data structures. A data structure is linearizable if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unbounded-size) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists.
---
paper_title: Proving linearizability with temporal logic
paper_content:
Linearizability is a global correctness criterion for concurrent systems. One technique to prove linearizability is applying a composition theorem which reduces the proof of a property of the overall system to sufficient rely-guarantee conditions for single processes. In this paper, we describe how the temporal logic framework implemented in the KIV interactive theorem prover can be used to model concurrent systems and to prove such a composition theorem. Finally, we show how this generic theorem can be instantiated to prove linearizability of two classic lock-free implementations: a Treiber-like stack and a slightly improved version of Michael and Scott’s queue.
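For orientation, the following is a minimal Java sketch of a Treiber-style lock-free stack of the kind referred to above; class and method names are illustrative, and memory reclamation is left to the garbage collector (which also sidesteps the ABA problem in this setting). The successful compareAndSet is the natural linearization point of push and of a non-empty pop; a pop that returns null linearizes at the read of the top pointer.
```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal Treiber-style lock-free stack: push and pop retry a CAS on the top pointer.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> oldTop = top.get();
            node.next = oldTop;                    // link the new node above the current top
            if (top.compareAndSet(oldTop, node)) {
                return;                            // CAS succeeded: the push takes effect here
            }
            // CAS failed: another thread changed top; retry
        }
    }

    public T pop() {
        while (true) {
            Node<T> oldTop = top.get();
            if (oldTop == null) {
                return null;                       // stack observed empty
            }
            if (top.compareAndSet(oldTop, oldTop.next)) {
                return oldTop.value;               // CAS succeeded: the pop takes effect here
            }
            // CAS failed: retry
        }
    }
}
```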
---
paper_title: Formal verification of an array-based nonblocking queue
paper_content:
We describe an array-based nonblocking implementation of a concurrent bounded queue, due to Shann, Huang and Chen (2000), and explain how we detected errors in the algorithm while attempting a formal verification. We explain how we first corrected the errors, and then modified the algorithm to obtain nonblocking behaviour in the boundary cases. Both the corrected and modified versions of the algorithm were verified using the PVS theorem prover. We describe the verification of the modified algorithm, which subsumes the proof of the corrected version.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Verifying Concurrent Data Structures by Simulation
paper_content:
We describe an approach to verifying concurrent data structures based on simulation between two Input/Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.
---
paper_title: How to prove algorithms linearisable
paper_content:
Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Apart from their original manual proof, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded in complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: Concurrency of operations on B-trees
paper_content:
Concurrent operations on B-trees pose the problem of ensuring that each operation can be carried out without interfering with other operations being performed simultaneously by other users. This problem can become critical if these structures are being used to support access paths, like indexes, to data base systems. In this case, serializing access to one of these indexes can create an unacceptable bottleneck for the entire system. Thus, there is a need for locking protocols that can assure integrity for each access while at the same time providing a maximum possible degree of concurrency. Another feature required from these protocols is that they be deadlock free, since the cost to resolve a deadlock may be high. Recently, there has been some questioning on whether B-tree structures can support concurrent operations. In this paper, we examine the problem of concurrent access to B-trees. We present a deadlock free solution which can be tuned to specific requirements. An analysis is presented which allows the selection of parameters so as to satisfy these requirements. The solution presented here uses simple locking protocols. Thus, we conclude that B-trees can be used advantageously in a multi-user environment.
---
paper_title: Modelling and verifying non-blocking algorithms that use dynamically allocated memory
paper_content:
This thesis presents techniques for the machine-assisted verification of an important class of concurrent algorithms, called non-blocking algorithms. The notion of linearizability is used as a correctness condition for concurrent implementations of sequential datatypes and the use of forward simulation relations as a proof method for showing linearizability is described. A detailed case study is presented: the attempted verification of a concurrent double-ended queue implementation, the Snark algorithm, using the theorem proving system PVS. This case study allows the exploration of the difficult problem of verifying an algorithm that uses low-level pointer operations over a dynamic data-structure in the presence of concurrent access by multiple processes. During the verification attempt, a previously undetected bug was found in the Snark algorithm. Two possible corrections to this algorithm are presented and their merits discussed. The verification of one of these corrections would require the use of a backward simulation relation. The thesis concludes by describing the reason for this extension to the verification methodology and the use of a hierarchical proof structure to simplify verifications that require backward simulations.
---
paper_title: Data Refinement: Model-Oriented Proof Methods and their Comparison
paper_content:
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
---
paper_title: Interactive Verification of Concurrent Systems using Symbolic Execution
paper_content:
This paper presents an interactive proof method for the verification of temporal properties of concurrent systems based on symbolic execution. Symbolic execution is a well known and very intuitive strategy for the verification of sequential programs. We have carried over this approach to the interactive verification of arbitrary linear temporal logic properties of (infinite state) parallel programs. The resulting proof method is very intuitive to apply and can be automated to a large extent. It smoothly combines first-order reasoning with reasoning in temporal logic. The proof method has been implemented in the interactive verification environment KIV and has been used in several case studies.
---
paper_title: Modular verification of linearizability with non-fixed linearization points
paper_content:
Locating linearization points (LPs) is an intuitive approach for proving linearizability, but it is difficult to apply the idea in Hoare-style logic for formal program verification, especially for verifying algorithms whose LPs cannot be statically located in the code. In this paper, we propose a program logic with a lightweight instrumentation mechanism which can verify algorithms with non-fixed LPs, including the most challenging ones that use the helping mechanism to achieve lock-freedom (as in HSY elimination-based stack), or have LPs depending on unpredictable future executions (as in the lazy set algorithm), or involve both features. We also develop a thread-local simulation as the meta-theory of our logic, and show it implies contextual refinement, which is equivalent to linearizability. Using our logic we have successfully verified various classic algorithms, some of which are used in the java.util.concurrent package.
---
paper_title: Comparison under abstraction for verifying linearizability
paper_content:
Linearizability is one of the main correctness criteria for implementations of concurrent data structures. A data structure is linearizable if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unbounded-size) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists.
---
paper_title: Automatically proving linearizability
paper_content:
This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations. The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms.
---
paper_title: A flexible approach to interprocedural data flow analysis and programs with recursive data structures
paper_content:
A new approach to data flow analysis of procedural programs and programs with recursive data structures is described. The method depends on simulation of the interpreter for the subject programming language using a retrieval function to approximate a program's data structures.
---
paper_title: Tentative steps toward a development method for interfering programs
paper_content:
Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed.
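In one common presentation of this idea (details differ between formulations), a rely/guarantee specification is a quadruple of precondition P, rely R, guarantee G and postcondition Q, with R and G relations over pairs of states, and the parallel composition rule reads roughly as follows: each component tolerates interference from both the overall environment and its sibling's guarantee, while the composition guarantees no more than the union of the two guarantees.
\[
\frac{\{P,\; R \vee G_2\}\; C_1\; \{G_1,\; Q_1\} \qquad \{P,\; R \vee G_1\}\; C_2\; \{G_2,\; Q_2\}}{\{P,\; R\}\; C_1 \parallel C_2\; \{G_1 \vee G_2,\; Q_1 \wedge Q_2\}}
\]
This is only a sketch of the shape of the rule; the precise side conditions depend on the particular rely/guarantee calculus.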
---
paper_title: Shape-value abstraction for verifying linearizability
paper_content:
This paper presents a novel abstraction for heap-allocated data structures that keeps track of both their shape and their contents. By combining this abstraction with thread-local analysis and rely-guarantee reasoning, we can verify a collection of fine-grained blocking and non-blocking concurrent algorithms for an arbitrary (unbounded) number of threads. We prove that these algorithms are linearizable, namely equivalent (modulo termination) to their sequential counterparts.
---
paper_title: The existence of refinement mappings
paper_content:
Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S1 to a higher-level one S2 is a mapping from S1's state space to S2's state space. It maps steps of S1's state machine to steps of S2's state machine and maps behaviors allowed by S1 to behaviors allowed by S2. We show that, under reasonable assumptions about the specification, if S1 implements S2, then by adding auxiliary variables to S1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method.
---
paper_title: A marriage of rely/guarantee and separation logic
paper_content:
In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely/guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes.
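By way of illustration, the following is a minimal Java sketch of the hand-over-hand (lock-coupling) traversal used by such a list: at most two node locks are held at a time, and a node is locked before its predecessor is released. The class is purely illustrative (it only shows insertion into a sorted integer set, with keys assumed to lie strictly between the sentinels) and is not the paper's algorithm, which in particular also disposes of removed nodes.
```java
import java.util.concurrent.locks.ReentrantLock;

// Sorted integer set with hand-over-hand (lock-coupling) locking: a node is
// locked before its predecessor is unlocked, so operations cannot overtake each other.
public class LockCouplingList {
    private static final class Node {
        final int key;
        Node next;
        final ReentrantLock lock = new ReentrantLock();
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // Sentinel head and tail nodes avoid special cases for the empty list.
    private final Node head;

    public LockCouplingList() {
        Node tail = new Node(Integer.MAX_VALUE, null);
        head = new Node(Integer.MIN_VALUE, tail);
    }

    public boolean add(int key) {
        Node pred = head;
        pred.lock.lock();
        try {
            Node curr = pred.next;
            curr.lock.lock();
            try {
                // Hand over hand: lock the next node before releasing the previous one.
                while (curr.key < key) {
                    pred.lock.unlock();
                    pred = curr;
                    curr = curr.next;
                    curr.lock.lock();
                }
                if (curr.key == key) {
                    return false;                   // key already present
                }
                pred.next = new Node(key, curr);    // splice the new node in
                return true;
            } finally {
                curr.lock.unlock();
            }
        } finally {
            pred.lock.unlock();
        }
    }
}
```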
---
paper_title: Proving correctness of highly-concurrent linearisable objects
paper_content:
We study a family of implementations for linked lists using fine-grain synchronisation. This approach enables greater concurrency, but correctness is a greater challenge than for classical, coarse-grain synchronisation. Our examples are demonstrative of common design patterns such as lock coupling, optimistic, and lazy synchronisation. Although they are highly concurrent, we prove that they are linearisable, safe, and they correctly implement a high-level abstraction. Our proofs illustrate the power and applicability of rely-guarantee reasoning, as well as some of its limitations. The examples of the paper establish a benchmark challenge for other reasoning techniques.
---
paper_title: Local reasoning about programs that alter data structures
paper_content:
We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the "small axioms", each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses. ::: ::: This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures.
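The local-reasoning principle described above is usually summarised by the frame rule of separation logic, which in its standard form reads:
\[
\frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}}
\quad \text{provided } C \text{ modifies no variable free in } R.
\]
A specification mentioning only the cells a command accesses can thus be extended to any larger heap by separating conjunction with an untouched frame R.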
---
paper_title: Proving linearizability with temporal logic
paper_content:
Linearizability is a global correctness criterion for concurrent systems. One technique to prove linearizability is applying a composition theorem which reduces the proof of a property of the overall system to sufficient rely-guarantee conditions for single processes. In this paper, we describe how the temporal logic framework implemented in the KIV interactive theorem prover can be used to model concurrent systems and to prove such a composition theorem. Finally, we show how this generic theorem can be instantiated to prove linearizability of two classic lock-free implementations: a Treiber-like stack and a slightly improved version of Michael and Scott’s queue.
---
paper_title: Tentative steps toward a development method for interfering programs
paper_content:
Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed.
---
paper_title: The KIV System: A Tool for Formal Program Development
paper_content:
In order to keep the tasks of specification, programming and verification in manageable orders of magnitude, a system for formal development should support the structuring of the development process. This process starts with a horizontally structured (top-level) specification. While it is generally agreed that a formal specification has a significant value in itself, it is by no means a guarantee that the development process will end up with an implemented software system, let alone a correct one. A system for formal development must therefore also support the implementation process using a hierarchy of increasingly concrete intermediate specifications. Refinement steps may contain pieces of code of some suitable programming language. The notion of correctness (of refinement steps) must be complemented by a program logic powerful enough to express the necessary proof obligations and by theorem proving support to actually prove these assertions. In many aspects the techniques of "classical" theorem proving are not suitable for the deduction tasks that accompany the development process. The approach that has proven successful in this area is Tactical Theorem Proving, where a proof calculus is embedded into a (usually functional) meta-language. Proof search is then implemented by programs in this meta-language. Usually, a sequent calculus or Natural Deduction is used in such systems. The availability of an entire programming language, rather than a mere set of axioms and rules, facilitates the sound extension of the basic logic, and in fact the construction of a complete derived calculus.
---
paper_title: Interleaved Programs and Rely-Guarantee Reasoning with ITL
paper_content:
This paper presents a logic that extends basic ITL with explicit, interleaved programs. The calculus is based on symbolic execution, as previously described. We extend this former work here, by integrating the logic with higher-order logic, adding recursive procedures and rules to reason about fairness. Further, we show how rules for rely-guarantee reasoning can be derived and outline the application of some features to verify concurrent programs in practice. The logic is implemented in the interactive verification environment KIV.
---
paper_title: Interactive Verification of Concurrent Systems using Symbolic Execution
paper_content:
This paper presents an interactive proof method for the verification of temporal properties of concurrent systems based on symbolic execution. Symbolic execution is a well known and very intuitive strategy for the verification of sequential programs. We have carried over this approach to the interactive verification of arbitrary linear temporal logic properties of (infinite state) parallel programs. The resulting proof method is very intuitive to apply and can be automated to a large extent. It smoothly combines first-order reasoning with reasoning in temporal logic. The proof method has been implemented in the interactive verification environment KIV and has been used in several case studies.
---
paper_title: Concurrency Verification: Introduction to Compositional and Non-compositional Methods
paper_content:
This is a systematic and comprehensive introduction both to compositional proof methods for the state-based verification of concurrent programs, such as the assumption-commitment and rely-guarantee paradigms, and to noncompositional methods, whose presentation culminates in an exposition of the communication-closed-layers (CCL) paradigm for verifying network protocols. Compositional concurrency verification methods reduce the verification of a concurrent program to the independent verification of its parts. If those parts are tightly coupled, one additionally needs verification methods based on the causal order between events. These are presented using CCL. The semantic approach followed here allows a systematic presentation of all these concepts in a unified framework which highlights essential concepts. The book is self-contained, guiding the reader from advanced undergraduate level to the state-of-the-art. Every method is illustrated by examples, and a picture gallery of some of the subject's key figures complements the text.
---
paper_title: A complete axiomatization of interval temporal logic with infinite time
paper_content:
Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω-regular expressions. We give a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. The full paper (and another conference paper) presents the basic framework for finite time. Here and in the full paper the axiom system (and completeness) is extended to infinite time.
---
paper_title: QED: a proof system based on reduction and abstraction for the static verification of concurrent software
paper_content:
We present a proof system and supporting tool, QED, for the static verification of concurrent software. Our key idea is to simplify the verification of a program by rewriting it with larger atomic actions. We demonstrated the simplicity and effectiveness of our approach on benchmarks with intricate synchronization.
---
paper_title: Automatic Linearizability Proofs of Concurrent Objects with Cooperating Updates
paper_content:
An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t. an abstract implementation (called specification) iff for each operation, one can associate a point in time, called linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations' actions. This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add.
---
paper_title: Proving linearizability with temporal logic
paper_content:
Linearizability is a global correctness criterion for concurrent systems. One technique to prove linearizability is applying a composition theorem which reduces the proof of a property of the overall system to sufficient rely-guarantee conditions for single processes. In this paper, we describe how the temporal logic framework implemented in the KIV interactive theorem prover can be used to model concurrent systems and to prove such a composition theorem. Finally, we show how this generic theorem can be instantiated to prove linearizability of two classic lock-free implementations: a Treiber-like stack and a slightly improved version of Michael and Scott’s queue.
---
paper_title: Simplifying linearizability proofs with reduction and abstraction
paper_content:
The typical proof of linearizability establishes an abstraction map from the concurrent program to a sequential specification, and identifies the commit points of operations. If the concurrent program uses fine-grained concurrency and complex synchronization, constructing such a proof is difficult. We propose a sound proof system that significantly simplifies the reasoning about linearizability. Linearizability is proved by transforming an implementation into its specification within this proof system. The proof system combines reduction and abstraction, which increase the granularity of atomic actions, with variable introduction and hiding, which syntactically relate the representation of the implementation to that of the specification. We construct the abstraction map incrementally, and eliminate the need to reason about the location of commit points in the implementation. We have implemented our method in the QED verifier and demonstrated its effectiveness and practicality on several highly-concurrent examples from the literature.
---
paper_title: Reasoning about Nonblocking Concurrency using Reduction
paper_content:
Reduction methods developed by Lipton, Lamport, Cohen, and others, allow one to reason about concurrent programs at various levels of atomicity. An action which is considered to be atomic at one level may be implemented by more complex code at the next level. We can show that certain properties of the program are preserved by first showing that the property holds when the expanded code is executed sequentially, and then showing that any execution in which this code is executed concurrently with other processes is equivalent to an execution in which the expanded code is executed without interruption. Existing reduction methods are aimed at traditional approaches to concurrency which prevent interference between concurrent processes using mechanisms such as locks or semaphores. In this paper, we show that these reduction methods can be adapted to reason about nonblocking algorithms, which are designed to operate correctly in the presence of interference, rather than to avoid interference. These algorithms typically use strong synchronisation primitives, such as Load Linked/Store Conditional or Compare and Swap, to detect that interference has occurred and in that case retry their operations. We show that reduction can be used with such algorithms, and illustrate this approach with examples based on shared counters and stacks.
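As a concrete (and deliberately tiny) Java example of the retry-on-interference pattern such counter examples are built from, consider the following lock-free counter; the successful compareAndSet is the step at which the increment takes effect. The class is illustrative only.
```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free shared counter: increment retries a CAS until it succeeds.
public class CasCounter {
    private final AtomicLong value = new AtomicLong();

    public long incrementAndGet() {
        while (true) {
            long current = value.get();              // read the current value
            long next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;                         // successful CAS: the increment takes effect here
            }
            // another thread won the race; re-read and retry
        }
    }

    public long get() {
        return value.get();
    }
}
```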
---
paper_title: Aspect-Oriented linearizability proofs
paper_content:
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
---
paper_title: Static analysis of atomicity for programs with non-blocking synchronization
paper_content:
In concurrent programming, non-blocking synchronization is very efficient but difficult to design correctly. This paper presents a static analysis to show that code blocks are atomic, i.e., that every execution of the program is equivalent to one in which those code blocks execute without interruption by other threads. Our analysis determines commutativity of operations based primarily on how synchronization primitives (including locks, load-linked, store-conditional, and compare-and-swap) are used. A reduction theorem states that certain patterns of commutativity imply atomicity. Atomicity is itself an important correctness requirement for many concurrent programs. Furthermore, an atomic code block can be treated as a single transition during subsequent analysis of the program; this can greatly improve the efficiency of the subsequent analysis. We demonstrate the effectiveness of our approach on several concurrent non-blocking programs.
---
paper_title: Reduction: a method of proving properties of parallel programs
paper_content:
When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified.
---
paper_title: R-linearizability: an extension of linearizability to replicated objects
paper_content:
The authors extend linearizability, a consistency criterion for concurrent systems, to the replicated context, where availability and performance are enhanced by using redundant objects. The mode of operation on sets of replicas and the consistency criterion of R-linearizability are defined. An implementation of R-linearizable replicated atoms (on which only read and write operations are defined) is described. It is realized in the virtually synchronous model, based on a group view mechanism. This framework provides reliable multicast primitives, enabling a fault-tolerant implementation. >
---
paper_title: Interleaved Programs and Rely-Guarantee Reasoning with ITL
paper_content:
This paper presents a logic that extends basic ITL with explicit, interleaved programs. The calculus is based on symbolic execution, as previously described. We extend this former work here, by integrating the logic with higher-order logic, adding recursive procedures and rules to reason about fairness. Further, we show how rules for rely-guarantee reasoning can be derived and outline the application of some features to verify concurrent programs in practice. The logic is implemented in the interactive verification environment KIV.
---
paper_title: Interactive Verification of Concurrent Systems using Symbolic Execution
paper_content:
This paper presents an interactive proof method for the verification of temporal properties of concurrent systems based on symbolic execution. Symbolic execution is a well known and very intuitive strategy for the verification of sequential programs. We have carried over this approach to the interactive verification of arbitrary linear temporal logic properties of (infinite state) parallel programs. The resulting proof method is very intuitive to apply and can be automated to a large extent. It smoothly combines first-order reasoning with reasoning in temporal logic. The proof method has been implemented in the interactive verification environment KIV and has been used in several case studies.
---
paper_title: Verifying Michael and Scott's Lock-Free Queue Algorithm using Trace Reduction
paper_content:
Lock-free algorithms have been developed to avoid various problems associated with using locks to control access to shared data structures. These algorithms are typically more intricate than lock-based algorithms, as they allow more complex interactions between processes, and many published algorithms have turned out to contain errors. There is thus a pressing need for practical techniques for verifying lock-free algorithms and programs that use them. In this paper we show how Michael and Scott's well known lock-free queue algorithm can be verified using a trace reduction method, based on Lipton's reduction method. Michael and Scott's queue is an interesting case study because, although the basic idea is easy to understand, the actual algorithm is quite subtle, and it demonstrates several ways in which the basic reduction method needs to be extended.
---
paper_title: A complete axiomatization of interval temporal logic with infinite time
paper_content:
Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω-regular expressions. We give a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. The full paper (and another conference paper) presents the basic framework for finite time. Here and in the full paper the axiom system (and completeness) is extended to infinite time.
---
paper_title: Compositional reasoning using Interval Temporal Logic and Tempura
paper_content:
We present a compositional methodology for specification and proof using Interval Temporal Logic (ITL). After giving an introduction to ITL, we show how fixpoints of various ITL operators provide a flexible way to modularly reason about safety and liveness. In addition, some new techniques are described for compositionally transforming and refining ITL specifications. We also consider the use of ITL's programming language subset Tempura as a tool for testing the kinds of specifications dealt with here.
---
paper_title: Verifying linearizability with hindsight
paper_content:
We present a proof of safety and linearizability of a highly-concurrent optimistic set algorithm. The key step in our proof is the Hindsight Lemma, which allows a thread to infer the existence of a global state in which its operation can be linearized based on limited local atomic observations about the shared state. The Hindsight Lemma allows us to avoid one of the most complex and non-intuitive steps in reasoning about highly concurrent algorithms: considering the linearization point of an operation to be in a different thread than the one executing it. The Hindsight Lemma assumes that the algorithm maintains certain simple invariants which are resilient to interference, and which can themselves be verified using purely thread-local proofs. As a consequence, the lemma allows us to unlock a perhaps-surprising intuition: a high degree of interference makes non-trivial highly-concurrent algorithms in some cases much easier to verify than less concurrent ones.
---
paper_title: A general lock-free algorithm using compare-and-swap
paper_content:
The compare-and-swap register (CAS) is a synchronization primitive for lock-free algorithms. Most uses of it, however, suffer from the so-called ABA problem. The simplest and most efficient solution to the ABA problem is to include a tag with the memory location such that the tag is incremented with each update of the target location. This solution, however, is theoretically unsound and has limited applicability. This paper presents a general lock-free pattern that is based on the synchronization primitive CAS without causing the ABA problem or problems with wrap-around. It can be used to provide lock-free functionality for any data type. Our algorithm is a CAS variation of Herlihy's LL/SC methodology for lock-free transformation. The basis of our techniques is to poll different locations on reading and writing objects in such a way that the consistency of an object can be checked by its location instead of its tag. It consists of simple code that can be easily implemented using C-like languages. A true problem of lock-free algorithms is that they are hard to design correctly, which even holds for apparently straightforward algorithms. We therefore develop a reduction theorem that enables us to reason about the general lock-free algorithm to be designed on a higher level than the synchronization primitives. The reduction theorem is based on Lamport's refinement mappings, and has been verified with the higher-order interactive theorem prover PVS. Using the reduction theorem, fewer invariants are required and some invariants are easier to discover and formulate without considering the internal structure of the final implementation.
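For illustration, Java's AtomicStampedReference packages the tag-with-the-location idea discussed (and criticised) above: a stamp is carried alongside the value and both are compared by the CAS, so a value that is removed and later re-inserted is still detected as a change. The following sketch shows only this tagging workaround, not the paper's own location-polling construction; the class and its names are hypothetical.
```java
import java.util.concurrent.atomic.AtomicStampedReference;
import java.util.function.UnaryOperator;

// Tagged CAS cell: the stamp (tag) is incremented on every update.
public class TaggedCell<T> {
    private final AtomicStampedReference<T> cell;

    public TaggedCell(T initial) {
        cell = new AtomicStampedReference<>(initial, 0);
    }

    // Replace the current value with f(current); retry if a concurrent update intervenes.
    public T update(UnaryOperator<T> f) {
        while (true) {
            int[] stampHolder = new int[1];
            T current = cell.get(stampHolder);       // read value and tag together
            int stamp = stampHolder[0];
            T next = f.apply(current);
            if (cell.compareAndSet(current, next, stamp, stamp + 1)) {
                return next;                         // CAS on (value, tag) succeeded
            }
            // tag or value changed concurrently; retry
        }
    }
}
```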
---
paper_title: Experience with model checking linearizability
paper_content:
Non-blocking concurrent algorithms offer significant performance advantages, but are very difficult to construct and verify. In this paper, we describe our experience in using SPIN to check linearizability of non-blocking concurrent data-structure algorithms that manipulate dynamically allocated memory. In particular, this is the first work that describes a method for checking linearizability with non-fixed linearization points.
---
paper_title: A separation logic for refining concurrent objects
paper_content:
Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. We present the first semantics of separation logic that is sensitive to atomicity, and show how to control this sensitivity through ownership. The result is a logic that enables compositional reasoning about atomicity and interference, even for programs that use fine-grained synchronization and dynamic memory allocation.
---
paper_title: Using refinement calculus techniques to prove linearizability
paper_content:
Stepwise refinement is a method for systematically transforming a high-level program into an efficiently executable one. A sequence of successively refined programs can also serve as a correctness proof, which makes different mechanisms in the program explicit. We present rules for refinement of multi-threaded shared-variable concurrent programs. We apply our rules to the problem of verifying linearizability of concurrent objects, that are accessed by an unbounded number of concurrent threads. Linearizability is an established correctness criterion for concurrent objects, which states that the effect of each method execution can be considered to occur atomically at some point in time between its invocation and response. We show how linearizability can be expressed in terms of our refinement relation, and present rules for establishing this refinement relation between programs by a sequence of local transformations of method bodies. Contributions include strengthenings of previous techniques for atomicity refinement, as well as an absorption rule, which is particularly suitable for reasoning about concurrent algorithms that implement atomic operations. We illustrate the application of the refinement rules by proving linearizability of Treiber’s concurrent stack algorithm and Michael and Scott’s concurrent queue algorithm.
---
paper_title: Verification of a Lock-Free Implementation of Multiword LL/SC Object
paper_content:
On shared memory multiprocessors, synchronization often turns out to be a performance bottleneck and the source of poor fault-tolerance. The significant benefit of lock- (or wait-)freedom for real-time systems is that, by avoiding locks, the potential for deadlock and priority inversion is removed. Lock-free algorithms often require the use of special atomic processor primitives such as CAS (Compare And Swap) or LL/SC (Load Linked/Store Conditional). However, many machine architectures support either CAS or LL/SC, but not both. In this paper, we present a lock-free implementation of the ideal semantics of LL/SC using only pointer-size CAS, and show how to use refinement mapping to prove the correctness of the algorithm.
---
paper_title: A scalable lock-free stack algorithm
paper_content:
The literature describes two high performance concurrent stack algorithms based on combining funnels and elimination trees. Unfortunately, the funnels are linearizable but blocking, and the elimination trees are non-blocking but not linearizable. Neither is used in practice since they perform well only at exceptionally high loads. The literature also describes a simple lock-free linearizable stack algorithm that works at low loads but does not scale as the load increases. The question of designing a stack algorithm that is non-blocking, linearizable, and scales well throughout the concurrency range, has thus remained open.This paper presents such a concurrent stack algorithm. It is based on the following simple observation: that a single elimination array used as a backoff scheme for a simple lock-free stack is lock-free, linearizable, and scalable. As our empirical results show, the resulting elimination-backoff stack performs as well as the simple stack at low loads, and increasingly outperforms all other methods (lock-based and non-blocking) as concurrency increases. We believe its simplicity and scalability make it a viable practical alternative to existing constructions for implementing concurrent stacks.
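A rough Java sketch of the elimination-backoff idea follows, using a single SynchronousQueue rendezvous slot in place of the paper's elimination array (an offer only succeeds against a waiting poll, so only push/pop pairs eliminate each other; an eliminated pair can be linearized as a push immediately followed by the matching pop). The sketch is illustrative only: it omits the array and the adaptive backoff, it is not strictly lock-free because the rendezvous slot may block internally, and all names are hypothetical.
```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Elimination backoff (single-slot sketch): when a CAS on the central stack fails,
// a concurrent push/pop pair may hand a value over directly and cancel out.
public class EliminationBackoffStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();
    // Stand-in for the elimination array: pairs exactly one push with one pop.
    private final SynchronousQueue<T> slot = new SynchronousQueue<>();

    public void push(T value) throws InterruptedException {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> oldTop = top.get();
            node.next = oldTop;
            if (top.compareAndSet(oldTop, node)) {
                return;                                   // pushed on the central stack
            }
            // Contention: try to hand the value directly to a concurrent pop.
            if (slot.offer(value, 1, TimeUnit.MILLISECONDS)) {
                return;                                   // eliminated against a pop
            }
        }
    }

    public T pop() throws InterruptedException {
        while (true) {
            Node<T> oldTop = top.get();
            if (oldTop == null) {
                return null;                              // stack observed empty
            }
            if (top.compareAndSet(oldTop, oldTop.next)) {
                return oldTop.value;                      // popped from the central stack
            }
            // Contention: try to take a value directly from a concurrent push.
            T eliminated = slot.poll(1, TimeUnit.MILLISECONDS);
            if (eliminated != null) {
                return eliminated;                        // eliminated against a push
            }
        }
    }
}
```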
---
paper_title: On the Refinement Calculus
paper_content:
A collection of papers on the refinement calculus, comprising: The Specification Statement; Specification Statements and Refinement; Procedures, Parameters, and Abstraction: Separate Concerns; Data Refinement by Miracles; Auxiliary Variables in Data Refinement; Data Refinement of Predicate Transformers; Data Refinement by Calculation; A Single Complete Rule for Data Refinement; and Types and Invariants in the Refinement Calculus. Topics covered include specification statements and the implementation ordering, miracles and guarded commands, procedural abstraction and parameters, data refinement calculators and laws, soundness and completeness of a single rule for data refinement, and local invariants, type-checking and recursion.
---
paper_title: Trace-based derivation of a scalable lock-free stack algorithm
paper_content:
We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is based on the scalable stack algorithm of Hendler et al. (Proceedings of the sixteenth annual ACM symposium on parallel algorithms, 27–30 June 2004, Barcelona, Spain, pp 206–215), which allows push and pop operations to be paired off and eliminated without affecting the central stack, thus reducing contention on the stack, and allowing multiple pairs of push and pop operations to be performed in parallel. Our algorithm uses a simpler data structure than Hendler, Shavit and Yerushalmi’s, and avoids an ABA problem. We first derive a simple lock-free stack algorithm using a linked-list implementation, and discuss issues related to memory management and the ABA problem. We then add an abstract model of the elimination process, from which we derive our elimination algorithm. This allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. Each step in the derivation is either a data refinement which preserves the level of atomicity, an operational refinement which may alter the level of atomicity, or a refactoring step which alters the structure of the system resulting from the preceding derivation. We verify our refinements using an extension of Lipton’s reduction method, allowing concurrent and non-concurrent aspects to be considered separately.
---
paper_title: Trace-based Derivation of a Lock-Free Queue Algorithm
paper_content:
Lock-free algorithms have been developed to avoid various problems associated with using locks to control access to shared data structures. Instead of preventing interference between processes using mutual exclusion, lock-free algorithms must ensure correct behaviour in the presence of interference. While this avoids the problems with locks, the resulting algorithms are typically more intricate than lock-based algorithms, and allow more complex interactions between processes. The result is that even when the basic idea is easy to understand, the code implementing lock-free algorithms is typically very subtle, hard to understand, and hard to get right. In this paper, we consider the well-known lock-free queue implementation due to Michael and Scott, and show how a slightly simplified version of this algorithm can be derived from an abstract specification via a series of verifiable refinement steps. Reconstructing a design history in this way allows us to examine the kinds of design decisions that underlie the algorithm as described by Michael and Scott, and to explore the consequences of some alternative design choices. Our derivation is based on a refinement calculus with concurrent composition, combined with a reduction approach, based on that proposed by Lipton, Lamport, Cohen, and others, which we have previously used to derive a scalable stack algorithm. The derivation of Michael and Scott's queue algorithm introduces some additional challenges because it uses a "helper" mechanism, which means that part of an enqueue operation can be performed by any process; in addition, in a simulation proof, the treatment of dequeue on an empty queue requires the use of backward simulation.
---
paper_title: Lock-free parallel and concurrent garbage collection by mark&sweep
paper_content:
This paper presents a lock-free algorithm for mark&sweep garbage collection (GC) in a realistic model using synchronization primitives load-linked/store-conditional (LL/SC) or compare-and-swap (CAS) offered by machine architectures. The algorithm is concurrent in the sense that garbage collection can run concurrently with the application (the mutator threads). It is parallel in that garbage collection itself may employ several concurrent collector threads. We first design and prove an algorithm with a coarse grain of atomicity and subsequently apply the reduction method developed and verified in [H. Gao, W.H. Hesselink, A formal reduction for lock-free parallel algorithms, in: Proceedings of the 16th Conference on Computer Aided Verification, CAV, July 2004] to implement the high-level atomic steps by means of the low-level primitives. Even the correctness of the coarse grain algorithm is very delicate due to the presence of concurrent mutator and collector threads. We have verified it therefore by means of the interactive theorem prover PVS.
---
paper_title: Simple, fast, and practical non-blocking and blocking concurrent queue algorithms
paper_content:
Drawing ideas from previous authors, we present a new non-blocking concurrent queue algorithm and a new two-lock queue algorithm in which one enqueue and one dequeue can proceed concurrently. Both algorithms are simple, fast, and practical; we were surprised not to find them in the literature. Experiments on a 12-node SGI Challenge multiprocessor indicate that the new non-blocking queue consistently outperforms the best known alternatives; it is the clear algorithm of choice for machines that provide a universal atomic primitive (e.g., compare_and_swap or load_linked/store_conditional). The two-lock concurrent queue outperforms a single lock when several processes are competing simultaneously for access; it appears to be the algorithm of choice for busy queues on machines with non-universal atomic primitives (e.g., test_and_set). Since much of the motivation for non-blocking algorithms is rooted in their immunity to large, unpredictable delays in process execution, we report experimental results both for systems with dedicated processors and for systems with several processes multiprogrammed on each processor.
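For reference, a simplified Java rendering of the non-blocking queue follows (in Java, garbage collection removes the need for the counted pointers the original uses against the ABA problem, and a dequeue on an empty queue returns null here rather than raising an exception). The linearization point of an enqueue is the CAS that links the new node; a dequeue of a non-empty queue linearizes at the CAS that advances the head. The class and its names are illustrative, not the authors' exact code.
```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified Michael & Scott non-blocking queue. The list always starts with a
// dummy (sentinel) node; head points at the dummy and tail at the last node,
// except that tail may lag by one node and is then helped forward.
public class MSQueue<T> {
    private static final class Node<T> {
        final T value;
        final AtomicReference<Node<T>> next = new AtomicReference<>();
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head;
    private final AtomicReference<Node<T>> tail;

    public MSQueue() {
        Node<T> dummy = new Node<>(null);
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    public void enqueue(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> last = tail.get();
            Node<T> next = last.next.get();
            if (last == tail.get()) {                     // consistent snapshot of tail
                if (next == null) {
                    if (last.next.compareAndSet(null, node)) {
                        tail.compareAndSet(last, node);   // swing tail; failure is harmless
                        return;
                    }
                } else {
                    tail.compareAndSet(last, next);       // help a lagging enqueue
                }
            }
        }
    }

    public T dequeue() {
        while (true) {
            Node<T> first = head.get();
            Node<T> last = tail.get();
            Node<T> next = first.next.get();
            if (first == head.get()) {                    // consistent snapshot of head
                if (first == last) {
                    if (next == null) {
                        return null;                      // queue observed empty
                    }
                    tail.compareAndSet(last, next);       // help a lagging enqueue
                } else {
                    T value = next.value;                 // values live in successor nodes
                    if (head.compareAndSet(first, next)) {
                        return value;
                    }
                }
            }
        }
    }
}
```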
---
paper_title: Reduction: a method of proving properties of parallel programs
paper_content:
When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified.
---
paper_title: Derivation of a Scalable Lock-Free Stack Algorithm
paper_content:
We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is a simplified version of one described by Hendler, Shavit and Yerushalmi [Hendler, D., N. Shavit and L. Yerushalmi, A scalable lock-free stack algorithm, in: SPAA 2004: Proceedings of the Sixteenth Annual ACM Symposium on Parallel Algorithms, June 27-30, 2004, Barcelona, Spain, 2004, pp. 206-215], which allows push and pop operations to be paired off and eliminated without affecting the central stack. This reduces contention on the stack compared with other implementations, and allows multiple pairs of push and pop operations to be performed in parallel. Our derivation introduces an abstract model of the elimination process, which allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. At each step in the derivation, this transformation reduces an execution of an entire operation at a time of the model at that level, or two in the case of a successful elimination, rather than translating one atomic action at a time as is done in simulation proofs.
---
paper_title: Modeling in Event-B: System and Software Engineering
paper_content:
A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B. Based on the idea of refinement, the author's systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.
---
paper_title: Deriving linearizable fine-grained concurrent objects
paper_content:
Practical and efficient algorithms for concurrent data structures are difficult to construct and modify. Algorithms in the literature are often optimized for a specific setting, making it hard to separate the algorithmic insights from implementation details. The goal of this work is to systematically construct algorithms for a concurrent data structure starting from its sequential implementation. Towards that goal, we follow a construction process that combines manual steps corresponding to high-level insights with automatic exploration of implementation details. To assist us in this process, we built a new tool called Paraglider. The tool quickly explores large spaces of algorithms and uses bounded model checking to check linearizability of algorithms. Starting from a sequential implementation and assisted by the tool, we present the steps that we used to derive various highly-concurrent algorithms. Among these algorithms is a new fine-grained set data structure that provides a wait-free contains operation, and uses only the compare-and-swap (CAS) primitive for synchronization.
---
paper_title: Formal Specification and Documentation using Z: A Case Study Approach
paper_content:
Reserve (interval? : Interval; until! : Time; report! : Report). A reservation is made for a period of time (interval?), and returns the expiry time of the new reservation (until!). A client can cancel a reservation by making a new reservation in which interval? is zero; this will then be removed by the next scavenge. Definition: the schema Reservesuccess (over ∆RS, with interval? : Interval and until! : Time) requires until! = now + interval?, shutdown′ = shutdown and resns′ = resns ⊕ {clientnum ↦ until!}. Reports: Reserve = (Reservesuccess ∧ Success) ⊕ TooManyUsers ⊕ NotAvailable ⊕ NotKnownUser; the client cannot be a guest user, the reservation must expire before the shutdown time or be for a zero interval, and there may be no space for new reservations. In the Definition, ⊕ denotes relational overriding: any existing entry under clientnum in resns is removed and a new entry with value until! is added. In the Reports, ⊕ is applied to schemas for schema overriding, which can be defined as A ⊕ B = (A ∧ ¬ pre B) ∨ B, where pre B is the precondition of the B schema in which all after-state and output components have been existentially quantified; in practice this means that the error conditions are checked in reverse order. Service charges: the basic parameters are supplemented by two hidden parameters, an operation identifier op? and the cost of executing the operation cost!; the latter can conveniently be defined in terms of natural numbers.
---
paper_title: Verifying Concurrent Data Structures by Simulation
paper_content:
We describe an approach to verifying concurrent data structures based on simulation between two Input/Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.
---
paper_title: Abstraction for concurrent objects
paper_content:
Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs.
---
paper_title: Maintaining consistency in distributed systems
paper_content:
The emerging generation of database systems and general purpose operating systems share many characteristics: object orientation, a stress on distribution, and the utilization of concurrency to increase performance. A consequence is that both types of systems are confronted with the problem of maintaining the consistency of multi-component distributed applications in the face of concurrency and failures. Moreover, large applications can be expected to combine database and general purpose components. This paper reviews four basic approaches to the distributed consistency problem as it arises in such hybrid applications: • Transactional serializability, a widely used database execution model, which has been adapted to distributed and object-oriented settings by several research efforts. • Traditional operating systems synchronization constructs, such as monitors, used within individual system components, and with no system-wide mechanism for inter-object synchronization. • Linearizability, an execution model for object-oriented systems with internal concurrency proposed by Herlihy and Wing [HW90] (similarly restricted to synchronization within individual objects). • Virtual synchrony, a non-transactional execution model used to characterize consistency and correctness in groups of cooperating processes (or groups of objects, in object-oriented systems) [BJ87]. We suggest that no single method can cover the spectrum of issues that arise in general purpose distributed systems, and that a composite approach must therefore be adopted. The alternative proposed here uses virtual synchrony and linearizability at a high level, while including transactional mechanisms and monitors for synchronization in embedded subsystems. Such a hybrid solution requires some changes to both the virtual synchrony and transactional model, which we outline. The full-length version of the paper gives details on this, and also explores the problem in the context of a series of examples. The organization of the presentation is as follows. We begin by reviewing the database data and execution models and presenting the transactional approach to concurrency control and failure atomicity. We then turn to distributed systems, focusing on aspects related to synchronization and fault-tolerance and introducing virtually synchronous process groups. The last part of the paper focuses on an object oriented view of distributed systems, and suggests that the linearizability model of Herlihy and Wing might be used to link the virtual synchrony approach with transactions and "internal" synchronization mechanisms such as monitors, arriving at a flexible, general approach to concurrency control in systems built of typed objects. We identify some technical problems raised by this merging of models and propose solutions.
---
paper_title: How to prove algorithms linearisable
paper_content:
Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for Herlihy and Wing's own manual proof, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded in complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
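For reference, a hedged Java rendering of the Herlihy and Wing queue mentioned above (bounded here for simplicity, whereas the original uses an unbounded array): enq reserves a slot with a fetch-and-increment and fills it afterwards, while deq repeatedly scans the reserved prefix and atomically removes the first filled slot it finds. Because an element becomes visible only when its slot is filled, the linearisation points cannot be pinned to fixed statements, which is what makes the example intricate.

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Hedged sketch of the Herlihy/Wing queue with a fixed capacity.
    class HWQueue<T> {
        private final AtomicReferenceArray<T> items;
        private final AtomicInteger back = new AtomicInteger(0);

        HWQueue(int capacity) { items = new AtomicReferenceArray<>(capacity); }

        public void enq(T x) {
            int i = back.getAndIncrement();     // reserve a slot
            items.set(i, x);                    // fill it, possibly much later
        }

        public T deq() {                        // spins until an element is found
            while (true) {
                int range = back.get();
                for (int i = 0; i < range; i++) {
                    T x = items.getAndSet(i, null);
                    if (x != null) return x;
                }
            }
        }
    }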
---
paper_title: A lazy concurrent list-based set algorithm
paper_content:
List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the Java™ Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked).
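For concreteness, a hedged Java sketch of the wait-free membership test (insert and remove, which lock two adjacent nodes and set the mark bit before unlinking, are omitted; field names are illustrative):

    // Hedged sketch of the lazy list's wait-free contains: traverse the sorted
    // list without locks and check only the candidate node's key and mark bit.
    class LazyListSketch {
        static final class Node {
            final int key;
            volatile Node next;
            volatile boolean marked;                    // set before the node is unlinked
            Node(int key) { this.key = key; }
        }
        final Node head = new Node(Integer.MIN_VALUE);  // sentinel
        final Node tail = new Node(Integer.MAX_VALUE);  // sentinel
        { head.next = tail; }

        boolean contains(int key) {
            Node curr = head;
            while (curr.key < key) curr = curr.next;
            return curr.key == key && !curr.marked;
        }
    }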
---
paper_title: Concurrency Verification: Introduction to Compositional and Non-compositional Methods
paper_content:
This is a systematic and comprehensive introduction both to compositional proof methods for the state-based verification of concurrent programs, such as the assumption-commitment and rely-guarantee paradigms, and to noncompositional methods, whose presentation culminates in an exposition of the communication-closed-layers (CCL) paradigm for verifying network protocols. Compositional concurrency verification methods reduce the verification of a concurrent program to the independent verification of its parts. If those parts are tightly coupled, one additionally needs verification methods based on the causal order between events. These are presented using CCL. The semantic approach followed here allows a systematic presentation of all these concepts in a unified framework which highlights essential concepts. The book is self-contained, guiding the reader from advanced undergraduate level to the state-of-the-art. Every method is illustrated by examples, and a picture gallery of some of the subject's key figures complements the text.
---
paper_title: A lazy concurrent list-based set algorithm
paper_content:
List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the Java™ Concurrency Package of JDK 1.6.0. However, Michael's lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael's lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael's lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael's. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael's algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java's RTTI mechanism to create pointers that can be atomically marked).
---
paper_title: Data Refinement: Model-Oriented Proof Methods and their Comparison
paper_content:
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
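As a hedged illustration of the kind of proof obligation the book reduces data refinement to, a forward simulation between abstract and concrete operations AOp and COp via a relation R can be phrased roughly as follows (initialisation shown, finalisation and applicability conditions omitted; exact formulations vary between the surveyed methods):

\[
\begin{array}{ll}
\text{Initialisation:} & \forall c \in CInit \,.\, \exists a \in AInit \,.\, (a, c) \in R \\
\text{Correctness:}    & \forall a, c, c' \,.\, (a, c) \in R \land (c, c') \in COp \;\Rightarrow\; \exists a' \,.\, (a, a') \in AOp \land (a', c') \in R
\end{array}
\]

Backward simulation is the dual condition relating after-states first; the book's central result is that, taken together, the two simulation methods are complete for data refinement.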
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Formal Specification and Documentation using Z: A Case Study Approach
paper_content:
Reserve (interval? : Interval; until! : Time; report! : Report). A reservation is made for a period of time (interval?), and returns the expiry time of the new reservation (until!). A client can cancel a reservation by making a new reservation in which interval? is zero; this will then be removed by the next scavenge. Definition: the schema Reservesuccess (over ∆RS, with interval? : Interval and until! : Time) requires until! = now + interval?, shutdown′ = shutdown and resns′ = resns ⊕ {clientnum ↦ until!}. Reports: Reserve = (Reservesuccess ∧ Success) ⊕ TooManyUsers ⊕ NotAvailable ⊕ NotKnownUser; the client cannot be a guest user, the reservation must expire before the shutdown time or be for a zero interval, and there may be no space for new reservations. In the Definition, ⊕ denotes relational overriding: any existing entry under clientnum in resns is removed and a new entry with value until! is added. In the Reports, ⊕ is applied to schemas for schema overriding, which can be defined as A ⊕ B = (A ∧ ¬ pre B) ∨ B, where pre B is the precondition of the B schema in which all after-state and output components have been existentially quantified; in practice this means that the error conditions are checked in reverse order. Service charges: the basic parameters are supplemented by two hidden parameters, an operation identifier op? and the cost of executing the operation cost!; the latter can conveniently be defined in terms of natural numbers.
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time; the latter permits dispensing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: A marriage of rely/guarantee and separation logic
paper_content:
In the quest for tractable methods for reasoning about concurrent algorithms, both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely/guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely, separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes.
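For orientation, a hedged Java sketch of the lock-coupling list's remove operation (hand-over-hand locking; the explicit disposal of the removed node that the paper reasons about is left to the garbage collector here, and keys are assumed to lie strictly between the two sentinel values):

    import java.util.concurrent.locks.ReentrantLock;

    // Hedged sketch of lock-coupling removal: at most two node locks are held
    // at any time, and the victim is unlinked while both neighbours are locked.
    class LockCouplingList {
        static final class Node {
            final int key; Node next;
            final ReentrantLock lock = new ReentrantLock();
            Node(int key, Node next) { this.key = key; this.next = next; }
        }
        private final Node head =
            new Node(Integer.MIN_VALUE, new Node(Integer.MAX_VALUE, null));  // sentinels

        boolean remove(int key) {
            Node pred = head;
            pred.lock.lock();
            Node curr = pred.next;
            curr.lock.lock();
            try {
                while (curr.key < key) {          // advance, releasing the trailing lock
                    pred.lock.unlock();
                    pred = curr;
                    curr = curr.next;
                    curr.lock.lock();
                }
                if (curr.key != key) return false;
                pred.next = curr.next;            // unlink while pred and curr are locked
                return true;
            } finally {
                curr.lock.unlock();
                pred.lock.unlock();
            }
        }
    }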
---
paper_title: Proving linearizability via non-atomic refinement
paper_content:
Linearizability is a correctness criterion for concurrent objects. In this paper, we prove linearizability of a concurrent lock-free stack implementation by showing the implementation to be a nonatomic refinement of an abstract stack. To this end, we develop a generalisation of non-atomic refinement allowing one to refine a single (Z) operation into a CSP process. Besides this extension, the definition furthermore embodies a termination condition which permits one to prove starvation freedom for the concurrent processes.
---
paper_title: Concurrency of operations on B-trees
paper_content:
Concurrent operations on B-trees pose the problem of ensuring that each operation can be carried out without interfering with other operations being performed simultaneously by other users. This problem can become critical if these structures are being used to support access paths, like indexes, to data base systems. In this case, serializing access to one of these indexes can create an unacceptable bottleneck for the entire system. Thus, there is a need for locking protocols that can assure integrity for each access while at the same time providing a maximum possible degree of concurrency. Another feature required from these protocols is that they be deadlock free, since the cost to resolve a deadlock may be high. Recently, there has been some questioning on whether B-tree structures can support concurrent operations. In this paper, we examine the problem of concurrent access to B-trees. We present a deadlock free solution which can be tuned to specific requirements. An analysis is presented which allows the selection of parameters so as to satisfy these requirements. The solution presented here uses simple locking protocols. Thus, we conclude that B-trees can be used advantageously in a multi-user environment.
---
paper_title: Data Refinement: Model-Oriented Proof Methods and their Comparison
paper_content:
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time; the latter permits dispensing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: Using coupled simulations in non-atomic refinement
paper_content:
Refinement is one of the most important techniques in formal system design, supporting stepwise development of systems from abstract specifications into more concrete implementations. Nonatomic refinement is employed when the level of granularity changes during a refinement step, i.e., whenever an abstract operation is refined into a sequence of concrete operations, as opposed to a single concrete operation. There has been some limited work on non-atomic refinement in Z, and the purpose of this paper is to extend this existing theory. In particular, we strengthen the proposed definition to exclude certain behaviours which only occur in the concrete specification but have no counterpart on the abstract level. To do this we use coupled simulations: the standard simulation relation is complemented by a second relation which guarantees the exclusion of undesired behaviour of the concrete system. These two relations have to agree at specific points (coupling condition), thus ensuring the desired close correspondence between abstract and concrete specification.
---
paper_title: Non-atomic refinement in z and CSP
paper_content:
In this paper we discuss the relationship between notions of non-atomic (or action) refinement in a state-based setting with that in a behavioural setting. In particular, we show that the definition of non-atomic coupled downward simulation as defined for Z and Object-Z is sound with respect to an action refinement definition of CSP failures refinement.
---
paper_title: Modular verification of linearizability with non-fixed linearization points
paper_content:
Locating linearization points (LPs) is an intuitive approach for proving linearizability, but it is difficult to apply the idea in Hoare-style logic for formal program verification, especially for verifying algorithms whose LPs cannot be statically located in the code. In this paper, we propose a program logic with a lightweight instrumentation mechanism which can verify algorithms with non-fixed LPs, including the most challenging ones that use the helping mechanism to achieve lock-freedom (as in HSY elimination-based stack), or have LPs depending on unpredictable future executions (as in the lazy set algorithm), or involve both features. We also develop a thread-local simulation as the meta-theory of our logic, and show it implies contextual refinement, which is equivalent to linearizability. Using our logic we have successfully verified various classic algorithms, some of which are used in the java.util.concurrent package.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Tentative steps toward a development method for interfering programs
paper_content:
Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed.
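To give the flavour of the specifications proposed here: a rely/guarantee specification constrains a component by a precondition P, a rely condition R (interference it must tolerate), a guarantee condition G (interference it may cause) and a postcondition Q. One commonly quoted, simplified form of the resulting parallel composition rule (stability and other side conditions omitted; formulations differ across later presentations) is:

\[
\frac{\{P,\, R \lor G_2\}\; c_1\; \{G_1,\, Q_1\} \qquad \{P,\, R \lor G_1\}\; c_2\; \{G_2,\, Q_2\}}
     {\{P,\, R\}\; c_1 \parallel c_2\; \{G_1 \lor G_2,\; Q_1 \land Q_2\}}
\]

Each component must tolerate both the overall rely and the other component's guarantee, while the composition guarantees no more than the disjunction of the two individual guarantees.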
---
paper_title: Verifying linearisability with potential linearisation points
paper_content:
Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time; the latter permits dispensing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
---
paper_title: A marriage of rely/guarantee and separation logic
paper_content:
In the quest for tractable methods for reasoning about concurrent algorithms, both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely/guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely, separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes.
---
paper_title: Verifying properties of parallel programs: an axiomatic approach
paper_content:
An axiomatic method for proving a number of properties of parallel programs is presented. Hoare has given a set of axioms for partial correctness, but they are not strong enough in most cases. This paper defines a more powerful deductive system which is in some sense complete for partial correctness. A crucial axiom provides for the use of auxiliary variables, which are added to a parallel program as an aid to proving it correct. The information in a partial correctness proof can be used to prove such properties as mutual exclusion, freedom from deadlock, and program termination. Techniques for verifying these properties are presented and illustrated by application to the dining philosophers problem.
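The characteristic extra obligation of this method, stated here in a hedged and slightly simplified form, is interference freedom: every assertion pre(S) in the proof outline of one process must be preserved by every atomic action a of every other process, where pre(a) is a's own precondition in its proof outline:

\[
\{\, pre(S) \,\land\, pre(a) \,\}\;\; a \;\;\{\, pre(S) \,\}
\]

Auxiliary variables are added precisely when the local assertions alone are too weak to make these triples provable.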
---
paper_title: Progress-based verification and derivation of concurrent programs
paper_content:
Concurrent programs are known to be complicated because synchronisation is required amongst the processes in order to ensure safety (nothing bad ever happens) and progress (something good eventually happens). Due to possible interference from other processes, a straightforward rearrangement of statements within a process can lead to dramatic changes in the behaviour of a program, even if the behaviour of the process executing in isolation is unaltered. Verifying concurrent programs using informal arguments is usually unconvincing, which makes formal methods a necessity. However, formal proofs can be challenging due to the complexity of concurrent programs. Furthermore, safety and progress properties are proved using fundamentally different techniques. Within the literature, safety has been given considerably more attention than progress. One method of formally verifying a concurrent program is to develop the program, then perform a post-hoc verification using one of the many available frameworks. However, this approach tends to be optimistic because the developed program seldom satisfies its requirements. When a proof becomes difficult, it can be unclear whether the proof technique or the program itself is at fault. Furthermore, following any modifications to program code, a verification may need to be repeated from the beginning. An alternative approach is to develop a program using a verify-while-develop paradigm. Here, one starts with a simple program together with the safety and progress requirements that need to be established. Each derivation step consists of a verification, followed by introduction of new program code motivated using the proofs themselves. Because a program is developed side-by-side with its proof, the completed program satisfies the original requirements. Our point of departure for this thesis is the Feijen and van Gasteren method for deriving concurrent programs, which uses the logic of Owicki and Gries. Although Feijen and van Gasteren derive several concurrent programs, because the Owicki-Gries logic does not include a logic of progress, their derivations only consider safety properties formally. Progress is considered post-hoc to the derivation using informal arguments. Furthermore, rules on how programs may be modified have not been presented, i.e., a program may be arbitrarily modified and hence unspecified behaviours may be introduced. In this thesis, we develop a framework for developing concurrent programs in the verify-while-develop paradigm. Our framework incorporates linear temporal logic, LTL, and hence both safety and progress properties may be given full consideration. We examine foundational aspects of progress by formalising minimal progress, weak fairness and strong fairness, which allow scheduler assumptions to be described. We formally define progress terms such as individual progress, individual deadlock, liveness, etc. (which are properties of blocking programs) and wait-, lock-, and obstruction-freedom (which are properties of non-blocking programs). Then, we explore the inter-relationships between the various terms under the different fairness assumptions. Because LTL is known to be difficult to work with directly, we incorporate the logic of Owicki-Gries (for proving safety) and the leads-to relation from UNITY (for proving progress) within our framework. Following the nomenclature of Feijen and van Gasteren, our techniques are kept calculational, which aids derivation.
We prove soundness of our framework by proving theorems that relate our techniques to the LTL definitions. Furthermore, we introduce several methods for proving progress using a well-founded relation, which keeps proofs of progress scalable. During program derivation, in order to ensure unspecified behaviour is not introduced, it is also important to verify a refinement, i.e., show that every behaviour of the final (more complex) program is a possible behaviour of the abstract representation. To facilitate this, we introduce the concept of an enforced property, which is a property that the program code does not satisfy, but is required of the final program. Enforced properties may be any LTL formula, and hence may represent both safety and progress requirements. We formalise stepwise refinement of programs with enforced properties, so that code is introduced in a manner that satisfies the enforced properties, yet refinement of the original program is guaranteed. We present derivations of several concurrent programs from the literature.
---
paper_title: Comparing Degrees of Non-Determinism in Expression Evaluation
paper_content:
Expression evaluation in programming languages is normally assumed to be deterministic; however, if an expression involves variables that are being modified by the environment of the process during its evaluation, the result of the evaluation can be non-deterministic. Two common scenarios in which this occurs are concurrent programs within which processes share variables and real-time programs that interact to monitor and/or control their environment. In these contexts, although any particular evaluation of an expression gives a single result, there is a range of possible values that could be returned depending on the relative timing between modification of a variable by the environment and its access within the expression evaluation. To compare the semantics of non-deterministic expression evaluation, one can use the set of possible values the expression evaluation could return. This paper formalizes three approaches to non-deterministic expression evaluation, highlights their commonalities and differences, shows the relationships between the approaches and explores conditions under which they coincide. Modal operators representing that a predicate holds for all possible evaluations and for some possible evaluation are associated with each of the evaluation approaches, and the properties and relationships between these operators are investigated. Furthermore, a link is made to a new notation used in reasoning about interference.
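A small, hedged Java illustration of the phenomenon being formalised: if the environment changes x from 0 to 10 while another thread evaluates x + x, the two reads can straddle the write, so the set of possible results is {0, 10, 20}; the value 10 corresponds to no single state of x.

    // Illustration only: the evaluating thread reads x twice, so interference
    // from the environment thread can make x + x evaluate to 0, 10 or 20.
    class EvalDemo {
        static volatile int x = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread env  = new Thread(() -> { x = 10; });   // the interfering environment
            Thread eval = new Thread(() -> {
                int first  = x;                            // first read of x
                int second = x;                            // second read of x
                System.out.println("x + x evaluated to " + (first + second));
            });
            env.start(); eval.start();
            env.join();  eval.join();
        }
    }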
---
paper_title: Comparing Degrees of Non-Determinism in Expression Evaluation
paper_content:
Expression evaluation in programming languages is normally assumed to be deterministic; however, if an expression involves variables that are being modified by the environment of the process during its evaluation, the result of the evaluation can be non-deterministic. Two common scenarios in which this occurs are concurrent programs within which processes share variables and real-time programs that interact to monitor and/or control their environment. In these contexts, although any particular evaluation of an expression gives a single result, there is a range of possible values that could be returned depending on the relative timing between modification of a variable by the environment and its access within the expression evaluation. To compare the semantics of non-deterministic expression evaluation, one can use the set of possible values the expression evaluation could return. This paper formalizes three approaches to non-deterministic expression evaluation, highlights their commonalities and differences, shows the relationships between the approaches and explores conditions under which they coincide. Modal operators representing that a predicate holds for all possible evaluations and for some possible evaluation are associated with each of the evaluation approaches, and the properties and relationships between these operators are investigated. Furthermore, a link is made to a new notation used in reasoning about interference.
---
paper_title: Tentative steps toward a development method for interfering programs
paper_content:
Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed.
---
paper_title: Fractional Permissions and Non-Deterministic Evaluators in Interval Temporal Logic
paper_content:
We propose Interval Temporal Logic as a basis for reasoning about concurrent programs with fine-grained atomicity due to the generality it provides over reasoning with standard pre/post-state relations. To simplify the semantics of parallel composition over intervals, we use fractional permissions, which allows one to ensure that conflicting reads and writes to a variable do not occur simultaneously. Using non-deterministic evaluators over intervals, we enable reasoning about the apparent states over an interval, which may differ from the actual states in the interval. The combination of Interval Temporal Logic, non-deterministic evaluators and fractional permissions results in a generic framework for reasoning about concurrent programs with fine-grained atomicity. We use our logic to develop rely/guarantee-style rules for decomposing a proof of a large system into proofs of its subcomponents, where fractional permissions are used to ensure that the behaviours of a program and its environment do not conflict.
---
paper_title: Deriving real-time action systems with multiple time bands using algebraic reasoning
paper_content:
The verify-while-develop paradigm allows one to incrementally develop programs from their specifications using a series of calculations against the remaining proof obligations. This paper presents a derivation method for real-time systems with realistic constraints on their behaviour. We develop a high-level interval-based logic that provides flexibility in an implementation, yet allows algebraic reasoning over multiple granularities and sampling multiple sensors with delay. The semantics of an action system is given in terms of interval predicates and algebraic operators to unify the logics for an action system and its properties, which in turn simplifies the calculations and derivations.
---
paper_title: Interleaved Programs and Rely-Guarantee Reasoning with ITL
paper_content:
This paper presents a logic that extends basic ITL with explicit, interleaved programs. The calculus is based on symbolic execution, as previously described. We extend this former work here by integrating the logic with higher-order logic, adding recursive procedures and rules to reason about fairness. Further, we show how rules for rely-guarantee reasoning can be derived and outline the application of some features to verify concurrent programs in practice. The logic is implemented in the interactive verification environment KIV.
---
paper_title: Interactive Verification of Concurrent Systems using Symbolic Execution
paper_content:
This paper presents an interactive proof method for the verification of temporal properties of concurrent systems based on symbolic execution. Symbolic execution is a well known and very intuitive strategy for the verification of sequential programs. We have carried over this approach to the interactive verification of arbitrary linear temporal logic properties of (infinite state) parallel programs. The resulting proof method is very intuitive to apply and can be automated to a large extent. It smoothly combines first-order reasoning with reasoning in temporal logic. The proof method has been implemented in the interactive verification environment KIV and has been used in several case studies.
---
paper_title: A complete axiomatization of interval temporal logic with infinite time
paper_content:
Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω-regular expressions. We give a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. The full paper (and another conference paper) presents the basic framework for finite time. Here and in the full paper the axiom system (and completeness) is extended to infinite time.
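For readers coming from point-based temporal logics, ITL's central operator is chop (;). Over a finite interval σ = σ₀…σₙ its standard semantics is, roughly (the infinite-time case handled in this paper needs the expected adjustments):

\[
\sigma_0 \cdots \sigma_n \models f \,;\, g
\;\iff\;
\exists\, k \le n \,.\; \sigma_0 \cdots \sigma_k \models f \,\land\, \sigma_k \cdots \sigma_n \models g
\]

Chop-star iterates chop, finitely over finite intervals and possibly infinitely over infinite ones, which is the source of the ω-regular expressive power mentioned above.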
---
paper_title: Compositional reasoning using Interval Temporal Logic and Tempura
paper_content:
We present a compositional methodology for specification and proof using Interval Temporal Logic (ITL). After giving an introduction to ITL, we show how fixpoints of various ITL operators provide a flexible way to modularly reason about safety and liveness. In addition, some new techniques are described for compositionally transforming and refining ITL specifications. We also consider the use of ITL's programming language subset Tempura as a tool for testing the kinds of specifications dealt with here.
---
paper_title: Experience with model checking linearizability
paper_content:
Non-blocking concurrent algorithms offer significant performance advantages, but are very difficult to construct and verify. In this paper, we describe our experience in using SPIN to check linearizability of non-blocking concurrent data-structure algorithms that manipulate dynamically allocated memory. In particular, this is the first work that describes a method for checking linearizability with non-fixed linearization points.
---
paper_title: Speculative linearizability
paper_content:
Linearizability is a key design methodology for reasoning about implementations of concurrent abstract data types in both shared memory and message passing systems. It provides the illusion that operations execute sequentially and fault-free, despite the asynchrony and faults inherent to a concurrent system, especially a distributed one. A key property of linearizability is inter-object composability: a system composed of linearizable objects is itself linearizable. However, devising linearizable objects is very difficult, requiring complex algorithms to work correctly under general circumstances, and often resulting in bad average-case behavior. Concurrent algorithm designers therefore resort to speculation: optimizing algorithms to handle common scenarios more efficiently. The outcome is even more complex protocols, whose correctness is no longer tractable to prove. To simplify the design of efficient yet robust linearizable protocols, we propose a new notion: speculative linearizability. This property is as general as linearizability, yet it allows intra-object composability: the correctness of independent protocol phases implies the correctness of their composition. In particular, it allows the designer to focus solely on the proof of an optimization and derive the correctness of the overall protocol from the correctness of the existing, non-optimized one. Our notion of protocol phases allows processes to independently switch from one phase to another, without requiring them to reach agreement to determine the change of a phase. To illustrate the applicability of our methodology, we show how examples of speculative algorithms for shared memory and asynchronous message passing naturally fit into our framework. We rigorously define speculative linearizability and prove our intra-object composition theorem in a trace-based as well as an automaton-based model. To obtain a further degree of confidence, we also formalize and mechanically check the theorem in the automaton-based model, using the I/O automata framework within the Isabelle interactive proof assistant. We expect our framework to enable, for the first time, scalable specifications and mechanical proofs of speculative implementations of linearizable objects.
---
paper_title: Model Checking Linearizability via Refinement
paper_content:
Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that 1) all executions of concurrent operations be serializable, and 2) the serialized executions be correct with respect to the sequential semantics. This paper describes a new method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. Our method avoids the often difficult task of determining linearization points in implementations, but can also take advantage of linearization points if they are given. The method exploits model checking of finite state systems specified as concurrent processes with shared variables. Partial order reduction is used to effectively reduce the search space. The approach is built into a toolset that supports a rich set of concurrent operators. The tool has been used to automatically check a variety of implementations of concurrent objects, including the first algorithms for the mailbox problem and scalable NonZero indicators. Our system was able to find all known and injected bugs in these implementations.
---
paper_title: Distributed Algorithms for Message-Passing Systems
paper_content:
Distributed computing is at the heart of many applications. It arises as soon as one has to solve a problem in terms of entities -- such as processes, peers, processors, nodes, or agents -- that individually have only a partial knowledge of the many input parameters associated with the problem. In particular each entity cooperating towards the common goal cannot have an instantaneous knowledge of the current state of the other entities. Whereas parallel computing is mainly concerned with 'efficiency', and real-time computing is mainly concerned with 'on-time computing', distributed computing is mainly concerned with 'mastering uncertainty' created by issues such as the multiplicity of control flows, asynchronous communication, unstable behaviors, mobility, and dynamicity. While some distributed algorithms consist of a few lines only, their behavior can be difficult to understand and their properties hard to state and prove. The aim of this book is to present in a comprehensive way the basic notions, concepts, and algorithms of distributed computing when the distributed entities cooperate by sending and receiving messages on top of an asynchronous network. The book is composed of seventeen chapters structured into six parts: distributed graph algorithms, in particular what makes them different from sequential or parallel algorithms; logical time and global states, the core of the book; mutual exclusion and resource allocation; high-level communication abstractions; distributed detection of properties; and distributed shared memory. The author establishes clear objectives per chapter and the content is supported throughout with illustrative examples, summaries, exercises, and annotated bibliographies. This book constitutes an introduction to distributed computing and is suitable for advanced undergraduate students or graduate students in computer science and computer engineering, graduate students in mathematics interested in distributed computing, and practitioners and engineers involved in the design and implementation of distributed applications. The reader should have a basic knowledge of algorithms and operating systems.
---
paper_title: Automatic Linearizability Proofs of Concurrent Objects with Cooperating Updates
paper_content:
An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t. an abstract implementation (called the specification) iff for each operation, one can associate a point in time, called the linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations' actions. This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add.
---
paper_title: Automatically proving linearizability
paper_content:
This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations. The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms.
---
paper_title: How to make a correct multiprocess program execute correctly on a multiprocessor
paper_content:
A multiprocess program executing on a modern multiprocessor must issue explicit commands to synchronize memory accesses. A method is proposed for deriving the necessary commands from a correctness proof of the underlying algorithm in a formalism based on temporal relations among operation executions.
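A hedged Java illustration of the kind of synchronisation command the paper is concerned with: with plain fields, each thread's store may be reordered after its load, so both threads can observe 0; declaring the flags volatile (or inserting the corresponding memory barriers) rules that outcome out.

    // Store-load reordering demo: with 'volatile' removed, the outcome
    // seen = [0, 0] becomes possible on most multiprocessors.
    class StoreLoadDemo {
        static volatile int flag0 = 0, flag1 = 0;

        public static void main(String[] args) throws InterruptedException {
            final int[] seen = new int[2];
            Thread t0 = new Thread(() -> { flag0 = 1; seen[0] = flag1; });
            Thread t1 = new Thread(() -> { flag1 = 1; seen[1] = flag0; });
            t0.start(); t1.start();
            t0.join();  t1.join();
            // With volatile flags, at least one of seen[0], seen[1] is 1.
            System.out.println("seen = [" + seen[0] + ", " + seen[1] + "]");
        }
    }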
---
paper_title: Verifying Linearizability via Optimized Refinement Checking
paper_content:
Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that: (1) All executions of concurrent operations are serializable, and (2) the serialized executions are correct with respect to the sequential semantics. In this work, we describe a method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. The method does not require that linearization points in the implementations be given, which is often difficult or impossible. However, the method takes advantage of linearization points if they are given. The method is based on refinement checking of finite-state systems specified as concurrent processes with shared variables. To tackle state space explosion, we develop and apply symmetry reduction, dynamic partial order reduction, and a combination of both for refinement checking. We have built the method into the PAT model checker, and used PAT to automatically check a variety of implementations of concurrent objects, including the first algorithm for scalable nonzero indicators. Our system is able to find all known and injected bugs in these implementations.
---
paper_title: Formal verification of an array-based nonblocking queue
paper_content:
We describe an array-based nonblocking implementation of a concurrent bounded queue, due to Shann, Huang and Chen (2000), and explain how we detected errors in the algorithm while attempting a formal verification. We explain how we first corrected the errors, and then modified the algorithm to obtain nonblocking behaviour in the boundary cases. Both the corrected and modified versions of the algorithm were verified using the PVS theorem prover. We describe the verification of the modified algorithm, which subsumes the proof of the corrected version.
---
paper_title: Data structures in the multicore age
paper_content:
The advent of multicore processors as the standard computing platform will force major changes in software design.
---
paper_title: Mechanically verified proof obligations for linearizability
paper_content:
Concurrent objects are inherently complex to verify. In the late 80s and early 90s, Herlihy and Wing proposed linearizability as a correctness condition for concurrent objects, which, once proven, allows us to reason about concurrent objects using pre- and postconditions only. A concurrent object is linearizable if all of its operations appear to take effect instantaneously some time between their invocation and return. In this article we define simulation-based proof conditions for linearizability and apply them to two concurrent implementations, a lock-free stack and a set with lock-coupling. Similar to other approaches, we employ a theorem prover (here, KIV) to mechanize our proofs. Contrary to other approaches, we also use the prover to mechanically check that our proof obligations actually guarantee linearizability. This check employs the original ideas of Herlihy and Wing of verifying linearizability via possibilities.
---
paper_title: Quasi-Linearizability Relaxed Consistency For Improved Concurrency
paper_content:
Linearizability, the key correctness condition that most optimized concurrent object implementations comply with, imposes tight synchronization between the object's concurrent operations. This tight synchronization usually comes with a performance and scalability price. Yet, these implementations are often employed in an environment where a more relaxed linearizability condition suffices, where strict linearizability is not a must. Here we provide a quantitative definition of limited non-determinism, a notion we call quasi-linearizability. Roughly speaking, an implementation of an object is quasi-linearizable if each run of the implementation is at a bounded "distance" away from some linear run of the object. However, as we show, the limited distance has to be relative to some operations but not all. Following the definition, we provide examples of quasi-linearizable concurrent implementations that outperform state-of-the-art standard implementations due to the relaxed requirement. Finally, we show that the non-deterministic behavior of the Bitonic Counting Network can be quantified using our quasi-linearizability notion.
---
paper_title: Shared Memory Consistency Models: A Tutorial
paper_content:
The memory consistency model of a system affects performance, programmability, and portability. We aim to describe memory consistency models in a way that most computer professionals would understand. This is important if the performance-enhancing features being incorporated by system designers are to be correctly and widely used by programmers. Our focus is consistency models proposed for hardware-based shared memory systems. Most of these models emphasize the system optimizations they support, and we retain this system-centric emphasis. We also describe an alternative, programmer-centric view of relaxed consistency models that describes them in terms of program behavior, not system optimizations.
---
paper_title: Aspect-Oriented linearizability proofs
paper_content:
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
---
paper_title: On the nature of progress
paper_content:
We identify a simple relationship that unifies seemingly unrelated progress conditions ranging from the deadlock-free and starvation-free properties common to lock-based systems, to non-blocking conditions such as obstruction-freedom, lock-freedom, and wait-freedom. Properties can be classified along two dimensions based on the demands they make on the operating system scheduler. A gap in the classification reveals a new non-blocking progress condition, weaker than obstruction-freedom, which we call clash-freedom. The classification provides an intuitively-appealing explanation why programmers continue to devise data structures that mix both blocking and non-blocking progress conditions. It also explains why the wait-free property is a natural basis for the consensus hierarchy: a theory of shared-memory computation requires an independent progress condition, not one that makes demands of the operating system scheduler.
---
paper_title: Abstraction for concurrent objects
paper_content:
Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs.
---
paper_title: The existence of refinement mappings
paper_content:
Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. We consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements, and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S1 to a higher-level one S2 is a mapping from S1's state space to S2's state space. It maps steps of S1's state machine to steps of S2's state machine and maps behaviors allowed by S1 to behaviors allowed by S2. We show that, under reasonable assumptions about the specification, if S1 implements S2, then by adding auxiliary variables to S1 we can guarantee the existence of a refinement mapping. This provides a completeness result for a practical, hierarchical specification method.
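A standard way to write down the obligations sketched here is as follows; the notation (state spaces, initial-state sets, next-state relations, liveness properties) is chosen for illustration and allows stuttering, in line with the usual formulation.

```latex
% f : \Sigma_1 \to \Sigma_2 is a refinement mapping from S_1 to S_2 if:
\begin{align*}
  &\text{(initial states)}  && s \in \mathit{Init}_1 \;\Rightarrow\; f(s) \in \mathit{Init}_2\\
  &\text{(step simulation)} && (s,s') \in \mathit{Next}_1 \;\Rightarrow\;
        (f(s),f(s')) \in \mathit{Next}_2 \;\lor\; f(s) = f(s')\\
  &\text{(liveness)}        && \sigma \text{ satisfies } L_1 \;\Rightarrow\; f(\sigma) \text{ satisfies } L_2
\end{align*}
```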
---
paper_title: Maintaining consistency in distributed systems
paper_content:
The emerging generation of database systems and general purpose operating systems share many characteristics: object orientation, a stress on distribution, and the utilization of concurrency to increase performance. A consequence is that both types of systems are confronted with the problem of maintaining the consistency of multi-component distributed applications in the face of concurrency and failures. Moreover, large applications can be expected to combine database and general purpose components. This paper reviews four basic approaches to the distributed consistency problem as it arises in such hybrid applications:
• Transactional serializability, a widely used database execution model, which has been adapted to distributed and object-oriented settings by several research efforts.
• Traditional operating systems synchronization constructs, such as monitors, used within individual system components, and with no system-wide mechanism for inter-object synchronization.
• Linearizability, an execution model for object-oriented systems with internal concurrency proposed by Herlihy and Wing [HW90] (similarly restricted to synchronization within individual objects).
• Virtual synchrony, a non-transactional execution model used to characterize consistency and correctness in groups of cooperating processes (or groups of objects, in object-oriented systems) [BJ87].
We suggest that no single method can cover the spectrum of issues that arise in general purpose distributed systems, and that a composite approach must therefore be adopted. The alternative proposed here uses virtual synchrony and linearizability at a high level, while including transactional mechanisms and monitors for synchronization in embedded subsystems. Such a hybrid solution requires some changes to both the virtual synchrony and transactional model, which we outline. The full-length version of the paper gives details on this, and also explores the problem in the context of a series of examples. The organization of the presentation is as follows. We begin by reviewing the database data and execution models and presenting the transactional approach to concurrency control and failure atomicity. We then turn to distributed systems, focusing on aspects related to synchronization and fault-tolerance and introducing virtually synchronous process groups. The last part of the paper focuses on an object oriented view of distributed systems, and suggests that the linearizability model of Herlihy and Wing might be used to link the virtual synchrony approach with transactions and "internal" synchronization mechanisms such as monitors, arriving at a flexible, general approach to concurrency control in systems built of typed objects. We identify some technical problems raised by this merging of models and propose solutions.
---
paper_title: How to prove algorithms linearisable
paper_content:
Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for their original manual proof, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
---
paper_title: Quantitative relaxation of concurrent data structures
paper_content:
There is a trade-off between performance and correctness in implementing concurrent data structures. Better performance may be achieved at the expense of relaxing correctness, by redefining the semantics of data structures. We address such a redefinition of data structure semantics and present a systematic and formal framework for obtaining new data structures by quantitatively relaxing existing ones. We view a data structure as a sequential specification S containing all "legal" sequences over an alphabet of method calls. Relaxing the data structure corresponds to defining a distance from any sequence over the alphabet to the sequential specification: the k-relaxed sequential specification contains all sequences over the alphabet within distance k from the original specification. In contrast to other existing work, our relaxations are semantic (distance in terms of data structure states). As an instantiation of our framework, we present two simple yet generic relaxation schemes, called out-of-order and stuttering relaxation, along with several ways of computing distances. We show that the out-of-order relaxation, when further instantiated to stacks, queues, and priority queues, amounts to tolerating bounded out-of-order behavior, which cannot be captured by a purely syntactic relaxation (distance in terms of sequence manipulation, e.g. edit distance). We give concurrent implementations of relaxed data structures and demonstrate that bounded relaxations provide the means for trading correctness for performance in a controlled way. The relaxations are monotonic which further highlights the trade-off: increasing k increases the number of permitted sequences, which as we demonstrate can lead to better performance. Finally, since a relaxed stack or queue also implements a pool, we actually have new concurrent pool implementations that outperform the state-of-the-art ones.
---
paper_title: Formalising Progress Properties of Non-blocking Programs
paper_content:
A non-blocking program is one that uses non-blocking primitives, such as load-linked/store-conditional and compare-and-swap, for synchronisation instead of locks so that no process is ever blocked. According to their progress properties, non-blocking programs may be classified as wait-free, lock-free or obstruction-free. However, a precise description of these properties does not exist and it is not unusual to find a definition that is ambiguous or even incorrect. We present a formal definition of the progress properties so that any confusion is removed. The formalisation also allows one to prove the widely believed presumption that wait-freedom is a special case of lock-freedom, which in turn is a special case of obstruction-freedom.
---
paper_title: In Search of Acceptability Criteria: Database Consistency Requirements and Transaction Correctness Properties
paper_content:
Whereas serializability captures database consistency requirements and transaction correctness properties via a single notion, recent research has attempted to come up with correctness criteria that view these two types of requirements independently. The search for more flexible correctness criteria is partly motivated by the introduction of new transaction models that extend the traditional atomic transaction model. These extensions came about because the atomic transaction model in conjunction with serializability is found to be very constraining when applied in advanced applications, such as design databases, that function in distributed, cooperative, and heterogeneous environments. In this paper, we develop a taxonomy of various correctness criteria that focus on database consistency requirements and transaction correctness properties from the viewpoint of what the different dimensions of these two are. This taxonomy allows us to categorize correctness criteria that have been proposed in the literature. To help in this categorization, we have applied a uniform specification technique, based on ACTA, to express the various criteria. Such a categorization helps shed light on the similarities and differences between different criteria and to place them in perspective.
---
paper_title: R-linearizability: an extension of linearizability to replicated objects
paper_content:
The authors extend linearizability, a consistency criterion for concurrent systems, to the replicated context, where availability and performance are enhanced by using redundant objects. The mode of operation on sets of replicas and the consistency criterion of R-linearizability are defined. An implementation of R-linearizable replicated atoms (on which only read and write operations are defined) is described. It is realized in the virtually synchronous model, based on a group view mechanism. This framework provides reliable multicast primitives, enabling a fault-tolerant implementation.
---
paper_title: Linearizability: a correctness condition for concurrent objects
paper_content:
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
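The definition lends itself to a direct, if exponential, check on small complete histories: search for a total order of the operations that respects real-time precedence and replays legally against the sequential specification. The sketch below does this for a FIFO queue; the history encoding and example values are ours.

```python
# A minimal, brute-force check of linearizability for a complete history of
# queue operations; purely illustrative, exponential in the number of operations.
from itertools import permutations

# Each operation: (invocation_time, response_time, name, argument, result)
def real_time_respected(order):
    # If a responded before b was invoked, a must precede b in the total order.
    return all(not (a[1] < b[0] and order.index(a) > order.index(b))
               for a in order for b in order)

def legal_fifo(order):
    # Replay the order against a sequential FIFO queue specification.
    q = []
    for (_, _, name, arg, res) in order:
        if name == "enq":
            q.append(arg)
        elif name == "deq":
            if (q.pop(0) if q else None) != res:
                return False
    return True

def linearizable(history):
    return any(real_time_respected(p) and legal_fifo(p)
               for p in permutations(history))

if __name__ == "__main__":
    # enq(1) and enq(2) overlap; a later deq() returning 2 is linearizable.
    h = [(0, 3, "enq", 1, None), (1, 2, "enq", 2, None), (4, 5, "deq", None, 2)]
    print(linearizable(h))   # True
    # enq(1) strictly precedes enq(2), so deq() returning 2 is not linearizable.
    h2 = [(0, 1, "enq", 1, None), (2, 3, "enq", 2, None), (4, 5, "deq", None, 2)]
    print(linearizable(h2))  # False
```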
---
paper_title: Data Refinement: Model-Oriented Proof Methods and their Comparison
paper_content:
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analysed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
---
paper_title: Progress-based verification and derivation of concurrent programs
paper_content:
Concurrent programs are known to be complicated because synchronisation is required amongst the processes in order to ensure safety (nothing bad ever happens) and progress (something good eventually happens). Due to possible interference from other processes, a straightforward rearrangement of statements within a process can lead to dramatic changes in the behaviour of a program, even if the behaviour of the process executing in isolation is unaltered. Verifying concurrent programs using informal arguments are usually unconvincing, which makes formal methods a necessity. However, formal proofs can be challenging due to the complexity of concurrent programs. Furthermore, safety and progress properties are proved using fundamentally different techniques. Within the literature, safety has been given considerably more attention than progress. One method of formally verifying a concurrent program is to develop the program, then perform a post-hoc verification using one of the many available frameworks. However, this approach tends to be optimistic because the developed program seldom satisfies its requirements. When a proof becomes difficult, it can be unclear whether the proof technique or the program itself is at fault. Furthermore, following any modifications to program code, a verification may need to be repeated from the beginning. An alternative approach is to develop a program using a verify-while-develop paradigm. Here, one starts with a simple program together with the safety and progress requirements that need to be established. Each derivation step consists of a verification, followed by introduction of new program code motivated using the proofs themselves. Because a program is developed side-by-side with its proof, the completed program satisfies the original requirements. Our point of departure for this thesis is the Feijen and van Gasteren method for deriving concurrent programs, which uses the logic of Owicki and Gries. Although Feijen and van Gasteren derive several concurrent programs, because the Owicki-Gries logic does not include a logic of progress, their derivations only consider safety properties formally. Progress is considered post-hoc to the derivation using informal arguments. Furthermore, rules on how programs may be modified have not been presented, i.e., a program may be arbitrarily modified and hence unspecified behaviours may be introduced. In this thesis, we develop a framework for developing concurrent programs in the verify-while-develop paradigm. Our framework incorporates linear temporal logic, LTL, and hence both safety and progress properties may be given full consideration. We examine foundational aspects of progress by formalising minimal progress, weak fairness and strong fairness, which allow scheduler assumptions to be described. We formally define progress terms such as individual progress, individual deadlock, liveness, etc (which are properties of blocking programs) and wait-, lock-, and obstruction-freedom (which are properties of non-blocking programs). Then, we explore the inter-relationships between the various terms under the different fairness assumptions. Because LTL is known to be difficult to work with directly, we incorporate the logic of Owicki-Gries (for proving safety) and the leads-to relation from UNITY (for proving progress) within our framework. Following the nomenclature of Feijen and van Gasteren, our techniques are kept calculational, which aids derivation. 
We prove soundness of our framework by proving theorems that relate our techniques to the LTL definitions. Furthermore, we introduce several methods for proving progress using a well-founded relation, which keeps proofs of progress scalable. During program derivation, in order to ensure unspecified behaviour is not introduced, it is also important to verify a refinement, i.e., show that every behaviour of the final (more complex) program is a possible behaviour of the abstract representation. To facilitate this, we introduce the concept of an enforced property, which is a property that the program code does not satisfy, but is required of the final program. Enforced properties may be any LTL formula, and hence may represent both safety and progress requirements. We formalise stepwise refinement of programs with enforced properties, so that code is introduced in a manner that satisfies the enforced properties, yet refinement of the original program is guaranteed. We present derivations of several concurrent programs from the literature.
---
paper_title: Fault-Tolerance by Replication in Distributed Systems
paper_content:
The paper is a tutorial on fault-tolerance by replication in distributed systems. We start by defining linearizability as the correctness criterion for replicated services (or objects), and present the two main classes of replication techniques: primary-backup replication and active replication. We introduce group communication as the infrastructure providing the adequate multicast primitives to implement either primary-backup replication, or active replication. Finally, we discuss the implementation of the two most fundamental group multicast primitives: total order multicast and view synchronous multicast.
---
paper_title: Verifying linearizability with hindsight
paper_content:
We present a proof of safety and linearizability of a highly-concurrent optimistic set algorithm. The key step in our proof is the Hindsight Lemma, which allows a thread to infer the existence of a global state in which its operation can be linearized based on limited local atomic observations about the shared state. The Hindsight Lemma allows us to avoid one of the most complex and non-intuitive steps in reasoning about highly concurrent algorithms: considering the linearization point of an operation to be in a different thread than the one executing it. The Hindsight Lemma assumes that the algorithm maintains certain simple invariants which are resilient to interference, and which can themselves be verified using purely thread-local proofs. As a consequence, the lemma allows us to unlock a perhaps-surprising intuition: a high degree of interference makes non-trivial highly-concurrent algorithms in some cases much easier to verify than less concurrent ones.
---
paper_title: DCAS is not a silver bullet for nonblocking algorithm design
paper_content:
Despite years of research, the design of efficient nonblocking algorithms remains difficult. A key reason is that current shared-memory multiprocessor architectures support only single-location synchronisation primitives such as compare-and-swap (CAS) and load-linked/store-conditional (LL/SC). Recently researchers have investigated the utility of double-compare-and-swap (DCAS) -- a generalisation of CAS that supports atomic access to two memory locations -- in overcoming these problems. We summarise recent research in this direction and present a detailed case study concerning a previously published nonblocking DCAS-based double-ended queue implementation. Our summary and case study clearly show that DCAS does not provide a silver bullet for nonblocking synchronisation. That is, it does not make the design and verification of even mundane nonblocking data structures with desirable properties easy. Therefore, our position is that while slightly more powerful synchronisation primitives can have a profound effect on ease of algorithm design and verification, DCAS does not provide sufficient additional power over CAS to justify supporting it in hardware.
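For readers unfamiliar with what a single-location primitive buys, the sketch below shows the classic CAS retry loop in a Treiber-style stack, the kind of "mundane" structure the abstract alludes to. Python has no hardware CAS, so the primitive is emulated with a lock here; only the retry structure is the point, and the class and method names are ours.

```python
# A sketch of the classic single-CAS retry loop (Treiber-style stack push/pop).
# Python has no hardware CAS, so it is emulated here with a lock.
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class TreiberStack:
    def __init__(self):
        self.top = None
        self._cas_lock = threading.Lock()   # emulation only

    def _cas_top(self, expected, new):
        with self._cas_lock:                # atomically: compare and swap 'top'
            if self.top is expected:
                self.top = new
                return True
            return False

    def push(self, value):
        while True:                         # retry loop: re-read, rebuild, re-CAS
            old_top = self.top
            node = Node(value, old_top)
            if self._cas_top(old_top, node):
                return

    def pop(self):
        while True:
            old_top = self.top
            if old_top is None:
                return None
            if self._cas_top(old_top, old_top.next):
                return old_top.value

if __name__ == "__main__":
    s = TreiberStack()
    s.push(1); s.push(2)
    print(s.pop(), s.pop())  # 2 1
```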
---
| Title: Verifying Linearizability: A Comparative Survey
Section 1: Introduction
Description 1: Provide an overview of the motivation behind verifying linearizability in concurrent algorithms and the main questions to address in the survey.
Section 2: Linearizability
Description 2: Define linearizability and explain its importance in concurrent data structures. Introduce basic concepts such as invocation, response, and the linearization point.
Section 3: Example: The Treiber Stack
Description 3: Present a concrete example of a linearizable data structure, the Treiber stack, and detail its push and pop operations to illustrate the concept of linearization points.
Section 4: Formalizing Linearizability
Description 4: Discuss the formal definitions related to linearizability, including histories, events, and the formal criteria that a concurrent history must meet to be considered linearizable.
Section 5: Using an Explicit Matching Function
Description 5: Introduce Derrick et al.'s method of reformulating linearizability through an explicit matching function and discuss how it compares with Herlihy and Wing's original definition.
Section 6: Difficulties in Verifying Linearizability via Linearization Points
Description 6: Discuss various complexities in identifying and verifying linearization points, including the different classes of algorithms and their specific challenges.
Section 7: Verifying Linearizability
Description 7: Explore different methods and approaches for verifying linearizability, including refinement-based verification, augmented states, separation logic, and automation.
Section 8: Case Study 1: An Optimistic Set Algorithm
Description 8: Provide a detailed verification of a simplified version of Heller et al.'s concurrent set algorithm using various verification methods to highlight their differences and similarities.
Section 9: Case Study 2: A Lazy Set Algorithm
Description 9: Present verification of the full lazy set algorithm, including the contains operation, and discuss the inherent complexities in verifying operations without straightforward linearization points.
Section 10: Conclusions
Description 10: Summarize the findings of the survey, reflect on the progress in verifying linearizability, and discuss future directions and open challenges in making the verification more scalable. |
Intelligent Approaches in Locomotion - A Review | 6 | ---
paper_title: Evolving Swimming Controllers for a Simulated Lamprey with Inspiration from Neurobiology
paper_content:
This paper presents how neural swimming controllers for a simulated lamprey can be developed using evolutionary algorithms. A genetic algorithm is used for evolving the architecture of a connectionist model which determines the muscular activity of a simulated body in interaction with water. This work is inspired by the biological model developed by Ekeberg which reproduces the central pattern generator observed in the real lamprey (Ekeberg, 1993). In evolving artificial controllers, we demonstrate that a genetic algorithm can be an interesting design technique for neural controllers and that there exist alternative solutions to the biological connectivity. A variety of neural controllers are evolved which can produce the pattern of oscillations necessary for swimming. These patterns can be modulated through the external excitation applied to the network in order to vary the speed and the direction of swimming. The best evolved controllers cover larger ranges of frequencies, phase lags and speeds of s...
---
paper_title: An effective trajectory generation method for bipedal walking
paper_content:
This paper presents the virtual height inverted pendulum mode (VHIPM), which is a simple and effective trajectory generation method for the stable walking of biped robots. VHIPM, which is based on the inverted pendulum mode (IPM), can significantly reduce the zero moment point (ZMP) error by adjusting the height in the inverted pendulum. We show the relationship between VHIPM and other popular trajectory generation methods, and compare the ZMP errors in walking when trajectories are generated by various methods including VHIPM. We also investigate the sensitivity of the ZMP error in VHIPM to the step length, walking period and mass distribution of a robot. The simulation results show that VHIPM significantly reduces the ZMP errors compared to other methods under various circumstances.
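The relation VHIPM builds on can be stated compactly: for a point-mass (cart-table) model with center-of-mass position x, horizontal acceleration ẍ, and height z, the ZMP is p = x - (z/g)·ẍ when vertical acceleration is negligible. The sketch below only evaluates this relation for a few heights to show the z-dependence that VHIPM exploits; the numbers are hypothetical and the paper's actual height-adjustment rule is not reproduced.

```python
# ZMP of a point-mass (cart-table) model along one axis:
#   p = x - (z / g) * x_ddot     (assuming negligible vertical acceleration).
G = 9.81  # m/s^2

def zmp(x, x_ddot, z):
    """Horizontal ZMP position for CoM position x, acceleration x_ddot, height z."""
    return x - (z / G) * x_ddot

if __name__ == "__main__":
    x, x_ddot = 0.05, 1.2          # hypothetical CoM state
    for z in (0.75, 0.80, 0.85):   # virtual pendulum heights
        print(f"z = {z:.2f} m -> ZMP = {zmp(x, x_ddot, z):+.4f} m")
```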
---
paper_title: Research on gait planning of artificial leg based on central pattern generator
paper_content:
Biped robot with heterogeneous legs (BRHL) is a novel robot model, which consists of an artificial leg and an intelligent bionic leg. The artificial leg is used to simulate the amputee's healthy leg, and the bionic leg works as the intelligent artificial limb. To describe the present gait of the healthy leg and make the intelligent bionic leg follow the walking of the artificial leg in all phases is the target of BRHL's research. So gait planning of the artificial leg is the emphasis of BRHL's research. This paper uses the model of central pattern generator (CPG) in the research of the artificial leg's gait planning from the point of biology. To obtain a natural and robust walking pattern, a genetic algorithm is used to optimize the parameters of the CPG network model, and the fitness function is formulated based on the zero moment point (ZMP). Simulation results verify the feasibility of this method.
---
paper_title: Evolution of Central Pattern Generators for Bipedal Walking in a Real-Time Physics Environment
paper_content:
We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary in order to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to 3D physically simulated biped locomotion.
---
paper_title: Humanoid robotics platforms developed in HRP
paper_content:
Abstract This paper presents humanoid robotics platform that consists of a humanoid robot and an open architecture software platform developed in METI’s Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, has 1540 mm height, 58 kg weight and 30 degrees of the freedom. The software platform includes a dynamics simulator and motion controllers of the robot for biped locomotion, falling and getting up motions. The platform has been used to develop various applications and is expected to initiate more humanoid robotics research.
---
paper_title: A Biologically Inspired Biped Locomotion Strategy for Humanoid Robots: Modulation of Sinusoidal Patterns by a Coupled Oscillator Model
paper_content:
Biological systems seem to have a simpler but more robust locomotion strategy than that of the existing biped walking controllers for humanoid robots. We show that a humanoid robot can step and walk using simple sinusoidal desired joint trajectories with their phase adjusted by a coupled oscillator model. We use the center-of-pressure location and velocity to detect the phase of the lateral robot dynamics. This phase information is used to modulate the desired joint trajectories. We do not explicitly use dynamical parameters of the humanoid robot. We hypothesize that a similar mechanism may exist in biological systems. We applied the proposed biologically inspired control strategy to our newly developed human-sized humanoid robot computational brain (CB) and a small size humanoid robot, enabling them to generate successful stepping and walking patterns.
---
paper_title: Development of Adaptive Modular Active Leg (AMAL) using bipedal robotics technology
paper_content:
The objective of the work presented here is to develop a low cost active above knee prosthetic device exploiting bipedal robotics technology which will work utilizing the available biological motor control circuit properly integrated with a Central Pattern Generator (CPG) based control scheme. The approach is completely different from the existing Active Prosthetic devices, designed primarily as standalone systems utilizing multiple sensors and embedded rigid control schemes. In this research, first we designed a fuzzy logic based methodology for offering suitable gait pattern for an amputee, followed by formulating a suitable algorithm for designing a CPG, based on Rayleigh's oscillator. An indigenous probe, Humanoid Gait Oscillator Detector (HGOD) has been designed for capturing gait patterns from various individuals of different height, weight and age. These data are used to design a Fuzzy inference system which generates most suitable gait pattern for an amputee. The output of the Fuzzy inference system is used for designing a CPG best suitable for the amputee. We then developed a CPG based control scheme for calculating the damping profile in real time for maneuvering a prosthetic device called AMAL (Adaptive Modular Active Leg). Also a number of simulation results are presented which show the stable behavior of knee and hip angles and determine the stable limit cycles of the network.
---
paper_title: Dynamic response for motion capture animation
paper_content:
Human motion capture embeds rich detail and style which is difficult to generate with competing animation synthesis technologies. However, such recorded data requires principled means for creating responses in unpredicted situations, for example reactions immediately following impact. This paper introduces a novel technique for incorporating unexpected impacts into a motion capture-driven animation system through the combination of a physical simulation which responds to contact forces and a specialized search routine which determines the best plausible re-entry into motion library playback following the impact. Using an actuated dynamic model, our system generates a physics-based response while connecting motion capture segments. Our method allows characters to respond to unexpected changes in the environment based on the specific dynamic effects of a given contact while also taking advantage of the realistic movement made available through motion capture. We show the results of our system under various conditions and with varying responses using martial arts motion capture as a testbed.
---
paper_title: Evolving Swimming Controllers for a Simulated Lamprey with Inspiration from Neurobiology
paper_content:
This paper presents how neural swimming controllers for a simulated lamprey can be developed using evolutionary algorithms. A genetic algorithm is used for evolving the architecture of a connectionist model which determines the muscular activity of a simulated body in interaction with water. This work is inspired by the biological model developed by Ekeberg which reproduces the central pattern generator observed in the real lamprey (Ekeberg, 1993). In evolving artificial controllers, we demonstrate that a genetic algorithm can be an interesting design technique for neural controllers and that there exist alternative solutions to the biological connectivity. A variety of neural controllers are evolved which can produce the pattern of oscillations necessary for swimming. These patterns can be modulated through the external excitation applied to the network in order to vary the speed and the direction of swimming. The best evolved controllers cover larger ranges of frequencies, phase lags and speeds of s...
---
paper_title: On the mechanics of natural compliance in frictional contacts and its effect on grasp stiffness and stability
paper_content:
This paper considers the effect of natural material compliance on the stiffness and stability of frictional multi-contact grasps and fixtures. The contact preload profile is a key parameter in the nonlinear compliance laws governing such contacts. The paper introduces the Hertz-Walton contact compliance model which is valid for linear contact loading profiles. The model is specified in a lumped parameter form suitable for on-line grasping applications, and is entirely determined by the contact friction and by the material and geometric properties of the contacting bodies. The model predicts an asymmetric stiffening of the tangential reaction force as the normal load at the contact increases. As a result, the composite stiffness matrix of multi-contact grasps governed by natural compliance effects is asymmetric, indicating that these contact arrangements are not governed by any potential energy function. Based on the compliant grasp dynamics, the paper derives rules indicating which contact point locations and what preload profiles guarantee grasp and fixture stability. The paper also describes preliminary experiments supporting the contact model predictions.
---
paper_title: A pattern generator of humanoid robots walking on a rough terrain using a handrail
paper_content:
This paper presents a motion pattern generator of humanoid robots that walks on a flat plane, steps and a rough terrain. It is guaranteed rigorously that the desired contact between a humanoid robot and terrain should be maintained by keeping the contact wrench sum between them inside the contact wrench cone under the sufficient friction assumption. A walking pattern is generated by solving the contact wrench equations and by applying the resolved momentum control.
---
paper_title: Incremental Learning and Memory Consolidation of Whole Body Human Motion Primitives
paper_content:
The ability to learn during continuous and on-line observation would be advantageous for humanoid robots, as it would enable them to learn during co-location and interaction in the human environment. However, when motions are being learned and clustered on-line, there is a trade-off between classification accuracy and the number of training examples, resulting in potential misclassifications both at the motion and hierarchy formation level. This article presents an approach enabling fast on-line incremental learning, combined with an incremental memory consolidation process correcting initial misclassifications and errors in organization, to improve the stability and accuracy of the learned motions, analogous to the memory consolidation process following motor learning observed in humans. Following initial organization, motions are randomly selected for reclassification, at both low and high levels of the hierarchy. If a better reclassification is found, the knowledge structure is reorganized to comply. The approach is validated during incremental acquisition of a motion database containing a variety of full body motions.
---
paper_title: Design of a novel central pattern generator and the Hebbian motion learning
paper_content:
In this paper, we propose a new CPG model and a Hebbian learning rule for the CPG. The output of the proposed CPG is determined only by the phase differences arising from the synchronization of its component oscillators. Phase synchronization can be regarded as adaptive behavior with respect to the environment, so the CPG is adaptive despite having only simple connections to the environment. We also propose a motion learning rule for the proposed CPG. Since the rule is described by only simple signal processing, it can easily be realized by electric circuits and used efficiently for robots with many degrees of freedom.
---
paper_title: Dynamically balanced optimal gaits of a ditch-crossing biped robot
paper_content:
This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure its smooth walking. It is to be noted that the power consumption and dynamic balance of the robot are also dependent on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to provide training off-line, to the NN-based and FL-based gait planners developed. Once optimized, the planners will be able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners are able to generate more balanced gaits and that, too, at the cost of lower power consumption compared to those yielded by the analytical approach. The NN-based and FL-based approaches are found to be more adaptive compared to the other approach in generating the gaits of the biped robot.
---
paper_title: An effective trajectory generation method for bipedal walking
paper_content:
This paper presents the virtual height inverted pendulum mode (VHIPM), which is a simple and effective trajectory generation method for the stable walking of biped robots. VHIPM, which is based on the inverted pendulum mode (IPM), can significantly reduce the zero moment point (ZMP) error by adjusting the height in the inverted pendulum. We show the relationship between VHIPM and other popular trajectory generation methods, and compare the ZMP errors in walking when trajectories are generated by various methods including VHIPM. We also investigate the sensitivity of the ZMP error in VHIPM to the step length, walking period and mass distribution of a robot. The simulation results show that VHIPM significantly reduces the ZMP errors compared to other methods under various circumstances.
---
paper_title: Probabilistic Balance Monitoring for Bipedal Robots
paper_content:
In this paper, a probability-based balance monitoring concept for humanoid robots is proposed. Two algorithms are presented that allow us to distinguish between exceptional situations and normal operations. The first classification approach uses Gaussian-Mixture-Models (GMM) to describe the distribution of the robot's sensor data for typical situations such as stable walking or falling down. With the GMM it is possible to state the probability of the robot being in one of the known situations. The concept of the second algorithm is based on Hidden-Markov-Models (HMM). The objective is to detect and classify unstable situations by means of their typical sequences in the robot's sensor data. When appropriate reflex motions are linked to the critical situations, the robot can prevent most falls or is at least able to execute a controlled falling motion. The proposed algorithms are verified by simulations and experiments with our bipedal robot BARt-UH.
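A minimal version of the GMM-based part of this idea, assuming scikit-learn is available: fit one mixture model per situation class and classify a new sensor sample by which model assigns it the higher log-likelihood. The two-dimensional features and the synthetic data below are stand-ins, not BARt-UH measurements.

```python
# A minimal sketch of likelihood-based situation monitoring with one GMM per
# situation class; the sensor features and data here are synthetic stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
stable  = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(500, 2))  # e.g. trunk pitch, pitch rate
falling = rng.normal(loc=[0.6, 1.5], scale=0.4, size=(500, 2))

gmm_stable  = GaussianMixture(n_components=2, random_state=0).fit(stable)
gmm_falling = GaussianMixture(n_components=2, random_state=0).fit(falling)

def classify(sample):
    """Pick the situation whose model assigns the higher log-likelihood."""
    s = np.atleast_2d(sample)
    ll = {"stable":  gmm_stable.score_samples(s)[0],
          "falling": gmm_falling.score_samples(s)[0]}
    return max(ll, key=ll.get), ll

print(classify([0.05, -0.1]))  # expected: 'stable'
print(classify([0.7, 1.4]))    # expected: 'falling'
```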
---
paper_title: Embodied Symbol Emergence Based on Mimesis Theory
paper_content:
“Mimesis” theory from cognitive science and the “mirror neurons” found in biology show that the behavior generation process is not independent of the behavior cognition process; the two processes are closely related. During behavioral imitation, a human being does not simply perform a joint-coordinate transformation, but recognizes the parents’ behavior, understands it after abstraction as symbols, and generates its own behavior from those symbols. Focusing on these facts, we propose a new method which carries out the behavior cognition and behavior generation processes at the same time. We also propose a mathematical model based on hidden Markov models in order to integrate four abilities: (1) symbol emergence; (2) behavior recognition; (3) self-behavior generation; (4) acquiring the motion primitives. Finally, the feasibility of this method is shown through several experiments on a humanoid robot.
---
paper_title: Incremental Learning, Clustering and Hierarchy Formation of Whole Body Motion Patterns using Adaptive Hidden Markov Chains
paper_content:
This paper describes a novel approach for autonomous and incremental learning of motion pattern primitives by observation of human motion. Human motion patterns are abstracted into a dynamic stochastic model, which can be used for both subsequent motion recognition and generation, analogous to the mirror neuron hypothesis in primates. The model size is adaptable based on the discrimination requirements in the associated region of the current knowledge base. A new algorithm for sequentially training the Markov chains is developed, to reduce the computation cost during model adaptation. As new motion patterns are observed, they are incrementally grouped together using hierarchical agglomerative clustering based on their relative distance in the model space. The clustering algorithm forms a tree structure, with specialized motions at the tree leaves, and generalized motions closer to the root. The generated tree structure will depend on the type of training data provided, so that the most specialized motions will be those for which the most training has been received. Tests with motion capture data for a variety of motion primitives demonstrate the efficacy of the algorithm.
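The hierarchy-formation step can be illustrated in isolation, assuming SciPy is available: given a pairwise distance matrix between already-trained motion models, agglomerative clustering builds the motion tree. The motion names and distances below are hypothetical stand-ins for inter-model distances, not values from the paper.

```python
# A minimal sketch of the hierarchy-formation step: agglomeratively cluster
# motion models from a (hypothetical) pairwise distance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

motions = ["walk", "run", "kick", "punch"]
D = np.array([[0.0, 1.0, 4.0, 4.5],
              [1.0, 0.0, 3.8, 4.2],
              [4.0, 3.8, 0.0, 1.2],
              [4.5, 4.2, 1.2, 0.0]])

Z = linkage(squareform(D), method="average")   # build the motion tree
labels = fcluster(Z, t=2.0, criterion="distance")
for name, lab in zip(motions, labels):
    print(f"{name}: cluster {lab}")            # walk/run vs kick/punch
```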
---
paper_title: Application of Genetic Algorithms for biped robot gait synthesis optimization during walking and going up-stairs
paper_content:
Selecting an appropriate gait can reduce consumed energy by a biped robot. In this paper, a Genetic Algorithm gait synthesis method is proposed, which generates the angle trajectories based on the minimum consumed energy and minimum torque change. The gait synthesis is considered for two cases: walking and going up-stairs. The proposed method can be applied for a wide range of step lengths and step times during walking; or step lengths, stair heights and step times for going up-stairs. The angle trajectories are generated without neglecting the stability of the biped robot. The angle trajectories can be generated for other tasks to be performed by the biped robot, like going down-stairs, overcoming obstacles, etc. In order to verify the effectiveness of the proposed method, the results for minimum consumed energy and minimum torque change are compared. A Radial Basis Function Neural Network is considered for the real-time application. Simulations are realized based upon the parameters of the 'Bonten-Maru ...
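The optimization loop itself can be sketched compactly. Below is a minimal real-coded GA of the kind used to search gait parameters; the energy_cost function is a hypothetical placeholder for the consumed-energy evaluation that, in the paper, comes from simulating the biped over a step, and the GA settings are illustrative only.

```python
# A minimal real-coded GA loop for gait-parameter search; 'energy_cost' is a
# hypothetical stand-in for a simulated consumed-energy evaluation.
import random

def energy_cost(params):
    # Placeholder objective: in practice this would integrate joint torques
    # over a simulated step; here it is just a smooth test function.
    return sum((p - 0.15 * i) ** 2 for i, p in enumerate(params))

def ga(n_params=6, pop_size=40, generations=100,
       bounds=(-1.0, 1.0), mut_sigma=0.1, seed=1):
    rnd = random.Random(seed)
    lo, hi = bounds
    pop = [[rnd.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=energy_cost)
        elite = scored[: pop_size // 5]                     # elitism
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rnd.sample(elite, 2)                   # parents from the elite
            child = [(a + b) / 2 + rnd.gauss(0, mut_sigma)  # blend + mutation
                     for a, b in zip(p1, p2)]
            children.append([min(hi, max(lo, c)) for c in child])
        pop = children
    best = min(pop, key=energy_cost)
    return best, energy_cost(best)

if __name__ == "__main__":
    best, cost = ga()
    print("best parameters:", [round(p, 3) for p in best], "cost:", round(cost, 5))
```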
---
paper_title: Biped robot design powered by antagonistic pneumatic actuators for multi-modal locomotion
paper_content:
An antagonistic muscle mechanism that regulates joint compliance contributes enormously to human dynamic locomotion. Antagonism is considered to be the key for realizing more than one locomotion mode. In this paper, we demonstrate how antagonistic pneumatic actuators can be utilized to achieve three dynamic locomotion modes (walking, jumping, and running) in a biped robot. Firstly, we discuss the contribution of joint compliance to dynamic locomotion, which highlights the importance of tunable compliance. Secondly, we introduce the design of a biped robot powered by antagonistic pneumatic actuators. Lastly, we apply simple feedforward controllers for realizing walking, jumping, and running and confirm the contribution of joint compliance to such multimodal dynamic locomotion. Based on the results, we can conclude that the antagonistic pneumatic actuators are superior candidates for constructing a human-like dynamic locomotor.
---
paper_title: A combined potential function and graph search approach for free gait generation of quadruped robots
paper_content:
This paper presents an algorithm for planning the foothold positions of quadruped robots on irregular terrain. The input to the algorithm is the robot kinematics, the terrain geometry, a required motion path, as well as initial posture. Our goal is to develop a general algorithm that navigates quadruped robots quasi-statically over rough terrain, using an APF (Artificial Potential Field) and graph searching. The algorithm plans a sequence of footholds that navigates the robot along the required path with controllable motion characteristics. Simulation results demonstrate the algorithm in a planar environment.
---
paper_title: Integration of linguistic and numerical information for biped control
paper_content:
Bipedal locomotion is an important hallmark of human evolution. Despite complex control systems, human locomotion is characterized by smooth, regular, and repeating movements. Therefore, there is the potential for applying human locomotion strategies and any knowledge available to biped control. In order to make the most use of the information available, a linguistic-numerical integration-based biped control method is proposed in this paper. The numerical data from biped measuring instruments, and the linguistic rules obtained from intuitive walking knowledge and biomechanics study, have been classified into four categories: direct rules, direct data, indirect rules, and indirect data. Based on inverse learning and data fusion theory, two simple and intuitive integration schemes are proposed to integrate linguistic and numerical information with various forms, such as direct and indirect. One is neurofuzzy-based integration, and another is fuzzy rules extraction-based integration. The simulation results show that the biped gait and joint control performance can be significantly improved by the prescribed synergy method-based neurofuzzy gait synthesis and fuzzy rules extraction-based joint control strategies using linguistic and numerical integrated information.
---
paper_title: Experimental Validation of a Framework for the Design of Controllers that Induce Stable Walking in Planar Bipeds
paper_content:
In this paper we present the experimental validation of a framework for the systematic design, analysis, and performance enhancement of controllers that induce stable walking in N-link underactuated planar biped robots. Controllers designed via this framework act by enforcing virtual constraints—holonomic constraints imposed via feedback—on the robot’s configuration, which create an attracting two-dimensional invariant set in the full walking model’s state space. Stability properties of resulting walking motions are easily analyzed in terms of a two-dimensional subdynamic of the full walking model. A practical introduction to and interpretation of the framework is given. In addition, in this paper we develop the ability to regulate the average walking rate of the biped to a continuum of values by modification of within-stride and stride-boundary characteristics, such as step length.
---
paper_title: Configuring of Spiking Central Pattern Generator Networks for Bipedal Walking Using Genetic Algorithms
paper_content:
In limbed animals, spinal neural circuits responsible for controlling muscular activities during walking are called central pattern generators (CPG). CPG networks display oscillatory activities that actuate individual or groups of muscles in a coordinated fashion so that the limbs of the animal are flexed and extended at the appropriate time and with the required velocity for the animal to efficiently traverse various types of terrain, and to recover from environmental perturbation. Typically, the CPG networks are constructed with many neurons, each of which has a number of control parameters. As the number of muscles increases, it is often impossible to manually, albeit intelligently, select the network parameters for a particular movement. Furthermore, it is virtually impossible to reconfigure the parameters on-line. This paper describes how genetic algorithms (GA) can be used for on-line (re)configuring of CPG networks for a bipedal robot. We show that the neuron parameters and connection weights/network topology of a canonical walking network can be reconfigured within a few generations of the GA. The networks, constructed with integrate-and-fire-with-adaptation (IFA) neurons, are implemented with a microcontroller and can be reconfigured to vary walking speed from 0.5 Hz to 3.5 Hz. The phase relationship between the hips and knees can be arbitrarily set (to within 1 degree) and prescribed complex joint angle profiles are realized. This is a powerful approach to generating complex muscle synergies for robots with multiple joints and distributed actuators.
---
paper_title: Programmable central pattern generators: an application to biped locomotion control
paper_content:
We present a system of coupled nonlinear oscillators to be used as programmable central pattern generators, and apply it to control the locomotion of a humanoid robot. Central pattern generators are biological neural networks that can produce coordinated multidimensional rhythmic signals, under the control of simple input signals. They are found both in vertebrate and invertebrate animals for the control of locomotion. In this article, we present a novel system composed of coupled adaptive nonlinear oscillators that can learn arbitrary rhythmic signals in a supervised learning framework. Using adaptive rules implemented as differential equations, parameters such as intrinsic frequencies, amplitudes, and coupling weights are automatically adjusted to replicate a teaching signal. Once the teaching signal is removed, the trajectories remain embedded as the limit cycle of the dynamical system. An interesting aspect of this approach is that the learning is completely embedded into the dynamical system, and does not require external optimization algorithms. We use our system to encapsulate rhythmic trajectories for biped locomotion with a simulated humanoid robot, and demonstrate how it can be used to do online trajectory generation. The system can modulate the speed of locomotion, and even allow the reversal of direction (i.e. walking backwards). The integration of sensory feedback allows the online modulation of trajectories such as to increase the basin of stability of the gaits, and therefore the range of speeds that can be produced
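A stripped-down illustration of the coupling idea, assuming simple phase oscillators rather than the paper's adaptive Hopf oscillators with embedded learning: two oscillators are driven toward an anti-phase relationship and their outputs are read off as left/right hip trajectories. All parameter values are hypothetical.

```python
# A simplified coupled phase-oscillator network of the kind used as a CPG:
# two oscillators phase-locked in anti-phase drive left/right hip angles.
import math

def simulate(T=2.0, dt=0.001, omega=2 * math.pi * 1.0, k=5.0):
    phi = [0.0, 0.5]              # initial phases (rad), deliberately not locked yet
    desired = math.pi             # target phase difference: anti-phase
    amplitude = 0.3               # rad, hypothetical hip amplitude
    log = []
    t = 0.0
    while t < T:
        d01 = k * math.sin(phi[1] - phi[0] - desired)   # coupling toward anti-phase
        d10 = k * math.sin(phi[0] - phi[1] + desired)
        phi[0] += (omega + d01) * dt
        phi[1] += (omega + d10) * dt
        log.append((t, amplitude * math.sin(phi[0]), amplitude * math.sin(phi[1])))
        t += dt
    return log

if __name__ == "__main__":
    traj = simulate()
    t, left_hip, right_hip = traj[-1]
    print(f"t={t:.3f}s  left={left_hip:+.3f} rad  right={right_hip:+.3f} rad")
```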
---
paper_title: On Ballistic Walking Locomotion of a Quadruped
paper_content:
This paper investigates ballistic motions in walking quadrupeds on a horizontal plane. The study is carried out on a quadruped consisting of a body and four identical two-link legs. Each leg has a knee joint and is connected to the body by a haunch joint. Three types of quadruped gaits, bound, amble, and trot, are studied. None of these gaits complies with a flight phase, but they all involve simultaneous and identical motion of two legs. Muscle activities are commonly believed to alternate with periods of relaxation. Our study, therefore, assumes that the swing phase is ballistic, i.e., no active control torque is exerted. Ballistic motion is achieved through appropriate initial velocities. These velocities result from impulsive active control torques and ground reactions exerted at the boundary instants of the single support phase. Natural ballistic motions are shown to exist for the three gaits and for each valid walking velocity class. Torque cost analysis shows that amble and trot gaits are more effi...
---
paper_title: Automated evolutionary design, robustness, and adaptation of sidewinding locomotion of a simulated snake-like robot
paper_content:
Inspired by the efficient method of locomotion of the rattlesnake Crotalus cerastes, the objective of this work is automatic design through genetic programming (GP) of the fastest possible (sidewinding) locomotion of simulated limbless, wheelless snake-like robot (Snakebot). The realism of simulation is ensured by employing the Open Dynamics Engine (ODE), which facilitates implementation of all physical forces, resulting from the actuators, joints constrains, frictions, gravity, and collisions. Reduction of the search space of the GP is achieved by representation of Snakebot as a system comprising identical morphological segments and by automatic definition of code fragments, shared among (and expressing the correlation between) the evolved dynamics of the vertical and horizontal turning angles of the actuators of Snakebot. Empirically obtained results demonstrate the emergence of sidewinding locomotion from relatively simple motion patterns of morphological segments. Robustness of the sidewinding Snakebot, which is considered to be the ability to retain its velocity when situated in an unanticipated environment, is illustrated by the ease with which Snakebot overcomes various types of obstacles such as a pile of or burial under boxes, rugged terrain, and small walls. The ability of Snakebot to adapt to partial damage by gradually improving its velocity characteristics is discussed. Discovering compensatory locomotion traits, Snakebot recovers completely from single damage and recovers a major extent of its original velocity when more significant damage is inflicted. Exploring the opportunity for automatic design and adaptation of a simulated artifact, this work could be considered as a step toward building real Snakebots, which are able to perform robustly in difficult environments.
---
paper_title: Neural networks for the control of a six-legged walking machine
paper_content:
In this paper a hierarchical control architecture for a six-legged walking machine is presented. The basic components of this architecture are neural networks which are trained using examples of the control process. It is shown how the basic components "leg control" and "leg coordination" have been implemented by recurrent and feedforward networks respectively. The training process and the tests of the walking behaviour have mainly been done in a simulation system. First tests of the leg control on our real walking machine LAURON are also described.
---
paper_title: Efficient Walking Speed Optimization of a Humanoid Robot
paper_content:
The development of optimized motions of humanoid robots that guarantee fast and also stable walking is an important task, especially in the context of autonomous soccer-playing robots in RoboCup. We present a walking motion optimization approach for the humanoid robot prototype HR18 which is equipped with a low-dimensional parameterized walking trajectory generator, joint motor controller and an internal stabilization. The robot is included as hardware-in-the-loop to define a low-dimensional black-box optimization problem. In contrast to previously performed walking optimization approaches, we apply a sequential surrogate optimization approach using stochastic approximation of the underlying objective function and sequential quadratic programming to search for a fast and stable walking motion. This is done under the conditions that only a small number of physical walking experiments should have to be carried out during the online optimization process. For the identified walking motion for the considered 55 cm tall humanoid robot, we measured a forward walking speed of more than 30 cm/s. With a modified version of the robot, even more than 40 cm/s could be achieved in permanent operation.
---
paper_title: Structural Evolution of Central Pattern Generators for Bipedal Walking in 3D Simulation
paper_content:
Anthropomorphic walking for a simulated bipedal robot has been realized by means of artificial evolution of central pattern generator (CPG) networks. The approach has been investigated through full rigid-body dynamics simulations in 3D of a bipedal robot with 14 degrees of freedom. The half-center CPG model has been used as an oscillator unit, with interconnection paths between oscillators undergoing structural modifications using a genetic algorithm. In addition, the connection weights in a feedback network of predefined structure were evolved. Furthermore, a supporting structure was added to the robot in order to guide the evolutionary process towards natural, human-like gaits. Subsequently, this structure was removed, and the ability of the best evolved controller to generate a bipedal gait without the help of the supporting structure was verified. Stable, natural gait patterns were obtained, with a maximum walking speed of around 0.9 m/s.
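For orientation, a widely used half-center unit of the kind referred to here is the Matsuoka oscillator; the generic form below is given only as a hedged illustration and is not necessarily the exact oscillator model evolved in the paper (all symbols and parameters are illustrative):

```latex
% Matsuoka-type half-center oscillator: two mutually inhibiting neurons i, j
\begin{aligned}
\tau \dot{u}_i &= -u_i - \beta v_i - w\, y_j + s, \qquad y_i = \max(0, u_i),\\
T \dot{v}_i   &= -v_i + y_i, \qquad (i,j) \in \{(1,2),(2,1)\},
\end{aligned}
```

Here $u_i$ and $v_i$ are the neuron state and adaptation variables, $w$ the mutual inhibition weight and $s$ a tonic drive; the rhythm emerges from mutual inhibition plus adaptation, and the antisymmetric output $y_1 - y_2$ can drive a joint.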
---
paper_title: Optimal trajectory generation for a prismatic joint biped robot using genetic algorithms
paper_content:
In this paper, a prismatic joint biped robot trajectory planning method is proposed. The minimum consumed energy is used as a criterion for trajectory generation, by using a real number genetic algorithm as an optimization tool. The minimum torque change cost function and constant vertical position trajectories are used in order to compare the results and verify the effectiveness of this method. The minimum consumed energy walking is stable and the impact of the foot with the ground is very small. Experimental investigations of a prismatic joint biped robot confirmed the predictions concerning the consumed energy and stability.
---
paper_title: Generation of free gait-a graph search approach
paper_content:
A method is presented for the generation of a free gait for the straight-line motion of a quadruped walking machine. It uses a heuristic graph search procedure based on the A* algorithm. The method essentially looks into the consequences of a move to a certain depth before actually committing to it. Deadlocks and inefficiencies are thus sensed well in advance and avoided. >
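As a reminder of the underlying search machinery only (the paper's state encoding, cost terms and heuristic rules are not reproduced here), a minimal A*-style skeleton might look as follows; `successors` and `heuristic` are hypothetical callables supplied by the gait planner:

```python
import heapq
from itertools import count

def a_star(start, is_goal, successors, heuristic):
    """Minimal A* skeleton. `successors(s)` yields (next_state, step_cost);
    `heuristic(s)` is an admissible estimate of the remaining cost."""
    tie = count()                      # tie-breaker so states themselves are never compared
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path                # sequence of support states / foot placements
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in successors(state):
            if nxt not in visited:
                f = g + cost + heuristic(nxt)
                heapq.heappush(frontier, (f, next(tie), g + cost, nxt, path + [nxt]))
    return None
```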
---
paper_title: Passive compliant quadruped robot using Central Pattern Generators for locomotion control
paper_content:
We present a new quadruped robot, “Cheetah”, featuring three-segment pantographic legs with passive compliant knee joints. Each leg has two degrees of freedom - knee and hip joint can be actuated using proximally mounted RC servo motors; force transmission to the knee is achieved by means of a Bowden cable mechanism. Simple electronics to command the actuators from a desktop computer have been designed in order to test the robot. A Central Pattern Generator (CPG) network has been implemented to generate different gaits. A parameter space search was performed and tested on the robot to optimize forward velocity.
---
paper_title: Stability Analysis of Legged Locomotion Models by Symmetry-Factored Return Maps
paper_content:
We present a new stability analysis for hybrid legged locomotion systems based on the “symmetric” factorization of return maps. We apply this analysis to two-degrees-of-freedom (2DoF) and three-degrees-of-freedom (3DoF) models of the spring loaded inverted pendulum (SLIP) with different leg recirculation strategies. Despite the non-integrability of the SLIP dynamics, we obtain a necessary condition for asymptotic stability (and a sufficient condition for instability) at a fixed point, formulated as an exact algebraic expression in the physical parameters. We use this expression to characterize analytically the sensory cost and stabilizing benefit of various feedback schemes previously proposed for the 2DoF SLIP model, posited as a low-dimensional representation of running. We apply the result as well to a 3DoF SLIP model that will be treated at greater length in a companion paper as a descriptive model for the robot RHex.
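For reference, the stance-phase dynamics of the planar point-mass SLIP (mass $m$, leg length $r$, leg angle $\theta$ from the vertical, spring stiffness $k$, rest length $r_0$) take the standard form below; the 2DoF and 3DoF models analyzed in the paper build on this with different leg recirculation strategies:

```latex
% standard planar SLIP stance dynamics (point mass under gravity g)
\begin{aligned}
m\,(\ddot{r} - r\dot{\theta}^{2}) &= k\,(r_0 - r) - m g \cos\theta,\\
m\,(r\ddot{\theta} + 2\dot{r}\dot{\theta}) &= m g \sin\theta .
\end{aligned}
```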
---
paper_title: Research on gait planning of artificial leg based on central pattern generator
paper_content:
The biped robot with heterogeneous legs (BRHL) is a novel robot model, which consists of an artificial leg and an intelligent bionic leg. The artificial leg is used to simulate the amputee's healthy leg and the bionic leg works as the intelligent artificial limb. The target of BRHL research is to describe the present gait of the healthy leg and make the intelligent bionic leg follow the walking of the artificial leg in all phases. Gait planning of the artificial leg is therefore the emphasis of BRHL research. This paper uses a central pattern generator (CPG) model for the artificial leg's gait planning from a biological point of view. To obtain a natural and robust walking pattern, a genetic algorithm is used to optimize the parameters of the CPG network model, and the fitness function is formulated based on the zero moment point (ZMP). Simulation results testify to the feasibility of this method.
---
paper_title: Evolution of Central Pattern Generators for Bipedal Walking in a Real-Time Physics Environment
paper_content:
We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary in order to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to 3D physically simulated biped locomotion.
---
paper_title: Slip-adaptive walk of quadruped robot
paper_content:
In this paper, we investigated the effects of the friction condition on walking pattern and energy efficiency, and based on the results, we proposed two new “slip-adaptive” strategies for generating a slip-adaptive walk. The first strategy for a slip-adaptive walk uses a slip reflex via a Central Pattern Generator (CPG) to change the walking pattern. The second strategy for a walk uses a force control to immediately compensate a slip. Using these strategies, a walk, which is adaptive to varying friction conditions and slips, becomes possible. The validity of the proposed method is confirmed through simulation and experimentation.
---
paper_title: Autonomous evolution of dynamic gaits with two quadruped robots
paper_content:
A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First, we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
---
paper_title: A movement pattern generator model using artificial neural networks
paper_content:
The authors have developed a movement pattern generator, using an artificial neural network (ANN) for generating periodic movement trajectories. This model is based on the concept of 'central pattern generators'. Jordan's (1986) sequential network, which is capable of learning sequences of patterns, was modified and used to generate several bipedal trajectories (or gaits), coded in task space, at different frequencies. The network model successfully learned all of the trajectories presented to it. The model has many attractive properties, such as limit cycle behavior, generalization of trajectories and frequencies, phase maintenance, and fault tolerance. The movement pattern generator model is potentially applicable for improved understanding of animal locomotion and for use in legged robots and rehabilitation medicine. >
---
paper_title: Fuzzy-logic zero-moment-point trajectory generation for reduced trunk motions of biped robots
paper_content:
Trunk motions are typically used in biped robots to stabilize the locomotion. However, they can be very large for some leg trajectories unless they are carefully designed. This paper proposes a fuzzy-logic zero-moment-point (ZMP) trajectory generator that would eventually reduce the swing motion of the trunk significantly even though the leg trajectory is casually designed, for example, simply to avoid obstacles. The fuzzy-logic ZMP trajectory generator uses the leg trajectory as an input. The resulting ZMP trajectory is similar to that of a human one and continuously moves forward in the direction of the locomotion. The trajectory of the trunk to stabilize the locomotion is determined by solving a differential equation with the ZMP trajectory and the leg trajectory known. The proposed scheme is simulated on a 7-DOF biped robot in the sagittal plane. The simulation results show that the ZMP trajectory generated by the proposed fuzzy-logic generator increases the stability of the locomotion and thus reduces the motion range of the trunk significantly.
---
paper_title: Humanoid robotics platforms developed in HRP
paper_content:
This paper presents a humanoid robotics platform that consists of a humanoid robot and an open architecture software platform developed in METI’s Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, has 1540 mm height, 58 kg weight and 30 degrees of freedom. The software platform includes a dynamics simulator and motion controllers of the robot for biped locomotion, falling and getting up motions. The platform has been used to develop various applications and is expected to initiate more humanoid robotics research.
---
paper_title: A Parametric Optimization Approach to Walking Pattern Synthesis
paper_content:
Walking pattern synthesis is carried out using a spline-based parametric optimization technique. Generalized coordinates are approximated by spline functions of class C3fitted at knots uniformly distributed along the motion time. This high-order differentiability eliminates jerky variations of actuating torques. Through connecting conditions, spline polynomial coefficients are determined as a linear function of the joint coordinates at knots. These values are then dealt with as optimization parameters. An optimal control problem is formulated on the basis of a performance criterion to be minimized, representing an integral quadratic amount of driving torques. Using the above spline approximations, this primary problem is recast into a constrained non-linear optimization problem of mathematical programming, which is solved using a computing code implementing an SQP algorithm. As numerical simulations, complete gait cycles are generated for a seven-link planar biped. The only kinematic data to be accounted for are the walking speeds. Optimization of both phases of gait is carried out globally; it includes the optimization of transition configurations of the biped between successive phases of the gait cycle.
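A minimal sketch of this optimization pipeline is given below for a single joint, using an ordinary cubic spline instead of the paper's class-C3 splines and a placeholder pendulum-like torque model in place of the biped's inverse dynamics; knot values, step duration and all constants are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

T_STEP = 0.5                                    # assumed step duration [s]
t_knots = np.linspace(0.0, T_STEP, 6)           # knots uniformly distributed over the step
t_eval = np.linspace(0.0, T_STEP, 200)

def torque_cost(knot_angles):
    """Integral-of-squared-torque style cost: the joint trajectory is a spline
    through the optimization parameters (the joint values at the knots)."""
    q = CubicSpline(t_knots, knot_angles)
    qdd = q.derivative(2)
    # Placeholder 'inverse dynamics' of a single pendulum-like joint; the real
    # method evaluates the full biped model here.
    tau = 1.0 * qdd(t_eval) + 9.81 * np.sin(q(t_eval))
    return float(np.sum(tau**2) * (t_eval[1] - t_eval[0]))

q0 = np.linspace(0.0, 0.4, 6)                   # initial guess for the knot angles
res = minimize(torque_cost, q0, method="SLSQP") # SQP-type solver, as used in the paper
print(res.x)                                    # optimized knot angles
```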
---
paper_title: A universal stability criterion of the foot contact of legged robots - adios ZMP
paper_content:
This paper proposes a universal stability criterion of the foot contact of legged robots. The proposed method checks if the sum of the gravity and the inertia wrench applied to the COG of the robot, which is proposed to be the stability criterion, is inside the polyhedral convex cone of the contact wrench between the feet of a robot and its environment. The criterion can be used to determine the strong stability of the foot contact when a robot walks on an arbitrary terrain and/or when the hands of the robot are in contact with it under the sufficient friction assumption. The determination is equivalent to checking if the ZMP is inside the support polygon of the feet when the robot walks on a horizontal plane with sufficient friction. The criterion can also be used to determine if the foot contact is sufficiently weakly stable when the friction follows a physical law. Therefore, the proposed criterion can be used to judge whatever the ZMP can judge, and it also applies in more general cases.
---
paper_title: Bipedal locomotion control using a four-compartmental central pattern generator
paper_content:
In this paper, we develop a simple bipedal locomotion algorithm based on biological concepts. The algorithm utilizes a central pattern generator (CPG) composed of four coupled neural oscillators (NO) to generate control signals for the bipedal robot. Feedback from the robot dynamics and the environment is used to update the CPG online. Our algorithm is then tested on a seven-link model of a bipedal robot. Simulation results suggest the proposed CPG can generate a smooth and continuous walking pattern for the robot.
---
paper_title: Online Generation of Cyclic Leg Trajectories Synchronized with Sensor Measurement
paper_content:
The generation of trajectories for a biped robot is a problem which has been widely studied for several years, and many satisfactory offline solutions exist for steady-state walking in the absence of disturbances. The question is a little more complex when the desired trajectories of joints or links have to be generated or adapted online, i.e. in real time, for example when these trajectories must be tightly synchronized with an external motion. This is precisely the problem addressed in this paper. Indeed, we consider the case where the “master” motion is measured by a position sensor embedded on a human leg. We propose a method to synchronize the motion of a robot or of another device with the output signal of the sensor. The main goal is to estimate as accurately as possible the current phase along the gait cycle. For that purpose we use a model based on a nonlinear oscillator, with which we associate an observer. Introducing the sensor output into the observer allows us to compute the oscillator phase and to generate a synchronized multi-link trajectory at a very low computational cost. The paper also presents evaluation results in terms of robustness against parameter estimation errors and velocity changes in the input.
---
paper_title: A Learning Architecture Based on Reinforcement Learning for Adaptive Control of the Walking Machine LAURON
paper_content:
The learning of complex control behaviour of autonomous mobile robots is one of the current research topics. In this article an intelligent control architecture is presented which integrates learning methods and available domain knowledge. This control architecture is based on Reinforcement Learning and allows continuous input and output parameters, hierarchical learning, multiple goals, self-organized topology of the used networks and online learning. As a testbed this architecture is applied to the six-legged walking machine LAURON to learn leg control and leg coordination.
---
paper_title: Soccer playing humanoid robots: Processing architecture, gait generation and vision system
paper_content:
Research on humanoid robotics in the Mechatronics and Automation (MA) Laboratory, Electrical and Computer Engineering (ECE), National University of Singapore (NUS) was started at the beginning of this decade. Various research prototypes for humanoid robots have been designed and have evolved over these years. These humanoids have successfully participated in various robotic soccer competitions. In this paper, three major research and development aspects of the above humanoid research are discussed. The paper focuses on various practical and theoretical considerations involved in processing architecture, gait generation and vision systems.
---
paper_title: Design of a Central Pattern Generator Using Reservoir Computing for Learning Human Motion
paper_content:
To generate coordinated periodic movements, robot locomotion demands mechanisms which are able to learn and produce stable rhythmic motion in a controllable way. Because systems based on biological central pattern generators (CPGs) can cope with these demands, this kind of system is gaining popularity. In this work we introduce a novel methodology that uses the dynamics of a randomly connected recurrent neural network for the design of CPGs. When a randomly connected recurrent neural network is excited with one or more useful signals, an output can be trained by learning an instantaneous linear mapping of the neuron states. This technique is known as reservoir computing (RC). We will show that RC has the necessary capabilities to be fruitful in designing a CPG that is able to learn human motion, which is applicable to imitation learning in humanoid robots.
---
paper_title: Real time gait generation for autonomous humanoid robots: A case study for walking
paper_content:
As autonomous humanoid robots assume more important roles in everyday life, they are expected to perform many different tasks and quickly adapt to unknown environments. Therefore, humanoid robots must quickly generate the appropriate gait based on information received from the vision system. In this work, we present a new method for real-time gait generation during walking based on Neural Networks. Minimum consumed energy gaits, similar to human motion, are used to teach the Neural Network. After supervised learning, the Neural Network can quickly generate the humanoid robot gait. Simulation and experimental results utilizing the “Bonten-Maru I” humanoid robot show good performance of the proposed method.
---
paper_title: Dynamic balance of a biped robot using fuzzy reinforcement learning agents
paper_content:
This paper presents a general fuzzy reinforcement learning (FRL) method for biped dynamic balance control. Based on a neuro fuzzy network architecture, different kinds of expert knowledge and measurement-based information can be incorporated into the FRL agent to initialise its action network, critic network and/or evaluation feedback module so as to accelerate its learning. The proposed FRL agent is constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that by incorporation of the human intuitive balancing knowledge and walking evaluation knowledge, the FRL agent's learning rate for side-to-side and front-to-back balance of the simulated biped can be improved. We also demonstrate that it is possible for a biped robot to start its walking with a priori knowledge and then learn to improve its behaviour with the FRL agents.
---
paper_title: Development of Adaptive Modular Active Leg (AMAL) using bipedal robotics technology
paper_content:
The objective of the work presented here is to develop a low cost active above knee prosthetic device exploiting bipedal robotics technology which will work utilizing the available biological motor control circuit properly integrated with a Central Pattern Generator (CPG) based control scheme. The approach is completely different from the existing Active Prosthetic devices, designed primarily as standalone systems utilizing multiple sensors and embedded rigid control schemes. In this research, first we designed a fuzzy logic based methodology for offering suitable gait pattern for an amputee, followed by formulating a suitable algorithm for designing a CPG, based on Rayleigh's oscillator. An indigenous probe, Humanoid Gait Oscillator Detector (HGOD) has been designed for capturing gait patterns from various individuals of different height, weight and age. These data are used to design a Fuzzy inference system which generates most suitable gait pattern for an amputee. The output of the Fuzzy inference system is used for designing a CPG best suitable for the amputee. We then developed a CPG based control scheme for calculating the damping profile in real time for maneuvering a prosthetic device called AMAL (Adaptive Modular Active Leg). Also a number of simulation results are presented which show the stable behavior of knee and hip angles and determine the stable limit cycles of the network.
---
paper_title: Fast Biped Walking with a Sensor-driven Neuronal Controller and Real-time Online Learning
paper_content:
In this paper, we present our design and experiments on a planar biped robot under the control of a pure sensor-driven controller. This design has some special mechanical features, for example small curved feet allowing rolling action and a properly positioned center of mass, that facilitate fast walking through exploitation of the robot's natural dynamics. Our sensor-driven controller is built with biologically inspired sensor- and motor-neuron models, and does not employ any kind of position or trajectory tracking control algorithm. Instead, it allows our biped robot to exploit its own natural dynamics during critical stages of its walking gait cycle. Due to the interaction between the sensor-driven neuronal controller and the properly designed mechanics of the robot, the biped robot can realize stable dynamic walking gaits in a large domain of the neuronal parameters. In addition, this structure allows the use of a policy gradient reinforcement learning algorithm to tune the parameters of the sensor-driven controller in real-time, during walking. This way RunBot can reach a relative speed of 3.5 leg lengths per second after only a few minutes of online learning, which is faster than that of any other biped robot, and is also comparable to the fastest relative speed of human walking.
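The flavour of such online policy-gradient tuning can be conveyed by a generic perturbation-based update (this is not RunBot's actual learning rule; `rollout_return` stands for a hypothetical routine that runs one walking trial with the given controller parameters and returns a scalar score such as walking speed):

```python
import numpy as np

def policy_gradient_step(theta, rollout_return, sigma=0.05, alpha=0.01, n=8):
    """Perturb the controller parameters, measure the returns of the perturbed
    rollouts, and move the parameters along the estimated gradient."""
    eps = [sigma * np.random.randn(theta.size) for _ in range(n)]
    returns = np.array([rollout_return(theta + e) for e in eps])
    baseline = returns.mean()                     # simple variance-reduction baseline
    grad = sum((r - baseline) * e for r, e in zip(returns, eps)) / (n * sigma**2)
    return theta + alpha * grad

# Dummy quadratic "return" used only to exercise the update rule.
demo_return = lambda th: -float(np.sum((th - 0.5) ** 2))
theta = np.zeros(4)
for _ in range(200):
    theta = policy_gradient_step(theta, demo_return)
```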
---
paper_title: Design of central pattern generator for humanoid robot walking based on multi-objective GA
paper_content:
Recently, the field of humanoid robotics has attracted more and more interest, and research on humanoid locomotion based on central pattern generators (CPGs) reveals many challenging aspects. This paper describes the design of a CPG for stable humanoid bipedal locomotion using an evolutionary approach. In this research, each joint of the humanoid is driven by a neuron that consists of two coupled neural oscillators, and the neurons of corresponding joints are connected by weighted links. To achieve a natural and robust walking pattern, an evolutionary multi-objective optimization algorithm is used to solve the weight optimization problem. The fitness functions are formulated based on the zero moment point (ZMP), the global attitude of the robot and the walking speed. In the algorithm, real-value coding and tournament selection are applied, and the crossover and mutation operators are chosen as heuristic crossover and boundary mutation, respectively. After evolution, the robot is able to walk in the given environment, and a simulation shows the result.
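A schematic single-objective version of such an evolutionary loop is sketched below (the paper itself uses a multi-objective GA, and `fitness` here is only a stand-in for a walking simulation scored on ZMP margin, attitude and speed; population size, bounds and generation count are illustrative):

```python
import random

DIM, POP, GENERATIONS = 12, 30, 50     # length of the CPG weight vector, population, generations
LOW, HIGH = -1.0, 1.0                  # assumed weight bounds

def fitness(weights):
    """Stand-in objective; the real fitness would simulate walking and score
    ZMP stability, body attitude and walking speed."""
    return -sum(w * w for w in weights)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

pop = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENERATIONS):
    children = []
    while len(children) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        if fitness(p2) > fitness(p1):  # heuristic crossover extrapolates from the
            p1, p2 = p2, p1            # worse parent toward (and past) the better one
        r = random.random()
        child = [min(HIGH, max(LOW, x + r * (x - y))) for x, y in zip(p1, p2)]
        child[random.randrange(DIM)] = random.choice([LOW, HIGH])   # boundary mutation
        children.append(child)
    pop = children
best = max(pop, key=fitness)
```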
---
paper_title: Pattern generators with sensory feedback for the control of quadruped locomotion
paper_content:
Central pattern generators (CPGs) are becoming a popular model for the control of locomotion of legged robots. Biological CPGs are neural networks responsible for the generation of rhythmic movements, especially locomotion. In robotics, a systematic way of designing such CPGs as artificial neural networks or systems of coupled oscillators with sensory feedback inclusion is still missing. In this contribution, we present a way of designing CPGs with coupled oscillators in which we can independently control the ascending and descending phases of the oscillations (i.e. the swing and stance phases of the limbs). Using insights from dynamical system theory, we construct generic networks of oscillators able to generate several gaits under simple parameter changes. Then we introduce a systematic way of adding sensory feedback from touch sensors in the CPG such that the controller is strongly coupled with the mechanical system it controls. Finally we control three different simulated robots (iCub, Aibo and Ghostdog) using the same controller to show the effectiveness of the approach. Our simulations prove the importance of independent control of swing and stance duration. The strong mutual coupling between the CPG and the robot allows for more robust locomotion, even under non precise parameters and non-flat environment.
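The core ingredient, independently adjustable swing and stance durations, can be sketched with a Hopf-style oscillator whose instantaneous frequency blends smoothly between two values depending on the sign of one state variable; this is a simplified reading of the approach, and all numeric parameters below are illustrative:

```python
import numpy as np

def cpg_step(x, y, omega_swing, omega_stance, mu=1.0, alpha=10.0, b=50.0, dt=0.001):
    """One Euler step of a Hopf-like oscillator whose frequency is close to
    omega_stance while y > 0 and close to omega_swing while y < 0, so the two
    half-cycles (stance and swing) can have different durations."""
    r2 = x * x + y * y
    omega = (omega_stance / (np.exp(-b * y) + 1.0)
             + omega_swing / (np.exp(b * y) + 1.0))
    dx = alpha * (mu - r2) * x - omega * y
    dy = alpha * (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

# Example: slow stance, fast swing; x can be mapped to a hip joint setpoint.
x, y = 1.0, 0.0
for _ in range(2000):
    x, y = cpg_step(x, y, omega_swing=4.0 * np.pi, omega_stance=2.0 * np.pi)
```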
---
paper_title: Evolving Dynamical Neural Networks for Adaptive Behavior
paper_content:
We would like the behavior of the artificial agents that we construct to be as well-adapted to their environments as natural animals are to theirs. Unfortunately, designing controllers with these properties is a very difficult task. In this article, we demonstrate that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers. A significant advantage of this approach is that one need specify only a measure of an agent's overall performance rather than the precise motor output trajectories by which it is achieved. By manipulating the performance evaluation, one can place selective pressure on the development of controllers with desired properties. Several novel controllers have been evolved, including a chemotaxis controller that switches between different strategies depending on environmental conditions, and a locomotion controller that takes advantage of sensory feedback if available but th...
---
paper_title: Optimal Path and Gait Generations Simultaneously of a Six-legged Robot Using a GA-Fuzzy Approach
paper_content:
This paper describes a new method for generating optimal path and gait simultaneously of a six-legged robot using a combined GA-fuzzy approach. The problem of combined path and gait generations involves three steps, namely determination of vehicle’s trajectory, foothold selection and design of a sequence of leg movements. It is a complicated task and no single traditional approach is found to be successful in handling this problem. Moreover, the traditional approaches do not consider optimization issues, yet they are computationally expensive. Thus, the generated path and gaits may not be optimal in any sense. To solve such problems optimally, there is still a need for the development of an efficient and computationally faster algorithm. In the proposed genetic-fuzzy approach, optimal path and gaits are generated by using fuzzy logic controllers (FLCs) and genetic algorithms (GAs) are used to find optimized FLCs. The optimization is done off-line on a number of training scenarios and optimal FLCs are found. The hexapod can then use these GA-tuned FLCs to navigate in test-case scenarios.
---
paper_title: Efference copies in neural control of dynamic biped walking
paper_content:
In the early 1950s, von Holst and Mittelstaedt proposed that motor commands copied within the central nervous system (efference copy) help to distinguish 'reafference' activity (afference activity due to self-generated motion) from 'exafference' activity (afference activity due to external stimulus). In addition, an efference copy can also be used for comparison with the actual sensory feedback in order to suppress self-generated sensations. Based on these biological findings, we conduct here two experimental studies on our biped “RunBot” where such principles together with neural forward models are applied to RunBot's dynamic locomotion control. The main purpose of this article is to present the modular design of RunBot's control architecture and discuss how the inherent dynamic properties of the different modules lead to the required signal processing. We believe that the experimental studies pursued here will sharpen our understanding of how the efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots.
---
paper_title: Neuroethological Concepts and their Transfer to Walking Machines
paper_content:
A systems approach to animal motor behavior reveals concepts that can be useful for the pragmatic design of walking machines. This is because the relation of animal behavior to its underlying nervous control algorithms bears many parallels to the relation of machine function to electronic control. Here, three major neuroethological concepts of motor behavior are described in terms of a conceptual framework based on artificial neural networks (ANN). Central patterns of activity and postural reflexes are both interpreted as a result of feedback loops, with the distinction of loops via an internal model from loops via the physical environment (body, external world). This view allows continuous transitions between predictive (centrally driven) and reactive (reflex driven) motor systems. Motor primitives, behavioral modules that are elicited by distinct commands, are also considered. ANNs capture these three major concepts in terms of a formal description, in which the interactions and mutual interdependences ...
---
paper_title: Gait Optimization through Search
paper_content:
We present a search-based method for the generation of a terrain-adaptive optimal gait of a six-legged walking machine. In this, several heuristic rules have been proposed to reduce the search effort. We identify the useful support states of the machine and form a table to indicate for each of these states the list of other states to which a transition can be made. This helps in converging to and maintaining a periodic gait through a limited search while retaining adequate options to deviate from such a gait as and when needed. The criterion for optimization is coded into a function that evaluates the promise of a node in the search graph. We have shown how this function may be designed to generate the common periodic gaits like the wave gait, the equal phase gait, and the follow-the-leader gait. The purpose is to demonstrate that the proposed method is sufficiently general and can cater to a wide range of optimizing requirements.
---
paper_title: Robustness of the dynamic walk of a biped robot subjected to disturbing external forces by using CMAC neural networks
paper_content:
In this paper, we propose a control strategy allowing us to perform the dynamic walking gait of an under-actuated robot even when it is subjected to destabilizing external disturbances. This control strategy is based on two stages. The first one consists of using a set of pragmatic rules in order to generate a succession of passive and active phases that produce a dynamic walking gait of the robot. The joint trajectories of this reference gait are learned by using neural networks. In the second stage, we use these neural networks to generate the trajectories learned during the first stage. The purpose of these neural networks is to increase the robustness of the control of the dynamic walking gait of this robot in the case of external disturbances. The first experimental results are also presented.
---
paper_title: A Control Approach for Actuated Dynamic Walking in Biped Robots
paper_content:
This paper presents an approach for the closed-loop control of a fully actuated biped robot that leverages its natural dynamics when walking. Rather than prescribing kinematic trajectories, the approach proposes a set of state-dependent torques, each of which can be constructed from a combination of low-gain spring-damper couples. Accordingly, the limb motion is determined by interaction of the passive control elements and the natural dynamics of the biped, rather than being dictated by a reference trajectory. In order to implement the proposed approach, the authors develop a model-based transformation from the control torques that are defined in a mixed reference frame to the actuator joint torques. The proposed approach is implemented in simulation on an anthropomorphic biped. The simulated biped is shown to converge to a stable, natural-looking walk from a variety of initial configurations. Based on these simulations, the mechanical cost of transport is computed and shown to be significantly lower than that of trajectory-tracking approaches to biped control, thus validating the ability of the proposed idea to provide efficient dynamic walking. Simulations further demonstrate walking at varying speeds and on varying ground slopes. Finally, controller robustness is demonstrated with respect to forward and backward push-type disturbances and with respect to uncertainty in model parameters.
---
paper_title: On-line stable gait generation of a two-legged robot using a genetic–fuzzy system
paper_content:
Gait generation for legged vehicles has long been considered an area of keen interest by researchers. Soft computing is an emerging technique whose utility is most evident when the problems are ill-defined, difficult to model and exhibit large-scale solution spaces. Gait generation for legged vehicles is a complex task; therefore, soft computing can be applied to solve it. In this work, the gait generation problem of a two-legged robot is modeled using a fuzzy logic controller (FLC), whose rule base is optimized offline using a genetic algorithm (GA). Two different GA-based approaches (to improve the performance of the FLC) are developed and their performances are compared to that of a manually constructed FLC. Once optimized, the FLCs are able to generate dynamically stable gaits of the biped. As the CPU time of the algorithm is found to be only 0.002 s on a P-III PC, the algorithm is suitable for on-line (real-time) implementation.
---
paper_title: Dynamic walking and running of a bipedal robot using hybrid central pattern generator method
paper_content:
This paper presents simulation and experimental results of dynamic walking and running of a bipedal robot. We propose the hybrid central pattern generator (H-CPG) method to realize adaptive dynamic motions including stepping and jumping. This method basically consists of CPG models, but a force control system is added that controls the force acting from a leg on the floor in the vertical and horizontal directions separately. So far, 2D walking and running have been realized on level ground and on a slope at a speed of 1.6 m/s.
---
paper_title: Biped dynamic walking using reinforcement learning
paper_content:
This paper presents some results from a study of biped dynamic walking using reinforcement learning. During this study a hardware biped robot was built, and a new reinforcement learning algorithm as well as a new learning architecture were developed. The biped learned dynamic walking without any previous knowledge about its dynamic model. The self scaling reinforcement (SSR) learning algorithm was developed in order to deal with the problem of reinforcement learning in continuous action domains. The learning architecture was developed in order to solve complex control problems. It uses different modules that consist of simple controllers and small neural networks. The architecture allows for easy incorporation of new modules that represent new knowledge, or new requirements for the desired task.
---
paper_title: Locomotion pattern generation of semi-looper type robots using central pattern generators based on van der Pol oscillators
paper_content:
A control problem is studied for generating the locomotion pattern of a semi-looper-type robot by applying central pattern generators (CPGs), so that the robot can realize two-rhythm motion and green-caterpillar locomotion depending on the environmental conditions. After deriving the dynamical model with two links and one actuator, the simulation of the robot is conducted using a CPG consisting of one van der Pol (VDP) oscillator. A CPG network composed of two VDP oscillators is further constructed to realize four-rhythm motion with three links and two actuators.
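As a minimal illustration of the building block only (not the paper's coupled two-oscillator network), a single van der Pol oscillator can be integrated as follows to obtain a rhythmic actuator command; the gains and scaling are illustrative:

```python
import numpy as np

def vdp_step(x, v, mu=1.0, omega=2.0 * np.pi, dt=0.001):
    """One semi-implicit Euler step of  x'' = mu*(1 - x^2)*x' - omega^2 * x."""
    a = mu * (1.0 - x * x) * v - omega * omega * x
    v_next = v + dt * a
    return x + dt * v_next, v_next

x, v, command = 0.1, 0.0, []
for _ in range(5000):                 # 5 s of the oscillator's limit-cycle rhythm
    x, v = vdp_step(x, v)
    command.append(0.3 * x)           # scaled into a joint command [rad]
```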
---
paper_title: A pattern generator of humanoid robots walking on a rough terrain using a handrail
paper_content:
This paper presents a motion pattern generator for humanoid robots walking on a flat plane, steps and rough terrain. It is rigorously guaranteed that the desired contact between a humanoid robot and the terrain is maintained by keeping the contact wrench sum between them inside the contact wrench cone under the sufficient friction assumption. A walking pattern is generated by solving the contact wrench equations and by applying resolved momentum control.
---
paper_title: Dynamics and balance of a humanoid robot during manipulation tasks
paper_content:
In this paper, we analyze the balance of a humanoid robot during manipulation tasks. By defining the generalized zero-moment point (GZMP), we obtain the region of it for keeping the balance of the robot during manipulation. During manipulation, the convex hull of the supporting points forms the 3-D convex polyhedron. The region of the GZMP is obtained by considering the infinitesimal displacement and the moment about the edges of the convex hull. We show that we can determine whether or not the robot may keep balance for several styles of manipulation tasks, such as pushing and pulling an object. The effectiveness of our proposed method is demonstrated by simulation.
---
paper_title: Dynamically balanced optimal gaits of a ditch-crossing biped robot
paper_content:
This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure its smooth walking. It is to be noted that the power consumption and dynamic balance of the robot are also dependent on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to provide training off-line, to the NN-based and FL-based gait planners developed. Once optimized, the planners will be able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners are able to generate more balanced gaits and that, too, at the cost of lower power consumption compared to those yielded by the analytical approach. The NN-based and FL-based approaches are found to be more adaptive compared to the other approach in generating the gaits of the biped robot.
---
paper_title: An effective trajectory generation method for bipedal walking
paper_content:
This paper presents the virtual height inverted pendulum mode (VHIPM), which is a simple and effective trajectory generation method for the stable walking of biped robots. VHIPM, which is based on the inverted pendulum mode (IPM), can significantly reduce the zero moment point (ZMP) error by adjusting the height in the inverted pendulum. We show the relationship between VHIPM and other popular trajectory generation methods, and compare the ZMP errors in walking when trajectories are generated by various methods including VHIPM. We also investigate the sensitivity of the ZMP error in VHIPM to the step length, walking period and mass distribution of a robot. The simulation results show that VHIPM significantly reduces the ZMP errors compared to other methods under various circumstances.
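The quantity being traded off here is the standard sagittal-plane ZMP of a point-mass model with CoM position $(x_c, z_c)$; adjusting the height, as VHIPM does, changes the second term, whereas the constant-height IPM keeps it fixed (a textbook relation, reproduced only for context):

```latex
% ZMP of a single point mass in the sagittal plane (ground at height 0)
x_{\mathrm{zmp}} = x_c - \frac{z_c\,\ddot{x}_c}{\ddot{z}_c + g},
\qquad\text{which for constant CoM height reduces to}\qquad
x_{\mathrm{zmp}} = x_c - \frac{z_c}{g}\,\ddot{x}_c .
```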
---
paper_title: Landing Force Control for Humanoid Robot by Time-Domain Passivity Approach
paper_content:
This paper proposes a control method to absorb the landing force or ground reaction force for stable dynamic walking of a humanoid robot. A humanoid robot may become unstable during walking due to the impulsive contact force of the sudden landing of its foot. Therefore, a control method to decrease the landing force is required. In this paper, a time-domain passivity control approach is applied for this purpose. The ground and the foot of the robot are modeled as two connected one-port network systems that exchange energy with each other. A time-domain passivity controller with admittance causality is implemented, which takes the landing force as input and the foot position that trims off the force as output. The proposed landing force controller can enhance the stability of the walking robot with simple computation. The small-sized humanoid robot HanSaRam-VII, which has 27 DOFs, is developed to verify the proposed scheme through dynamic walking experiments.
---
paper_title: OpenHRP: Open Architecture Humanoid Robotics Platform
paper_content:
This paper introduces an open architecture humanoid robotics platform (OpenHRP for short) on which various building blocks of humanoid robotics can be investigated. OpenHRP is a virtual humanoid robot platform with a compatible humanoid robot, and consists of a simulator of humanoid robots and a motion control library for them, which can also be applied to a compatible humanoid robot as it is. OpenHRP also has a view simulator of humanoid robots on which humanoid robot vision can be studied. The consistency between the simulator and the robot is enhanced by introducing a new algorithm to simulate repulsive force and torque between contacting objects. OpenHRP is expected to initiate the exploration of humanoid robotics on open architecture software and hardware, thanks to the unification of the controllers and the examined consistency between the simulator and a real humanoid robot.
---
paper_title: Comparison of different gaits with rotation of the feet for a planar biped
paper_content:
Fast human walking includes a phase where the stance heel rises from the ground and the stance foot rotates about the stance toe. This phase, where the biped becomes under-actuated, is not present during the walk of humanoid robots. The objective of this study is to determine if this phase is useful to reduce the energy consumed in walking. In order to study the efficiency of this phase, six cyclic gaits are presented for a planar biped robot. The simplest cyclic motion is composed of successive single support phases with flat stance foot on the ground. The most complex cyclic motion is composed of single support phases that include a sub-phase of rotation of the stance foot about the toe, and of finite-time double support phases. For the synthesis of these walking gaits, optimal motions with respect to the torque cost are defined, taking into account given actuator performance limits. It is shown that for fast motions a foot rotation sub-phase is useful to reduce the cost criterion. In the optimization process, the under-actuated phase (foot rotation), the fully-actuated phase (flat foot) and the over-actuated phase (double support) are considered.
---
paper_title: Modifiable Walking Pattern of a Humanoid Robot by Using Allowable ZMP Variation
paper_content:
In order to handle complex navigational commands, this paper proposes a novel algorithm that can modify a walking period and a step length in both sagittal and lateral planes. By allowing a variation of zero moment point (ZMP) over the convex hull of foot polygon, it is possible to change the center of mass (CM) position and velocity independently throughout the single support phase. This permits a range of dynamic walking motion, which is not achievable using the 3-D linear inverted pendulum mode (3D-LIPM). In addition, the proposed algorithm enables to determine the dynamic feasibility of desired motion via the construction of feasible region, which is explicitly computed from the current CM state with simple ZMP functions. Moreover, adopting the closed-form functions makes it possible to calculate the algorithm in real time. The effectiveness of the proposed algorithm is demonstrated through both computer simulation and experiment on the humanoid robot, HanSaRam-VII, developed at the Robot Intelligence Technology (RIT) laboratory, Korea Advanced Institute of Science and Technology (KAIST).
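For context, the closed-form CoM evolution of the 3D-LIPM that such pattern modifications build on is, for one sagittal axis during a support phase with a constant ZMP $p$ (CoM height $z_c$, time constant $T_c = \sqrt{z_c/g}$):

```latex
% sagittal CoM motion of the linear inverted pendulum with constant ZMP p
x(t) = p + (x_0 - p)\cosh\frac{t}{T_c} + T_c\,\dot{x}_0\sinh\frac{t}{T_c},
\qquad
\dot{x}(t) = \frac{x_0 - p}{T_c}\,\sinh\frac{t}{T_c} + \dot{x}_0\cosh\frac{t}{T_c}.
```

Allowing the ZMP $p$ to vary within the support polygon, as the abstract describes, enlarges the set of reachable CoM states beyond what this constant-$p$ solution admits.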
---
paper_title: Experimental Validation of a Framework for the Design of Controllers that Induce Stable Walking in Planar Bipeds
paper_content:
In this paper we present the experimental validation of a framework for the systematic design, analysis, and performance enhancement of controllers that induce stable walking in N -link underactuated planar biped robots. Controllers designed via this framework act by enforcing virtual constraints—holonomic constraints imposed via feedback—on the robot’s configuration, which create an attracting two-dimensional invariant set in the full walking model’s state space. Stability properties of resulting walking motions are easily analyzed in terms of a two-dimensional subdynamic of the full walking model. A practical introduction to and interpretation of the framework is given. In addition, in this paper we develop the ability to regulate the average walking rate of the biped to a continuum of values by modification of within-stride and stride-boundary characteristics, such as step length.
---
paper_title: Biped gait generation and control based on a unified property of passive dynamic walking
paper_content:
Principal mechanisms of passive dynamic walking are studied from the mechanical energy point of view, and novel gait generation and control methods based on passive dynamic walking are proposed. First, a unified property of passive dynamic walking is derived, which shows that the walking system's mechanical energy increases proportionally with respect to the position of the system's center of mass. This yields an interesting indeterminate equation that determines the relation between the system's control torques and its center of mass. By solving this indeterminate equation for the control torque, active dynamic walking on a level can then be realized. In addition, the applications to the robust energy referenced control are discussed. The effectiveness and control performances of the proposed methods have been investigated through numerical simulations.
---
paper_title: Self-Excited Walking of a Biped Mechanism
paper_content:
The authors studied the self-excited walking of a four-link biped mechanism that possesses an actuated hip joint and passive knee joints with stoppers. By numerical simulation, they showed that the self-excitation control enables the three-degree-of-freedom planar biped model to walk on level ground. From the parameter study, it was found that stable walking locomotion is possible over a wide range of feedback gain and link parameter values and that the walking period is almost independent of the feedback gain. Various characteristics of the self-excited walking of a biped mechanism were examined in relation to leg length, and length and mass ratios of the shank. Next, a biped mechanism similar to the analytical model was manufactured. After parameter modification, the authors demonstrated that the biped robot can perform natural dynamic walking on a plane with a 0.8 degree inclination. The simulated results also agree with the experimental walking locomotion.
---
paper_title: Real-time humanoid motion generation through ZMP manipulation based on inverted pendulum control
paper_content:
A humanoid robot is expected to be a rational form of machine to act in the real human environment and support people through interaction with them. Current humanoid robots, however, lack in adaptability, agility, or high-mobility enough to meet the expectations. In order to enhance high-mobility, the humanoid motion should be generated in real-time in accordance with the dynamics, which commonly requires a large amount of computation and has not been implemented so far. We have developed a real-time motion generation method that controls the center of gravity (COG) by indirect manipulation of the zero moment point (ZMP). The real-time response of the method provides humanoid robots with high-mobility. In the paper, the algorithm is presented. It consists of four parts, namely, the referential ZMP planning, the ZMP manipulation, the COG velocity decomposition to joint angles, and local control of joint angles. An advantage of the algorithm lies in its applicability to humanoids with a lot of degrees of freedom. The effectiveness of the proposed method is verified by computer simulations.
---
paper_title: A Pattern Generator of Humanoid Robots Walking on a Rough Terrain
paper_content:
This paper presents a biped humanoid robot that is able to walk on a rough terrain while touching a handrail. The contact wrench sum (CWS for short) is used as the criterion to judge if the contact between the robot and the environment is strongly stable under the sufficient friction assumption, where the contact points are not coplanar and the normal vectors at the points are not identical. It is confirmed that the proposed pattern generator can make the robot walk as desired in dynamics simulations and experiments, and the motions can be improved by a hand position control and using waist joints.
---
paper_title: Hardware design and gait generation of humanoid soccer robot Stepper-3D
paper_content:
This paper presents the hardware design and gait generation of the humanoid soccer robot Stepper-3D. Virtual Slope Walking, inspired by Passive Dynamic Walking, is introduced for gait generation. In Virtual Slope Walking, by actively extending the stance leg and shortening the swing leg, the robot walks on level ground as if it were walking down a virtual slope. In practice, Virtual Slope Walking is generated by connecting three key frames in the sagittal plane with sinusoids. Aiming to improve walking stability, a parallel double-crank mechanism is adopted in the leg structure. Experimental results show that Stepper-3D achieves a fast forward walking speed of 0.5 m/s and accomplishes omnidirectional walking. Stepper-3D performed fast and stable walking in the RoboCup 2008 Humanoid competitions.
---
paper_title: Bipedal Locomotion: Stopping and the Standing/Balance Gait
paper_content:
A bipedal locomotion system is synthesized to characterize some of the previously overlooked aspects of the locomotion process, specifically standing/balance and initiation and stopping. The locomotion system is described by a three-element three-dimensional model consisting of two lower limbs and an upper body. The system equations of motion are derived using variational methods, and are retained in their nonlinear form. The impulsive contact events of impact of the swing limb with the ground and transfer of support are incorporated into the model. Bipedal locomotion is synthesized through numerical simulations. Several control inputs are studied for establishing and sustaining the standing/balance gait. Subsequent motion is analyzed via phase-space portraits. It is shown that an impulsive torque is sufficient for establishing and controlling the standing/balance gait as well as steady locomotion.
---
paper_title: Self-Excited Walking of a Biped Mechanism with Feet
paper_content:
In this paper we present a theoretical and experimental study of the self-excited walking of a biped mechanism with knees and feet on level ground. This biped mechanism possesses a single motor at the hip joint and active lock mechanisms at both knee joints. We first show that the self-excitation control enables the three-degrees-of-freedom planar biped model with feet to walk on level ground, and that a stable walking locomotion is possible over a wide range of feedback gain and a foot radius of up to 0.3 m. Subsequently, we describe the manufactured biped Robot 2 which is similar to the analytical model and we show that the biped robot can perform natural dynamic walking on a level floor. It is also shown that the simulated results agree well with the experimental results. We further describe the wireless-controlled biped Robot 3, which can walk on a level floor carrying batteries and electronic circuits inside the thighs.
---
paper_title: Posture/Walking Control for Humanoid Robot Based on Kinematic Resolution of CoM Jacobian With Embedded Motion
paper_content:
This paper proposes the walking pattern generation method, the kinematic resolution method of center of mass (CoM) Jacobian with embedded motions, and the design method of posture/walking controller for humanoid robots. First, the walking pattern is generated using the simplified model for bipedal robot. Second, the kinematic resolution of CoM Jacobian with embedded motions makes a humanoid robot balanced automatically during movement of all other limbs. Actually, it offers an ability of whole body coordination to humanoid robot. Third, the posture/walking controller is completed by adding the CoM controller minus the zero moment point controller to the suggested kinematic resolution method. We prove that the proposed posture/walking controller brings the disturbance input-to-state stability for the simplified bipedal walking robot model. Finally, the effectiveness of the suggested posture/walking control method is shown through experiments with regard to the arm dancing and walking of humanoid robot.
---
paper_title: Energy-Efficient and High-Speed Dynamic Biped Locomotion Based on Principle of Parametric Excitation
paper_content:
Through studies on passive dynamic walking mechanisms, we clarified that the common necessary condition for generating a dynamic gait results from the requirement to restore mechanical energy. This paper proposes a novel method of generating a dynamic gait, inspired by the mechanism of a swing and the principle of parametric excitation, using telescopic leg actuation. We first introduce a simple underactuated biped model with telescopic legs and semicircular feet and propose a law to control the telescopic leg motion. We found that a high-speed dynamic bipedal gait can easily be generated by only pumping the swing leg mass. We then conducted parametric studies by adjusting the control and physical parameters and determined how well the basic gait performed by introducing some performance indexes. Improvements in energy efficiency by using an elastic-element effect were also numerically investigated. Further, we theoretically proved that semicircular feet have a mechanism that decreases the energy dissipated by heel-strike collisions. We provide insights throughout this paper into how zero-moment-point-free robots can generate a novel biped gait.
---
paper_title: Observer-based dynamic walking control for biped robots
paper_content:
This article presents a novel observer-based control system to achieve reactive motion generation for dynamic biped walking. The proposed approach combines a feedback controller with an online-generated foot pattern to assure a stable gait. Using the desired speed of the robot, a preview control system derives the dynamics of the robot's body, and thereby the trajectory of its center of mass, to ensure a zero moment point (ZMP) movement, which results in a stable execution of the calculated step pattern. Extending the control system by an observer, based on this knowledge and the measured sensor values, compensates for errors in the model parameters and disturbances encountered while walking.
---
paper_title: Measurement and comparison of humanoid H7 walking with human being
paper_content:
This paper describes our research efforts aimed at understanding human walking functions. Using a motion-capture system, force plates and distributed force sensors, the walking motions of both a human and the humanoid H7 were captured. Experimental results are shown. Comparisons between the human and H7 walks were made using the following characteristics: (1) ZMP trajectories; (2) torso movement; (3) free-leg trajectories; (4) joint angle usage; (5) joint torque usage. Furthermore, the implications of these comparisons for the humanoid robot are discussed.
---
paper_title: Fuzzy-logic zero-moment-point trajectory generation for reduced trunk motions of biped robots
paper_content:
Trunk motions are typically used in biped robots to stabilize the locomotion. However, they can be very large for some leg trajectories unless they are carefully designed. This paper proposes a fuzzy-logic zero-moment-point (ZMP) trajectory generator that would eventually reduce the swing motion of the trunk significantly even though the leg trajectory is casually designed, for example, simply to avoid obstacles. The fuzzy-logic ZMP trajectory generator uses the leg trajectory as an input. The resulting ZMP trajectory is similar to that of a human one and continuously moves forward in the direction of the locomotion. The trajectory of the trunk to stabilize the locomotion is determined by solving a differential equation with the ZMP trajectory and the leg trajectory known. The proposed scheme is simulated on a 7-DOF biped robot in the sagittal plane. The simulation results show that the ZMP trajectory generated by the proposed fuzzy-logic generator increases the stability of the locomotion and thus reduces the motion range of the trunk significantly.
---
paper_title: Humanoid robotics platforms developed in HRP
paper_content:
This paper presents a humanoid robotics platform that consists of a humanoid robot and an open-architecture software platform developed in METI's Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, is 1540 mm tall, weighs 58 kg and has 30 degrees of freedom. The software platform includes a dynamics simulator and motion controllers of the robot for biped locomotion, falling and getting-up motions. The platform has been used to develop various applications and is expected to initiate more humanoid robotics research.
---
paper_title: Recent progress and development of the humanoid robot HanSaRam
paper_content:
This paper presents an overview of the recent progress and development of the humanoid robot HanSaRam series, which has been developed in the Robot Intelligence Technology (RIT) Laboratory, KAIST, since 2000. The HanSaRam series has been designed and developed as a small-sized robot for research on walking gait generation, navigation, task planning and the HuroCup of FIRA. In particular, the performance of the 7th and 8th versions has been remarkably improved in terms of walking pattern generation and task planning. This paper describes the overall design and architecture of the two recently developed versions of HanSaRam, along with a vision simulator tool and the real-time walking gait generation scheme, the modifiable walking pattern generator.
---
paper_title: Design and construction of a series of compact humanoid robots and development of biped walk control strategies
paper_content:
The design and construction of compact-body humanoid robots, and the various biped locomotion control strategies implemented on them in the ESYS humanoid project at the Engineering Systems Laboratory, are presented. Design concepts and hardware specifications of the constructed compact-size humanoid robots from Mk.1 to Mk.5 are discussed. As for biped walk control, four biped locomotion control strategies are proposed, all of which have various advantages such as versatility, high energy efficiency, smooth loading on the hardware, and real-time gait generation. 3D biped dynamic walking of the constructed humanoids is realized by implementing the proposed biped control strategies on them. Results of evaluation experiments on the proposed control strategies are reported.
---
paper_title: A Parametric Optimization Approach to Walking Pattern Synthesis
paper_content:
Walking pattern synthesis is carried out using a spline-based parametric optimization technique. Generalized coordinates are approximated by spline functions of class C^3 fitted at knots uniformly distributed along the motion time. This high-order differentiability eliminates jerky variations of actuating torques. Through connecting conditions, spline polynomial coefficients are determined as a linear function of the joint coordinates at knots. These values are then dealt with as optimization parameters. An optimal control problem is formulated on the basis of a performance criterion to be minimized, representing an integral quadratic amount of driving torques. Using the above spline approximations, this primary problem is recast into a constrained non-linear optimization problem of mathematical programming, which is solved using a computing code implementing an SQP algorithm. As numerical simulations, complete gait cycles are generated for a seven-link planar biped. The only kinematic data to be accounted for are the walking speeds. Optimization of both phases of gait is carried out globally; it includes the optimization of transition configurations of the biped between successive phases of the gait cycle.
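As an illustration of the spline-parameterization idea (not the authors' implementation), the sketch below optimizes the interior knot values of a single joint trajectory with SciPy. It uses an ordinary cubic spline instead of the C^3 splines of the paper, the integral of squared acceleration as a stand-in for the torque-based criterion, and assumed values for the step duration, knot count and boundary angles.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

T = 1.0                              # step duration [s] (assumed)
knots = np.linspace(0.0, T, 7)       # uniformly distributed knots, as in the paper
t_eval = np.linspace(0.0, T, 200)
q0, qT = 0.0, 0.5                    # boundary joint angles [rad] (assumed)

def cost(free_knot_values):
    # Interior knot values are the optimization parameters;
    # boundary values are fixed by the required configurations.
    q_knots = np.concatenate(([q0], free_knot_values, [qT]))
    spline = CubicSpline(knots, q_knots, bc_type='clamped')  # zero end velocities
    qdd = spline(t_eval, 2)                                  # joint acceleration
    return np.trapz(qdd**2, t_eval)    # integral-quadratic criterion (torque proxy)

x0 = np.linspace(q0, qT, len(knots))[1:-1]   # straight-line initial guess
res = minimize(cost, x0, method='SLSQP')     # SQP, as used in the paper
print(res.x, res.fun)
```

In the paper the decision variables are the joint coordinates at the knots for all joints, with the SQP solver also handling walking-speed and phase-transition constraints; the sketch keeps only the unconstrained core of that formulation.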
---
paper_title: Programming full-body movements for humanoid robots by observation
paper_content:
The formulation and optimization of joint trajectories for humanoid robots is quite different from the same task for standard robots because of the complexity of humanoid robots' kinematics and dynamics. In this paper we exploit the similarity between human motion and humanoid robot motion to generate joint trajectories for humanoids. In particular, we show how to transform human motion information captured by an optical tracking device into a high-dimensional trajectory for a humanoid robot. We propose an automatic approach to relate humanoid robot kinematic parameters to the kinematic parameters of a human performer. Based on this relationship we infer the desired trajectories in robot joint space. B-spline wavelets are utilized to efficiently represent the trajectories. The density of the basis functions on the time axis is selected automatically. Large-scale optimization techniques are employed to solve the underlying computational problems efficiently. We applied our method to the task of teaching a humanoid robot how to make various natural-looking movements.
---
paper_title: A universal stability criterion of the foot contact of legged robots - adios ZMP
paper_content:
This paper proposes a universal stability criterion for the foot contact of legged robots. The proposed method checks whether the sum of the gravity and inertia wrenches applied to the COG of the robot, which is proposed as the stability criterion, lies inside the polyhedral convex cone of the contact wrench between the feet of the robot and its environment. The criterion can be used to determine the strong stability of the foot contact when a robot walks on arbitrary terrain and/or when the hands of the robot are in contact with it, under the sufficient-friction assumption. This determination is equivalent to checking whether the ZMP is inside the support polygon of the feet when the robot walks on a horizontal plane with sufficient friction. The criterion can also be used to determine whether the foot contact is sufficiently weakly stable when the friction follows a physical law. Therefore, the proposed criterion can be used to judge whatever the ZMP can, and it can also be used in more universal cases.
---
paper_title: Soccer playing humanoid robots: Processing architecture, gait generation and vision system
paper_content:
Research on humanoid robotics in the Mechatronics and Automation (MA) Laboratory, Electrical and Computer Engineering (ECE), National University of Singapore (NUS) was started at the beginning of this decade. Various research prototypes for humanoid robots have been designed and have evolved over these years. These humanoids have successfully participated in various robotic soccer competitions. In this paper, three major research and development aspects of the above humanoid research are discussed. The paper focuses on various practical and theoretical considerations involved in processing architecture, gait generation and vision systems.
---
paper_title: Postural Stability of Biped Robots and the Foot-Rotation Indicator (FRI) Point
paper_content:
The focus of this paper is the problem of foot rotation in biped robots during the single-support phase. Foot rotation is an indication of postural instability, which should be carefully treated in a dynamically stable walk and avoided altogether in a statically stable walk. We introduce the foot-rotation indicator (FRI) point, which is a point on the foot/ground-contact surface where the net ground-reaction force would have to act to keep the foot stationary. To ensure no foot rotation, the FRI point must remain within the convex hull of the foot-support area. In contrast with the ground projection of the center of mass (GCoM), which is a static criterion, the FRI point incorporates robot dynamics. As opposed to the center of pressure (CoP)—better known as the zero-moment point (ZMP) in the robotics literature—which may not leave the support area, the FRI point may leave the area. In fact, the position of the FRI point outside the footprint indicates the direction of the impending rotation and the magnitude of rotational moment acting on the foot. Owing to these important properties, the FRI point helps not only to monitor the state of postural stability of a biped robot during the entire gait cycle, but indicates the severity of instability of the gait as well. In response to a recent need, the paper also resolves the misconceptions surrounding the CoP/ZMP equivalence.
---
paper_title: Control of walking robots based on manipulation of the zero moment point
paper_content:
In this paper, a new application of the ZMP (Zero Moment Point) control law is presented. The objective of this control method is to obtain a smooth and soft motion based on a real-time control. In the controller, the ZMP is treated as an actuating signal. The coordinates of the robot body are fed back to obtain its position. The proposed control method was applied on two different biped robots, and its validity is verified experimentally.
---
paper_title: Compliant Terrain Adaptation for Biped Humanoids Without Measuring Ground Surface and Contact Forces
paper_content:
This paper reports the applicability of our passivity-based contact force control framework for biped humanoids. We experimentally demonstrate its adaptation to unknown rough terrain. Adaptation to uneven ground is achieved by optimally distributed antigravitational forces applied to preset contact points in a feedforward manner, even without explicitly measuring the external forces or the terrain shape. Adaptation to unknown inclination is also possible by combining an active balancing controller based on the center-of-mass (CoM) measurements with respect to the inertial frame. Furthermore, we show that a simple impedance controller for supporting the feet or hands allows the robot to adapt to low-friction ground without prior knowledge of the ground friction. This presentation includes supplementary experimental videos that show a full-sized biped humanoid robot balancing on uneven ground or time-varying inclination.
---
paper_title: ZERO-MOMENT POINT — THIRTY FIVE YEARS OF ITS LIFE
paper_content:
In Vol. 1, No. 1 of Int. J. Humanoid Robotics, we gave a review of the Zero-Moment Point (ZMP) concept, on the occasion of thirty-five years of its use in modeling the dynamics of the biped gait. At the time of writing this article, we thought that thirty-five years was a long enough period in which all confusions, if they existed, could be cleared up. However, we were not quite right, and we will indicate here some of the most important points concerning the concept. A survey of the literature that appeared in the last several years related to the widely accepted ZMP method reveals a diversity of terms used to describe the state in which the biped system performs a "regular" gait, safe from overturning. The terms in use are: stability (stable gait), dynamic balance, and dynamic equilibrium. Stability is a customary term in the theory of automatic control, and in this domain it has its well-defined meaning and usage, so that it is not appropriate to use it in some other instances, straying from its basic meaning. The term dynamic equilibrium is also unsuitable because it is used in d'Alembert's principle to transform dynamic equations into a static form, with a zero on the right-hand side — hence the term equilibrium. Therefore, dynamic balance remains the most appropriate term, as it fully reflects the nature of the related notion, and is not in use in other areas. We ourselves did not make clear this terminological distinction in the aforementioned article, which would have been prudent. Another issue we want to address is the situation illustrated in Fig. 4 in the paper — to make it easier to follow the explanation we repeat it here — concerning the notions of ZMP and fictitious ZMP (FZMP). As is well known, the crucial characteristic of a dynamically balanced gait is that the point at which the conditions ΣMx = 0, ΣMy = 0 are fulfilled is within the support polygon (Fig. 4), whereby
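For readers outside the field, the balance condition quoted above is often evaluated with the cart-table (linear inverted pendulum) approximation. The sketch below is a minimal illustration, not taken from the cited article; the CoM height, support-polygon bounds and sample accelerations are assumed values.

```python
import numpy as np

g = 9.81      # gravity [m/s^2]
z_c = 0.80    # assumed constant CoM height [m]

def zmp_from_com(x_com, x_ddot_com, y_com, y_ddot_com):
    """Cart-table (LIPM) approximation: p = c - (z_c / g) * c_ddot."""
    p_x = x_com - (z_c / g) * x_ddot_com
    p_y = y_com - (z_c / g) * y_ddot_com
    return p_x, p_y

def inside_support(p_x, p_y, x_min, x_max, y_min, y_max):
    """Dynamic balance check: ZMP inside a rectangular support polygon."""
    return (x_min <= p_x <= x_max) and (y_min <= p_y <= y_max)

# Example: CoM 2 cm ahead of the ankle, accelerating forward at 0.5 m/s^2
p_x, p_y = zmp_from_com(0.02, 0.5, 0.0, 0.0)
print(p_x, inside_support(p_x, p_y, -0.05, 0.15, -0.06, 0.06))
```

In this approximation a gait is judged dynamically balanced as long as the computed point stays strictly inside the support polygon; when the corresponding point falls outside it, the foot is about to rotate and the computed location is the fictitious ZMP mentioned above.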
---
paper_title: Ground Reference Points in Legged Locomotion: Definitions, Biological Trajectories and Control Implications
paper_content:
The zero moment point (ZMP), foot rotation indicator (FRI) and centroidal moment pivot (CMP) are important ground reference points used for motion identification and control in biomechanics and legged robotics. In this paper, we study these reference points for normal human walking, and discuss their applicability in legged machine control. Since the FRI was proposed as an indicator of foot rotation, we hypothesize that the FRI will closely track the ZMP in early single support when the foot remains flat on the ground, but will then significantly diverge from the ZMP in late single support as the foot rolls during heel-off. Additionally, since spin angular momentum has been shown to remain small throughout the walking cycle, we hypothesize that the CMP will never leave the ground support base throughout the entire gait cycle, closely tracking the ZMP. We test these hypotheses using a morphologically realistic human model and kinetic and kinematic gait data measured from ten human subjects walking at self-...
---
paper_title: Running in Three Dimensions: Analysis of a Point-mass Sprung-leg Model
paper_content:
We analyze a simple model for running: a three-dimensional spring-loaded inverted pendulum carrying a point mass (3D-SLIP). Our formulation reduces to the sagittal plane SLIP and horizontal plane lateral leg spring (LLS) models in the appropriate limits. Using the intrinsic geometry and symmetries and appealing to the case of stiff springs, in which gravity may be neglected during stance, we derive an explicit approximate mapping describing stride-to-stride behavior. We thereby show that all left-right symmetric periodic gaits are unstable, deriving a particularly simple mapping for sagittal plane dynamics. Continuation to fixed points for the "exact" mapping confirms instability of these gaits, and we describe a simple feedback stabilization scheme for leg placement at touchdown.
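A minimal planar (sagittal-plane) version of the stance dynamics used in such SLIP models is sketched below; it assumes a point mass on a massless linear spring leg anchored at a fixed foothold, plain Euler integration, and illustrative parameter values rather than those of the paper.

```python
import numpy as np

m, k, l0, g = 80.0, 15000.0, 1.0, 9.81   # mass, leg stiffness, rest length (assumed)
foot = np.array([0.0, 0.0])              # stance foothold

def stance_step(pos, vel, dt=1e-4):
    """One Euler step of point-mass SLIP stance dynamics."""
    leg = pos - foot
    l = np.linalg.norm(leg)
    f_spring = k * (l0 - l) * leg / l        # radial spring force on the mass
    acc = f_spring / m + np.array([0.0, -g])
    return pos + dt * vel, vel + dt * acc

# Integrate from touchdown until the leg re-extends to its rest length (take-off)
pos = np.array([-0.2, 0.95])   # CoM slightly behind the foot, leg compressed
vel = np.array([3.0, -0.5])
while np.linalg.norm(pos - foot) < l0:
    pos, vel = stance_step(pos, vel)
print("take-off state:", pos, vel)
```

Apex-to-apex return maps like those analyzed in the paper are obtained by alternating this stance phase with ballistic flight phases and a leg-placement rule at touchdown.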
---
paper_title: On the mechanics of natural compliance in frictional contacts and its effect on grasp stiffness and stability
paper_content:
This paper considers the effect of natural material compliance on the stiffness and stability of frictional multi-contact grasps and fixtures. The contact preload profile is a key parameter in the nonlinear compliance laws governing such contacts. The paper introduces the Hertz-Walton contact compliance model which is valid for linear contact loading profiles. The model is specified in a lumped parameter form suitable for on-line grasping applications, and is entirely determined by the contact friction and by the material and geometric properties of the contacting bodies. The model predicts an asymmetric stiffening of the tangential reaction force as the normal load at the contact increases. As a result, the composite stiffness matrix of multi-contact grasps governed by natural compliance effects is asymmetric, indicating that these contact arrangements are not governed by any potential energy function. Based on the compliant grasp dynamics, the paper derives rules indicating which contact point locations and what preload profiles guarantee grasp and fixture stability. The paper also describes preliminary experiments supporting the contact model predictions.
---
paper_title: Biped robot design powered by antagonistic pneumatic actuators for multi-modal locomotion
paper_content:
An antagonistic muscle mechanism that regulates joint compliance contributes enormously to human dynamic locomotion. Antagonism is considered to be the key for realizing more than one locomotion mode. In this paper, we demonstrate how antagonistic pneumatic actuators can be utilized to achieve three dynamic locomotion modes (walking, jumping, and running) in a biped robot. Firstly, we discuss the contribution of joint compliance to dynamic locomotion, which highlights the importance of tunable compliance. Secondly, we introduce the design of a biped robot powered by antagonistic pneumatic actuators. Lastly, we apply simple feedforward controllers for realizing walking, jumping, and running and confirm the contribution of joint compliance to such multimodal dynamic locomotion. Based on the results, we can conclude that the antagonistic pneumatic actuators are superior candidates for constructing a human-like dynamic locomotor.
---
paper_title: Reactive reflex-based control for a four-legged walking machine
paper_content:
This paper presents methods and experiments for a reactive control architecture for a four-legged walking machine. Starting with a description of the existing control architecture, we introduce the concepts of reflexes and behaviours as well as their integration into the system. The reactive network used and the development process are described in detail. The paper concludes with a description of various experiments.
---
paper_title: Modeling and Experiments of Untethered Quadrupedal Running with a Bounding Gait: The Scout II Robot
paper_content:
In this paper we compare models and experiments involving Scout II, an untethered four-legged running robot with only one actuator per compliant leg. Scout II achieves dynamically stable running of up to 1.3 m s-1 on flat ground via a bounding gait. Energetics analysis reveals a highly efficient system with a specific resistance of only 1.4. The running controller requires no task-level or body-state feedback, and relies on the passive dynamics of the mechanical system. These results contribute to the increasing evidence that apparently complex dynamically dexterous tasks may be controlled via simple control laws. We discuss general modeling issues for dynamically stable legged robots. Two simulation models are compared with experimental data to test the validity of common simplifying assumptions. The need for including motor saturation and non-rigid torque transmission characteristics in simulation models is demonstrated. Similar issues are likely to be important in other dynamically stable legged robots as well. An extensive suite of experimental results documents the robot's performance and the validity of the proposed models.
---
paper_title: Stability Analysis of Legged Locomotion Models by Symmetry-Factored Return Maps
paper_content:
We present a new stability analysis for hybrid legged locomotion systems based on the “symmetric” factorization of return maps. We apply this analysis to two-degrees-of-freedom (2DoF) and three-degrees-of-freedom (3DoF) models of the spring loaded inverted pendulum (SLIP) with different leg recirculation strategies. Despite the non-integrability of the SLIP dynamics, we obtain a necessary condition for asymptotic stability (and a sufficient condition for instability) at a fixed point, formulated as an exact algebraic expression in the physical parameters. We use this expression to characterize analytically the sensory cost and stabilizing benefit of various feedback schemes previously proposed for the 2DoF SLIP model, posited as a low-dimensional representation of running. We apply the result as well to a 3DoF SLIP model that will be treated at greater length in a companion paper as a descriptive model for the robot RHex.
---
paper_title: On the Improvement of Walking Performance in Natural Environments by a Compliant Adaptive Gait
paper_content:
It is a widespread idea that animal legged locomotion is better than wheeled locomotion on natural rough terrain. However, the use of legs as a locomotion system for vehicles and robots still has a long way to go before it can compete with wheels and tracks, even on natural ground. This paper aims to solve two main disadvantages plaguing walking robots: their inability to react to external disturbances (which is also a drawback of wheeled robots); and their extreme slowness. Both problems are reduced here by combining: 1) a gait-parameter-adaptation method that maximizes a dynamic energy stability margin and 2) an active-compliance controller with a new term that compensates for stability variations, thus helping the robot react stably in the face of disturbances. As a result, the combined gait-adaptation approach helps the robot achieve faster, more stable compliant motions than conventional controllers. Experiments performed with the SILO4 quadruped robot show a relevant improvement in the walking gait.
---
paper_title: A Trotting Horse Model
paper_content:
A new control strategy is used to stabilize numerical simulations of a horse model in the trotting quadrupedal gait. Several well-established experimental findings are predicted by the model, including how stride frequency and stride length change with forward running speed. Mass is distributed throughout the model's legs, trunk, and head in a realistic manner. Leg and trunk flexion is modeled using four flexible legs, a back joint, and a neck joint. In the control model, pitch stabilization is achieved without directly controlling body pitch, but rather by controlling both the aerial time and the foot speed of each stance leg. The legs behave as ideal springs while in contact with the ground, enabling the model to rebound from the ground with each trotting step. Numerical experiments are conducted to test the model's capacity to overcome a change in ground impedance. Model stability is maximized and the metabolic cost of trotting is minimized within a narrow range of leg stiffness where trotting horses o...
---
paper_title: A Control Approach for Actuated Dynamic Walking in Biped Robots
paper_content:
This paper presents an approach for the closed-loop control of a fully actuated biped robot that leverages its natural dynamics when walking. Rather than prescribing kinematic trajectories, the approach proposes a set of state-dependent torques, each of which can be constructed from a combination of low-gain spring-damper couples. Accordingly, the limb motion is determined by interaction of the passive control elements and the natural dynamics of the biped, rather than being dictated by a reference trajectory. In order to implement the proposed approach, the authors develop a model-based transformation from the control torques that are defined in a mixed reference frame to the actuator joint torques. The proposed approach is implemented in simulation on an anthropomorphic biped. The simulated biped is shown to converge to a stable, natural-looking walk from a variety of initial configurations. Based on these simulations, the mechanical cost of transport is computed and shown to be significantly lower than that of trajectory-tracking approaches to biped control, thus validating the ability of the proposed idea to provide efficient dynamic walking. Simulations further demonstrate walking at varying speeds and on varying ground slopes. Finally, controller robustness is demonstrated with respect to forward and backward push-type disturbances and with respect to uncertainty in model parameters.
---
paper_title: An effective trajectory generation method for bipedal walking
paper_content:
This paper presents the virtual height inverted pendulum mode (VHIPM), which is a simple and effective trajectory generation method for the stable walking of biped robots. VHIPM, which is based on the inverted pendulum mode (IPM), can significantly reduce the zero moment point (ZMP) error by adjusting the height in the inverted pendulum. We show the relationship between VHIPM and other popular trajectory generation methods, and compare the ZMP errors in walking when trajectories are generated by various methods including VHIPM. We also investigate the sensitivity of the ZMP error in VHIPM to the step length, walking period and mass distribution of a robot. The simulation results show that VHIPM significantly reduces the ZMP errors compared to other methods under various circumstances.
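For context, the inverted pendulum mode that VHIPM extends has a closed-form CoM solution when the CoM height is constant and the ZMP is held at a fixed point. The sketch below uses that standard linear-inverted-pendulum solution with assumed height and initial conditions; it is not the VHIPM itself, which additionally adjusts the pendulum height to shrink the ZMP error.

```python
import numpy as np

g, z_c = 9.81, 0.80                 # gravity, assumed constant CoM height
Tc = np.sqrt(z_c / g)               # time constant of the linear inverted pendulum

def lipm_com(x0, xd0, t, p=0.0):
    """Analytic CoM trajectory of the LIPM with the ZMP fixed at p:
       x(t) = p + (x0 - p) cosh(t/Tc) + Tc xd0 sinh(t/Tc)."""
    x = p + (x0 - p) * np.cosh(t / Tc) + Tc * xd0 * np.sinh(t / Tc)
    xd = (x0 - p) / Tc * np.sinh(t / Tc) + xd0 * np.cosh(t / Tc)
    return x, xd

# CoM starting 5 cm behind the support point with forward velocity 0.4 m/s
for t in (0.0, 0.2, 0.4):
    print(t, lipm_com(-0.05, 0.4, t))
```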
---
paper_title: Application of Genetic Algorithms for biped robot gait synthesis optimization during walking and going up-stairs
paper_content:
Selecting an appropriate gait can reduce the energy consumed by a biped robot. In this paper, a Genetic Algorithm gait synthesis method is proposed, which generates the angle trajectories based on minimum consumed energy and minimum torque change. The gait synthesis is considered for two cases: walking and going up-stairs. The proposed method can be applied for a wide range of step lengths and step times during walking, or step lengths, stair heights and step times for going up-stairs. The angle trajectories are generated without neglecting the stability of the biped robot. The angle trajectories can also be generated for other tasks to be performed by the biped robot, like going down-stairs, overcoming obstacles, etc. In order to verify the effectiveness of the proposed method, the results for minimum consumed energy and minimum torque change are compared. A Radial Basis Function Neural Network is considered for the real-time application. Simulations are realized based upon the parameters of the 'Bonten-Maru ...
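A minimal genetic-algorithm loop in the spirit of this kind of gait synthesis is sketched below. The fitness function is a synthetic stand-in (a quadratic "energy" term plus a "torque change" term) rather than the authors' dynamic model, and the population size, parameter count and operators are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, POP, GENS = 6, 40, 60        # e.g., 6 joint-trajectory parameters (assumed)

def fitness(p):
    # Stand-in objective: an "energy" term plus a "torque change" term.
    energy = np.sum(p**2)
    torque_change = np.sum(np.diff(p)**2)
    return energy + 0.5 * torque_change

pop = rng.uniform(-1.0, 1.0, size=(POP, N_PARAMS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)                     # minimization
    parents = pop[order[:POP // 2]]                # truncation selection
    # One-point crossover between random parent pairs
    idx = rng.integers(0, len(parents), size=(POP, 2))
    cut = rng.integers(1, N_PARAMS, size=POP)
    children = np.array([np.concatenate((parents[i][:c], parents[j][c:]))
                         for (i, j), c in zip(idx, cut)])
    children += rng.normal(0.0, 0.05, children.shape)   # Gaussian mutation
    pop = children
print("best:", pop[np.argmin([fitness(ind) for ind in pop])])
```

In an actual gait-synthesis setting, fitness(p) would run a dynamic simulation of the biped for the candidate trajectory parameters and return the weighted consumed energy and torque change, with unstable or kinematically infeasible gaits penalized.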
---
paper_title: A combined potential function and graph search approach for free gait generation of quadruped robots
paper_content:
This paper presents an algorithm for planning the foothold positions of quadruped robots on irregular terrain. The inputs to the algorithm are the robot kinematics, the terrain geometry, a required motion path, and an initial posture. Our goal is to develop a general algorithm that navigates quadruped robots quasi-statically over rough terrain, using an APF (Artificial Potential Field) and graph searching. The algorithm plans a sequence of footholds that navigates the robot along the required path with controllable motion characteristics. Simulation results demonstrate the algorithm in a planar environment.
---
paper_title: Experimental Validation of a Framework for the Design of Controllers that Induce Stable Walking in Planar Bipeds
paper_content:
In this paper we present the experimental validation of a framework for the systematic design, analysis, and performance enhancement of controllers that induce stable walking in N -link underactuated planar biped robots. Controllers designed via this framework act by enforcing virtual constraints—holonomic constraints imposed via feedback—on the robot’s configuration, which create an attracting two-dimensional invariant set in the full walking model’s state space. Stability properties of resulting walking motions are easily analyzed in terms of a two-dimensional subdynamic of the full walking model. A practical introduction to and interpretation of the framework is given. In addition, in this paper we develop the ability to regulate the average walking rate of the biped to a continuum of values by modification of within-stride and stride-boundary characteristics, such as step length.
---
paper_title: Efficient Walking Speed Optimization of a Humanoid Robot
paper_content:
The development of optimized motions of humanoid robots that guarantee fast and also stable walking is an important task, especially in the context of autonomous soccer-playing robots in RoboCup. We present a walking motion optimization approach for the humanoid robot prototype HR18, which is equipped with a low-dimensional parameterized walking trajectory generator, joint motor controllers and internal stabilization. The robot is included as hardware-in-the-loop to define a low-dimensional black-box optimization problem. In contrast to previously performed walking optimization approaches, we apply a sequential surrogate optimization approach using stochastic approximation of the underlying objective function and sequential quadratic programming to search for a fast and stable walking motion. This is done under the condition that only a small number of physical walking experiments should have to be carried out during the online optimization process. For the identified walking motion of the considered 55 cm tall humanoid robot, we measured a forward walking speed of more than 30 cm/s. With a modified version of the robot, even more than 40 cm/s could be achieved in permanent operation.
---
paper_title: Generation of free gait-a graph search approach
paper_content:
A method is presented for the generation of a free gait for the straight-line motion of a quadruped walking machine. It uses a heuristic graph search procedure based on the A* algorithm. The method essentially looks into the consequences of a move to a certain depth before actually committing to it. Deadlocks and inefficiencies are thus sensed well in advance and avoided.
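The sketch below illustrates the underlying idea of looking ahead before committing to a move, using a generic A* search; the foothold states, successor function and heuristic are toy stand-ins for the kinematic-limit and stability checks a real free-gait generator would apply.

```python
import heapq

def a_star(start, is_goal, successors, heuristic):
    """Generic A*: successors(state) yields (next_state, step_cost)."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Toy example: advance the supporting foot along a 1-D line of footholds;
# an "unsafe" foothold (index 3) is skipped by the successor function.
GOAL = 6
succ = lambda s: [(s + d, float(d)) for d in (1, 2) if s + d != 3 and s + d <= GOAL]
path, cost = a_star(0, lambda s: s == GOAL, succ, lambda s: float(GOAL - s))
print(path, cost)   # [0, 2, 4, 6] with cost 6.0
```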
---
paper_title: Real time gait generation for autonomous humanoid robots: A case study for walking
paper_content:
As autonomous humanoid robots assume more important roles in everyday life, they are expected to perform many different tasks and to adapt quickly to unknown environments. Therefore, humanoid robots must quickly generate the appropriate gait based on information received from the visual system. In this work, we present a new method for real-time gait generation during walking based on Neural Networks. Minimum-consumed-energy gaits, similar to human motion, are used to teach the Neural Network. After supervised learning, the Neural Network can quickly generate the humanoid robot gait. Simulation and experimental results utilizing the “Bonten-Maru I” humanoid robot show good performance of the proposed method.
---
paper_title: A Simplified Stability Study for a Biped Walk with Underactuated and Overactuated Phases
paper_content:
This paper is devoted to a stability study of a walking gait for a biped. The walking gait is periodic and it is composed of a single-support phase, a passive impact, and a double-support phase. The reference trajectories are described as a function of the shin orientation versus the ground of the stance leg. We use the Poincare map to study the stability of the walking gait of the biped. We only study the stability of dynamics not controlled during the single-support phase, i.e., the dynamics of the shin angle. We then suppose there is no perturbation in the tracking of the references of the other joint angles of the biped. The studied Poincare map is then of dimension one. With a particular control law in double support, it is shown theoretically and in simulation that a perturbation error in the velocity of the shin angle can be eliminated in one step only. The zone of convergence in one step is determined. The condition of existence of a cyclic gait is given, and for a given cyclic gait, the stability condition is also given. It is shown that due to the given control law for the overactuated double-support phase, a cyclic motion is practically guaranteed to be stable. It should be noted it is possible for the biped to reach a periodic regime from a stopped position in one step.
---
paper_title: Survey of locomotion control of legged robots inspired by biological concept
paper_content:
Compared with wheeled mobile robots, legged robots can easily step over obstacles and walk through rugged ground. They have more flexible bodies and can therefore deal with complex environments. Nevertheless, some other issues make the locomotion control of legged robots a much more complicated task, such as the redundant degrees of freedom and balance keeping. In the literature, locomotion control has mainly been solved based on a programming mechanism. To use this method, walking trajectories for each leg and the gaits have to be designed, and adaptability to an unknown environment cannot be guaranteed. From another aspect, studying and simulating animals' walking mechanisms for engineering application is an efficient way to break the bottleneck of locomotion control for legged robots. This has attracted more and more attention. A control method inspired by the central pattern generator (CPG) has proved to be a successful attempt within this scope. In this paper, we review the biological mechanism, the evidence for its existence, and the network properties of the CPG. From the engineering perspective, we introduce the engineering simulation of the CPG, its property analysis, and the research progress of CPG-inspired control methods in the locomotion control of legged robots. We then further discuss existing problems, hot issues, and future research directions in this field.
---
paper_title: In vitro reconstruction of the respiratory central pattern generator of the mollusk Lymnaea.
paper_content:
Most rhythmic behaviors such as respiration, locomotion, and feeding are under the control of networks of neurons in the central nervous system known as central pattern generators (CPGs). The respiratory rhythm of the pond snail Lymnaea stagnalis is a relatively simple, CPG-based behavior for which the underlying neural elements have been identified. A three-neuron network capable of generating the respiratory rhythm of this air-breathing mollusk has been reconstructed in culture. The intrinsic and network properties of this neural ensemble have been studied, and the mechanism of postinhibitory rebound excitation was found to be important for the rhythm generation. This in vitro model system enables a better understanding of the neural basis of rhythm generation.
---
paper_title: On Central Pattern Generator of Biological Motor System
paper_content:
This paper presents theoretical results on the neural control mechanism existing in the spinal cord, the central pattern generator (CPG), which has the ability to provide rhythmic movement patterns for invertebrates and vertebrates. Although the CPG has been verified by biological methods, it still lacks a complete theoretical investigation. Theoretical analysis of the CPG from an engineering perspective, such as parameter selection and conditions for stable oscillation, will strengthen the foundation of CPG theory and significantly enhance the effective application of the CPG in motor control systems.
---
paper_title: Central pattern generators for locomotion control in animals and robots: a review
paper_content:
The problem of controlling locomotion is an area in which neuroscience and robotics can fruitfully interact. In this article, I will review research carried out on locomotor central pattern generators (CPGs), i.e. neural circuits capable of producing coordinated patterns of high-dimensional rhythmic output signals while receiving only simple, low-dimensional, input signals. The review will first cover neurobiological observations concerning locomotor CPGs and their numerical modelling, with a special focus on vertebrates. It will then cover how CPG models implemented as neural networks or systems of coupled oscillators can be used in robotics for controlling the locomotion of articulated robots. The review also presents how robots can be used as scientific tools to obtain a better understanding of the functioning of biological CPGs. Finally, various methods for designing CPGs to control specific modes of locomotion will be briefly reviewed. In this process, I will discuss different types of CPG models, the pros and cons of using CPGs with robots, and the pros and cons of using robots as scientific tools. Open research topics both in biology and in robotics will also be discussed.
---
paper_title: Central pattern generators and the control of rhythmic movements
paper_content:
Central pattern generators are neuronal circuits that when activated can produce rhythmic motor patterns such as walking, breathing, flying, and swimming in the absence of sensory or descending inputs that carry specific timing information. General principles of the organization of these circuits and their control by higher brain centers have come from the study of smaller circuits found in invertebrates. Recent work on vertebrates highlights the importance of neuromodulatory control pathways in enabling spinal cord and brain stem circuits to generate meaningful motor patterns. Because rhythmic motor patterns are easily quantified and studied, central pattern generators will provide important testing grounds for understanding the effects of numerous genetic mutations on behavior. Moreover, further understanding of the modulation of spinal cord circuitry used in rhythmic behaviors should facilitate the development of new treatments to enhance recovery after spinal cord damage.
---
paper_title: Biomimetic Walking Robot Scorpion: Control and Modeling
paper_content:
We present the biomimetic control scheme for the walking robot SCORPION. We used a concept of Basic Motion Patterns, which can be combined in a very flexible manner. Reflexes are also introduced to increase reactivity. In addition, our modeling and simulation approach, which is based on the ADAMS™ simulator, is described. In particular, the motion patterns of real scorpions were analyzed and used for the walking patterns and smooth acceleration of the robot.
---
paper_title: BoxyBot: a swimming and crawling fish robot controlled by a central pattern generator
paper_content:
We present a novel fish robot capable of swimming and crawling. The robot is driven by DC motors and has three actuated fins, with two pectoral fins and one caudal fin. It is loosely inspired by the boxfish. The control architecture of the robot is constructed around a central pattern generator (CPG) implemented as a system of coupled nonlinear oscillators, which, like its biological counterpart, can produce coordinated patterns of rhythmic activity while being modulated by simple control parameters. Using the CPG model, the robot is capable of performing and switching between a variety of different locomotor behaviors such as swimming forwards, swimming backwards, turning, rolling, moving upwards/downwards, and crawling. These behaviors are triggered and modulated by sensory input provided by light and water sensors. Results are presented demonstrating the agility of the robot, and interesting properties of a CPG-based control approach such as stability of the rhythmic patterns due to limit cycle behavior, and the production of smooth trajectories despite abrupt changes of control parameters.
---
paper_title: Experimentally Verified Optimal Serpentine Gait and Hyperredundancy of a Rigid-Link Snake Robot
paper_content:
In this study, we examine, for a six-link snake robot, how an optimal gait might change as a function of the snake-surface interaction model and how the overall locomotion performance changes under nonoptimal conditions such as joint failure. Simulations are evaluated for three different types of friction models, and it is shown that the gait parameters for serpentine motion are very dependent on the friction model if minimum power expenditure is desired for a given velocity. Experimental investigations then motivate a surface interaction model not commonly used in snake locomotion studies. Using this new model, simulation results are compared to experiments for nominal and nonnominal locomotion cases including actuator faults. It is shown that this model quite accurately predicts locomotion velocities and link profiles, but that the accuracy of these predictions degrades severely at speeds where actuator dynamics become significant.
---
paper_title: Experimental Verification of Open-loop Control for an Underwater Eel-like Robot
paper_content:
In this paper, we describe experimental work using an underwater, biomimetic, eel-like robot to verify a simplified dynamic model and open-loop control routines. We compare experimental results to previous analytically derived, but approximate expressions for proposed gaits for forward/backward swimming, circular swimming, sideways swimming and turning in place. We have developed a five-link, underwater eel-like robot, focusing on modularity, reliability and rapid prototyping, to verify our theoretical predictions. Results from open-loop experiments performed with this robot in an aquatic environment using an off-line vision system for position sensing show good agreement with theory.
---
paper_title: A Biologically Inspired Biped Locomotion Strategy for Humanoid Robots: Modulation of Sinusoidal Patterns by a Coupled Oscillator Model
paper_content:
Biological systems seem to have a simpler but more robust locomotion strategy than that of the existing biped walking controllers for humanoid robots. We show that a humanoid robot can step and walk using simple sinusoidal desired joint trajectories with their phase adjusted by a coupled oscillator model. We use the center-of-pressure location and velocity to detect the phase of the lateral robot dynamics. This phase information is used to modulate the desired joint trajectories. We do not explicitly use dynamical parameters of the humanoid robot. We hypothesize that a similar mechanism may exist in biological systems. We applied the proposed biologically inspired control strategy to our newly developed human-sized humanoid robot computational brain (CB) and a small size humanoid robot, enabling them to generate successful stepping and walking patterns.
---
paper_title: Construction of Central Pattern Generator Using Piecewise Affine Systems
paper_content:
A central pattern generator (CPG) is a neural circuit which governs the rhythmic activities of an animal. The synthesis of CPGs plays an important role in engineering, for example in locomotion control. Up to now, CPGs have mainly been derived based on biological principles, with their parameters heuristically tuned. In this paper, a class of piecewise affine systems capable of exhibiting the CPG-like property of stable limit cycles is presented. A synthesis method for such CPGs is also included.
---
paper_title: Design of a novel central pattern generator and the hebbian motion learning
paper_content:
In this paper, we propose a new CPG model and a Hebbian learning rule for the CPG. The output of the proposed CPG is determined only by the phase differences of synchronization among the component oscillators. Phase synchronization can be regarded as adaptive behavior with respect to an environment, so the CPG remains adaptive despite having only simple connections to the environment. We also propose a motion learning rule for the proposed CPG. Since the rule is described by only simple signal processing, it can easily be realized by electronic circuits and can therefore be used efficiently for robots with a high number of degrees of freedom.
---
paper_title: An analog CMOS central pattern generator for interlimb coordination in quadruped locomotion
paper_content:
This paper proposes a neuromorphic analog CMOS controller for interlimb coordination in quadruped locomotion. Animal locomotion, such as walking, running, swimming, and flying, is based on periodic rhythmic movements. These rhythmic movements are driven by a biological neural network called the central pattern generator (CPG). In recent years, many researchers have applied CPGs to locomotion controllers in robotics. However, most of these have been developed with digital processors and thus have several problems, such as high power consumption. In order to overcome such problems, a CPG controller with an analog CMOS circuit is proposed. Since the CMOS transistors in the circuit operate in their subthreshold region and under low supply voltage, the controller can reduce power consumption. Moreover, low-cost production and miniaturization of controllers are expected. We have shown through computer simulation that such a circuit has the capability to generate several periodic rhythmic patterns and to transition between these patterns promptly.
---
paper_title: Configuring of Spiking Central Pattern Generator Networks for Bipedal Walking Using Genetic Algorithms
paper_content:
In limbed animals, the spinal neural circuits responsible for controlling muscular activities during walking are called central pattern generators (CPGs). CPG networks display oscillatory activities that actuate individual or groups of muscles in a coordinated fashion so that the limbs of the animal are flexed and extended at the appropriate time and with the required velocity for the animal to efficiently traverse various types of terrain, and to recover from environmental perturbation. Typically, the CPG networks are constructed with many neurons, each of which has a number of control parameters. As the number of muscles increases, it is often impossible to manually, albeit intelligently, select the network parameters for a particular movement. Furthermore, it is virtually impossible to reconfigure the parameters on-line. This paper describes how genetic algorithms (GAs) can be used for on-line (re)configuring of CPG networks for a bipedal robot. We show that the neuron parameters and connection weights/network topology of a canonical walking network can be reconfigured within a few generations of the GA. The networks, constructed with integrate-and-fire-with-adaptation (IFA) neurons, are implemented with a microcontroller and can be reconfigured to vary the walking frequency from 0.5 Hz to 3.5 Hz. The phase relationship between the hips and knees can be arbitrarily set (to within 1 degree) and prescribed complex joint angle profiles are realized. This is a powerful approach to generating complex muscle synergies for robots with multiple joints and distributed actuators.
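A minimal integrate-and-fire-with-adaptation neuron of the kind such networks are built from is sketched below; the time constants, threshold and adaptation increment are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt, T = 1e-3, 1.0
tau_m, tau_a = 0.02, 0.2          # membrane / adaptation time constants [s] (assumed)
theta, v_reset, da = 1.0, 0.0, 0.3

def ifa_spike_train(i_ext):
    """Integrate-and-fire-with-adaptation: the adaptation current a slows
    the firing rate after each spike, producing spike-frequency adaptation."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau_m * (i_ext - v - a)
        a += dt / tau_a * (-a)
        if v >= theta:                    # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
            a += da                       # increment adaptation current
    return spikes

spikes = ifa_spike_train(i_ext=2.0)
print(len(spikes), np.diff(spikes)[:3])   # early interspike intervals lengthen
```

Coupling several such neurons with mutual inhibition and tuning exactly these parameters, together with the connection weights, is what the GA search space looks like in this setting.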
---
paper_title: A central pattern generator for insect gait production
paper_content:
We present a neural network model inspired by both behavioral and neurophysiological data on insect locomotion. The model consists of a central rhythmic pattern generator and a sensory-motor network. We show that it exhibits various behavioral properties observed in several insect species: it produces a continuum of stable gaits ranging from metachronal to tripod; the changes in the duration of the protraction and retraction phases for various walking speeds follow the behavioral data closely; the network is insensitive to the initial position of the legs, in that it can rapidly achieve a coherent phase relationship regardless of the initial conditions. In addition, this transient property also extends to situations where rapid changes occur in the walking speed; the network can reorganize its phase rapidly during fast accelerations as well as fast decelerations.
---
paper_title: Neural control of interlimb oscillations
paper_content:
How do humans and other animals accomplish coordinated movements? How are novel combinations of limb joints rapidly assembled into new behavioral units that move together in in-phase or anti-phase movement patterns during complex movement tasks? A neural central pattern generator (CPG) model simulates data from human bimanual coordination tasks. As in the data, anti-phase oscillations at low frequencies switch to in-phase oscillations at high frequencies, in-phase oscillations occur at both low and high frequencies, phase fluctuations occur at the anti-phase in-phase transition, a “seagull effect” of larger errors occurs at intermediate phases, and oscillations slip toward in-phase and anti-phase when driven at intermediate phases. These oscillations and bifurcations are emergent properties of the CPG model in response to volitional inputs. The CPG model is a version of the Ellias-Grossberg oscillator. Its neurons obey Hodgkin-Huxley type equations whose excitatory signals operate on a faster time scale than their inhibitory signals in a recurrent on-center off-surround anatomy. When an equal command or GO signal activates both model channels, the model CPG can generate both in-phase and anti-phase oscillations at different GO amplitudes. Phase transitions from either in-phase to anti-phase oscillations, or from anti-phase to in-phase oscillations, can occur in different parameter ranges, as the GO signal increases.
---
paper_title: Programmable central pattern generators: an application to biped locomotion control
paper_content:
We present a system of coupled nonlinear oscillators to be used as programmable central pattern generators, and apply it to control the locomotion of a humanoid robot. Central pattern generators are biological neural networks that can produce coordinated multidimensional rhythmic signals, under the control of simple input signals. They are found both in vertebrate and invertebrate animals for the control of locomotion. In this article, we present a novel system composed of coupled adaptive nonlinear oscillators that can learn arbitrary rhythmic signals in a supervised learning framework. Using adaptive rules implemented as differential equations, parameters such as intrinsic frequencies, amplitudes, and coupling weights are automatically adjusted to replicate a teaching signal. Once the teaching signal is removed, the trajectories remain embedded as the limit cycle of the dynamical system. An interesting aspect of this approach is that the learning is completely embedded into the dynamical system, and does not require external optimization algorithms. We use our system to encapsulate rhythmic trajectories for biped locomotion with a simulated humanoid robot, and demonstrate how it can be used to do online trajectory generation. The system can modulate the speed of locomotion, and even allow the reversal of direction (i.e. walking backwards). The integration of sensory feedback allows the online modulation of trajectories such as to increase the basin of stability of the gaits, and therefore the range of speeds that can be produced
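The core mechanism can be illustrated with a single adaptive-frequency Hopf oscillator whose intrinsic frequency converges to that of a periodic teaching signal. The sketch below is a simplified stand-alone version (the paper couples several such oscillators and also adapts amplitudes and couplings), with assumed gains, frequencies and simulation time.

```python
import math

dt = 1e-3
gamma, mu, eps = 8.0, 1.0, 0.9            # convergence rate, radius^2, coupling (assumed)
omega_teach = 2 * math.pi * 1.5           # teaching signal at 1.5 Hz

x, y, omega = 1.0, 0.0, 2 * math.pi * 1.2 # start away from the target frequency
for step in range(int(300.0 / dt)):       # long run: adaptation is slow for large mismatch
    t = step * dt
    F = math.cos(omega_teach * t)         # teaching (perturbation) signal
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y + eps * F
    dy = gamma * (mu - r2) * y + omega * x
    domega = -eps * F * y / math.sqrt(r2)  # frequency adaptation rule
    x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega

print(omega / (2 * math.pi))   # approximately 1.5: locked onto the teaching frequency
```

Convergence time grows with the initial frequency mismatch and shrinks with the coupling eps; once the teaching signal is removed, the oscillator keeps producing the learned rhythm as its limit cycle, which is the "embedding" property described in the abstract.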
---
paper_title: Adaptive Dynamic Walking of a Quadruped Robot on Irregular Terrain Based on Biological Concepts
paper_content:
We have been trying to induce a quadruped robot to walk with medium walking speed on irregular terrain based on biological concepts. We propose the necessary conditions for stable dynamic walking on irregular terrain in general, and we design the mechanical system and the neural system by comparing biological concepts with those necessary conditions described in physical terms. A PD controller at the joints can construct the virtual spring-damper system as the visco-elasticity model of a muscle. The neural system model consists of a central pattern generator (CPG) and reflexes. A CPG receives sensory input and changes the period of its own active phase. The desired angle and P-gain of each joint in the virtual spring-damper system is switched based on the phase signal of the CPG. CPGs, the motion of the virtual spring-damper system of each leg and the rolling motion of the body are mutually entrained through the rolling motion feedback to CPGs, and can generate adaptive walking. We report on our experimen...
---
paper_title: Passive compliant quadruped robot using Central Pattern Generators for locomotion control
paper_content:
We present a new quadruped robot, "Cheetah", featuring three-segment pantographic legs with passive compliant knee joints. Each leg has two degrees of freedom: the knee and hip joints can be actuated using proximally mounted RC servo motors, and force transmission to the knee is achieved by means of a Bowden cable mechanism. Simple electronics to command the actuators from a desktop computer have been designed in order to test the robot. A Central Pattern Generator (CPG) network has been implemented to generate different gaits. A parameter space search was performed and tested on the robot to optimize forward velocity.
---
paper_title: Research on gait planning of artificial leg based on central pattern generator
paper_content:
The biped robot with heterogeneous legs (BRHL) is a novel robot model which consists of an artificial leg and an intelligent bionic leg. The artificial leg is used to simulate the amputee's healthy leg, and the bionic leg works as the intelligent artificial limb. The target of BRHL research is to describe the current gait of the healthy leg and to make the intelligent bionic leg follow the walking of the artificial leg in all phases; gait planning of the artificial leg is therefore the emphasis of BRHL research. This paper uses the central pattern generator (CPG) model in the study of the artificial leg's gait planning from a biological point of view. To obtain a natural and robust walking pattern, a genetic algorithm is used to optimize the parameters of the CPG network model, and the fitness function is formulated based on the zero moment point (ZMP). Simulation results confirm the feasibility of this method.
---
paper_title: Slip-adaptive walk of quadruped robot
paper_content:
In this paper, we investigated the effects of the friction condition on the walking pattern and energy efficiency and, based on the results, proposed two new “slip-adaptive” strategies for generating a slip-adaptive walk. The first strategy uses a slip reflex via a Central Pattern Generator (CPG) to change the walking pattern. The second strategy uses force control to immediately compensate for a slip. Using these strategies, a walk that adapts to varying friction conditions and slips becomes possible. The validity of the proposed method is confirmed through simulation and experimentation.
---
paper_title: Sustained oscillations generated by mutually inhibiting neurons with adaptation
paper_content:
Autonomic oscillatory activities exist in almost every living thing and most of them are produced by rhythmic activities of the corresponding neural systems (locomotion, respiration, heart beat, etc.). This paper mathematically discusses sustained oscillations generated by mutual inhibition of the neurons which are represented by a continuous-variable model with a kind of fatigue or adaptation effect. If the neural network has no stable stationary state for constant input stimuli, it will generate and sustain some oscillation for any initial state and for any disturbance. Some sufficient conditions for that are given to three types of neural networks: lateral inhibition networks of linearly arrayed neurons, symmetric inhibition networks and cyclic inhibition networks. The result suggests that the adaptation of the neurons plays a very important role for the appearance of the oscillations. Some computer simulations of rhythmic activities are also presented for cyclic inhibition networks consisting of a few neurons.
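The mutual-inhibition model with adaptation described here underlies many robotic CPG implementations (it is commonly referred to as the Matsuoka oscillator). The following is a minimal illustrative sketch of a two-neuron oscillator of this kind, not code from the paper; the time constants, inhibition weight, and tonic input are arbitrary demonstration values.
```python
import numpy as np

def matsuoka_step(x, v, u, dt, tau=0.05, tau_a=0.3, beta=2.5, w=2.0):
    """One Euler step of a two-neuron mutual-inhibition oscillator with adaptation.
    x: membrane states (2,), v: adaptation states (2,), u: tonic input."""
    y = np.maximum(x, 0.0)                      # rectified firing rates
    inhib = w * y[::-1]                         # each neuron inhibits the other
    dx = (-x - beta * v - inhib + u) / tau      # fast membrane dynamics
    dv = (-v + y) / tau_a                       # slow adaptation ("fatigue")
    return x + dt * dx, v + dt * dv

x, v = np.array([0.1, 0.0]), np.zeros(2)
out = []
for _ in range(4000):
    x, v = matsuoka_step(x, v, u=1.0, dt=0.001)
    out.append(max(x[0], 0.0) - max(x[1], 0.0))  # antisymmetric output usable as a joint drive
# 'out' holds a sustained oscillation, as predicted when no stable stationary state exists
```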
---
paper_title: Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot
paper_content:
In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with CPG by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world. Numerical simulations and hardware experiments evaluate the walking velocity and stability. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.
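The paper tunes a CPG-based controller with a policy gradient method in a reduced state space; the sketch below only illustrates the general flavour of such learning with a REINFORCE-style update of Gaussian-perturbed feedback gains. The rollout function `evaluate_walk` is a hypothetical stand-in for a walking simulation, and the dimensionality, baseline, and hyperparameters are assumptions, not the paper's algorithm.
```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_walk(feedback_gains):
    """Hypothetical stand-in for a walking rollout: returns a scalar return
    (e.g. distance walked before falling). Replace with a real simulator."""
    target = np.array([0.8, -0.3, 0.5])
    return -np.sum((feedback_gains - target) ** 2) + 0.01 * rng.standard_normal()

theta = np.zeros(3)          # mean of the Gaussian policy over CPG feedback gains
sigma = 0.2                  # fixed exploration noise
alpha = 0.05                 # learning rate

for episode in range(500):
    eps = sigma * rng.standard_normal(3)
    R = evaluate_walk(theta + eps)            # rollout with perturbed gains
    baseline = evaluate_walk(theta)           # simple baseline to reduce variance
    # REINFORCE-style estimate: grad log pi * (R - baseline), with grad log pi = eps / sigma^2
    theta += alpha * (R - baseline) * eps / (sigma ** 2)
# 'theta' drifts toward gains that maximize the rollout return
```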
---
paper_title: Neural networks that co-ordinate locomotion and body orientation in lamprey
paper_content:
The networks of the brainstem and spinal cord that co-ordinate locomotion and body orientation in lamprey are described. The cycle-to-cycle pattern generation of these networks is produced by interacting glutamatergic and glycinergic neurones, with NMDA receptor-channels playing an important role at lower rates of locomotion. The fine tuning of the networks produced by 5-HT, dopamine and GABA systems involves a modulation of Ca2+-dependent K+ channels, high- and low-threshold voltage-activated Ca2+ channels and presynaptic inhibitory mechanisms. Mathematical modelling has been used to explore the capacity of these biological networks. The vestibular control of the body orientation during swimming is exerted via reticulospinal neurones located in different reticular nuclei. These neurones become activated maximally at different angles of tilt.
---
paper_title: Bipedal locomotion control using a four-compartmental central pattern generator
paper_content:
In this paper, we develop a simple bipedal locomotion algorithm based on biological concepts. The algorithm utilizes a central pattern generator (CPG) composed of four coupled neural oscillators (NOs) to generate control signals for the bipedal robot. Feedback from the robot dynamics and the environment is used to update the CPG online. Our algorithm is then tested on a seven-link model of a bipedal robot. Simulation results suggest the proposed CPG can generate a smooth and continuous walking pattern for the robot.
---
paper_title: Online Generation of Cyclic Leg Trajectories Synchronized with Sensor Measurement
paper_content:
The generation of trajectories for a biped robot is a problem which has been widely studied for several years, and many satisfactory offline solutions exist for steady-state walking in the absence of disturbances. The question is more complex when the desired trajectories of joints or links have to be generated or adapted online, i.e. in real time, for example when these trajectories must be tightly synchronized with an external motion. This is precisely the problem addressed in this paper. We consider the case where the "master" motion is measured by a position sensor embedded on a human leg, and propose a method to synchronize the motion of a robot or other device with the output signal of the sensor. The main goal is to estimate as accurately as possible the current phase along the gait cycle. For that purpose we use a model based on a nonlinear oscillator with which we associate an observer. Introducing the sensor output into the observer allows us to compute the oscillator phase and to generate a synchronized multi-link trajectory at a very low computational cost. The paper also presents evaluation results in terms of robustness against parameter estimation errors and velocity changes in the input.
---
paper_title: Modeling of a Neural Pattern Generator with Coupled nonlinear Oscillators
paper_content:
A set of van der Pol oscillators is arranged in a network in which each oscillator is coupled to each other oscillator. Through the selection of coupling coefficients, the network is made to appear as a ring and as a chain of coupled oscillators. Each oscillator is provided with amplitude, frequency, and offset parameters which have analytically indeterminable effects on the output waves. These systems are simulated on the digital computer in order to study the amplitude, frequency, offset, and phase relationships of the waves versus parameter changes. Based on the simulations, systems of coupled oscillators are configured so that they exhibit stable patterns of signals which can be used to model the central pattern generator (CPG) of living organisms. Using a simple biped as an example locomotory system, the CPG model generates control signals for simulated walking and jumping maneuvers. It is shown that with parameter adjustments, as guided by the simulations, the model can be made to generate kinematic trajectories which closely resemble those for the human walking gait. Furthermore, minor tuning of these parameters along with some algebraic sign changes of coupling coefficients can effect a transition in the trajectories to those of a two-legged hopping gait. The generalized CPG model is shown to be versatile enough that it can also generate various n-legged gaits and spinal undulatory motions, as in the swimming motions of a fish.
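A small network of mutually coupled van der Pol oscillators of the kind described can be simulated in a few lines; the ring coupling scheme and constants below are illustrative choices, not the parameters studied in the paper.
```python
import numpy as np

def vdp_network_step(x, dx, K, dt, mu=1.0, p=1.0, omega=2 * np.pi):
    """Euler step for n coupled van der Pol oscillators:
    x_i'' = mu*(p**2 - x_i**2)*x_i' - omega**2*x_i + sum_j K_ij * x_j"""
    ddx = mu * (p**2 - x**2) * dx - omega**2 * x + K @ x
    return x + dt * dx, dx + dt * ddx

# Ring of four oscillators, each coupled to its neighbours (illustrative weights)
K = 0.5 * (np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1))
x = np.array([1.0, 0.5, -0.5, -1.0])
dx = np.zeros(4)
trajectory = []
for _ in range(20000):
    x, dx = vdp_network_step(x, dx, K, dt=0.0005)
    trajectory.append(x.copy())
# The four outputs tend to settle into phase-locked limit cycles usable as limb commands
```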
---
paper_title: Coupled Van Der Pol oscillators utilised as Central pattern generators for quadruped locomotion
paper_content:
Central pattern generators (CPGs) inspired by the neural systems of animals are widely used for the control of locomotion in robots. The objective of this paper is to model a CPG network formed by a set of mutually coupled Van der Pol (VDP) oscillators for generating rhythmic movement patterns for a multi-joint robot. First, a VDP-CPG network is made up of four coupled VDP oscillators, which can produce multiple phase-locked oscillation patterns that correspond to the four basic quadrupedal gaits. Then, transitions between the different gaits are generated by altering the internal oscillator parameters. Finally, we use the VDP-CPG network to produce the joint trajectories for AIBO to control its locomotion. The simulation and experimental results demonstrate that the designed VDP-CPG network is effective for controlling quadruped locomotion.
---
paper_title: Development of Adaptive Modular Active Leg (AMAL) using bipedal robotics technology
paper_content:
The objective of the work presented here is to develop a low cost active above knee prosthetic device exploiting bipedal robotics technology which will work utilizing the available biological motor control circuit properly integrated with a Central Pattern Generator (CPG) based control scheme. The approach is completely different from the existing Active Prosthetic devices, designed primarily as standalone systems utilizing multiple sensors and embedded rigid control schemes. In this research, first we designed a fuzzy logic based methodology for offering suitable gait pattern for an amputee, followed by formulating a suitable algorithm for designing a CPG, based on Rayleigh's oscillator. An indigenous probe, Humanoid Gait Oscillator Detector (HGOD) has been designed for capturing gait patterns from various individuals of different height, weight and age. These data are used to design a Fuzzy inference system which generates most suitable gait pattern for an amputee. The output of the Fuzzy inference system is used for designing a CPG best suitable for the amputee. We then developed a CPG based control scheme for calculating the damping profile in real time for maneuvering a prosthetic device called AMAL (Adaptive Modular Active Leg). Also a number of simulation results are presented which show the stable behavior of knee and hip angles and determine the stable limit cycles of the network.
---
paper_title: Fast Biped Walking with a Sensor-driven Neuronal Controller and Real-time Online Learning
paper_content:
In this paper, we present our design and experiments on a planar biped robot under the control of a pure sensor-driven controller. This design has some special mechanical features, for example small curved feet allowing rolling action and a properly positioned center of mass, that facilitate fast walking through exploitation of the robot's natural dynamics. Our sensor-driven controller is built with biologically inspired sensor- and motor-neuron models, and does not employ any kind of position or trajectory tracking control algorithm. Instead, it allows our biped robot to exploit its own natural dynamics during critical stages of its walking gait cycle. Due to the interaction between the sensor-driven neuronal controller and the properly designed mechanics of the robot, the biped robot can realize stable dynamic walking gaits in a large domain of the neuronal parameters. In addition, this structure allows the use of a policy gradient reinforcement learning algorithm to tune the parameters of the sensor-driven controller in real-time, during walking. This way RunBot can reach a relative speed of 3.5 leg lengths per second after only a few minutes of online learning, which is faster than that of any other biped robot, and is also comparable to the fastest relative speed of human walking.
---
paper_title: Design of central pattern generator for humanoid robot walking based on multi-objective GA
paper_content:
Recently, the field of humanoid robotics has attracted more and more interest, and research on humanoid locomotion based on central pattern generators (CPGs) reveals many challenging aspects. This paper describes the design of a CPG for stable humanoid bipedal locomotion using an evolutionary approach. In this research, each joint of the humanoid is driven by a neuron that consists of two coupled neural oscillators, and the neurons of corresponding joints are connected by weighted links. To achieve a natural and robust walking pattern, an evolutionary multi-objective optimization algorithm is used to solve the weight optimization problem. The fitness functions are formulated based on the zero moment point (ZMP), the global attitude of the robot and the walking speed. In the algorithm, real-valued coding and tournament selection are applied, and the crossover and mutation operators are chosen as heuristic crossover and boundary mutation, respectively. After evolution, the robot is able to walk in the given environment, and a simulation shows the result.
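An evolutionary search over CPG connection weights of this kind could be prototyped along the following lines. Everything here is an assumption for illustration: the `simulate_gait` call stands in for a physics simulation returning a ZMP margin, speed and attitude score, the objectives are scalarized with a simple weighted sum rather than a true multi-objective scheme, and the selection, crossover and mutation operators are not the ones used in the paper.
```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_gait(weights):
    """Hypothetical walking simulation: returns (zmp_margin, speed, attitude_score).
    Replace with a physics simulation of the humanoid driven by the CPG."""
    return 1.0 - np.var(weights), float(np.mean(weights)), 1.0 - abs(np.mean(weights) - 0.5)

def fitness(weights):
    zmp_margin, speed, attitude = simulate_gait(weights)
    return 1.0 * zmp_margin + 0.5 * speed + 0.5 * attitude   # simple scalarization

pop = rng.uniform(-1.0, 1.0, size=(30, 8))       # 30 candidate CPG weight vectors
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # keep the best 10 (truncation selection)
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = 0.5 * (a + b)                     # arithmetic crossover
        child += 0.05 * rng.standard_normal(8)    # Gaussian mutation
        children.append(np.clip(child, -1.0, 1.0))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
```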
---
paper_title: Pattern generators with sensory feedback for the control of quadruped locomotion
paper_content:
Central pattern generators (CPGs) are becoming a popular model for the control of locomotion of legged robots. Biological CPGs are neural networks responsible for the generation of rhythmic movements, especially locomotion. In robotics, a systematic way of designing such CPGs as artificial neural networks or systems of coupled oscillators with sensory feedback inclusion is still missing. In this contribution, we present a way of designing CPGs with coupled oscillators in which we can independently control the ascending and descending phases of the oscillations (i.e. the swing and stance phases of the limbs). Using insights from dynamical system theory, we construct generic networks of oscillators able to generate several gaits under simple parameter changes. Then we introduce a systematic way of adding sensory feedback from touch sensors in the CPG such that the controller is strongly coupled with the mechanical system it controls. Finally we control three different simulated robots (iCub, Aibo and Ghostdog) using the same controller to show the effectiveness of the approach. Our simulations prove the importance of independent control of swing and stance duration. The strong mutual coupling between the CPG and the robot allows for more robust locomotion, even under non precise parameters and non-flat environment.
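One common way to obtain the independent control of swing and stance durations discussed here is to switch the oscillator's instantaneous frequency depending on which half of the cycle it is in, and to let a touch sensor delay the stance-to-swing transition. The sketch below follows that general idea with arbitrary constants and a hypothetical `foot_in_contact()` sensor; it is not the authors' exact formulation.
```python
import numpy as np

def foot_in_contact(t):
    """Hypothetical touch sensor; replace with the real foot-contact signal."""
    return (t % 1.0) < 0.6

def cpg_step(phi, t, dt, omega_stance=2 * np.pi / 0.6, omega_swing=2 * np.pi / 0.4):
    """Phase oscillator with different angular velocities in stance and swing."""
    in_stance = np.sin(phi) < 0.0                 # lower half-cycle treated as stance
    omega = omega_stance if in_stance else omega_swing
    # Sensory feedback: near the stance-to-swing transition, slow the phase while the
    # foot is still loaded, so lift-off only happens once contact is released.
    near_liftoff = in_stance and np.cos(phi) > 0.95
    if near_liftoff and foot_in_contact(t):
        omega *= 0.1
    return (phi + dt * omega) % (2 * np.pi)

phi, dt = np.pi, 0.002
hip_angle = []
for k in range(5000):
    t = k * dt
    phi = cpg_step(phi, t, dt)
    hip_angle.append(0.3 * np.cos(phi))           # map the phase to a joint setpoint
```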
---
paper_title: Dynamic walking and running of a bipedal robot using hybrid central pattern generator method
paper_content:
This paper presents simulation and experimental results of dynamic walking and running of a bipedal robot. We proposed the hybrid central pattern generator (H-CPG) method to realize adaptive dynamic motions including stepping and jumping. The method basically consists of CPG models, augmented with a force control system that controls the force acting from a leg on the floor in the vertical and horizontal directions separately. So far, 2D walking and running have been realized on level ground and on a slope, at a speed of 1.6 m/s.
---
paper_title: Locomotion pattern generation of semi-looper type robots using central pattern generators based on van der Pol oscillators
paper_content:
A control problem is studied for generating the locomotion pattern of a semi-looper type robot by applying central pattern generators (CPGs), in which such a robot can realize two-rhythm motion and green caterpillar locomotion depending on the condition of environment. After deriving the dynamical model with two links and one actuator, the simulation of the robot is conducted using a CPG consisting of one van der Pol (VDP) oscillator. A CPG network composed of two VDP oscillators is further constructed to realize four-rhythm motion with three links and two actuators.
---
paper_title: Evolving Swimming Controllers for a Simulated Lamprey with Inspiration from Neurobiology
paper_content:
This paper presents how neural swimming controllers for a simulated lamprey can be developed using evolutionary algorithms. A genetic algorithm is used for evolving the architecture of a connectionist model which determines the muscular activity of a simulated body in interaction with water. This work is inspired by the biological model developed by Ekeberg which reproduces the central pattern generator observed in the real lamprey (Ekeberg, 1993). In evolving artificial controllers, we demonstrate that a genetic algorithm can be an interesting design technique for neural controllers and that there exist alternative solutions to the biological connectivity. A variety of neural controllers are evolved which can produce the pattern of oscillations necessary for swimming. These patterns can be modulated through the external excitation applied to the network in order to vary the speed and the direction of swimming. The best evolved controllers cover larger ranges of frequencies, phase lags and speeds of s...
---
paper_title: Design of a novel central pattern generator and the hebbian motion learning
paper_content:
In this paper, we propose a new CPG model and a Hebbian learning rule for the CPG. The output of the proposed CPG is determined only by the phase differences arising from the synchronization of its component oscillators. Phase synchronization can be regarded as an adaptive behavior with respect to the environment, so the CPG has adaptability despite having only simple connections to the environment. We also propose a motion learning rule for the proposed CPG. Since the rule is described by simple signal processing, it can easily be realized by electric circuits and can be used efficiently for robots with many degrees of freedom.
---
paper_title: A Theory on Autonomous Distributed Systems with Application to a Gait Pattern Generator of Quadruped
paper_content:
This paper considers a synthesis approach to autonomous distributed systems in which the functional order of the entire system is generated by cooperative interaction among its subsystems, each of which has the autonomy to control a part of the state of the system, and its application to pattern generators of animal locomotion. First, biological locomotory rhythms and their generators, and gait patterns of quadrupeds, are reviewed briefly. Then, a design principle for autonomous coordination of many oscillators is proposed. Using these results, a gait pattern generator is synthesized. Finally, it is shown using computer simulations that the proposed systems generate desirable patterns.
---
paper_title: Configuring of Spiking Central Pattern Generator Networks for Bipedal Walking Using Genetic Algorthms
paper_content:
In limbed animals, spinal neural circuits responsible for controlling muscular activities during walking are called central pattern generators (CPG). CPG networks display oscillatory activities that actuates individual or groups of muscles in a coordinated fashion so that the limbs of the animal are flexed and extended at the appropriate time and with the required velocity for the animal to efficiently traverse various types of terrain, and to recover from environmental perturbation. Typically, the CPG networks are constructed with many neurons, each of which has a number of control parameters. As the number of muscles increases, it is often impossible to manually, albeit intelligently, select the network parameters for a particular movement. Furthermore, it is virtually impossible to reconfigure the parameters on-line. This paper describes how genetic algorithms (GA) can be used for on-line (re)configuring of CPG networks for a bipedal robot. We show that the neuron parameters and connection weights/network topology of a canonical walking network can be reconfigured within a few generations of the GA. The networks, constructed with integrate-and-fire-with-adaptation (IFA) neurons, are implemented with a microcontroller and can be reconfigured to vary walking speed from 0.5Hz to 3.5Hz. The phase relationship between the hips and knees can be arbitrarily set (to within 1 degree) and prescribed complex joint angle profiles are realized. This is a powerful approach to generating complex muscle synergies for robots with multiple joints and distributed actuators.
---
paper_title: A central pattern generator for insect gait production
paper_content:
We present a neural network model inspired by both behavioral and neurophysiological data on insect locomotion. The model consists of a central rhythmic pattern generator and a sensory-motor network. We show that it exhibits various behavioral properties observed in several insect species: it produces a continuum of stable gaits ranging from metachronal to tripod; the changes in the duration of the protraction-retraction phases for various walking speeds closely follow the behavioral data; and the network is insensitive to the initial position of the legs, in that it rapidly achieves a coherent phase relationship regardless of the initial conditions. In addition, this transient property also extends to situations where rapid changes occur in the walking speed; the network can reorganize its phase rapidly during fast accelerations as well as fast decelerations.
---
paper_title: Programmable central pattern generators: an application to biped locomotion control
paper_content:
We present a system of coupled nonlinear oscillators to be used as programmable central pattern generators, and apply it to control the locomotion of a humanoid robot. Central pattern generators are biological neural networks that can produce coordinated multidimensional rhythmic signals, under the control of simple input signals. They are found both in vertebrate and invertebrate animals for the control of locomotion. In this article, we present a novel system composed of coupled adaptive nonlinear oscillators that can learn arbitrary rhythmic signals in a supervised learning framework. Using adaptive rules implemented as differential equations, parameters such as intrinsic frequencies, amplitudes, and coupling weights are automatically adjusted to replicate a teaching signal. Once the teaching signal is removed, the trajectories remain embedded as the limit cycle of the dynamical system. An interesting aspect of this approach is that the learning is completely embedded into the dynamical system, and does not require external optimization algorithms. We use our system to encapsulate rhythmic trajectories for biped locomotion with a simulated humanoid robot, and demonstrate how it can be used to do online trajectory generation. The system can modulate the speed of locomotion, and even allow the reversal of direction (i.e. walking backwards). The integration of sensory feedback allows the online modulation of trajectories such as to increase the basin of stability of the gaits, and therefore the range of speeds that can be produced
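The idea of embedding a demonstrated rhythmic signal into the limit cycle of adaptive oscillators can be illustrated with an adaptive-frequency Hopf oscillator that locks onto the frequency of a teaching signal. The form below is the commonly used adaptive Hopf oscillator with illustrative constants, so it should be read as a sketch of the principle rather than the exact system of the paper (which also adapts amplitudes and coupling weights).
```python
import numpy as np

def adaptive_hopf_step(x, y, omega, F, dt, gamma=8.0, mu=1.0, eps=0.9):
    """Hopf oscillator whose intrinsic frequency adapts to a forcing signal F."""
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y + eps * F
    dy = gamma * (mu - r2) * y + omega * x
    domega = -eps * F * y / max(np.sqrt(r2), 1e-6)   # frequency adaptation rule
    return x + dt * dx, y + dt * dy, omega + dt * domega

dt = 0.001
x, y, omega = 1.0, 0.0, 2.0 * np.pi * 0.5            # start far from the target frequency
for k in range(200000):
    t = k * dt
    teach = np.sin(2 * np.pi * 1.5 * t)              # teaching signal at 1.5 Hz
    x, y, omega = adaptive_hopf_step(x, y, omega, teach, dt)
print(omega / (2 * np.pi))   # the adapted frequency drifts toward the teaching frequency (~1.5 Hz)
```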
---
paper_title: BoxyBot: a swimming and crawling fish robot controlled by a central pattern generator
paper_content:
We present a novel fish robot capable of swimming and crawling. The robot is driven by DC motors and has three actuated fins, with two pectoral fins and one caudal fin. It is loosely inspired from the boxfish. The control architecture of the robot is constructed around a central pattern generator (CPG) implemented as a system of coupled nonlinear oscillators, which, like its biological counterpart, can produce coordinated patterns of rhythmic activity while being modulated by simple control parameters. Using the CPG model, the robot is capable of performing and switching between a variety of different locomotor behaviors such as swimming forwards, swimming backwards, turning, rolling, moving upwards/downwards, and crawling. These behaviors are triggered and modulated by sensory input provided by light and water sensors. Results are presented demonstrating the agility of the robot, and interesting properties of a CPG-based control approach such as stability of the rhythmic patterns due to limit cycle behavior, and the production of smooth trajectories despite abrupt changes of control parameters
---
paper_title: Automated evolutionary design, robustness, and adaptation of sidewinding locomotion of a simulated snake-like robot
paper_content:
Inspired by the efficient method of locomotion of the rattlesnake Crotalus cerastes, the objective of this work is automatic design through genetic programming (GP) of the fastest possible (sidewinding) locomotion of simulated limbless, wheelless snake-like robot (Snakebot). The realism of simulation is ensured by employing the Open Dynamics Engine (ODE), which facilitates implementation of all physical forces, resulting from the actuators, joints constrains, frictions, gravity, and collisions. Reduction of the search space of the GP is achieved by representation of Snakebot as a system comprising identical morphological segments and by automatic definition of code fragments, shared among (and expressing the correlation between) the evolved dynamics of the vertical and horizontal turning angles of the actuators of Snakebot. Empirically obtained results demonstrate the emergence of sidewinding locomotion from relatively simple motion patterns of morphological segments. Robustness of the sidewinding Snakebot, which is considered to be the ability to retain its velocity when situated in an unanticipated environment, is illustrated by the ease with which Snakebot overcomes various types of obstacles such as a pile of or burial under boxes, rugged terrain, and small walls. The ability of Snakebot to adapt to partial damage by gradually improving its velocity characteristics is discussed. Discovering compensatory locomotion traits, Snakebot recovers completely from single damage and recovers a major extent of its original velocity when more significant damage is inflicted. Exploring the opportunity for automatic design and adaptation of a simulated artifact, this work could be considered as a step toward building real Snakebots, which are able to perform robustly in difficult environments.
---
paper_title: Experimentally Verified Optimal Serpentine Gait and Hyperredundancy of a Rigid-Link Snake Robot
paper_content:
In this study, we examine, for a six-link snake robot, how an optimal gait might change as a function of the snake-surface interaction model and how the overall locomotion performance changes under nonoptimal conditions such as joint failure. Simulations are evaluated for three different types of friction models, and it is shown that the gait parameters for serpentine motion are very dependent on the frictional model if minimum power expenditure is desired for a given velocity. Experimental investigations then motivate a surface interaction model not commonly used in snake locomotion studies. Using this new model, simulation results are compared to experiments for nominal and nonnominal locomotion cases including actuator faults. It is shown that this model quite accurately predicts locomotion velocities and link profiles, but that the accuracy of these predictions degrades severely at speeds where actuator dynamics become significant.
---
paper_title: Structural Evolution of Central Pattern Generators for Bipedal Walking in 3D Simulation
paper_content:
Anthropomorphic walking for a simulated bipedal robot has been realized by means of artificial evolution of central pattern generator (CPG) networks. The approach has been investigated through full rigid-body dynamics simulations in 3D of a bipedal robot with 14 degrees of freedom. The half-center CPG model has been used as an oscillator unit, with interconnection paths between oscillators undergoing structural modifications using a genetic algorithm. In addition, the connection weights in a feedback network of predefined structure were evolved. Furthermore, a supporting structure was added to the robot in order to guide the evolutionary process towards natural, human-like gaits. Subsequently, this structure was removed, and the ability of the best evolved controller to generate a bipedal gait without the help of the supporting structure was verified. Stable, natural gait patterns were obtained, with a maximum walking speed of around 0.9 m/s.
---
paper_title: Passive compliant quadruped robot using Central Pattern Generators for locomotion control
paper_content:
We present a new quadruped robot, "Cheetah", featuring three-segment pantographic legs with passive compliant knee joints. Each leg has two degrees of freedom: the knee and hip joints are actuated using proximally mounted RC servo motors, and force transmission to the knee is achieved by means of a Bowden cable mechanism. Simple electronics to command the actuators from a desktop computer have been designed in order to test the robot. A Central Pattern Generator (CPG) network has been implemented to generate different gaits. A parameter space search was performed and tested on the robot to optimize forward velocity.
---
paper_title: Research on gait planning of artificial leg based on central pattern generator
paper_content:
The biped robot with heterogeneous legs (BRHL) is a novel robot model which consists of an artificial leg and an intelligent bionic leg. The artificial leg is used to simulate the amputee's healthy leg, and the bionic leg works as the intelligent artificial limb. The target of BRHL research is to describe the present gait of the healthy leg and make the intelligent bionic leg follow the walking of the artificial leg in all phases, so gait planning of the artificial leg is the emphasis of BRHL research. This paper uses a central pattern generator (CPG) model for the artificial leg's gait planning from a biological point of view. To obtain a natural and robust walking pattern, a genetic algorithm is used to optimize the parameters of the CPG network model, and the fitness function is formulated based on the zero moment point (ZMP). Simulation results confirm the feasibility of this method.
---
paper_title: Autonomous evolution of dynamic gaits with two quadruped robots
paper_content:
A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First, we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
---
paper_title: Design of a Central Pattern Generator for Bionic-robot Joint with Angular Frequency Modulation
paper_content:
The paper proposes an artificial central pattern generator (CPG) for bionic-robot joint control. The neural oscillator adopted to produce rhythmic patterns is specially designed from an original sine-cosine oscillator model. An amplitude neural estimator consisting of two neurons is presented to provide sensor feedback to the CPG control. The artificial CPG can adapt itself to variations in the physical system parameters by modulating the angular frequency of the rhythmic movement.
---
paper_title: Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot
paper_content:
In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with CPG by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world. Numerical simulations and hardware experiments evaluate the walking velocity and stability. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.
---
paper_title: Bipedal locomotion control using a four-compartmental central pattern generator
paper_content:
In this paper, we develop a simple bipedal locomotion algorithm based on biological concepts. The algorithm utilizes a central pattern generator (CPG) composed of four coupled neural oscillators (NOs) to generate control signals for the bipedal robot. Feedback from the robot dynamics and the environment is used to update the CPG online. Our algorithm is then tested on a seven-link model of a bipedal robot. Simulation results suggest the proposed CPG can generate a smooth and continuous walking pattern for the robot.
---
paper_title: Online Generation of Cyclic Leg Trajectories Synchronized with Sensor Measurement
paper_content:
The generation of trajectories for a biped robot is a problem which has been widely studied for several years, and many satisfactory offline solutions exist for steady-state walking in the absence of disturbances. The question is more complex when the desired trajectories of joints or links have to be generated or adapted online, i.e. in real time, for example when these trajectories must be tightly synchronized with an external motion. This is precisely the problem addressed in this paper. We consider the case where the "master" motion is measured by a position sensor embedded on a human leg, and propose a method to synchronize the motion of a robot or other device with the output signal of the sensor. The main goal is to estimate as accurately as possible the current phase along the gait cycle. For that purpose we use a model based on a nonlinear oscillator with which we associate an observer. Introducing the sensor output into the observer allows us to compute the oscillator phase and to generate a synchronized multi-link trajectory at a very low computational cost. The paper also presents evaluation results in terms of robustness against parameter estimation errors and velocity changes in the input.
---
paper_title: Modeling of a Neural Pattern Generator with Coupled nonlinear Oscillators
paper_content:
A set of van der Pol oscillators is arranged in a network in which each oscillator is coupled to each other oscillator. Through the selection of coupling coefficients, the network is made to appear as a ring and as a chain of coupled oscillators. Each oscillator is provided with amplitude, frequency, and offset parameters which have analytically indeterminable effects on the output waves. These systems are simulated on the digital computer in order to study the amplitude, frequency, offset, and phase relationships of the waves versus parameter changes. Based on the simulations, systems of coupled oscillators are configured so that they exhibit stable patterns of signals which can be used to model the central pattern generator (CPG) of living organisms. Using a simple biped as an example locomotory system, the CPG model generates control signals for simulated walking and jumping maneuvers. It is shown that with parameter adjustments, as guided by the simulations, the model can be made to generate kinematic trajectories which closely resemble those for the human walking gait. Furthermore, minor tuning of these parameters along with some algebraic sign changes of coupling coefficients can effect a transition in the trajectories to those of a two-legged hopping gait. The generalized CPG model is shown to be versatile enough that it can also generate various n-legged gaits and spinal undulatory motions, as in the swimming motions of a fish.
---
paper_title: Coupled Van Der Pol oscillators utilised as Central pattern generators for quadruped locomotion
paper_content:
Central pattern generators (CPGs) inspired by the neural systems of animals are widely used for the control of locomotion in robots. The objective of this paper is to model a CPG network formed by a set of mutually coupled Van der Pol (VDP) oscillators for generating rhythmic movement patterns for a multi-joint robot. First, a VDP-CPG network is made up of four coupled VDP oscillators, which can produce multiple phase-locked oscillation patterns that correspond to the four basic quadrupedal gaits. Then, transitions between the different gaits are generated by altering the internal oscillator parameters. Finally, we use the VDP-CPG network to produce the joint trajectories for AIBO to control its locomotion. The simulation and experimental results demonstrate that the designed VDP-CPG network is effective for controlling quadruped locomotion.
---
paper_title: A Biologically Inspired Biped Locomotion Strategy for Humanoid Robots: Modulation of Sinusoidal Patterns by a Coupled Oscillator Model
paper_content:
Biological systems seem to have a simpler but more robust locomotion strategy than that of the existing biped walking controllers for humanoid robots. We show that a humanoid robot can step and walk using simple sinusoidal desired joint trajectories with their phase adjusted by a coupled oscillator model. We use the center-of-pressure location and velocity to detect the phase of the lateral robot dynamics. This phase information is used to modulate the desired joint trajectories. We do not explicitly use dynamical parameters of the humanoid robot. We hypothesize that a similar mechanism may exist in biological systems. We applied the proposed biologically inspired control strategy to our newly developed human-sized humanoid robot computational brain (CB) and a small size humanoid robot, enabling them to generate successful stepping and walking patterns.
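The strategy described modulates the phase of simple sinusoidal joint trajectories using the measured centre-of-pressure oscillation. A heavily simplified sketch of that idea is given below: a phase oscillator is entrained to the measured CoP phase with a Kuramoto-style coupling. `measure_cop_phase()` is a hypothetical placeholder for the phase extracted from CoP position and velocity, and the coupling constant and amplitudes are arbitrary.
```python
import numpy as np

def measure_cop_phase(t):
    """Hypothetical CoP phase estimate (e.g. atan2 of lateral CoP velocity vs. position).
    Here it is faked as an external rhythm the controller should lock onto."""
    return (2 * np.pi * 0.9 * t) % (2 * np.pi)

def controller_step(phi, t, dt, omega=2 * np.pi * 1.0, K=4.0):
    """Phase oscillator entrained to the measured CoP phase."""
    phi_cop = measure_cop_phase(t)
    dphi = omega + K * np.sin(phi_cop - phi)      # pull the controller phase toward the CoP phase
    return (phi + dt * dphi) % (2 * np.pi)

phi, dt = 0.0, 0.002
hip_roll, ankle_roll = [], []
for k in range(5000):
    t = k * dt
    phi = controller_step(phi, t, dt)
    hip_roll.append(0.05 * np.sin(phi))           # desired joint angles follow the
    ankle_roll.append(0.08 * np.sin(phi))         # phase-adjusted sinusoidal pattern
```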
---
paper_title: Development of Adaptive Modular Active Leg (AMAL) using bipedal robotics technology
paper_content:
The objective of the work presented here is to develop a low cost active above knee prosthetic device exploiting bipedal robotics technology which will work utilizing the available biological motor control circuit properly integrated with a Central Pattern Generator (CPG) based control scheme. The approach is completely different from the existing Active Prosthetic devices, designed primarily as standalone systems utilizing multiple sensors and embedded rigid control schemes. In this research, first we designed a fuzzy logic based methodology for offering suitable gait pattern for an amputee, followed by formulating a suitable algorithm for designing a CPG, based on Rayleigh's oscillator. An indigenous probe, Humanoid Gait Oscillator Detector (HGOD) has been designed for capturing gait patterns from various individuals of different height, weight and age. These data are used to design a Fuzzy inference system which generates most suitable gait pattern for an amputee. The output of the Fuzzy inference system is used for designing a CPG best suitable for the amputee. We then developed a CPG based control scheme for calculating the damping profile in real time for maneuvering a prosthetic device called AMAL (Adaptive Modular Active Leg). Also a number of simulation results are presented which show the stable behavior of knee and hip angles and determine the stable limit cycles of the network.
---
paper_title: Fast Biped Walking with a Sensor-driven Neuronal Controller and Real-time Online Learning
paper_content:
In this paper, we present our design and experiments on a planar biped robot under the control of a pure sensor-driven controller. This design has some special mechanical features, for example small curved feet allowing rolling action and a properly positioned center of mass, that facilitate fast walking through exploitation of the robot's natural dynamics. Our sensor-driven controller is built with biologically inspired sensor- and motor-neuron models, and does not employ any kind of position or trajectory tracking control algorithm. Instead, it allows our biped robot to exploit its own natural dynamics during critical stages of its walking gait cycle. Due to the interaction between the sensor-driven neuronal controller and the properly designed mechanics of the robot, the biped robot can realize stable dynamic walking gaits in a large domain of the neuronal parameters. In addition, this structure allows the use of a policy gradient reinforcement learning algorithm to tune the parameters of the sensor-driven controller in real-time, during walking. This way RunBot can reach a relative speed of 3.5 leg lengths per second after only a few minutes of online learning, which is faster than that of any other biped robot, and is also comparable to the fastest relative speed of human walking.
---
paper_title: Design of central pattern generator for humanoid robot walking based on multi-objective GA
paper_content:
Recently, the field of humanoid robotics has attracted more and more interest, and research on humanoid locomotion based on central pattern generators (CPGs) reveals many challenging aspects. This paper describes the design of a CPG for stable humanoid bipedal locomotion using an evolutionary approach. In this research, each joint of the humanoid is driven by a neuron that consists of two coupled neural oscillators, and the neurons of corresponding joints are connected by weighted links. To achieve a natural and robust walking pattern, an evolutionary multi-objective optimization algorithm is used to solve the weight optimization problem. The fitness functions are formulated based on the zero moment point (ZMP), the global attitude of the robot and the walking speed. In the algorithm, real-valued coding and tournament selection are applied, and the crossover and mutation operators are chosen as heuristic crossover and boundary mutation, respectively. After evolution, the robot is able to walk in the given environment, and a simulation shows the result.
---
paper_title: Pattern generators with sensory feedback for the control of quadruped locomotion
paper_content:
Central pattern generators (CPGs) are becoming a popular model for the control of locomotion of legged robots. Biological CPGs are neural networks responsible for the generation of rhythmic movements, especially locomotion. In robotics, a systematic way of designing such CPGs as artificial neural networks or systems of coupled oscillators with sensory feedback inclusion is still missing. In this contribution, we present a way of designing CPGs with coupled oscillators in which we can independently control the ascending and descending phases of the oscillations (i.e. the swing and stance phases of the limbs). Using insights from dynamical system theory, we construct generic networks of oscillators able to generate several gaits under simple parameter changes. Then we introduce a systematic way of adding sensory feedback from touch sensors in the CPG such that the controller is strongly coupled with the mechanical system it controls. Finally we control three different simulated robots (iCub, Aibo and Ghostdog) using the same controller to show the effectiveness of the approach. Our simulations prove the importance of independent control of swing and stance duration. The strong mutual coupling between the CPG and the robot allows for more robust locomotion, even under non precise parameters and non-flat environment.
---
paper_title: Gait adaptation method of biped robot for various terrains using central pattern generator (CPG) and learning mechanism
paper_content:
There is evidence showing that animals have inherent rhythmic pattern generators, called central pattern generators (CPGs), that produce locomotion, respiration, heartbeat, etc. CPGs have been widely used in robotic systems, especially for locomotion. In this paper, we propose a gait adaptation method of a biped robot for various terrains. The CPGs are used for creating the desired joint angles of the biped robot, and a learning mechanism is realized with genetic algorithms (GAs) and a neural network (NN). For each terrain, the most suitable CPG parameters that produce a stable biped gait are found by the GA. The parameter set found by the GA and the sensor data at that time are used for training the NN. After training, the NN can produce suitable CPG parameters according to the sensor input. Finally, the gait of the biped robot is changed according to the environment.
---
paper_title: Locomotion pattern generation of semi-looper type robots using central pattern generators based on van der Pol oscillators
paper_content:
A control problem is studied for generating the locomotion pattern of a semi-looper type robot by applying central pattern generators (CPGs), in which such a robot can realize two-rhythm motion and green caterpillar locomotion depending on the condition of environment. After deriving the dynamical model with two links and one actuator, the simulation of the robot is conducted using a CPG consisting of one van der Pol (VDP) oscillator. A CPG network composed of two VDP oscillators is further constructed to realize four-rhythm motion with three links and two actuators.
---
paper_title: Dynamically balanced optimal gaits of a ditch-crossing biped robot
paper_content:
This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure its smooth walking. It is to be noted that the power consumption and dynamic balance of the robot are also dependent on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to provide training off-line, to the NN-based and FL-based gait planners developed. Once optimized, the planners will be able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners are able to generate more balanced gaits and that, too, at the cost of lower power consumption compared to those yielded by the analytical approach. The NN-based and FL-based approaches are found to be more adaptive compared to the other approach in generating the gaits of the biped robot.
---
paper_title: Application of Genetic Algorithms for biped robot gait synthesis optimization during walking and going up-stairs
paper_content:
Selecting an appropriate gait can reduce the energy consumed by a biped robot. In this paper, a Genetic Algorithm gait synthesis method is proposed, which generates the angle trajectories based on minimum consumed energy and minimum torque change. The gait synthesis is considered for two cases: walking and going up-stairs. The proposed method can be applied for a wide range of step lengths and step times during walking, or step lengths, stair heights and step times for going up-stairs. The angle trajectories are generated without neglecting the stability of the biped robot, and can also be generated for other tasks to be performed by the biped robot, such as going down-stairs, overcoming obstacles, etc. In order to verify the effectiveness of the proposed method, the results for minimum consumed energy and minimum torque change are compared. A Radial Basis Function Neural Network is considered for the real-time application. Simulations are realized based upon the parameters of the 'Bonten-Maru ...
---
paper_title: Real time gait generation for autonomous humanoid robots: A case study for walking
paper_content:
As autonomous humanoid robots assume more important roles in everyday life, they are expected to perform many different tasks and quickly adapt to unknown environments. Therefore, humanoid robots must quickly generate the appropriate gait based on information received from the visual system. In this work, we present a new method for real-time gait generation during walking based on Neural Networks. Minimum-consumed-energy gaits, similar to human motion, are used to teach the Neural Network. After supervised learning, the Neural Network can quickly generate the humanoid robot gait. Simulation and experimental results utilizing the “Bonten-Maru I” humanoid robot show good performance of the proposed method.
---
paper_title: A Learning Architecture Based on Reinforcement Learning for Adaptive Control of the Walking Machine LAURON
paper_content:
The learning of complex control behaviour for autonomous mobile robots is one of the current research topics. In this article an intelligent control architecture is presented which integrates learning methods and available domain knowledge. This control architecture is based on Reinforcement Learning and allows continuous input and output parameters, hierarchical learning, multiple goals, self-organized topology of the networks used, and online learning. As a testbed, this architecture is applied to the six-legged walking machine LAURON to learn leg control and leg coordination.
---
paper_title: Control Strategy for the Robust Dynamic Walk of a Biped Robot
paper_content:
This paper presents the main simulation and experimental results from studies of the robustness of a proposed new control strategy for the under-actuated robot RABBIT. The disturbances studied include modifications of the characteristics of the foot/ground interaction and the application of external forces to the trunk of the robot. These two kinds of disturbance correspond to those most frequently encountered by biped robots during walking in an unknown environment.
---
paper_title: Robustness of the dynamic walk of a biped robot subjected to disturbing external forces by using CMAC neural networks
paper_content:
In this paper, we propose a control strategy that allows an under-actuated robot to perform a dynamic walking gait even when it is subjected to destabilizing external disturbances. This control strategy is based on two stages. The first stage consists of using a set of pragmatic rules to generate a succession of passive and active phases that produce a dynamic walking gait of the robot. The joint trajectories of this reference gait are learned using neural networks. In the second stage, we use these neural networks to generate the trajectories learned during the first stage. The goal of using these neural networks is to increase the robustness of the control of the robot's dynamic walking gait in the presence of external disturbances. The first experimental results are also presented.
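A CMAC (Cerebellar Model Articulation Controller) network of the kind used in this line of work is essentially a set of overlapping coarse tilings with a linear output layer trained by LMS. The following is a small generic sketch assuming a one-dimensional input (e.g. the normalized gait phase); it illustrates the structure only and is not the authors' implementation.
```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC: several shifted tilings over [0, 1], LMS weight updates."""
    def __init__(self, n_tilings=8, n_tiles=20, lr=0.1):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.w = np.zeros((n_tilings, n_tiles + 1))

    def _active_tiles(self, x):
        # Each tiling is offset by a fraction of a tile width
        offsets = np.arange(self.n_tilings) / self.n_tilings / self.n_tiles
        return np.floor((np.clip(x, 0.0, 1.0) + offsets) * self.n_tiles).astype(int)

    def predict(self, x):
        idx = self._active_tiles(x)
        return self.w[np.arange(self.n_tilings), idx].sum()

    def train(self, x, target):
        idx = self._active_tiles(x)
        err = target - self.predict(x)
        self.w[np.arange(self.n_tilings), idx] += self.lr * err / self.n_tilings

# Learn a reference joint trajectory over one gait cycle (illustrative target signal)
net = CMAC1D()
for _ in range(200):
    for phase in np.linspace(0.0, 1.0, 100):
        net.train(phase, np.sin(2 * np.pi * phase) + 0.3)
print(net.predict(0.25))   # close to sin(pi/2) + 0.3 = 1.3
```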
---
paper_title: Biped dynamic walking using reinforcement learning
paper_content:
This paper presents some results from a study of biped dynamic walking using reinforcement learning. During this study a hardware biped robot was built, a new reinforcement learning algorithm as well as a new learning architecture were developed. The biped learned dynamic walking without any previous knowledge about its dynamic model. The self scaling reinforcement (SSR) learning algorithm was developed in order to deal with the problem of reinforcement learning in continuous action domains. The learning architecture was developed in order to solve complex control problems. It uses different modules that consist of simple controllers and small neural networks. The architecture allows for easy incorporation of new modules that represent new knowledge, or new requirements for the desired task.
---
paper_title: Neural networks for the control of a six-legged walking machine
paper_content:
In this paper a hierarchical control architecture for a six-legged walking machine is presented. The basic components of this architecture are neural networks which are trained using examples of the control process. It is shown how the basic components “leg control” and “leg coordination” have been implemented by recurrent and feedforward networks, respectively. The training process and the tests of the walking behaviour have mainly been done in a simulation system. First tests of the leg control on our real walking machine LAURON are also described.
---
paper_title: A movement pattern generator model using artificial neural networks
paper_content:
The authors have developed a movement pattern generator, using an artificial neural network (ANN) for generating periodic movement trajectories. This model is based on the concept of 'central pattern generators'. Jordan's (1986) sequential network, which is capable of learning sequences of patterns, was modified and used to generate several bipedal trajectories (or gaits), coded in task space, at different frequencies. The network model successfully learned all of the trajectories presented to it. The model has many attractive properties, such as limit cycle behavior, generalization of trajectories and frequencies, phase maintenance, and fault tolerance. The movement pattern generator model is potentially applicable for improved understanding of animal locomotion and for use in legged robots and rehabilitation medicine.
---
paper_title: Evolution of Central Pattern Generators for Bipedal Walking in a Real-Time Physics Environment
paper_content:
We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary in order to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to 3D physically simulated biped locomotion.
---
paper_title: Recurrent neural networks as pattern generators
paper_content:
A fully connected recurrent ANN model is proposed as a generator of stable limit cycles. A hybrid genetic algorithm is used for training the network model. The behaviour of the model is presented through a number of simulation experiments and the stability of the generated limit cycles is tested under several conditions.
---
paper_title: Evolving Dynamical Neural Networks for Adaptive Behavior
paper_content:
We would like the behavior of the artificial agents that we construct to be as well-adapted to their environments as natural animals are to theirs. Unfortunately, designing controllers with these properties is a very difficult task. In this article, we demonstrate that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers. A significant advantage of this approach is that one need specify only a measure of an agent's overall performance rather than the precise motor output trajectories by which it is achieved. By manipulating the performance evaluation, one can place selective pressure on the development of controllers with desired properties. Several novel controllers have been evolved, including a chemotaxis controller that switches between different strategies depending on environmental conditions, and a locomotion controller that takes advantage of sensory feedback if available but th...
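Continuous-time recurrent neural networks of the kind evolved in this line of work obey a simple leaky-integrator equation. The sketch below shows the standard CTRNN update with arbitrary small random parameters; in the evolutionary setting, the weights, biases and time constants are what the genetic algorithm would tune against an overall performance measure.
```python
import numpy as np

rng = np.random.default_rng(2)

def ctrnn_step(y, I, dt, tau, W, theta):
    """Standard CTRNN dynamics: tau_i * dy_i/dt = -y_i + sum_j W_ji * sigma(y_j + theta_j) + I_i."""
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))
    dy = (-y + W.T @ sigma + I) / tau
    return y + dt * dy

n = 5
tau = rng.uniform(0.5, 2.0, n)          # neuron time constants
W = rng.normal(0.0, 2.0, (n, n))        # synaptic weights (what evolution would tune)
theta = rng.normal(0.0, 1.0, n)         # biases
y = np.zeros(n)
outputs = []
for _ in range(5000):
    y = ctrnn_step(y, I=np.zeros(n), dt=0.01, tau=tau, W=W, theta=theta)
    outputs.append(1.0 / (1.0 + np.exp(-(y + theta))))  # motor-neuron outputs over time
```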
---
paper_title: Application of evolved locomotion controllers to a hexapod robot
paper_content:
In previous work, we demonstrated that genetic algorithms could be used to evolve dynamical neural networks for controlling the locomotion of a simulated hexapod agent. We also demonstrated that these evolved controllers were robust to loss of sensory feedback and other peripheral variations. In this paper, we show that these locomotion controllers, evolved in simulation, are capable of directing the walking of a real six-legged robot, and that many of the desirable properties observed in simulation carry over directly to the real world. In addition, we demonstrate that these controllers are amenable to hardware implementation and can thus be easily embodied within the robot.
---
paper_title: Design of a Central Pattern Generator Using Reservoir Computing for Learning Human Motion
paper_content:
To generate coordinated periodic movements, robot locomotion demands mechanisms which are able to learn and produce stable rhythmic motion in a controllable way. Because systems based on biological central pattern generators (CPGs) can cope with these demands, this kind of system is gaining in popularity. In this work we introduce a novel methodology that uses the dynamics of a randomly connected recurrent neural network for the design of CPGs. When a randomly connected recurrent neural network is excited with one or more useful signals, an output can be trained by learning an instantaneous linear mapping of the neuron states. This technique is known as reservoir computing (RC). We will show that RC has the necessary capabilities to be fruitful in designing a CPG that is able to learn human motion, which is applicable to imitation learning in humanoid robots.
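Reservoir computing trains only a linear readout on top of a fixed random recurrent network. Below is a minimal echo-state-style sketch of how such a readout could be fit to a periodic, CPG-like target; the reservoir size, scaling, and ridge parameter are arbitrary choices, and real CPG applications typically also feed the output back into the reservoir, which is omitted here for brevity.
```python
import numpy as np

rng = np.random.default_rng(3)
n_res, leak = 200, 0.3

# Fixed random reservoir, rescaled to a spectral radius below 1 (echo-state property)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))

T = 3000
u = np.sin(2 * np.pi * 0.01 * np.arange(T))[:, None]        # driving phase signal
target = np.sin(2 * np.pi * 0.01 * np.arange(T) + 1.0)      # desired joint trajectory

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u[t])  # leaky reservoir update
    states[t] = x

# Ridge-regression readout, fit after discarding an initial washout period
washout = 200
S, Y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ Y)
prediction = states @ W_out     # trained instantaneous linear mapping of the neuron states
```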
---
paper_title: Efference copies in neural control of dynamic biped walking
paper_content:
In the early 1950s, von Holst and Mittelstaedt proposed that motor commands copied within the central nervous system (efference copy) help to distinguish 'reafference' activity (afference activity due to self-generated motion) from 'exafference' activity (afference activity due to external stimulus). In addition, an efference copy can be also used to compare it with the actual sensory feedback in order to suppress self-generated sensations. Based on these biological findings, we conduct here two experimental studies on our biped "RunBot" where such principles together with neural forward models are applied to RunBot's dynamic locomotion control. The main purpose of this article is to present the modular design of RunBot's control architecture and discuss how the inherent dynamic properties of the different modules lead to the required signal processing. We believe that the experimental studies pursued here will sharpen our understanding of how the efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots.
---
paper_title: Neuroethological Concepts and their Transfer to Walking Machines
paper_content:
A systems approach to animal motor behavior reveals concepts that can be useful for the pragmatic design of walking machines. This is because the relation of animal behavior to its underlying nervous control algorithms bears many parallels to the relation of machine function to electronic control. Here, three major neuroethological concepts of motor behavior are described in terms of a conceptual framework based on artificial neural networks (ANN). Central patterns of activity and postural reflexes are both interpreted as a result of feedback loops, with the distinction of loops via an internal model from loops via the physical environment (body, external world). This view allows continuous transitions between predictive (centrally driven) and reactive (reflex driven) motor systems. Motor primitives, behavioral modules that are elicited by distinct commands, are also considered. ANNs capture these three major concepts in terms of a formal description, in which the interactions and mutual interdependences ...
---
paper_title: Probabilistic Balance Monitoring for Bipedal Robots
paper_content:
In this paper, a probability-based balance monitoring concept for humanoid robots is proposed. Two algorithms are presented that allow us to distinguish between exceptional situations and normal operations. The first classification approach uses Gaussian-Mixture-Models (GMM) to describe the distribution of the robot's sensor data for typical situations such as stable walking or falling down. With the GMM it is possible to state the probability of the robot being in one of the known situations. The concept of the second algorithm is based on Hidden-Markov-Models (HMM). The objective is to detect and classify unstable situations by means of their typical sequences in the robot's sensor data. When appropriate reflex motions are linked to the critical situations, the robot can prevent most falls or is at least able to execute a controlled falling motion. The proposed algorithms are verified by simulations and experiments with our bipedal robot BARt-UH.
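A hedged sketch of the first algorithm's idea: fit one Gaussian mixture per known situation (e.g. "stable walking" vs. "falling") on labelled sensor frames, then classify new frames by comparing per-model log-likelihoods. The sensor data below is synthetic; feature dimensions and component counts are assumptions, not the paper's setup.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy 3-D "sensor" features (e.g. trunk pitch, pitch rate, ankle torque).
stable  = rng.normal([0.0, 0.0, 0.0], 0.3, size=(500, 3))
falling = rng.normal([1.5, 2.0, 0.8], 0.6, size=(500, 3))

models = {
    "stable walking": GaussianMixture(n_components=3, random_state=0).fit(stable),
    "falling":        GaussianMixture(n_components=3, random_state=0).fit(falling),
}

def classify(frame):
    """Return the situation whose GMM assigns the highest log-likelihood."""
    scores = {name: m.score_samples(frame.reshape(1, -1))[0]
              for name, m in models.items()}
    return max(scores, key=scores.get), scores

print(classify(np.array([0.1, -0.05, 0.02])))   # expected: "stable walking"
print(classify(np.array([1.4, 1.9, 0.7])))      # expected: "falling"
```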
---
paper_title: Embodied Symbol Emergence Based on Mimesis Theory
paper_content:
“Mimesis” theory from the cognitive science field and the “mirror neurons” found in the biology field show that the behavior generation process is not independent of the behavior cognition process; the two processes are closely related. During behavioral imitation, a human being does not simply perform joint-coordinate transformations, but recognizes the parent’s behavior, abstracts it as symbols, and generates its own behavior from them. Focusing on these facts, we propose a new method which carries out the behavior cognition and behavior generation processes at the same time. We also propose a mathematical model based on hidden Markov models in order to integrate four abilities: (1) symbol emergence; (2) behavior recognition; (3) self-behavior generation; (4) acquisition of the motion primitives. Finally, the feasibility of this method is shown through several experiments on a humanoid robot.
---
paper_title: Incremental Learning, Clustering and Hierarchy Formation of Whole Body Motion Patterns using Adaptive Hidden Markov Chains
paper_content:
This paper describes a novel approach for autonomous and incremental learning of motion pattern primitives by observation of human motion. Human motion patterns are abstracted into a dynamic stochastic model, which can be used for both subsequent motion recognition and generation, analogous to the mirror neuron hypothesis in primates. The model size is adaptable based on the discrimination requirements in the associated region of the current knowledge base. A new algorithm for sequentially training the Markov chains is developed, to reduce the computation cost during model adaptation. As new motion patterns are observed, they are incrementally grouped together using hierarchical agglomerative clustering based on their relative distance in the model space. The clustering algorithm forms a tree structure, with specialized motions at the tree leaves, and generalized motions closer to the root. The generated tree structure will depend on the type of training data provided, so that the most specialized motions will be those for which the most training has been received. Tests with motion capture data for a variety of motion primitives demonstrate the efficacy of the algorithm.
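A sketch of the grouping step described above: given a symmetric matrix of pairwise distances between learned motion models, hierarchical agglomerative clustering builds a tree with specialised motions at the leaves and more general groups towards the root. The distance matrix here is invented; in the paper it would come from comparing the stochastic motion models themselves.
```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

motions = ["walk", "run", "kick", "punch", "wave"]
D = np.array([                 # assumed pairwise model distances (symmetric, zero diagonal)
    [0.0, 0.2, 0.7, 0.9, 0.8],
    [0.2, 0.0, 0.6, 0.9, 0.8],
    [0.7, 0.6, 0.0, 0.5, 0.7],
    [0.9, 0.9, 0.5, 0.0, 0.4],
    [0.8, 0.8, 0.7, 0.4, 0.0],
])

Z = linkage(squareform(D), method="average")        # build the hierarchy
labels = fcluster(Z, t=0.55, criterion="distance")  # cut it into groups
for m, c in zip(motions, labels):
    print(f"{m:>6s} -> group {c}")                  # walk/run grouped, punch/wave grouped, etc.
```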
---
paper_title: Mimesis Model from Partial Observations for a Humanoid Robot
paper_content:
This paper proposes a new mimesis scheme for partial observations, consisting of two strategies; (1) motion understanding from partial observations and (2) proto-symbol-based motion duplication. With the proposed method, whole-body motion imitation is possible even when observing partial motion data. The scheme enables a humanoid robot to imitate a new observed motion by utilizing its own prior knowledge, without learning the demonstrated motion. Evaluation factors, such as inheritance coordinate and matching error, are introduced to evaluate imitation performance. The feasibility of the proposed scheme is demonstrated by an evaluation for a 20-degree-of-freedom humanoid robot.
---
paper_title: Experiments in learning distributed control for a hexapod robot
paper_content:
This paper reports on experiments involving a hexapod robot. Motivated by neurobiological evidence that control in real hexapod insects is distributed leg-wise, we investigated two approaches to learning distributed controllers: genetic algorithms and reinforcement learning. In the case of reinforcement learning, a new learning algorithm was developed to encourage cooperation between legs. Results from both approaches are presented and compared.
---
paper_title: Gait Optimization through Search
paper_content:
We present a search-based method for the generation of a terrain-adaptive optimal gait of a six-legged walking machine. In this, several heuristic rules have been proposed to reduce the search effort. We identify the useful support states of the machine and form a table to indicate for each of these states the list of other states to which a transition can be made. This helps in converging to and maintaining a periodic gait through a limited search while retaining adequate options to deviate from such a gait as and when needed. The criterion for optimization is coded into a function that evaluates the promise of a node in the search graph. We have shown how this function may be designed to generate the common periodic gaits like the wave gait, the equal phase gait, and the follow-the-leader gait. The purpose is to demonstrate that the proposed method is sufficiently general and can cater to a wide range of optimizing requirements.
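A minimal sketch of the search idea described above: support states are nodes, a hand-built transition table lists admissible successors, and a best-first search expands the most promising node according to an evaluation function. The states, transition table and scoring function here are illustrative only, not the machine states or heuristics from the paper.
```python
import heapq

transitions = {            # support state -> states reachable in one step
    "S0": ["S1", "S2"],
    "S1": ["S2", "S3"],
    "S2": ["S3"],
    "S3": ["S0"],           # returning to S0 closes one periodic gait cycle
}

def promise(path):
    """Toy evaluation: prefer short paths that end back at the start state."""
    return len(path) - (2 if path[-1] == "S0" and len(path) > 1 else 0)

def best_first_gait(start="S0", max_len=6):
    frontier = [(promise([start]), [start])]
    while frontier:
        score, path = heapq.heappop(frontier)
        if len(path) > 1 and path[-1] == start:
            return path                      # a periodic sequence of support states
        if len(path) >= max_len:
            continue
        for nxt in transitions[path[-1]]:
            new_path = path + [nxt]
            heapq.heappush(frontier, (promise(new_path), new_path))
    return None

print(best_first_gait())    # e.g. ['S0', 'S1', 'S3', 'S0'] under this toy scoring
```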
---
paper_title: Dynamically balanced optimal gaits of a ditch-crossing biped robot
paper_content:
This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure smooth walking. It is to be noted that the power consumption and dynamic balance of the robot also depend on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to train the NN-based and FL-based gait planners off-line. Once optimized, the planners are able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners generate more balanced gaits than the analytical approach, and do so with lower power consumption. The NN-based and FL-based approaches are also found to be more adaptive than the analytical approach in generating the gaits of the biped robot.
---
paper_title: Integration of linguistic and numerical information for biped control
paper_content:
Bipedal locomotion is an important hallmark of human evolution. Despite the complexity of the underlying control systems, human locomotion is characterized by smooth, regular, and repeating movements. Therefore, there is potential for applying human locomotion strategies, and any other knowledge available, to biped control. In order to make the most use of the information available, a linguistic-numerical integration-based biped control method is proposed in this paper. The numerical data from biped measuring instruments and the linguistic rules obtained from intuitive walking knowledge and biomechanics studies have been classified into four categories: direct rules, direct data, indirect rules, and indirect data. Based on inverse learning and data fusion theory, two simple and intuitive integration schemes are proposed to integrate linguistic and numerical information in various forms, such as direct and indirect. One is neurofuzzy-based integration, and the other is fuzzy rules extraction-based integration. The simulation results show that the biped gait and joint control performance can be significantly improved by the prescribed synergy method-based neurofuzzy gait synthesis and fuzzy rules extraction-based joint control strategies using the integrated linguistic and numerical information.
---
paper_title: Robot learning with GA-based fuzzy reinforcement learning agents
paper_content:
How to learn from both expert knowledge and measurement-based information for a robot to acquire perception and motor skills is a challenging research topic in the field of autonomous robotic systems. For this reason, a general GA (genetic algorithm)-based fuzzy reinforcement learning (GAFRL) agent is proposed in this paper. We first characterize the robot learning problem and point out some major issues that need to be addressed in conjunction with reinforcement learning. Based on a neural fuzzy network architecture of the GAFRL agent, we then discuss how different kinds of expert knowledge and measurement-based information can be incorporated in the GAFRL agent so as to accelerate its learning. By making use of the global optimization capability of GAs, the GAFRL can solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can evaluate the candidate solutions regularly even during the periods without external feedback from the environment. This can guide GAs to perform a more effective global search. Finally, different types of GAFRL agents are constructed and verified using the simulation model of a physical biped robot.
---
paper_title: Fuzzy-logic zero-moment-point trajectory generation for reduced trunk motions of biped robots
paper_content:
Trunk motions are typically used in biped robots to stabilize the locomotion. However, they can be very large for some leg trajectories unless they are carefully designed. This paper proposes a fuzzy-logic zero-moment-point (ZMP) trajectory generator that would eventually reduce the swing motion of the trunk significantly even though the leg trajectory is casually designed, for example, simply to avoid obstacles. The fuzzy-logic ZMP trajectory generator uses the leg trajectory as an input. The resulting ZMP trajectory is similar to that of a human one and continuously moves forward in the direction of the locomotion. The trajectory of the trunk to stabilize the locomotion is determined by solving a differential equation with the ZMP trajectory and the leg trajectory known. The proposed scheme is simulated on a 7-DOF biped robot in the sagittal plane. The simulation results show that the ZMP trajectory generated by the proposed fuzzy-logic generator increases the stability of the locomotion and thus reduces the motion range of the trunk significantly.
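A toy sketch loosely illustrating the fuzzy-logic idea above: the swing-leg progress within a step is fuzzified with triangular membership functions, a few rules map it to a forward ZMP offset, and the crisp ZMP is obtained by weighted-average defuzzification. The membership shapes, rule outputs and scales are assumptions, not the paper's rule base.
```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Rules: early / mid / late swing -> ZMP offset (metres ahead of the stance ankle).
rules = [
    (lambda p: tri(p, -0.5, 0.0, 0.5), 0.00),   # early swing -> ZMP over stance foot
    (lambda p: tri(p,  0.0, 0.5, 1.0), 0.04),   # mid swing   -> ZMP moves forward
    (lambda p: tri(p,  0.5, 1.0, 1.5), 0.08),   # late swing  -> ZMP near the toe
]

def fuzzy_zmp(progress):
    weights = [mu(progress) for mu, _ in rules]
    outputs = [out for _, out in rules]
    s = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / s if s > 0 else 0.0

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"progress {p:.2f} -> zmp offset {fuzzy_zmp(p) * 100:.1f} cm")
```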
---
paper_title: Dynamic balance of a biped robot using fuzzy reinforcement learning agents
paper_content:
This paper presents a general fuzzy reinforcement learning (FRL) method for biped dynamic balance control. Based on a neurofuzzy network architecture, different kinds of expert knowledge and measurement-based information can be incorporated into the FRL agent to initialise its action network, critic network and/or evaluation feedback module so as to accelerate its learning. The proposed FRL agent is constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that by incorporating human intuitive balancing knowledge and walking evaluation knowledge, the FRL agent's learning rate for side-to-side and front-to-back balance of the simulated biped can be improved. We also demonstrate that it is possible for a biped robot to start walking with a priori knowledge and then learn to improve its behaviour with the FRL agents.
---
paper_title: Optimal Path and Gait Generations Simultaneously of a Six-legged Robot Using a GA-Fuzzy Approach
paper_content:
This paper describes a new method for generating optimal path and gait simultaneously of a six-legged robot using a combined GA-fuzzy approach. The problem of combined path and gait generations involves three steps, namely determination of vehicle’s trajectory, foothold selection and design of a sequence of leg movements. It is a complicated task and no single traditional approach is found to be successful in handling this problem. Moreover, the traditional approaches do not consider optimization issues, yet they are computationally expensive. Thus, the generated path and gaits may not be optimal in any sense. To solve such problems optimally, there is still a need for the development of an efficient and computationally faster algorithm. In the proposed genetic-fuzzy approach, optimal path and gaits are generated by using fuzzy logic controllers (FLCs) and genetic algorithms (GAs) are used to find optimized FLCs. The optimization is done off-line on a number of training scenarios and optimal FLCs are found. The hexapod can then use these GA-tuned FLCs to navigate in test-case scenarios.
---
paper_title: On-line stable gait generation of a two-legged robot using a genetic–fuzzy system
paper_content:
Gait generation for legged vehicles has long been an area of keen interest to researchers. Soft computing is an emerging technique whose utility is most evident when problems are ill-defined, difficult to model, and exhibit large-scale solution spaces. Gait generation for legged vehicles is a complex task, and soft computing can therefore be applied to solve it. In this work, the gait generation problem of a two-legged robot is modeled using a fuzzy logic controller (FLC), whose rule base is optimized offline using a genetic algorithm (GA). Two different GA-based approaches (to improve the performance of the FLC) are developed and their performance is compared to that of a manually constructed FLC. Once optimized, the FLCs are able to generate a dynamically stable gait of the biped. As the CPU time of the algorithm is found to be only 0.002 s on a P-III PC, the algorithm is suitable for on-line (real-time) implementation.
---
paper_title: Incremental Learning and Memory Consolidation of Whole Body Human Motion Primitives
paper_content:
The ability to learn during continuous and on-line observation would be advantageous for humanoid robots, as it would enable them to learn during co-location and interaction in the human environment. However, when motions are being learned and clustered on-line, there is a trade-off between classification accuracy and the number of training examples, resulting in potential misclassifications both at the motion and hierarchy formation level. This article presents an approach enabling fast on-line incremental learning, combined with an incremental memory consolidation process correcting initial misclassifications and errors in organization, to improve the stability and accuracy of the learned motions, analogous to the memory consolidation process following motor learning observed in humans. Following initial organization, motions are randomly selected for reclassification, at both low and high levels of the hierarchy. If a better reclassification is found, the knowledge structure is reorganized to comply. The approach is validated during incremental acquisition of a motion database containing a variety of full body motions.1
---
paper_title: Dynamically balanced optimal gaits of a ditch-crossing biped robot
paper_content:
This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure smooth walking. It is to be noted that the power consumption and dynamic balance of the robot also depend on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to train the NN-based and FL-based gait planners off-line. Once optimized, the planners are able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners generate more balanced gaits than the analytical approach, and do so with lower power consumption. The NN-based and FL-based approaches are also found to be more adaptive than the analytical approach in generating the gaits of the biped robot.
---
paper_title: Probabilistic Balance Monitoring for Bipedal Robots
paper_content:
In this paper, a probability-based balance monitoring concept for humanoid robots is proposed. Two algorithms are presented that allow us to distinguish between exceptional situations and normal operations. The first classification approach uses Gaussian-Mixture-Models (GMM) to describe the distribution of the robot's sensor data for typical situations such as stable walking or falling down. With the GMM it is possible to state the probability of the robot being in one of the known situations. The concept of the second algorithm is based on Hidden-Markov-Models (HMM). The objective is to detect and classify unstable situations by means of their typical sequences in the robot's sensor data. When appropriate reflex motions are linked to the critical situations, the robot can prevent most falls or is at least able to execute a controlled falling motion. The proposed algorithms are verified by simulations and experiments with our bipedal robot BARt-UH.
---
paper_title: Automated evolutionary design, robustness, and adaptation of sidewinding locomotion of a simulated snake-like robot
paper_content:
Inspired by the efficient method of locomotion of the rattlesnake Crotalus cerastes, the objective of this work is the automatic design, through genetic programming (GP), of the fastest possible (sidewinding) locomotion of a simulated limbless, wheelless snake-like robot (Snakebot). The realism of the simulation is ensured by employing the Open Dynamics Engine (ODE), which facilitates implementation of all physical forces resulting from the actuators, joint constraints, friction, gravity, and collisions. Reduction of the search space of the GP is achieved by representing Snakebot as a system comprising identical morphological segments and by automatic definition of code fragments, shared among (and expressing the correlation between) the evolved dynamics of the vertical and horizontal turning angles of the actuators of Snakebot. Empirically obtained results demonstrate the emergence of sidewinding locomotion from relatively simple motion patterns of morphological segments. Robustness of the sidewinding Snakebot, which is considered to be the ability to retain its velocity when situated in an unanticipated environment, is illustrated by the ease with which Snakebot overcomes various types of obstacles such as a pile of, or burial under, boxes, rugged terrain, and small walls. The ability of Snakebot to adapt to partial damage by gradually improving its velocity characteristics is discussed. Discovering compensatory locomotion traits, Snakebot recovers completely from single damage and recovers a major extent of its original velocity when more significant damage is inflicted. Exploring the opportunity for automatic design and adaptation of a simulated artifact, this work could be considered as a step toward building real Snakebots, which are able to perform robustly in difficult environments.
---
paper_title: Autonomous evolution of dynamic gaits with two quadruped robots
paper_content:
A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First, we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
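A sketch of the on-board evolutionary loop described above: each genome encodes a parametrised gait (here just an amplitude, a frequency and four leg phase offsets), fitness would normally be measured with the robot's own sensors, and the population is improved by truncation selection plus Gaussian mutation. The fitness function below is a synthetic stand-in for a real on-robot speed measurement; all constants are assumptions.
```python
import numpy as np

rng = np.random.default_rng(2)
POP, GENS, GENES = 20, 30, 6        # genes: amplitude, frequency, 4 leg phases

def measure_fitness(genome):
    # Placeholder for "walk with these parameters and read the distance travelled";
    # a smooth synthetic function with a known optimum is used instead.
    target = np.array([0.6, 1.2, 0.0, 0.5, 0.25, 0.75])
    return -np.sum((genome - target) ** 2)

pop = rng.uniform(0.0, 1.5, size=(POP, GENES))
for gen in range(GENS):
    fitness = np.array([measure_fitness(g) for g in pop])
    elite = pop[np.argsort(fitness)[-POP // 4:]]                   # keep the best quarter
    children = elite[rng.integers(0, len(elite), POP - len(elite))]
    children = children + rng.normal(0.0, 0.05, children.shape)    # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([measure_fitness(g) for g in pop])]
print("best gait parameters:", np.round(best, 2))
```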
---
paper_title: A movement pattern generator model using artificial neural networks
paper_content:
The authors have developed a movement pattern generator, using an artificial neural network (ANN) for generating periodic movement trajectories. This model is based on the concept of 'central pattern generators'. Jordan's (1986) sequential network, which is capable of learning sequences of patterns, was modified and used to generate several bipedal trajectories (or gaits), coded in task space, at different frequencies. The network model successfully learned all of the trajectories presented to it. The model has many attractive properties, such as limit cycle behavior, generalization of trajectories and frequencies, phase maintenance, and fault tolerance. The movement pattern generator model is potentially applicable for improved understanding of animal locomotion and for use in legged robots and rehabilitation medicine.
---
paper_title: A universal stability criterion of the foot contact of legged robots - adios ZMP
paper_content:
This paper proposes a universal stability criterion of the foot contact of legged robots. The proposed method checks if the sum of the gravity and the inertia wrench applied to the COG of the robot, which is proposed to be the stability criterion, is inside the polyhedral convex cone of the contact wrench between the feet of a robot and its environment. The criterion can be used to determine the strong stability of the foot contact when a robot walks on arbitrary terrain and/or when the hands of the robot are in contact with it under the sufficient friction assumption. This determination is equivalent to checking whether the ZMP is inside the support polygon of the feet when the robot walks on a horizontal plane with sufficient friction. The criterion can also be used to determine if the foot contact is sufficiently weakly stable when the friction follows a physical law. Therefore, the proposed criterion can be used to judge everything the ZMP can, and it also applies in more general cases.
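A sketch of the flat-ground special case mentioned above: compute the ZMP from the COG state under the usual constant-height approximation and test whether it lies inside the convex support polygon of the feet. The dynamics model and foot geometry are simplified assumptions, not the paper's general contact-wrench-cone test.
```python
import numpy as np

G = 9.81

def zmp_flat_ground(cog_pos, cog_acc, cog_height):
    """ZMP (x, y) under the linear inverted-pendulum approximation."""
    x, y = cog_pos
    ax, ay = cog_acc
    return np.array([x - cog_height * ax / G, y - cog_height * ay / G])

def inside_convex_polygon(p, verts):
    """True if point p lies inside the counter-clockwise convex polygon verts."""
    n = len(verts)
    for i in range(n):
        ax_, ay_ = verts[i]
        bx_, by_ = verts[(i + 1) % n]
        cross = (bx_ - ax_) * (p[1] - ay_) - (by_ - ay_) * (p[0] - ax_)
        if cross < 0:                        # p is to the right of edge a->b
            return False
    return True

# Support polygon of a double-support stance (metres), listed counter-clockwise.
support = [(-0.10, -0.15), (0.15, -0.15), (0.15, 0.15), (-0.10, 0.15)]

zmp = zmp_flat_ground(cog_pos=(0.02, 0.0), cog_acc=(0.4, 0.1), cog_height=0.8)
print("ZMP:", zmp, "stable:", inside_convex_polygon(zmp, support))
```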
---
paper_title: A Learning Architecture Based on Reinforcement Learning for Adaptive Control of the Walking Machine LAURON
paper_content:
The learning of complex control behaviour for autonomous mobile robots is one of the current research topics. In this article an intelligent control architecture is presented which integrates learning methods and available domain knowledge. This control architecture is based on Reinforcement Learning and allows continuous input and output parameters, hierarchical learning, multiple goals, self-organized topology of the used networks and online learning. As a testbed, this architecture is applied to the six-legged walking machine LAURON to learn leg control and leg coordination.
---
paper_title: Modeling of a Neural Pattern Generator with Coupled nonlinear Oscillators
paper_content:
A set of van der Pol oscillators is arranged in a network in which each oscillator is coupled to each other oscillator. Through the selection of coupling coefficients, the network is made to appear as a ring and as a chain of coupled oscillators. Each oscillator is provided with amplitude, frequency, and offset parameters which have analytically indeterminable effects on the output waves. These systems are simulated on the digital computer in order to study the amplitude, frequency, offset, and phase relationships of the waves versus parameter changes. Based on the simulations, systems of coupled oscillators are configured so that they exhibit stable patterns of signals which can be used to model the central pattern generator (CPG) of living organisms. Using a simple biped as an example locomotory system, the CPG model generates control signals for simulated walking and jumping maneuvers. It is shown that with parameter adjustments, as guided by the simulations, the model can be made to generate kinematic trajectories which closely resemble those for the human walking gait. Furthermore, minor tuning of these parameters along with some algebraic sign changes of coupling coefficients can effect a transition in the trajectories to those of a two-legged hopping gait. The generalized CPG model is shown to be versatile enough that it can also generate various n-legged gaits and spinal undulatory motions, as in the swimming motions of a fish.
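A numerical sketch of the model class described above: two van der Pol oscillators coupled symmetrically and integrated with a simple Runge-Kutta step settle onto phase-locked limit cycles that could serve as joint trajectories. The parameter values and coupling scheme are illustrative, not those identified in the paper.
```python
import numpy as np

MU, OMEGA, K = 1.0, 2.0, 0.5          # damping, natural frequency, coupling gain

def deriv(state):
    x1, v1, x2, v2 = state
    a1 = MU * (1 - x1 ** 2) * v1 - OMEGA ** 2 * x1 + K * (x2 - x1)
    a2 = MU * (1 - x2 ** 2) * v2 - OMEGA ** 2 * x2 + K * (x1 - x2)
    return np.array([v1, a1, v2, a2])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.1, 0.0, -0.1, 0.0])   # slightly asymmetric initial condition
dt, steps = 0.01, 5000
trajectory = np.zeros((steps, 2))
for i in range(steps):
    state = rk4_step(state, dt)
    trajectory[i] = state[[0, 2]]          # oscillator outputs x1, x2

print("steady-state amplitude ~", np.round(np.ptp(trajectory[-1000:, 0]) / 2, 2))
```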
---
paper_title: Soccer playing humanoid robots: Processing architecture, gait generation and vision system
paper_content:
Research on humanoid robotics in the Mechatronics and Automation (MA) Laboratory, Electrical and Computer Engineering (ECE), National University of Singapore (NUS) was started at the beginning of this decade. Various research prototypes for humanoid robots have been designed and have evolved over the years. These humanoids have successfully participated in various robotic soccer competitions. In this paper, three major research and development aspects of the above humanoid research are discussed. The paper focuses on various practical and theoretical considerations involved in processing architecture, gait generation and vision systems.
---
paper_title: Design of a Central Pattern Generator Using Reservoir Computing for Learning Human Motion
paper_content:
To generate coordinated periodic movements, robot locomotion demands mechanisms which are able to learn and produce stable rhythmic motion in a controllable way. Because systems based on biological central pattern generators (CPGs) can cope with these demands, such systems are gaining popularity. In this work we introduce a novel methodology that uses the dynamics of a randomly connected recurrent neural network for the design of CPGs. When a randomly connected recurrent neural network is excited with one or more useful signals, an output can be trained by learning an instantaneous linear mapping of the neuron states. This technique is known as reservoir computing (RC). We will show that RC has the necessary capabilities to be fruitful in designing a CPG that is able to learn human motion, which is applicable to imitation learning in humanoid robots.
---
paper_title: Experiments in learning distributed control for a hexapod robot
paper_content:
This paper reports on experiments involving a hexapod robot. Motivated by neurobiological evidence that control in real hexapod insects is distributed leg-wise, we investigated two approaches to learning distributed controllers: genetic algorithms and reinforcement learning. In the case of reinforcement learning, a new learning algorithm was developed to encourage cooperation between legs. Results from both approaches are presented and compared.
---
paper_title: Evolving Dynamical Neural Networks for Adaptive Behavior
paper_content:
We would like the behavior of the artificial agents that we construct to be as well-adapted to their environments as natural animals are to theirs. Unfortunately, designing controllers with these properties is a very difficult task. In this article, we demonstrate that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers. A significant advantage of this approach is that one need specify only a measure of an agent's overall performance rather than the precise motor output trajectories by which it is achieved. By manipulating the performance evaluation, one can place selective pressure on the development of controllers with desired properties. Several novel controllers have been evolved, including a chemotaxis controller that switches between different strategies depending on environmental conditions, and a locomotion controller that takes advantage of sensory feedback if available but th...
---
paper_title: Optimal Path and Gait Generations Simultaneously of a Six-legged Robot Using a GA-Fuzzy Approach
paper_content:
This paper describes a new method for generating optimal path and gait simultaneously of a six-legged robot using a combined GA-fuzzy approach. The problem of combined path and gait generations involves three steps, namely determination of vehicle’s trajectory, foothold selection and design of a sequence of leg movements. It is a complicated task and no single traditional approach is found to be successful in handling this problem. Moreover, the traditional approaches do not consider optimization issues, yet they are computationally expensive. Thus, the generated path and gaits may not be optimal in any sense. To solve such problems optimally, there is still a need for the development of an efficient and computationally faster algorithm. In the proposed genetic-fuzzy approach, optimal path and gaits are generated by using fuzzy logic controllers (FLCs) and genetic algorithms (GAs) are used to find optimized FLCs. The optimization is done off-line on a number of training scenarios and optimal FLCs are found. The hexapod can then use these GA-tuned FLCs to navigate in test-case scenarios.
---
paper_title: On-line stable gait generation of a two-legged robot using a genetic–fuzzy system
paper_content:
Gait generation for legged vehicles has long been an area of keen interest to researchers. Soft computing is an emerging technique whose utility is most evident when problems are ill-defined, difficult to model, and exhibit large-scale solution spaces. Gait generation for legged vehicles is a complex task, and soft computing can therefore be applied to solve it. In this work, the gait generation problem of a two-legged robot is modeled using a fuzzy logic controller (FLC), whose rule base is optimized offline using a genetic algorithm (GA). Two different GA-based approaches (to improve the performance of the FLC) are developed and their performance is compared to that of a manually constructed FLC. Once optimized, the FLCs are able to generate a dynamically stable gait of the biped. As the CPU time of the algorithm is found to be only 0.002 s on a P-III PC, the algorithm is suitable for on-line (real-time) implementation.
---
| Title: Intelligent Approaches in Locomotion - A Review
Section 1: Introduction
Description 1: An overview of the paper and its objectives, highlighting the scope of the review and the method of categorization and comparison of techniques.
Section 2: Analytical approaches
Description 2: Discuss the oldest and frequently used method in locomotion control, including detailed explanation of biped and non-biped trajectories and optimization techniques.
Section 3: Central pattern generators and oscillators
Description 3: Explain the use of CPGs and oscillator models, their biological inspiration, and how they are implemented in robotic systems, including parameter selection and optimization.
Section 4: Neural networks
Description 4: Describe the use of conventional neural networks in locomotion, differentiating between feed-forward and recurrent neural networks, and their applications and training techniques.
Section 5: Hidden Markov Models
Description 5: Present the application of HMMs in robotic learning by imitation and their use in skill observation, classification, and imitation, including recognition and synthesis methods.
Section 6: Rule based systems
Description 6: Discuss systems that use transition tables and fuzzy logic for locomotion control, emphasizing their simplicity and ease of optimization, as well as advantages and disadvantages.
Section 7: Conclusion
Description 7: Summarize the comparison of various approaches, highlight the strengths and weaknesses of each method, and suggest criteria for selecting appropriate techniques based on application requirements. |
Interconnection Structures, Management and Routing Challenges in Cloud-Service Data Center Networks: A Survey | 6 | ---
paper_title: Provisioning high-availability datacenter networks for full bandwidth communication
paper_content:
One critical challenge in datacenter network design is full bandwidth communication. Recent advances have enabled this communication paradigm based on the notion of Valiant load balancing (VLB). In this paper, we target full bandwidth communication among all servers, for all valid traffic patterns, and under k arbitrary link failures. We focus on two typical datacenter topologies, VL2 and fat-tree, and propose a mechanism to perform VLB on fat-tree. We develop the minimum link capacity required on both topologies, where edge and core links are handled separately. These results can help datacenter providers to provision their networks with guaranteed availability. Based on the results, we evaluate the minimum total link capacity required on each topology and characterize the capacity increase trend with k and with the total number of supported servers. These studies are important for datacenter providers to project their capital expenditures on datacenter design, upgrade, and expansion. Next, we compare the total link capacity between the two topologies. We find that given the same server scale, fat-tree requires less total capacity than does VL2 for small k. For large k, there exists a turning point at which VL2 becomes more capacity-efficient.
---
paper_title: Scalable and cost-effective interconnection of data-center servers using dual server ports
paper_content:
The goal of data-center networking is to interconnect a large number of server machines with low equipment cost while providing high network capacity and high bisection width. It is well understood that the current practice where servers are connected by a tree hierarchy of network switches cannot meet these requirements. In this paper, we explore a new server-interconnection structure. We observe that the commodity server machines used in today's data centers usually come with two built-in Ethernet ports, one for network connection and the other left for backup purposes. We believe that if both ports are actively used in network connections, we can build a scalable, cost-effective interconnection structure without either the expensive higher-level large switches or any additional hardware on servers. We design such a networking structure called FiConn. Although the server node degree is only 2 in this structure, we have proven that FiConn is highly scalable to encompass hundreds of thousands of servers with low diameter and high bisection width. We have developed a low-overhead traffic-aware routing mechanism to improve effective link utilization based on dynamic traffic state. We have also proposed how to incrementally deploy FiConn.
---
paper_title: Survey on routing in data centers: insights and future directions
paper_content:
Recently, a series of data center network architectures have been proposed. The goal of these works is to interconnect a large number of servers with significant bandwidth requirements. Coupled with these new DCN structures, routing protocols play an important role in exploring the network capacities that can be potentially delivered by the topologies. This article conducts a survey on the current state of the art of DCN routing techniques. The article focuses on the insights behind these routing schemes and also points out the open research issues hoping to spark new interests and developments in this field.
---
paper_title: Server-storage virtualization: Integration and load balancing in data centers
paper_content:
We describe the design of an agile data center with integrated server and storage virtualization technologies. Such data centers form a key building block for new cloud computing architectures. We also show how to leverage this integrated agility for non-disruptive load balancing in data centers across multiple resource layers - servers, switches, and storage. We propose a novel load balancing algorithm called VectorDot for handling the hierarchical and multi-dimensional resource constraints in such systems. The algorithm, inspired by the successful Toyoda method for multi-dimensional knapsacks, is the first of its kind. We evaluate our system on a range of synthetic and real data center testbeds comprising VMware ESX servers, IBM SAN Volume Controller, Cisco and Brocade switches. Experiments under varied conditions demonstrate the end-to-end validity of our system and the ability of VectorDot to efficiently remove overloads on server, switch and storage nodes.
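A sketch of the dot-product heuristic that VectorDot is named after: each candidate node is scored by the dot product of the item's normalised resource-demand vector and the node's current fractional-utilisation vector, and the item is placed on the node with the lowest score, steering load away from already stressed dimensions. The resource dimensions and numbers are illustrative assumptions, not the paper's full algorithm.
```python
import numpy as np

# Fractional utilisation per node over (cpu, memory, network, storage-io).
node_load = {
    "host-A": np.array([0.80, 0.40, 0.30, 0.20]),
    "host-B": np.array([0.30, 0.35, 0.60, 0.25]),
    "host-C": np.array([0.45, 0.50, 0.20, 0.70]),
}

def place(item_demand):
    """Return the node with the smallest demand-load dot product."""
    demand = item_demand / item_demand.sum()          # normalise the demand vector
    scores = {n: float(demand @ load) for n, load in node_load.items()}
    return min(scores, key=scores.get), scores

vm_demand = np.array([0.30, 0.10, 0.05, 0.05])        # a CPU-heavy virtual machine
print(place(vm_demand))   # avoids host-A, whose CPU is already the hottest dimension
```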
---
paper_title: PortLand: a scalable fault-tolerant layer 2 data center network fabric
paper_content:
This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet/IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a "plug-and-play" large-scale data center network.
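A sketch of PortLand's hierarchical pseudo-MAC (PMAC) idea: a host's location is encoded as pod.position.port.vmid inside the 48-bit MAC, so forwarding can match on address prefixes. The 16/8/8/16-bit field split follows the description in the PortLand paper; the example values are arbitrary.
```python
def encode_pmac(pod, position, port, vmid):
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    octets = [(value >> (8 * i)) & 0xFF for i in reversed(range(6))]
    return ":".join(f"{o:02x}" for o in octets)

def decode_pmac(pmac):
    value = int(pmac.replace(":", ""), 16)
    return {
        "pod":      (value >> 32) & 0xFFFF,
        "position": (value >> 24) & 0xFF,
        "port":     (value >> 16) & 0xFF,
        "vmid":     value & 0xFFFF,
    }

pmac = encode_pmac(pod=5, position=2, port=1, vmid=37)
print(pmac)               # 00:05:02:01:00:25
print(decode_pmac(pmac))  # {'pod': 5, 'position': 2, 'port': 1, 'vmid': 37}
```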
---
paper_title: Server virtualization in autonomic management of heterogeneous workloads
paper_content:
Server virtualization opens up a range of new possibilities for autonomic datacenter management, through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines. This offers not only new and more flexible control to the operator using a management console, but also more powerful and flexible autonomic control, through management software that maintains the system in a desired state in the face of changing workload and demand. This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads. We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation. We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management.
---
paper_title: Autopilot: automatic data center management
paper_content:
Microsoft is rapidly increasing the number of large-scale web services that it operates. Services such as Windows Live Search and Windows Live Mail operate from data centers that contain tens or hundreds of thousands of computers, and it is essential that these data centers function reliably with minimal human intervention. This paper describes the first version of Autopilot, the automatic data center management infrastructure developed within Microsoft over the last few years. Autopilot is responsible for automating software provisioning and deployment; system monitoring; and carrying out repair actions to deal with faulty software and hardware. A key assumption underlying Autopilot is that the services built on it must be designed to be manageable. We also therefore outline the best practices adopted by applications that run on Autopilot.
---
paper_title: Application Performance Management in Virtualized Server Environments
paper_content:
As businesses have grown, so has the need to deploy I/T applications rapidly to support the expanding business processes. Often, this growth was achieved in an unplanned way: each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased. In many cases this has led to what is often referred to as "server sprawl", resulting in low server utilization and high system management costs. An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization. In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance. We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical servers. The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels.
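A generic sketch of the monitoring-and-migration idea outlined above: when a host's utilisation crosses a threshold, pick one of its VMs and move it to the least-loaded host that can absorb it. The threshold, metric and selection rule are assumptions for illustration, not the paper's algorithms.
```python
THRESHOLD = 0.85

hosts = {                       # host -> {vm: cpu share}
    "h1": {"vm1": 0.50, "vm2": 0.45},      # overloaded: 0.95 total
    "h2": {"vm3": 0.30},
    "h3": {"vm4": 0.20, "vm5": 0.25},
}

def utilisation(h):
    return sum(hosts[h].values())

def rebalance():
    migrations = []
    for h in list(hosts):
        while utilisation(h) > THRESHOLD:
            vm, share = min(hosts[h].items(), key=lambda kv: kv[1])  # cheapest VM to move
            target = min((t for t in hosts if t != h
                          and utilisation(t) + share <= THRESHOLD),
                         key=utilisation, default=None)
            if target is None:
                break                                  # nowhere to place it
            del hosts[h][vm]
            hosts[target][vm] = share
            migrations.append((vm, h, target))
    return migrations

print(rebalance())                 # e.g. [('vm2', 'h1', 'h2')]
print({h: round(utilisation(h), 2) for h in hosts})
```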
---
paper_title: Shares and utilities based power consolidation in virtualized server environments
paper_content:
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares based mechanism for the hypervisor to distribute spare resources among contending VMs. However much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or even higher. Motivated by these, we present a novel suite of techniques for placement and power consolidation of VMs in data centers taking advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance tradeoffs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising of VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs - varying VM sizes and utilities, varying server capacities and varying power costs - thus providing a practical solution for administrators.
---
paper_title: Reliability and Survivability Analysis of Data Center Network Topologies
paper_content:
The architectures of several data centers have been proposed as alternatives to the conventional three-layer one. Most of them employ commodity equipment for cost reduction. Thus, robustness to failures becomes even more important, because commodity equipment is more failure-prone. Each architecture has a different network topology design with a specific level of redundancy. In this work, we aim at analyzing the benefits of different data center topologies taking the reliability and survivability requirements into account. We consider the topologies of three alternative data center architectures: Fat-tree, BCube, and DCell. Also, we compare these topologies with a conventional three-layer data center topology. Our analysis is independent of specific equipment, traffic patterns, or network protocols, for the sake of generality. We derive closed-form formulas for the Mean Time To Failure of each topology. The results allow us to indicate the best topology for each failure scenario. In particular, we conclude that BCube is more robust to link failures than the other topologies, whereas DCell has the most robust topology when considering switch failures. Additionally, we show that all considered alternative topologies outperform a three-layer topology for both types of failures. We also determine to what extent the robustness of BCube and DCell is influenced by the number of network interfaces per server.
---
paper_title: Configuration management at massive scale: system design and experience
paper_content:
The development and maintenance of network device configurations is one of the central challenges faced by large network providers. Current network management systems fail to meet this challenge primarily because of their inability to adapt to rapidly evolving customer and provider-network needs, and because of mismatches between the conceptual models of the tools and the services they must support. In this paper, we present the PRESTO configuration management system that attempts to address these failings in a comprehensive and flexible way. Developed for and deployed over the last 4 years within a large ISP network, PRESTO constructs device-native configurations based on the composition of configlets representing different services or service options. Configlets are compiled by extracting and manipulating data from external systems as directed by the PRESTO configuration scripting and template language. We outline the configuration management needs of large-scale network providers, introduce the PRESTO system and configuration language, and demonstrate the use, workflows, and ultimately the platform's flexibility via an example of VPN service. We conclude by considering future work and reflect on the operators' experiences with PRESTO.
---
paper_title: Virtual network based autonomic network resource control and management system
paper_content:
Traditional telecommunications service providers are undergoing a transition to a shared infrastructure in which multiple services will be delivered by peer and server computers interconnected by IP networks. IP transport networks that can transfer packets according to differentiated levels of QoS, availability and price are a key element to generating revenue through a rich offering of services. Automated service and network management are essential to creating and maintaining a flexible and agile service delivery infrastructure that also has much lower operations expense than existing systems. In this paper we focus on the SLA-based IP packet transport service on a core network infrastructure and we argue that the above requirements can be met by a self-management system based on autonomic computing and virtual network concepts. We present a control and management system based on this approach.
---
paper_title: The cost of a cloud: research problems in data center networks
paper_content:
The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.
---
paper_title: Resource provisioning for cloud PON AWGR-based data center architecture
paper_content:
Recent years have witnessed ever-increasing growth in cloud computing services and applications housed by data centers. PON-based optical interconnects for data center networks are a promising technology offering high bandwidth, efficient utilization of resources, reduced latency and reduced energy consumption compared to current data center networks based on electronic switches. This paper presents our proposed scheme for data center interconnection to manage intra/inter communication traffic based on readily available, low-cost and low-power PON components. In this work, we tackle the problem of resource provisioning optimization for cloud applications in our proposed PON data center architecture. We use Mixed Integer Linear Programming (MILP) to optimize the power consumption and delay for different cloud applications. The results show that delay can be decreased by 62% for delay-sensitive applications and power consumption can be decreased by 22% for non-delay-sensitive applications.
---
paper_title: REWIRE: An optimization-based framework for unstructured data center network design
paper_content:
Despite the many proposals for data center network (DCN) architectures, designing a DCN remains challenging. DCN design is especially difficult when expanding an existing network, because traditional DCN design places strict constraints on the topology (e.g., a fat-tree). Recent advances in routing protocols allow data center servers to fully utilize arbitrary networks, so there is no need to require restricted, regular topologies in the data center. Therefore, we propose a data center network design framework, that we call REWIRE, to design networks using an optimization algorithm. Our algorithm finds a network with maximal bisection bandwidth and minimal end-to-end latency while meeting user-defined constraints and accurately modeling the predicted cost of the network. We evaluate REWIRE on a wide range of inputs and find that it significantly outperforms previous solutions—its network designs have up to 100–500% more bisection bandwidth and less end-to-end network latency than equivalent-cost DCNs built with best practices.
---
paper_title: A scalable, commodity data center network architecture
paper_content:
Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
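A quick sketch of the scale properties of a k-ary fat-tree built from identical k-port commodity switches, following the standard fat-tree construction this architecture uses: k pods with k/2 edge and k/2 aggregation switches each, (k/2)^2 core switches, and k^3/4 supported hosts at full bisection bandwidth.
```python
def fat_tree(k):
    assert k % 2 == 0, "k must be even"
    edge_per_pod = agg_per_pod = k // 2
    return {
        "pods": k,
        "edge switches": k * edge_per_pod,
        "aggregation switches": k * agg_per_pod,
        "core switches": (k // 2) ** 2,
        "total switches": k * (edge_per_pod + agg_per_pod) + (k // 2) ** 2,
        "hosts": (k ** 3) // 4,
    }

for k in (4, 24, 48):
    print(k, fat_tree(k))
# With k = 48 (48-port commodity switches), the fabric already reaches 27,648 hosts.
```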
---
paper_title: Myrinet: A Gigabit-per-Second Local Area Network
paper_content:
The Myrinet local area network employs the same technology used for packet communication and switching within massively parallel processors. In realizing this distributed MPP network, we developed specialized communication channels, cut-through switches, host interfaces, and software. To our knowledge, Myrinet demonstrates the highest performance per unit cost of any current LAN.
---
paper_title: Towards a next generation data center architecture: scalability and commoditization
paper_content:
Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. Bisection bandwidth is low, severely constraining distributed computation. Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components). Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking. Monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more, monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center, leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.
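A sketch of the Valiant load balancing idea referenced above: each flow is first sent to a uniformly random intermediate switch and only then to its destination, which spreads any valid traffic matrix evenly over the mesh. The topology and flow list are toy assumptions.
```python
import random
from collections import Counter

random.seed(0)
intermediates = [f"core-{i}" for i in range(8)]
flows = [("src-%d" % (i % 16), "dst-%d" % ((i * 7) % 16)) for i in range(10_000)]

load = Counter()
for src, dst in flows:
    via = random.choice(intermediates)      # phase 1: bounce off a random intermediate
    load[via] += 1                          # phase 2 (via -> dst) not tallied here
print({k: load[k] for k in sorted(load)})   # roughly 1250 flows per intermediate
```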
---
paper_title: Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud
paper_content:
Data centers today consume a tremendous amount of energy in terms of power distribution and cooling. Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands. However, despite extensive studies of the problem, existing solutions for dynamic capacity provisioning have not fully considered the heterogeneity of both workload and machine hardware found in production environments. In particular, production data centers often comprise several generations of machines with different capacities, capabilities and energy consumption characteristics. Meanwhile, the workloads running in these data centers typically consist of a wide variety of applications with different priorities, performance objectives and resource requirements. Failure to consider heterogeneous characteristics will lead to both sub-optimal energy-savings and long scheduling delays, due to incompatibility between workload requirements and the resources offered by the provisioned machines. To address this limitation, in this paper we present HARMONY, a Heterogeneity-Aware Resource Management System for dynamic capacity provisioning in cloud computing environments. Specifically, we first use the K-means clustering algorithm to divide the workload into distinct task classes with similar characteristics in terms of resource and performance requirements. Then we present a novel technique for dynamically adjusting the number of machines of each type to minimize total energy consumption and performance penalty in terms of scheduling delay. Through simulations using real traces from Google's compute clusters, we found that our approach can improve data center energy efficiency by up to 28% compared to heterogeneity-oblivious solutions.
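A sketch of the first step described above: cluster tasks into classes with similar resource profiles using k-means. The task vectors (cpu, memory) and the number of classes are synthetic assumptions, not the Google-trace values used in the paper.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
tasks = np.vstack([
    rng.normal([0.1, 0.1], 0.03, size=(300, 2)),   # small tasks
    rng.normal([0.5, 0.2], 0.05, size=(200, 2)),   # cpu-heavy tasks
    rng.normal([0.2, 0.7], 0.05, size=(100, 2)),   # memory-heavy tasks
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tasks)
for c, centre in enumerate(km.cluster_centers_):
    size = int(np.sum(km.labels_ == c))
    print(f"class {c}: {size} tasks, mean (cpu, mem) = {np.round(centre, 2)}")
```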
---
paper_title: A scalable, commodity data center network architecture
paper_content:
Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
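For a k-ary fat-tree built from identical k-port switches, the topology described above uses k pods of k/2 edge and k/2 aggregation switches plus (k/2)^2 core switches, and supports k^3/4 hosts at full bisection bandwidth. The helper below just computes these counts for sanity-checking a design; it is a sketch, not part of the paper's implementation.

```python
def fat_tree_size(k):
    """Element counts for a k-ary fat-tree of k-port switches (k must be even)."""
    assert k % 2 == 0
    edge = aggregation = k * (k // 2)      # k pods, k/2 switches of each role per pod
    core = (k // 2) ** 2
    hosts = k ** 3 // 4                    # k/2 hosts per edge switch
    return {"pods": k, "edge": edge, "aggregation": aggregation,
            "core": core, "switches": edge + aggregation + core, "hosts": hosts}

print(fat_tree_size(48))   # the 48-port commodity-switch case supports 27,648 hosts
```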
---
paper_title: FiConn: Using Backup Port for Server Interconnection in Data Centers
paper_content:
The goal of data center networking is to interconnect a large number of server machines with low equipment cost, high and balanced network capacity, and robustness to link/server faults. It is well understood that, the current practice where servers are connected by a tree hierarchy of network switches cannot meet these requirements (8), (9). In this paper, we explore a new server-interconnection structure. We observe that the commodity server machines used in today's data centers usually come with two built-in Ethernet ports, one for network connection and the other left for backup purpose. We believe that, if both ports are actively used in network connections, we can build a low-cost interconnection structure without the expensive higher-level large switches. Our new network design, called FiConn, utilizes both ports and only the low-end commodity switches to form a scalable and highly effective structure. Although the server node degree is only two in this structure, we have proven that FiConn is highly scalable to encompass hundreds of thousands of servers with low diameter and high bisection width. The routing mechanism in FiConn balances different levels of links. We have further developed a low-overhead traffic-aware routing mechanism to improve effective link utilization based on dynamic traffic state. Simulation results have demonstrated that the routing mechanisms indeed achieve high networking throughput.
---
paper_title: A study of non-blocking switching networks
paper_content:
This paper describes a method of designing arrays of crosspoints for use in telephone switching systems in which it will always be possible to establish a connection from an idle inlet to an idle outlet regardless of the number of calls served by the system.
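Clos's classic result from this paper is that a symmetric three-stage network with n inputs per ingress switch and m middle-stage switches is strictly non-blocking when m >= 2n - 1. Below is a one-line check plus the crosspoint count; the variable names are mine, introduced only for illustration.

```python
def strictly_nonblocking(n, m):
    """Clos (1953): a symmetric 3-stage network is strictly non-blocking iff m >= 2n - 1."""
    return m >= 2 * n - 1

def crosspoints(n, r, m):
    """Total crosspoints of a 3-stage Clos network with r ingress/egress switches."""
    return 2 * r * n * m + m * r * r   # ingress (n x m), middle (r x r), egress (m x n)

print(strictly_nonblocking(n=8, m=15), crosspoints(n=8, r=8, m=15))
```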
---
paper_title: Data center TCP (DCTCP)
paper_content:
Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today's state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal impairments that lead to high application latencies, rooted in TCP's demands on the limited buffer space available in data center switches. For example, bandwidth hungry "background" flows build up queues at the switches, and thus impact the performance of latency sensitive "foreground" traffic. To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10Gbps speeds using commodity, shallow buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we found DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.
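DCTCP's congestion-control core is small: the sender keeps a running estimate alpha of the fraction of ECN-marked packets and, once per window, shrinks cwnd in proportion to alpha rather than halving it. The sketch below shows only that update rule (g = 1/16 is the gain suggested in the paper); packet I/O, timers and slow start are omitted, so treat it as an illustration rather than a complete implementation.

```python
class DctcpSender:
    """Window adjustment of DCTCP: cwnd scales with the *fraction* of marked packets."""

    def __init__(self, cwnd=10.0, g=1 / 16):
        self.cwnd = cwnd
        self.alpha = 0.0   # running estimate of the marked fraction
        self.g = g         # estimation gain from the paper

    def on_window_acked(self, acked, marked):
        frac = marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if marked:                                  # congestion seen this window
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))
        else:                                       # additive increase otherwise
            self.cwnd += 1.0

s = DctcpSender()
for m in [0, 0, 4, 8, 0]:
    s.on_window_acked(acked=10, marked=m)
print(round(s.cwnd, 2), round(s.alpha, 3))
```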
---
paper_title: Dcell: a scalable and fault-tolerant network structure for data centers
paper_content:
A fundamental challenge in data center networking is how to efficiently interconnect an exponentially increasing number of servers. This paper presents DCell, a novel network structure that has many desirable features for data center networking. DCell is a recursively defined structure, in which a high-level DCell is constructed from many low-level DCells and DCells at the same level are fully connected with one another. DCell scales doubly exponentially as the node degree increases. DCell is fault tolerant since it does not have single point of failure and its distributed fault-tolerant routing protocol performs near shortest-path routing even in the presence of severe link or node failures. DCell also provides higher network capacity than the traditional tree-based structure for various types of services. Furthermore, DCell can be incrementally expanded and a partial DCell provides the same appealing features. Results from theoretical analysis, simulations, and experiments show that DCell is a viable interconnection structure for data centers.
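DCell's doubly-exponential growth follows directly from its recursive definition: a DCell_k is built from t_{k-1} + 1 fully connected copies of DCell_{k-1}, so the server count obeys t_k = t_{k-1} * (t_{k-1} + 1) with t_0 = n servers per mini-switch. The snippet below only evaluates that recurrence, as an illustration of the scaling claim.

```python
def dcell_servers(n, k):
    """Number of servers in a DCell_k built from n-port mini-switches.

    t_0 = n and t_k = t_{k-1} * (t_{k-1} + 1): a DCell_k uses t_{k-1} + 1
    fully-connected copies of DCell_{k-1}.
    """
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

# With 6-port mini-switches, a level-3 DCell already exceeds 3 million servers.
print([dcell_servers(6, k) for k in range(4)])   # [6, 42, 1806, 3263442]
```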
---
paper_title: Server-centric PON data center architecture
paper_content:
Over the last decade, the evolution of data center architecture designs has been mainly driven by the ever increasing bandwidth demands, high power consumption and cost. With all these in mind, a significant potential to improve bandwidth capacity and reduce power consumption and cost can be achieved by introducing PONs in the design of the networking fabric infrastructure in data centers. This work presents a novel server-centric PON design for future cloud data center architecture. We avoided the use of power hungry devices such as switches and tuneable lasers and encouraged the use of low power passive optical backplanes and PONs to facilitate intra and inter rack communication. We also tackle the problem of resource provisioning optimization and present our MILP model results for energy efficient routing and resource provisioning within the PON cell. We optimized the selection of hosting servers, routing paths and relay servers to achieve efficient resource utilization reaching 95% and optimum saving in energy consumption reaching 59%.
---
paper_title: BCube: a high performance, server-centric network architecture for modular data centers
paper_content:
This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act as not only end hosts, but also relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding-up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic. BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components. Our implementation experiences show that BCube can be seamlessly integrated with the TCP/IP protocol stack and BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant and load balancing and it significantly accelerates representative bandwidth-intensive applications.
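In a BCube_k built from n-port switches, every server carries a (k+1)-digit base-n address, and two servers are one switch hop apart exactly when their addresses differ in a single digit, so routing reduces to correcting one digit per hop. The sketch below builds such a digit-correcting server path; the fixed digit order is a simplification, whereas BCube's source routing can permute it to choose among parallel paths.

```python
def bcube_path(src, dst):
    """Digit-correcting route between two BCube servers.

    src, dst: equal-length tuples of base-n digits (level k ... level 0).
    Returns the sequence of server addresses visited, one per corrected digit.
    """
    assert len(src) == len(dst)
    path, cur = [tuple(src)], list(src)
    for i in range(len(src)):          # correct one address digit per hop
        if cur[i] != dst[i]:
            cur[i] = dst[i]
            path.append(tuple(cur))
    return path

# BCube_1 with 4-port switches: servers carry 2-digit addresses.
print(bcube_path((0, 1), (3, 2)))   # [(0, 1), (3, 1), (3, 2)]
```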
---
paper_title: VL2: a scalable and flexible data center network
paper_content:
To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds - sustaining a rate that is 94% of the maximum possible.
---
paper_title: PortLand: a scalable fault-tolerant layer 2 data center network fabric
paper_content:
This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet/IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a ``plug-and-play" large-scale, data center network.
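PortLand's key mechanism is the hierarchical Pseudo MAC (PMAC) address, which encodes a host's location as pod.position.port.vmid inside a 48-bit MAC so that forwarding can use simple prefix matching while the host's real MAC stays hidden behind the fabric. The encoder/decoder below follows the 16/8/8/16-bit split described in the paper; it is an illustrative sketch, not PortLand's code.

```python
def encode_pmac(pod, position, port, vmid):
    """Pack pod(16) : position(8) : port(8) : vmid(16) into a 48-bit PMAC string."""
    assert pod < 2**16 and position < 2**8 and port < 2**8 and vmid < 2**16
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(40, -1, -8))

def decode_pmac(pmac):
    value = int(pmac.replace(":", ""), 16)
    return {"pod": value >> 32, "position": (value >> 24) & 0xFF,
            "port": (value >> 16) & 0xFF, "vmid": value & 0xFFFF}

p = encode_pmac(pod=5, position=2, port=1, vmid=7)
print(p, decode_pmac(p))
```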
---
paper_title: Survey on routing in data centers: insights and future directions
paper_content:
Recently, a series of data center network architectures have been proposed. The goal of these works is to interconnect a large number of servers with significant bandwidth requirements. Coupled with these new DCN structures, routing protocols play an important role in exploring the network capacities that can be potentially delivered by the topologies. This article conducts a survey on the current state of the art of DCN routing techniques. The article focuses on the insights behind these routing schemes and also points out the open research issues hoping to spark new interests and developments in this field.
---
paper_title: Helios: a hybrid electrical/optical switch architecture for modular data centers
paper_content:
The basic building block of ever larger data centers has shifted from a rack to a modular container with hundreds or even thousands of servers. Delivering scalable bandwidth among such containers is a challenge. A number of recent efforts promise full bisection bandwidth between all servers, though with significant cost, complexity, and power consumption. We present Helios, a hybrid electrical/optical switch architecture that can deliver significant reductions in the number of switching elements, cabling, cost, and power consumption relative to recently proposed data center network architectures. We explore architectural trade offs and challenges associated with realizing these benefits through the evaluation of a fully functional Helios prototype.
---
paper_title: Uniform price auction for allocation of dynamic cloud bandwidth
paper_content:
With the ubiquitous adoption of Cloud services by both companies and consumers alike, lack of an efficient system to explicitly price and allocate limited bandwidth has severely impacted the performance of Cloud user-applications. In this context, we consider a two-tier pricing model - consisting of Reservation Phase and Dynamic Phase - that caters to the needs of different kinds of applications. While the Reservation Phase can be used by Cloud users to obtain guarantees on minimum bandwidth well ahead in time, Dynamic Phase can be used to demand and obtain (possibly) additional bandwidth dynamically. Bandwidth being a limited resource, we develop a unique multi-stage uniform price auction with supply uncertainty to dynamically allocate bandwidth to users in the Dynamic Phase. We study the proposed model using a game theoretical approach. Our results prove that proposed auction mechanism is a promising approach for bandwidth allocation. We show that the model promotes the dual advantage of market efficiency and maximum revenue for the Cloud provider. We also demonstrate the price stability using numerical simulations. We argue that for rational, payoff-maximizing tenants of Cloud, the price is stable over the long run which makes the mechanism suitable for practical use.
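At its core, a uniform price auction sells the available bandwidth units to the highest bidders and charges every winner the same market-clearing price. The single-round sketch below takes the first (partly) rejected bid as that price; the paper's mechanism wraps this idea in multiple stages with uncertain supply, which is not modelled here, so this is only an illustration of the pricing rule.

```python
def uniform_price_auction(bids, capacity):
    """bids: list of (bidder, units_wanted, price_per_unit); capacity: units for sale.

    Allocate to the highest per-unit bids first; all winners pay the same clearing
    price, taken here as the first (partly) rejected bid (0 if demand fits supply).
    """
    order = sorted(bids, key=lambda b: b[2], reverse=True)
    allocation, left, clearing = {}, capacity, 0.0
    for bidder, units, price in order:
        granted = min(units, left)
        if granted:
            allocation[bidder] = granted
            left -= granted
        if granted < units:          # first unmet bid sets the clearing price
            clearing = price
            break
    return allocation, clearing

print(uniform_price_auction([("a", 4, 9.0), ("b", 3, 7.0), ("c", 5, 5.0)], capacity=6))
```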
---
paper_title: A Shapley-Value Mechanism for Bandwidth On Demand between Datacenters
paper_content:
Recent studies in cloud resource allocation and pricing have focused on computing and storage resources but not network bandwidth. Cloud users nowadays customarily deploy services across multiple geo-distributed datacenters, with significant inter-datacenter traffic generated, paid by cloud providers to ISPs. An effective bandwidth allocation and charging mechanism is needed between the cloud provider and the cloud users. Existing volume based static charging schemes lack market efficiency. This work presents the first dynamic pricing mechanism for inter-datacenter on-demand bandwidth, via a Shapley value based auction. Our auction is expressive enough to accept bids as a flat bandwidth rate plus a time duration, or a data volume with a transfer deadline. We start with an offline auction, design an optimal end-to-end traffic scheduling approach, and exploit the Shapley value in computing payments. Our auction is truthful, individual rational, budget balanced and approximately efficient in social welfare. An online version of the auction follows, where decisions are made instantly upon the arrival of each user's realtime transmission demand. We propose an efficient online traffic scheduling algorithm, and approximate the offline Shapley value based payments on the fly. We validate our mechanism design with solid theoretical analysis, as well as trace-driven simulation studies.
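The payment rule above rests on the Shapley value: each participant is charged according to its average marginal contribution over all orderings of the players. For a handful of players it can be computed exactly by enumerating permutations, as in this toy sketch; the characteristic function used here is made up, whereas the paper derives it from the optimal traffic-scheduling welfare.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution over all player orderings.

    value: function mapping a frozenset of players to the coalition's worth.
    """
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            totals[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: t / len(perms) for p, t in totals.items()}

# Toy 3-player welfare function (purely illustrative numbers).
worth = {frozenset(): 0, frozenset("a"): 2, frozenset("b"): 3, frozenset("c"): 4,
         frozenset("ab"): 7, frozenset("ac"): 8, frozenset("bc"): 9, frozenset("abc"): 12}
print(shapley_values("abc", lambda s: worth[frozenset(s)]))
```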
---
paper_title: A cooperative game based allocation for sharing data center networks
paper_content:
In current IaaS datacenters, tenants are suffering unfairness since the network bandwidth is shared in a best-effort manner. To achieve predictable network performance for rented virtual machines (VMs), cloud providers should guarantee minimum bandwidth for VMs or allocate the network bandwidth in a fairness fashion at VM-level. At the same time, the network should be efficiently utilized in order to maximize cloud providers' revenue. In this paper, we model the bandwidth sharing problem as a Nash bargaining game, and propose the allocation principles by defining a tunable base bandwidth for each VM. Specifically, we guarantee bandwidth for those VMs with lower network rates than their base bandwidth, while maintaining fairness among other VMs with higher network rates than their base bandwidth. Based on rigorous cooperative game-theoretic approaches, we design a distributed algorithm to achieve efficient and fair bandwidth allocation corresponding to the Nash bargaining solution (NBS). With simulations under typical scenarios, we show that our strategy can meet the two desirable requirements towards predictable performance for tenants as well as high utilization for providers. And by tuning the base bandwidth, our solution can enable cloud providers to flexibly balance the tradeoff between minimum guarantees and fair sharing of datacenter networks.
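For intuition only: when every VM i has a guaranteed base bandwidth b_i and the link capacity C exceeds the sum of the bases, the symmetric Nash bargaining solution of maximizing prod_i (x_i - b_i) subject to sum_i x_i <= C hands each VM its base plus an equal share of the surplus. The sketch below computes that special case; the paper's full model distinguishes VMs above and below their base rates and is solved by a distributed algorithm instead.

```python
def nbs_allocation(bases, capacity):
    """Symmetric NBS with disagreement points `bases`: base + equal split of the surplus.

    Maximizes prod_i (x_i - b_i) s.t. sum_i x_i <= capacity, assuming
    capacity >= sum(bases); otherwise no feasible bargaining point exists here.
    """
    surplus = capacity - sum(bases)
    assert surplus >= 0, "capacity must cover the guaranteed bases in this sketch"
    share = surplus / len(bases)
    return [b + share for b in bases]

print(nbs_allocation(bases=[100, 200, 300], capacity=900))   # [200.0, 300.0, 400.0]
```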
---
paper_title: SOAR: Strategy-proof auction mechanisms for distributed cloud bandwidth reservation
paper_content:
Bandwidth reservation is envisioned to be a value-added feature for the cloud provider in the following years. We consider the bandwidth reservation trading between the cloud provider and tenants as an open market, and design practical mechanisms under an auction-based market model. To the best of our knowledge, we propose the first family of Strategy-proof Auction mechanisms for cloud bandwidth Reservation (SOAR). First, we present SOAR-VCG that achieves both optimal social welfare and strategy-proofness when the tenants accept partially filled demands. Then, we propose SOAR-GDY that guarantees strategy-proofness and achieves good social welfare when the tenants are not satisfied with partial bandwidth reservations. We do not only theoretically prove the properties of the SOAR family of auction mechanisms, but also extensively show that they achieve good performance in terms of social welfare, bandwidth satisfaction ratio, and bandwidth utilization in the simulation.
---
paper_title: Uncertain Shapley value of coalitional game with application to supply chain alliance
paper_content:
Uncertain coalitional game deals with situations in which the transferable payoffs are uncertain variables. The uncertain core has been proposed as the solution of uncertain coalitional game. This paper goes further by presenting two definitions of uncertain Shapley value: expected Shapley value and α-optimistic Shapley value. Meanwhile, some characterizations of the uncertain Shapley value are investigated. Finally, as an application, uncertain Shapley value is used to solve a profit allocation problem of supply chain alliance.
---
paper_title: Modeling and pricing cloud service elasticity for geographically distributed applications
paper_content:
Cloud service providers (CSP) strive to effectively provision their cloud resources to ensure that their hosted distributed applications meet their performance guarantees. However, accurately provisioning the inter-data centers network resources remains a challenging problem due to the cloud hosted applications' workload fluctuation. In this paper, we propose a novel approach that enables a CSP to offer Elasticity-as-a-Service (EaaS) for inter-data centers communication in order to guarantee the performance of distributed cloud applications. The contributions of the proposed work are twofold; first, we develop an efficient approach that enables the CSP to estimate and reserve the pool of network resources needed to fulfill the demands imposed by the network workload fluctuations of applications subscribing to this service. The approach allows the CSP to offer communication EaaS at differentiated levels based on the degree of bandwidth-sensitivity of the distributed cloud applications. In order to capture the inter-data centers network activity of hosted applications, we model their workloads using Markovian modeling. The second contribution is a novel dynamic pricing mechanism for network EaaS offerings that can be employed by the CSP to maximize the expected long-term revenue, and to regulate network elastic demands. Performance evaluation results demonstrate the efficiency of our proposed approach, the higher accuracy of our prediction method, and the increase in the CSP's net profit.
---
| Title: Interconnection Structures, Management and Routing Challenges in Cloud-Service Data Center Networks: A Survey
Section 1: Introduction
Description 1: This section introduces the growing importance of Data Center Networks (DCNs), outlines their challenges and objectives, and discusses the role of virtualization in improving performance and reliability.
Section 2: Cloud Service Data Center Networks
Description 2: This section defines cloud computing, compares cloud and traditional computing architectures, discusses the benefits, and highlights the risks associated with adopting cloud DCNs.
Section 3: Interconnection Structures for Data Center Networks
Description 3: This section surveys the state of the art interconnection structures in DCNs, focusing on the architecture, cost-efficiency, scalability, and reliability of structures such as Fat-Tree, DCell, BCube, and FiConn.
Section 4: Routing and Traffic Engineering in Data Center Networks
Description 4: This section reviews routing protocols, data forwarding techniques, and Traffic Engineering (TE) principles and practices that cater to the unique requirements of DCNs.
Section 5: Directions for Open Issues and Future Research
Description 5: This section identifies and discusses open issues and future research directions, focusing on efficient resource allocation and the challenges of migrating Virtual Data Center Networks (VDCNs).
Section 6: Conclusions
Description 6: This section provides the concluding remarks about the continuous evolution towards cloud DCNs, emphasizing the importance of virtualization, agility, reliability, and cost-efficiency. |
Ontology evolution: a process-centric survey | 8 | ---
paper_title: Ontology change: classification and survey
paper_content:
Ontologies play a key role in the advent of the Semantic Web. An important problem when dealing with ontologies is the modification of an existing ontology in response to a certain need for change. This problem is a complex and multifaceted one, because it can take several different forms and includes several related subproblems, like heterogeneity resolution or keeping track of ontology versions. As a result, it is being addressed by several different, but closely related and often overlapping research disciplines. Unfortunately, the boundaries of each such discipline are not clear, as the same term is often used with different meanings in the relevant literature, creating a certain amount of confusion. The purpose of this paper is to identify the exact relationships between these research areas and to determine the boundaries of each field, by performing a broad review of the relevant literature.
---
paper_title: Knowledge Engineering: Principles and Methods
paper_content:
This paper gives an overview of the development of the field of Knowledge Engineering over the last 15 years. We discuss the paradigm shift from a transfer view to a modeling view and describe two approaches which considerably shaped research in Knowledge Engineering: Role-limiting Methods and Generic Tasks. To illustrate various concepts and methods which evolved in recent years we describe three modeling frameworks: CommonKADS, MIKE and PROTEGE-II. This description is supplemented by discussing some important methodological developments in more detail: specification languages for knowledge-based systems, problem-solving methods and ontologies. We conclude by outlining the relationship of Knowledge Engineering to Software Engineering, Information Integration and Knowledge Management.
---
paper_title: A Semantic Web Primer
paper_content:
The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its uses. A Semantic Web Primer provides an introduction and guide to this continuously evolving field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for independent study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials.The third edition of this widely used text has been thoroughly updated, with significant new material that reflects a rapidly developing field. Treatment of the different languages (OWL2, rules) expands the coverage of RDF and OWL, defining the data model independently of XML and including coverage of N3/Turtle and RDFa. A chapter is devoted to OWL2, the new W3C standard. This edition also features additional coverage of the query language SPARQL, the rule language RIF and the possibility of interaction between rules and ontology languages and applications. The chapter on Semantic Web applications reflects the rapid developments of the past few years. A new chapter offers ideas for term projects. Additional material, including updates on the technological trends and research directions, can be found at http://www.semanticwebprimer.org.
---
paper_title: Towards a framework for ontology evolution
paper_content:
This paper presents design decisions involved in developing an ontology management framework that supports the user in creating and managing the ontology construction process. This framework supports the functions of an object-oriented representation, generic versioning control, and security of ontologies. Details on design considerations for each component within the framework are presented
---
paper_title: A model theoretic semantics for ontology versioning
paper_content:
We show that the Semantic Web needs a formal semantics for the various kinds of links between ontologies and other documents. We provide a model theoretic semantics that takes into account ontology extension and ontology versioning. Since the Web is the product of a diverse community, as opposed to a single agent, this semantics accommodates different viewpoints by having different entailment relations for different ontology perspectives. We discuss how this theory can be practically applied to RDF and OWL and provide a theorem that shows how to compute perspective-based entailment using existing logical reasoners. We illustrate these concepts using examples and conclude with a discussion of future work.
---
paper_title: Consistent Evolution of OWL Ontologies
paper_content:
Support for ontology evolution is extremely important in ontology engineering and application of ontologies in dynamic environments. A core aspect in the evolution process is to guarantee the consistency of the ontology when changes occur. In this paper we discuss the consistent evolution of OWL ontologies. We present a model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency. We introduce resolution strategies to ensure that consistency is maintained as the ontology evolves.
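Of the three consistency notions mentioned above, structural consistency is the one that can be approximated without a description-logic reasoner: after a change, every entity the ontology refers to must still be defined and the subclass hierarchy must remain acyclic. The toy checker below tests exactly those two conditions on a minimal dictionary representation; the paper's logical and user-defined consistency require an OWL reasoner and custom conditions, which are out of scope here.

```python
def structural_issues(ontology):
    """ontology: dict mapping class name -> list of superclass names.

    Returns human-readable structural problems: dangling references and
    subclass cycles, both of which careless delete/rename changes can introduce.
    """
    issues = []
    for cls, supers in ontology.items():
        for sup in supers:
            if sup not in ontology:
                issues.append(f"{cls}: undefined superclass '{sup}'")

    def reaches(start, target, seen=()):
        return any(s == target or (s not in seen and reaches(s, target, seen + (s,)))
                   for s in ontology.get(start, []))

    issues += [f"cycle through '{c}'" for c in ontology if reaches(c, c)]
    return issues

onto = {"Animal": [], "Dog": ["Animal"], "Puppy": ["Dog", "YoungThing"]}
print(structural_issues(onto))          # flags the undefined 'YoungThing'
```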
---
paper_title: A Framework for Ontology Evolution in Collaborative Environments
paper_content:
With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily.
---
paper_title: A Component-Based Framework For Ontology Evolution
paper_content:
Support for ontology evolution becomes extremely important in distributed development and use of ontologies. Information about change can be represented in many different ways. We describe these different representations and propose a framework that integrates them. We show how different representations in the framework are related by describing some techniques and heuristics that supplement information in one representation with information from other representations. We present an ontology of change operations, which is the kernel of our framework. Ontologies are increasing in popularity, and researchers and developers use them in more and more application areas. Ontologies are used as shared vocabularies, to improve information retrieval, or to help data integration. Neither the ontology development itself nor its product, the ontology, is a single-person enterprise. Large standardized ontologies are often developed by several researchers in parallel (e.g. SUO, http://suo.ieee.org/ [9]); a number of ontologies grow in the context of peer-to-peer applications (e.g. Edutella [5]); other ontologies are constructed dynamically [2]. Successful applications of ontologies in such uncontrolled, de-centralized and distributed environments require substantial support for change management in ontologies and ontology evolution [7]. Given an ontology O and its two versions, Vold and Vnew, complete support for change management in an ontology environment includes support for the following tasks. (Vnew is not necessarily a unique replacement for Vold: there might be several new versions based on the old version, and all of them could exist in parallel; the labels are just used to refer to two versions of an ontology where Vnew has evolved from Vold.) Data Transformation: When an ontology version Vold is changed to Vnew, data described by Vold might need to be translated to bring it in line with Vnew. For example, if we merge two concepts A and B from Vold into C in Vnew, we must combine instances of A and B as well. Data Access: Even if data is not being transformed, if there exists data conforming to Vold, we often want to access this data and interpret it correctly via Vnew. That is, we should be able to retrieve all data that was accessible via queries in terms of Vold with queries in terms of Vnew. Furthermore, instances of concepts in Vold should be instances of equivalent concepts in Vnew. This task is a very common one in the context of the Semantic Web, where ontologies describe pieces of data on the web. Ontology Update: When we adapt a remote ontology to specific local needs, and the remote ontology changes, we must propagate the changes in the remote ontology to the adapted local ontology [8]. Consistent Reasoning: Ontologies, being formal descriptions, are often used as logical theories. When ontology changes occur, we must analyze the changes to determine whether specific axioms that were valid in Vold are still valid in Vnew. For example, it might be useful to know that a change does not affect the subsumption relationship between two concepts: if A ⊑ B is valid in Vold, it is also valid in Vnew. While a change in the logical theory always affects reasoning in general, answers to specific queries may remain unchanged. Verification and Approval: Sometimes developers need to verify and approve ontology changes.
This situation often happens when several people are developing a centralized ontology, or when developers want to apply changes selectively. There must be a user interface that simplifies such verification and allows developers to accept or reject specific changes, enabling execution of some changes and rolling back of others. This list of tasks is not exhaustive. The tools that exist today support these tasks in isolation. For example, the KAON framework [10] supports evolution strategies, allowing developers to specify strategies for updating data when changes in an ontology occur. The SHOE versioning system specifies which versions of the ontology the current version is backward compatible with [3]. Many ontology-editing environments (e.g., Protege [1]) provide logs of changes between versions. While these tools support some of the ontology-evolution tasks, there is no interaction or sharing of information among the tools. However, many of these tasks require the same elements in the representation of change.
---
paper_title: The DILIGENT Knowledge Processes
paper_content:
Purpose – Aims to present the ontology engineering methodology DILIGENT, a methodology focussing on the evolution of ontologies instead of the initial design, thus recognizing that knowledge is a tangible and moving target. Design/methodology/approach – First describes the methodology as a whole, then details one of the five main steps of DILIGENT. The second part describes case studies, either already performed or planned, and what we learned (or expect to learn) from them. Findings – With the case studies we discovered the strengths and weaknesses of DILIGENT. During the evolution of ontologies, arguments need to be exchanged about the suggested changes. We identify the kinds of arguments that work best for the discussion of ontology changes. Research implications – DILIGENT recognizes ontology engineering methodologies like OnToKnowledge or Methontology as proven useful for the initial design, but expands them with its strong focus on the user-centric further development of the ontology and the provided integration of automatic agents in the process of ontology evolution. Practical implications – With DILIGENT we distil the experience from a number of case studies and offer the knowledge manager a methodology to work in an ever-changing environment. Originality/value – DILIGENT is the first methodology to put focus not on the initial development of the ontology, but on the user and his usage of the ontology, and on the changes introduced by the user. We take the user's own view seriously and enable feedback towards the evolution of the ontology, stressing the ontology's role as a shared conceptualisation.
---
paper_title: Winnowing Ontologies Based on Application Use
paper_content:
The requirements of specific applications and services are often over estimated when ontologies are reused or built. This sometimes results in many ontologies being too large for their intended purposes. It is not uncommon that when applications and services are deployed over an ontology, only a few parts of the ontology are queried and used. Identifying which parts of an ontology are being used could be helpful to winnow the ontology, i.e., simplify or shrink the ontology to smaller, more fit for purpose size. Some approaches to handle this problem have already been suggested in the literature. However, none of that work showed how ontology-based applications can be used in the ontology-resizing process, or how they might be affected by it. This paper presents a study on the use of the AKT Reference Ontology by a number of applications and services, and investigates the possibility of relying on this usage information to winnow that ontology.
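A minimal way to act on such usage information is to count which classes the deployed applications actually query and to keep only those plus their ancestors, flagging everything else as a winnowing candidate. The sketch below does just that on a toy class hierarchy; the study's actual analysis of the AKT Reference Ontology is of course far richer, so this only illustrates the idea.

```python
def winnow_candidates(hierarchy, used):
    """hierarchy: class -> list of superclasses; used: classes seen in query logs.

    A class is kept if it was used or is an ancestor of a used class; the rest
    are candidates for removal when shrinking the ontology.
    """
    keep = set()
    def mark(cls):
        if cls in hierarchy and cls not in keep:
            keep.add(cls)
            for sup in hierarchy[cls]:
                mark(sup)
    for cls in used:
        mark(cls)
    return sorted(set(hierarchy) - keep)

h = {"Thing": [], "Person": ["Thing"], "Student": ["Person"],
     "Project": ["Thing"], "Publication": ["Thing"]}
print(winnow_candidates(h, used={"Student", "Project"}))   # ['Publication']
```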
---
paper_title: A translation approach to portable ontology specifications
paper_content:
Abstract To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects—is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages. This allows researchers to share and reuse ontologies, while retaining the computational benefits of specialized implementations. We discuss how the translation approach to portability addresses several technical problems. One problem is how to accommodate the stylistic and organizational differences among representations while preserving declarative content. Another is how to translate from a very expressive language into restricted languages, remaining system-independent while preserving the computational efficiency of implemented systems. We describe how these problems are addressed by basing Ontolingua itself on an ontology of domain-independent, representational idioms.
---
paper_title: COnto-Diff: generation of complex evolution mappings for life science ontologies
paper_content:
Life science ontologies evolve frequently to meet new requirements or to better reflect the current domain knowledge. The development and adaptation of large and complex ontologies is typically performed collaboratively by several curators. To effectively manage the evolution of ontologies it is essential to identify the difference (Diff) between ontology versions. Such a Diff supports the synchronization of changes in collaborative curation, the adaptation of dependent data such as annotations, and ontology version management. We propose a novel approach COnto-Diff to determine an expressive and invertible diff evolution mapping between given versions of an ontology. Our approach first matches the ontology versions and determines an initial evolution mapping consisting of basic change operations (insert/update/delete). To semantically enrich the evolution mapping we adopt a rule-based approach to transform the basic change operations into a smaller set of more complex change operations, such as merge, split, or changes of entire subgraphs. The proposed algorithm is customizable in different ways to meet the requirements of diverse ontologies and application scenarios. We evaluate the proposed approach for large life science ontologies including the Gene Ontology and the NCI Thesaurus and compare it with PromptDiff. We further show how the Diff results can be used for version management and annotation migration in collaborative curation.
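The first stage of such a Diff can be illustrated with plain dictionaries: match concepts by identifier and emit basic addC/delC/chgAttr operations for whatever differs between the two versions. The sketch below covers only that basic layer; COnto-Diff's contribution is the subsequent rule-based aggregation of these operations into merge, split and subgraph changes, which is not reproduced here.

```python
def basic_diff(old, new):
    """old, new: dict mapping concept id -> dict of attributes (e.g. name, definition).

    Returns basic change operations between two ontology versions.
    """
    ops = []
    ops += [("addC", c) for c in new.keys() - old.keys()]
    ops += [("delC", c) for c in old.keys() - new.keys()]
    for c in old.keys() & new.keys():
        for attr in old[c].keys() | new[c].keys():
            if old[c].get(attr) != new[c].get(attr):
                ops.append(("chgAttr", c, attr, old[c].get(attr), new[c].get(attr)))
    return ops

v1 = {"GO:1": {"name": "cell growth"}, "GO:2": {"name": "budding"}}
v2 = {"GO:1": {"name": "cell growth", "definition": "..."}, "GO:3": {"name": "sporulation"}}
print(basic_diff(v1, v2))
```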
---
paper_title: Semi-automatic Integration of Learned Ontologies into a Collaborative Framework
paper_content:
The paper presents a novel ontology lifecycle scenario that explicitly takes the dynamics and data-intensiveness of real-world applications into account. Changing and growing knowledge is handled by semi-automatic incorporation of ontology learning results into a collaborative ontology development framework. This integration is based mainly on automatic negotiation of agreed alignments, inconsistency resolution, an ontology versioning system and support of natural language generation tools, which alleviate the end-user effort in the incorporation of new knowledge. The architecture of the respective framework and notes on its progressive implementation are presented.
---
paper_title: A multi-agent system for building dynamic ontologies
paper_content:
Building ontologies from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and add, as perspectives, the modifications required to reach a complete ontology building solution.
---
paper_title: Graph-based Discovery of Ontology Change Patterns
paper_content:
Ontologies can support a variety of purposes, ranging from capturing conceptual knowledge to the organisation of digital content and information. However, information systems are always subject to change and ontology change management can pose challenges. We investigate ontology change representation and discovery of change patterns. Ontology changes are formalised as graph-based change logs. We use attributed graphs, which are typed over a generic graph with node and edge attribution. We analyse ontology change logs, represented as graphs, and identify frequent change sequences. Such sequences are applied as a reference in order to discover reusable, often domain-specific and usage-driven change patterns. We describe the pattern discovery algorithms and measure their performance using experimental results.
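The pattern-discovery step can be approximated very simply: slide a window over the ordered change log and count how often each short sequence of operation types recurs, keeping those above a support threshold as candidate change patterns. That is all the sketch below does; the paper works on attributed, typed graphs rather than flat strings, so treat this only as an illustration of the counting idea.

```python
from collections import Counter

def frequent_sequences(log, length=2, min_support=2):
    """log: ordered list of change-operation names from an ontology change log."""
    counts = Counter(tuple(log[i:i + length]) for i in range(len(log) - length + 1))
    return {seq: n for seq, n in counts.items() if n >= min_support}

change_log = ["addClass", "addLabel", "addClass", "addLabel",
              "deleteClass", "addClass", "addLabel"]
print(frequent_sequences(change_log, length=2, min_support=2))
# {('addClass', 'addLabel'): 3}
```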
---
paper_title: WordNet: An Electronic Lexical Database
paper_content:
WordNet is a large electronic lexical database of English in which nouns, verbs, adjectives and adverbs are grouped into sets of synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by conceptual-semantic and lexical relations such as hypernymy, hyponymy, meronymy and antonymy, making WordNet a widely used resource for natural language processing tasks such as word sense disambiguation and for measuring semantic similarity between terms.
---
paper_title: An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet
paper_content:
This paper presents an adaptation of Lesk's dictionary-based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the SENSEVAL-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation systems.
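For orientation, a minimal sketch of the simplified Lesk gloss-overlap idea using NLTK's WordNet interface is shown below; it is not the adapted algorithm evaluated in the paper, and it assumes the NLTK WordNet corpus has been downloaded (nltk.download('wordnet')).

```python
# Simplified Lesk: pick the sense whose gloss/examples overlap most with the context.
from nltk.corpus import wordnet as wn

def simplified_lesk(word, context_sentence):
    context = set(context_sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(word):
        # Signature = words in the sense's gloss and usage examples.
        signature = set(sense.definition().lower().split())
        for example in sense.examples():
            signature |= set(example.lower().split())
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bank", "the fisherman sat on the bank of the river"))
```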
---
paper_title: A WordNet-based Algorithm for Word Sense Disambiguation
paper_content:
We present an algorithm for automatic word sense disambiguation based on lexical knowledge contained in WordNet and on the results of surface-syntactic analysis. The algorithm is part of a system that analyzes texts in order to acquire knowledge in the presence of as little pre-coded semantic knowledge as possible. On the other hand, we want to make the best use of public-domain information sources such as WordNet. Rather than depend on large amounts of hand-crafted knowledge or statistical data from large corpora, we use syntactic information and information in WordNet and minimize the need for other knowledge sources in the word sense disambiguation process. We propose to guide disambiguation by semantic similarity between words and heuristic rules based on this similarity. The algorithm has been applied to the Canadian Income Tax Guide. Test results indicate that even on a relatively small text the proposed method produces the correct noun meaning more than 72% of the time.
---
paper_title: Semantically enriching folksonomies with FLOR
paper_content:
While the increasing popularity of folksonomies has led to a vast quantity of tagged data, resource retrieval in these systems is limited by them being agnostic to the meaning (i.e., semantics) of tags. Our goal is to automatically enrich folksonomy tags (and implicitly the related resources) with formal semantics by associating them to relevant concepts defined in online ontologies. We introduce FLOR, a mechanism for automatic folksonomy enrichment by combining knowledge from WordNet and online ontologies. We experimentally tested FLOR on tag sets drawn from 226 Flickr photos and obtained a precision value of 93% and an approximate recall of 49%.
---
paper_title: Large scale integration of senses for the semantic web
paper_content:
Nowadays, the increasing amount of semantic data available on the Web leads to a new stage in the potential of Semantic Web applications. However, it also introduces new issues due to the heterogeneity of the available semantic resources. One of the most remarkable is redundancy, that is, the excess of different semantic descriptions, coming from different sources, to describe the same intended meaning. In this paper, we propose a technique to perform a large scale integration of senses (expressed as ontology terms), in order to cluster the most similar ones, when indexing large amounts of online semantic information. It can dramatically reduce the redundancy problem on the current Semantic Web. In order to make this objective feasible, we have studied the adaptability and scalability of our previous work on sense integration, to be translated to the much larger scenario of the Semantic Web. Our evaluation shows a good behaviour of these techniques when used in large scale experiments, then making feasible the proposed approach.
---
paper_title: Sindice.com: A document-oriented lookup index for open linked data
paper_content:
Data discovery on the Semantic Web requires crawling and indexing of statements, in addition to the 'linked-data' approach of de-referencing resource URIs. Existing Semantic Web search engines are focused on database-like functionality, compromising on index size, query performance and live updates. We present Sindice, a lookup index over Semantic Web resources. Our index allows applications to automatically locate documents containing information about a given resource. In addition, we allow resource retrieval through inverse-functional properties, offer a full-text search and index SPARQL endpoints. Finally, we extend the sitemap protocol to efficiently index large datasets with minimal impact on data providers.
---
paper_title: Using Ontological Contexts to Assess the Relevance of Statements in Ontology Evolution
paper_content:
Ontology evolution tools often propose new ontological changes in the form of statements. While different methods exist to check the quality of such statements to be added to the ontology (e.g., in terms of consistency and impact), their relevance is usually left to the user to assess. Relevance in this context is a notion of how well the statement fits in the target ontology. We present an approach to automatically assess such relevance. It is acknowledged in cognitive science and other research areas that a piece of information flowing between two entities is relevant if there is an agreement on the context used between the entities. In our approach, we derive the context of a statement from online ontologies in which it is used, and study how this context matches with the target ontology. We identify relevance patterns that give an indication of relevance when the statement context and the target ontology fulfill specific conditions. We validate our approach through an experiment in three different domains, and show how our pattern-based technique outperforms a naive overlap-based approach.
---
paper_title: Bridging the gap between OWL and relational databases
paper_content:
Schema statements in OWL are interpreted quite differently from analogous statements in relational databases. If these statements are meant to be interpreted as integrity constraints (ICs), OWL's interpretation may seem confusing and/or inappropriate. Therefore, we propose an extension of OWL with ICs that captures the intuition behind ICs in relational databases. We discuss the algorithms for checking IC satisfaction for different types of knowledge bases, and show that, if the constraints are satisfied, we can disregard them while answering a broad range of positive queries.
---
paper_title: The knowledge model of Protege-2000: combining interoperability and flexibility
paper_content:
Knowledge-based systems have become ubiquitous in recent years. Knowledge-base developers need to be able to share and reuse knowledge bases that they build. Therefore, interoperability among different knowledge-representation systems is essential. The Open Knowledge-Base Connectivity protocol (OKBC) is a common query and construction interface for frame-based systems that facilitates this interoperability. Protégé-2000 is an OKBC-compatible knowledge-base-editing environment developed in our laboratory. We describe the Protégé-2000 knowledge model that makes the import and export of knowledge bases from and to other knowledge-base servers easy. We discuss how the requirements of being a usable and configurable knowledge-acquisition tool affected our decisions in the knowledge-model design. Protégé-2000 also has a flexible metaclass architecture which provides configurable templates for new classes in the knowledge base. The use of metaclasses makes Protégé-2000 easily extensible and enables its use with other knowledge models. We demonstrate that we can resolve many of the differences between the knowledge models of Protégé-2000 and the Resource Description Framework (RDF) -- a system for annotating Web pages with knowledge elements -- by defining a new metaclass set. Resolving the differences between the knowledge models in a declarative way enables easy adaptation of Protégé-2000 as an editor for other knowledge-representation systems.
---
paper_title: EvoPat - pattern-based evolution and refactoring of RDF knowledge bases
paper_content:
Facilitating the seamless evolution of RDF knowledge bases on the Semantic Web still presents a major challenge. In this work we devise EvoPat - a pattern-based approach for the evolution and refactoring of knowledge bases. The approach is based on the definition of basic evolution patterns, which are represented declaratively and can capture simple evolution and refactoring operations on both data and schema levels. For more advanced and domain-specific evolution and refactorings, several simple evolution patterns can be combined into a compound one. We performed a comprehensive survey of possible evolution patterns with a combinatorial analysis of all possible before/after combinations, resulting in an extensive catalog of usable evolution patterns. Our approach was implemented as an extension for the OntoWiki semantic collaboration platform and framework.
---
paper_title: Ontology change: classification and survey
paper_content:
Ontologies play a key role in the advent of the Semantic Web. An important problem when dealing with ontologies is the modification of an existing ontology in response to a certain need for change. This problem is a complex and multifaceted one, because it can take several different forms and includes several related subproblems, like heterogeneity resolution or keeping track of ontology versions. As a result, it is being addressed by several different, but closely related and often overlapping research disciplines. Unfortunately, the boundaries of each such discipline are not clear, as the same term is often used with different meanings in the relevant literature, creating a certain amount of confusion. The purpose of this paper is to identify the exact relationships between these research areas and to determine the boundaries of each field, by performing a broad review of the relevant literature.
---
paper_title: Toward Updates in Description Logics
paper_content:
The use of DL systems to add reasoning capabilities to databases is now a major trend in the convergence between knowledge base and database systems, but it is still confronted with the issue of data update. DL systems provide fact additions and retractions but no real object update mechanisms. We present a semantics for update that favours attribute values over concept membership and deals with incomplete information. After relating our work to previous work on update semantics, we address implementation issues.
---
paper_title: A Framework for Ontology Evolution in Collaborative Environments
paper_content:
With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily.
---
paper_title: A Formal Approach for RDF/S Ontology Evolution
paper_content:
In this paper, we consider the problem of ontology evolution in the face of a change operation. We devise a general-purpose algorithm for determining the effects and side-effects of a requested elementary or complex change operation. Our work is inspired by belief revision principles (i.e., validity, success and minimal change) and allows us to handle any change operation in a provably rational and consistent manner. To the best of our knowledge, this is the first approach overcoming the limitations of existing solutions, which deal with each change operation on a per-case basis. Additionally, we rely on our general change handling algorithm to implement specialized versions of it, one per desired change operation, in order to compute the equivalent set of effects and side-effects.
---
paper_title: ON THE LOGIC OF THEORY CHANGE: PARTIAL MEET CONTRACTION AND REVISION FUNCTIONS
paper_content:
This paper extends earlier work by its authors on formal aspects of the processes of contracting a theory to eliminate a proposition and revising a theory to introduce a proposition. In the course of the earlier work, Gardenfors developed general postulates of a more or less equational nature for such processes, whilst Alchourron and Makinson studied the particular case of contraction functions that are maximal, in the sense of yielding a maximal subset of the theory (or alternatively, of one of its axiomatic bases), that fails to imply the proposition being eliminated. In the present paper, the authors study a broader class, including contraction functions that may be less than maximal. Specifically, they investigate “partial meet contraction functions”, which are defined to yield the intersection of some nonempty family of maximal subsets of the theory that fail to imply the proposition being eliminated. Basic properties of these functions are established: it is shown in particular that they satisfy the Gardenfors postulates, and moreover that they are sufficiently general to provide a representation theorem for those postulates. Some special classes of partial meet contraction functions, notably those that are “relational” and “transitively relational”, are studied in detail, and their connections with certain “supplementary postulates” of Gardenfors investigated, with a further representation theorem established.
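For reference, the basic AGM postulates for contracting a belief set K by a sentence and the partial meet construction studied in the paper can be stated as follows (standard formulations; the notation is chosen here):

```latex
% Basic AGM postulates for contraction of a belief set K by \varphi
\begin{align*}
&\text{(Closure)}\quad        K \div \varphi = \mathrm{Cn}(K \div \varphi)\\
&\text{(Inclusion)}\quad      K \div \varphi \subseteq K\\
&\text{(Vacuity)}\quad        \varphi \notin \mathrm{Cn}(K) \;\Rightarrow\; K \div \varphi = K\\
&\text{(Success)}\quad        \varphi \notin \mathrm{Cn}(\emptyset) \;\Rightarrow\; \varphi \notin \mathrm{Cn}(K \div \varphi)\\
&\text{(Recovery)}\quad       K \subseteq \mathrm{Cn}\big((K \div \varphi) \cup \{\varphi\}\big)\\
&\text{(Extensionality)}\quad \mathrm{Cn}(\varphi) = \mathrm{Cn}(\psi) \;\Rightarrow\; K \div \varphi = K \div \psi
\end{align*}
% Partial meet contraction: intersect a selected subset (via a selection
% function \gamma) of the remainder set K \perp \varphi, i.e. the maximal
% subsets of K that fail to imply \varphi.
\[
  K \div \varphi \;=\; \bigcap \gamma(K \perp \varphi)
\]
```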
---
paper_title: Extending OWL with Integrity Constraints
paper_content:
When two individuals (say m1 and m2) are not explicitly declared to be different from each other, they will be inferred to be the same due to a cardinality restriction. However, in many cases, the reason to use functional properties is not to draw this inference, but to detect an inconsistency. When the information about instances is coming from multiple sources we cannot always assume explicit inequalities will be present. In these scenarios, there is a strong need to use OWL as an Integrity Constraint (IC) language with closed world semantics. That is, we would like to adopt the Open World Assumption (OWA) without the Unique Name Assumption (UNA) for the parts of the domain where we have incomplete information, and closed world constraint checking where the information is complete.
---
paper_title: RUL : A Declarative Update Language for RDF
paper_content:
We propose a declarative update language for RDF graphs which is based on the paradigms of query and view languages RQL and RVL. Our language, called RUL, ensures that the execution of the update primitives on nodes and arcs neither violates the semantics of the RDF model nor the semantics of the given RDFS schema. In addition, RUL supports fine-grained updates at the class and property instance level, set-oriented updates with a deterministic semantics and takes benefit of the full expressive power of RQL for restricting the range of variables to nodes and arcs of RDF graphs.
---
paper_title: SPARQLing constraints for RDF
paper_content:
The goal of the Semantic Web is to support semantic interoperability between applications exchanging data on the web. The idea heavily relies on data being made available in machine readable format, using semantic markup languages. In this regard, the W3C has standardized RDF as the basic markup language for the Semantic Web. In contrast to relational databases, where data relationships are implicitly given by schema information as well as primary and foreign key constraints, relationships in semantic markup languages are made explicit. When mapping relational data into RDF, it is desirable to maintain the information implied by the origin constraints. As an improvement over existing approaches, our scheme allows for translating conventional databases into RDF without losing general constraints and vital key information. As much as in the relational model, those information are indispensable for data consistency and, as shown by example, can serve as a basis for semantic query optimization. We underline the practicability of our approach by showing that SPARQL, the most popular query language for RDF, can be used as a constraint language, akin to SQL in the relational context. As a theoretical contribution, we also discuss satisfiability for interesting classes of constraints and combinations thereof.
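The following small Python/rdflib sketch illustrates the idea of using a SPARQL query as an integrity constraint, in the spirit of the paper above; the vocabulary (ex:Employee, ex:worksFor) and the sample data are invented for the example.

```python
# Using a SPARQL query as a (closed-world) integrity constraint check.
from rdflib import Graph

data = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Employee ; ex:worksFor ex:itDept .
ex:bob   a ex:Employee .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Constraint: every Employee must have a worksFor value.
violations = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?e WHERE {
        ?e a ex:Employee .
        FILTER NOT EXISTS { ?e ex:worksFor ?dept }
    }
""")

for row in violations:
    print("Constraint violated by:", row.e)   # prints ex:bob
```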
---
paper_title: OntoEdit: Multifaceted Inferencing for Ontology Engineering
paper_content:
Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. The terms are used for concise communication across people and applications. Tools such as ontology editors facilitate the creation and maintenance of ontologies. OntoEdit is an ontology editor that has been developed keeping five main objectives in mind: 1. Ease of use. 2. Methodology-guided development of ontologies. 3. Ontology development with the help of inferencing. 4. Development of ontology axioms. 5. Extensibility through a plug-in structure.
---
paper_title: Updating Description Logic ABoxes
paper_content:
Description logic (DL) ABoxes are a tool for describing the state of affairs in an application domain. In this paper, we consider the problem of updating ABoxes when the state changes. We assume that changes are described at an atomic level, i.e., in terms of possibly negated ABox assertions that involve only atomic concepts and roles. We analyze such basic ABox updates in several standard DLs by investigating whether the updated ABox can be expressed in these DLs and, if so, whether it is computable and what is its size. It turns out that DLs have to include nominals and the "@" constructor of hybrid logic (or, equivalently, admit Boolean ABoxes) for updated ABoxes to be expressible. We devise algorithms to compute updated ABoxes in several expressive DLs and show that an exponential blowup in the size of the whole input (original ABox + update information) cannot be avoided unless every PTIME problem is LOGTIME-parallelizable. We also exhibit ways to avoid an exponential blowup in the size of the original ABox, which is usually large compared to the update information.
---
paper_title: A Theoretical Model to Handle Ontology Debugging & Change Through Argumentation
paper_content:
The paper proposes a theoretical model in which argumentation is used to support ontology debugging and the handling of ontology change.
---
paper_title: Formal foundations for RDF/S KB evolution
paper_content:
There are ongoing efforts to provide declarative formalisms of integrity constraints over RDF/S data. In this context, addressing the evolution of RDF/S knowledge bases while respecting associated constraints is a challenging issue, yet to receive a formal treatment. We provide a theoretical framework for dealing with both schema and data change requests. We define the notion of a rational change operator as one that satisfies the belief revision principles of Success, Validity and Minimal Change. The semantics of such an operator are subject to customization, by tuning the properties that a rational change should adhere to. We prove some interesting theoretical results and propose a general-purpose algorithm for implementing rational change operators in knowledge bases with integrity constraints, which allows us to handle uniformly any possible change request in a provably rational and consistent manner. Then, we apply our framework to a well-studied RDF/S variant, for which we suggest a specific notion of minimality. For efficiency purposes, we also describe specialized versions of the general evolution algorithm for the RDF/S case, which provably have the same semantics as the general-purpose one for a limited set of (useful in practice) types of change requests.
---
paper_title: Containment and minimization of RDF/S query patterns
paper_content:
Semantic query optimization (SQO) has been proved to be quite useful in various applications (e.g., data integration, graphical query generators, caching, etc.) and has been extensively studied for relational, deductive, object, and XML databases. However, less attention to SQO has been devoted in the context of the Semantic Web. In this paper, we present sound and complete algorithms for the containment and minimization of RDF/S query patterns. More precisely, we consider two widely used RDF/S query fragments supporting pattern matching at the data, but also, at the schema level. To this end, we advocate a logic framework for capturing the RDF/S data model and semantics and we employ well-established techniques proposed in the relational context, in particular, the Chase and Backchase algorithms.
---
paper_title: Change Management for Metadata Evolution
paper_content:
In order to meet the demands of the Semantic Web, today’s ontologies need to be dynamic, networked structures. One important challenge, therefore, is to develop an integrated approach to the evolution process of ontologies and related metadata. Within this context, the specific goal of this work is to capture the evolution of metadata due to changed concepts, relations or metadata in one of the ontologies, and to capture changes to the ontology caused by changes to the metadata. After a short discussion of the nature of metadata, we propose a methodology to capture (1) the evolution of metadata induced by changes to the ontologies, and (2) the evolution of the ontology induced by changes to the underlying metadata. This will lead to the implementation of an approach for evolution of metadata related to ontologies.
---
paper_title: Belief contraction without recovery
paper_content:
The postulate of recovery is commonly regarded to be the intuitively least compelling of the six basic Gärdenfors postulates for belief contraction. We replace recovery by the seemingly much weaker postulate of core-retainment, which ensures that if x is excluded from K when p is contracted, then x plays some role for the fact that K implies p. Surprisingly enough, core-retainment together with four of the other Gärdenfors postulates implies recovery for logically closed belief sets. Reasonable contraction operators without recovery do not seem to be possible for such sets. Instead, however, they can be obtained for non-closed belief bases. Some results on partial meet contractions on belief bases are given, including an axiomatic characterization and a non-vacuous extension of the AGM closure condition.
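The contrast between recovery and the weaker core-retainment postulate can be made explicit as follows (standard formulations; the notation is chosen here):

```latex
% Recovery (for logically closed belief sets K):
\[
  K \subseteq \mathrm{Cn}\big((K \div \alpha) \cup \{\alpha\}\big)
\]
% Core-retainment (for belief bases B): a sentence is given up only if it
% contributes to the implication of the contracted sentence.
\[
  \beta \in B \setminus (B \div \alpha) \;\Rightarrow\;
  \exists B' \subseteq B:\;
  \alpha \notin \mathrm{Cn}(B') \ \text{and}\ \alpha \in \mathrm{Cn}(B' \cup \{\beta\})
\]
```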
---
paper_title: Model-based Revision Operators for Terminologies in Description Logics
paper_content:
The problem of revising an ontology consistently is closely related to the problem of belief revision which has been widely discussed in the literature. Some syntax-based belief revision operators have been adapted to revise ontologies in Description Logics (DLs). However, these operators remove the whole axioms to resolve logical contradictions and thus are not fine-grained. In this paper, we propose three model-based revision operators to revise terminologies in DLs. We show that one of them is more rational than others by comparing their logical properties. Therefore, we focus on this revision operator. We also consider the problem of computing the result of revision by our operator with the help of the notion of concept forgetting. Finally, we analyze the computational complexity of our revision operator.
---
paper_title: Generalizing the AGM postulates: preliminary results and applications
paper_content:
One of the crucial actions any reasoning system must undertake is the updating of its Knowledge Base (KB). This problem is usually referred to as the problem of belief change. The AGM approach, introduced in (Alchourron, Gardenfors, and Makinson 1985), is the dominating paradigm in the area but it makes some non-elementary assumptions about the logic at hand which disallow its direct application in some classes of logics. In this paper, we drop all such assumptions and determine the necessary and sufficient conditions for a logic to support AGM-compliant operators. Our approach is directly applicable to a much broader class of logics. We apply our results to establish connections between the problem of updating in Description Logics (DLs) and the AGM postulates. Finally, we investigate why belief base operators cannot satisfy the AGM postulates in standard logics.
---
paper_title: On instance-level update and erasure in description logic ontologies
paper_content:
A Description Logic (DL) ontology is constituted by two components, a TBox that expresses general knowledge about the concepts and their relationships, and an ABox that describes the properties of individuals that are instances of concepts. We address the problem of how to deal with changes to a DL ontology, when these changes affect only the ABox, i.e. when the TBox is considered invariant. We consider two basic changes, namely instance-level update and instance-level erasure, roughly corresponding to the addition and the deletion of a set of facts involving individuals. We characterize the semantics of instance-level update and erasure on the basis of the approaches proposed by Winslett and by Katsuno and Mendelzon. Interestingly, DLs are typically not closed with respect to instance-level update and erasure, in the sense that the set of models corresponding to the application of any of these operations to a knowledge base in a DL L may not be expressible by ABoxes in L. In particular, we show that this is true for DL-LiteF, a tractable DL that is oriented towards data-intensive applications. To deal with this problem, we first introduce DL-LiteFS, a DL that minimally extends DL-LiteF and is closed with respect to instance-level update, and present a polynomial algorithm for computing instance-level update in this logic. Then, we provide a principled notion of best approximation with respect to a fixed language L of instance-level update and erasure, and exploit the algorithm for instance-level update for DL-LiteFS to get polynomial algorithms for approximated instance-level update and erasure for DL-LiteF. These results confirm the nice computational properties of DL-LiteF for data intensive applications, even where information about instances is not only read, but also written.
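The model-based update and erasure semantics referred to above (Winslett's possible-models approach and the Katsuno-Mendelzon definition of erasure) can be summarized as follows; the notation is chosen here and abstracts from the DL-specific details of the paper:

```latex
% Possible-models (Winslett-style) update: update each model of the KB
% individually by the models of \mu closest to it (w.r.t. the order \leq_I,
% typically based on symmetric difference from I).
\[
  \mathrm{Mod}(\mathcal{K} \diamond \mu) \;=\;
  \bigcup_{\mathcal{I} \in \mathrm{Mod}(\mathcal{K})}
  \min_{\leq_{\mathcal{I}}} \mathrm{Mod}(\mu)
\]
% Erasure is defined from update: erasing \mu keeps the old models and adds
% the result of updating with \neg\mu.
\[
  \mathrm{Mod}(\mathcal{K} \bullet \mu) \;=\;
  \mathrm{Mod}(\mathcal{K}) \,\cup\, \mathrm{Mod}(\mathcal{K} \diamond \neg\mu)
\]
```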
---
paper_title: First Steps Towards Revising Ontologies
paper_content:
When modeling an ontology, one very often wants to add new information and keep the resulting ontology consistent. Belief Revision deals with the problem of consistently adding new formulas to a knowledge base. In this paper, we present some steps towards applying belief revision methods to ontologies based on description logics. We start from the well-known AGM paradigm and show how it can be adapted in order to be applied to description logics.
---
paper_title: Weakening conflicting information for iterated revision and knowledge integration
paper_content:
The ability to handle exceptions, to perform iterated belief revision and to integrate information from multiple sources is essential for a commonsense reasoning agent. These important skills are related in the sense that they all rely on resolving inconsistent information. In this paper we develop a novel and useful strategy for conflict resolution, and compare and contrast it with existing strategies. Ideally the process of conflict resolution should conform with the principle of Minimal Change and should result in the minimal loss of information. Our approach to minimizing the loss of information is to weaken information involved in conflicts rather than completely discarding it. We implemented and tested the relative performance of our new strategy in three different ways. Surprisingly, we are able to demonstrate that it provides a computationally effective compilation of the lexicographical strategy; a strategy which is known to have desirable theoretical properties.
---
paper_title: A new approach to knowledge base revision in DL-lite
paper_content:
Revising knowledge bases (KBs) in description logics (DLs) in a syntax-independent manner is an important, nontrivial problem for the ontology management and DL communities. Several attempts have been made to adapt classical model-based belief revision and update techniques to DLs, but they are restricted in several ways. In particular, they do not provide operators or algorithms for general DL KB revision. The key difficulty is that, unlike propositional logic, a DL KB may have infinitely many models with complex (and possibly infinite) structures, making it difficult to define and compute revisions in terms of models. In this paper, we study general KBs in a specific DL in the DL-Lite family. We introduce the concept of features for such KBs, develop an alternative semantic characterization of KBs using features (instead of models), define two specific revision operators for KBs, and present the first algorithm for computing best approximations for syntax-independent revisions of KBs.
---
paper_title: The Meaning of Erasing in RDF under the Katsuno-Mendelzon Approach
paper_content:
The basic data model for the Semantic Web is RDF. In this paper we address updates in RDF. It is known that the semantics of updates for data models becomes unclear when the model turns, even slightly, more general than a simple relational structure. Using the framework of Katsuno-Mendelzon, we define a semantics for updates in RDF. In particular, we explore the behavior of this semantics for the "erase" operator (which in general is not expressible in RDF). Our results include a proposal of sound semantics for RDF updates, a characterization of the maximal RDF graph which captures exactly all consequences of the erase operation expressible in RDF, and complexity results about the computation of this graph and updates in RDF in general.
---
paper_title: A classification of ontology modification
paper_content:
Recent research in ontologies and description logics has focused on compromising between expressiveness and reasoning ability, with many other issues being neglected. One major issue that has been neglected is how one should go about modifying ontologies as inconsistency arises. The central concern of this problem is therefore to determine the most rational way of modifying ontologies, such that no extra knowledge would be retained in or retracted from the knowledge base. The purpose of this paper is to outline the complexities involved and to present some insights into the problem of ontology modification. Description logic (DL) is used in this paper as the underlying logic for the representation of ontologies, and ontology modification is performed based on this logic.
---
paper_title: On Applying the AGM Theory to DLs and OWL
paper_content:
It is generally acknowledged that any Knowledge Base (KB) should be able to adapt itself to new information received. This problem has been extensively studied in the field of belief change, the dominating approach being the AGM theory. This theory set the standard for determining the rationality of a given belief change mechanism but was placed in a certain context which makes it inapplicable to logics used in the Semantic Web, such as Description Logics (DLs) and OWL. We believe the Semantic Web community would benefit from the application of the AGM theory to such logics. This paper is a preliminary study towards the feasibility of this application. Our approach raises interesting theoretical challenges and has an important practical impact too, given the central role that DLs and OWL play in the Semantic Web.
---
paper_title: Knowledge base revision in description logics
paper_content:
Ontology evolution is an important problem in Semantic Web research. Recently, Alchourron, Gardenfors and Makinson's (AGM) theory on belief change has been applied to deal with this problem. However, most current work only focuses on the feasibility of applying the AGM postulates on contraction to description logics (DLs), a family of ontology languages, so the explicit construction of a revision operator is ignored. In this paper, we first generalize the AGM postulates on revision to DLs. We then define two revision operators in DLs. One is the weakening-based revision operator, which is defined by weakening of statements in a DL knowledge base, and the other is its refinement. We show that both operators capture some notions of minimal change and satisfy the generalized AGM postulates for revision.
---
paper_title: A First Step Towards Stream Reasoning
paper_content:
While reasoners are year after year scaling up in the classical, time invariant domain of ontological knowledge, reasoning upon rapidly changing information has been neglected or forgotten. On the contrary, processing of data streams has been largely investigated and specialized Stream Database Management Systems exist. In this paper, by coupling reasoners with powerful, reactive, throughput-efficient stream management systems, we introduce the concept of Stream Reasoning. We expect future realization of such concept to have high impact on the future Internet because it enables reasoning in real time, at a throughput and with a reactivity not obtained in previous works.
---
paper_title: HECATAEUS: Regulating schema evolution
paper_content:
HECATAEUS is an open-source software tool for enabling impact prediction, what-if analysis, and regulation of relational database schema evolution. We follow a graph theoretic approach and represent database schemas and database constructs, like queries and views, as graphs. Our tool enables the user to create hypothetical evolution events and examine their impact over the overall graph before these are actually enforced on it. It also allows definition of rules for regulating the impact of evolution via (a) default values for all the nodes of the graph and (b) simple annotations for nodes deviating from the default behavior. Finally, HECATAEUS includes a metric suite for evaluating the impact of evolution events and detecting crucial and vulnerable parts of the system.
---
paper_title: Ontology evolution in data integration: query rewriting to the rescue
paper_content:
The evolution of ontologies is an undisputed necessity in ontology-based data integration. In such systems ontologies are used as global schema in order to formulate queries that are answered by the data integration systems. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. In most of these systems, when ontologies change, their relations with the data sources, i.e., the mappings, are recreated manually, a process which is known to be error-prone and time-consuming. In this paper, we provide a solution that allows query answering under evolving ontologies without mapping redefinition. To achieve that, query rewriting techniques are exploited in order to produce equivalent rewritings among ontology versions. Whenever equivalent rewritings cannot be produced we a) guide query redefinition or b) provide the best "over-approximations". We show that our approach can greatly reduce human effort spent since continuous mapping redefinition on evolving ontologies is no longer necessary.
---
paper_title: Optimising ontology stream reasoning with truth maintenance system
paper_content:
So far researchers in the Description Logics / Ontology communities have mainly considered ontology reasoning services for static ontologies. The rapid development of the Semantic Web and its emerging data ask for reasoning technologies for dynamic knowledge streams. Existing work on stream reasoning is focused on lightweight languages such as RDF and RDFS. In this paper, we introduce the notion of an Ontology Stream Management System (OSMS) and present a stream-reasoning approach based on a Truth Maintenance System (TMS). We present an optimised EL++ algorithm to reduce memory consumption. Our evaluations show that the optimisation enables TMS-based EL++ reasoning to deal with relatively large volumes of data and to update efficiently.
---
paper_title: Policy-regulated Management of ETL Evolution
paper_content:
In this paper, we discuss the problem of performing impact prediction for changes that occur in the schema/structure of the data warehouse sources. We abstract Extract-Transform-Load (ETL) activities as queries and sequences of views. ETL activities and their sources are uniformly modeled as a graph that is annotated with policies for the management of evolution events. Given a change at an element of the graph, our method detects the parts of the graph that are affected by this change and highlights the way they are tuned to respond to it. For many cases of ETL source evolution, we present rules so that both the syntactical and semantic correctness of activities are retained. Finally, we evaluate our approach over real-world ETL workflows used in the Greek public sector.
---
paper_title: C-SPARQL: SPARQL for continuous querying
paper_content:
C-SPARQL is an extension of SPARQL to support continuous queries, registered and continuously executed over RDF data streams, considering windows of such streams. Supporting streams in RDF format guarantees interoperability and opens up important applications, in which reasoners can deal with knowledge that evolves over time. We present C-SPARQL by means of examples in Urban Computing.
---
paper_title: Changing Ontology Breaks Queries
paper_content:
Updating an ontology that is in use may result in inconsistencies between the ontology and the knowledge base, dependent ontologies and applications/services. Current research concentrates on the creation of ontologies and how to manage ontology changes in terms of mapping ontology versions and keeping consistent with the instances. Very little work investigated controlling the impact on dependent applications/services; which is the aim of the system presented in this paper. The approach we propose is to make use of ontology change logs to analyse incoming RDQL queries and amend them as necessary. Revised queries can then be used to query the ontology and knowledge base as requested by the applications and services. We describe our prototype system and discuss related problems and future directions.
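A toy sketch of the query-amendment idea follows: renamed URIs recorded in a change log are substituted into an incoming query before it is evaluated against the updated ontology. The change log, query and URIs are invented, and the example uses a SPARQL-style query string rather than RDQL.

```python
# Toy query amendment driven by an ontology change log of URI renames.
change_log = {
    "http://example.org/onto#Teacher": "http://example.org/onto#Lecturer",
    "http://example.org/onto#teaches": "http://example.org/onto#givesCourse",
}

def amend_query(query: str, renames: dict) -> str:
    """Rewrite every renamed URI occurring in the query string."""
    for old_uri, new_uri in renames.items():
        query = query.replace(old_uri, new_uri)
    return query

old_query = """
SELECT ?t ?c WHERE {
  ?t a <http://example.org/onto#Teacher> .
  ?t <http://example.org/onto#teaches> ?c .
}
"""

print(amend_query(old_query, change_log))
```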
---
paper_title: Enabling Active Ontology Change Management within Semantic Web-based Applications
paper_content:
Enabling traceable ontology changes is becoming a critical issue for ontology-based applications. Updating an ontology that is in use may result in inconsistencies between the ontology and the knowledge base, dependent ontologies and applications/services. Current research concentrates on the creation of ontologies and how to manage ontology changes in terms of mapping ontology versions and keeping consistent with the instances. Very little work has investigated keeping track of ontology changes on the fly while updating (active ontology versioning) and using this information to control the impact on dependent applications/services, which is the aim of the research presented in this thesis. The approach we propose is to use ontology change logs as a check-point to analyse the changed entities related to the requested services via end-users' incoming queries (RDQL/SPARQL) and to amend those queries as necessary to maintain the validity and continuity of the dependent application. First, we build Log Ontology I as the concept structure for organizing change information and develop a prototype system to demonstrate how the change information retrieved from Log Ontology I can be used to control the impact of ontology changes on dependent applications and services. Then, by analysing the limitations of this prototype in maintaining services affected by more complex ontology changes, we identify that the failure stems from the inability of Log Ontology I to represent complex change information in a semantic fashion. We therefore refocus on the change log and design Log Ontology II, which keeps track of ontology change information on the fly, in order to preserve the semantics of ontology changes from the beginning of the update process. Finally, we discuss future directions in terms of how the improved Log Ontology II enables better service validation and continuity maintenance for applications based on changing ontologies.
---
paper_title: Evaluating the validity of data instances against ontology evolution over the Semantic Web
paper_content:
It is natural for ontologies to evolve over time. These changes could be at structural and semantic levels. Due to changes to an ontology, its data instances may become invalid, and as a result, may become non-interpretable. In this paper, we address precisely this problem, validity of data instances due to ontological evolution. Towards this end, we make the following three novel contributions to the area of Semantic Web. First, we propose formal notions of structural validity and semantic validity of data instances, and then present approaches to ensure them. Second, we propose semantic view as part of an ontology, and demonstrate that it is sufficient to validate a data instance against the semantic view rather than the entire ontology. We discuss how the semantic view can be generated through an implication analysis, i.e., how semantic changes to one component imply semantic changes to other components in the ontology. Third, we propose a validity identification approach that employs locally maintaining a hash value of the semantic view at the data instance.
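A minimal sketch of locally maintaining a hash of the semantic view is shown below; the axiom strings are invented placeholders and the hashing scheme is only illustrative of the idea of detecting, at the instance side, that the relevant part of the ontology has changed.

```python
# Detect ontology evolution relevant to an instance via a hash of its semantic view.
import hashlib

def semantic_view_hash(axioms) -> str:
    """Hash a canonical (sorted) serialization of the axioms relevant to an instance."""
    canonical = "\n".join(sorted(axioms))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Semantic view stored with the instance when it was created.
view_v1 = [
    "Class: Sensor SubClassOf: Device",
    "ObjectProperty: reports Domain: Sensor Range: Reading",
]
stored_hash = semantic_view_hash(view_v1)

# Later, the ontology evolves and the domain of 'reports' changes.
view_v2 = [
    "Class: Sensor SubClassOf: Device",
    "ObjectProperty: reports Domain: Device Range: Reading",
]

if semantic_view_hash(view_v2) != stored_hash:
    print("Semantic view changed: instance must be re-validated")
```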
---
paper_title: Ontology evolution: assisting query migration
paper_content:
Information systems rely more and more on semantic web ontologies to share and interpret data within and across research domains. However, an important problem when dealing with ontologies is the fact that they are living artefacts and subject to change. When ontologies evolve, queries formulated using a past ontology version might become invalid and should be redefined or adapted. In this paper we propose a solution in order to identify the impact of ontology evolution on queries and to ease query migration. We present a module that receives as input the sequence of changes between the two ontology versions along with a set of queries and automatically identifies the specific change operations that affect the input queries. Besides the automatic identification of the affecting change operations, query migration is further aided by providing an explanation for the specific invalidation. This explanation is presented graphically by means of change paths that represent the evolution of the specific parts of the ontology that invalidate the query. We evaluate the time complexity of our approach and show how it can possibly reduce the human effort spent on query redefinition/adaptation.
---
paper_title: Ontology versioning on the Semantic Web
paper_content:
Ontologies are often seen as basic building blocks for the Semantic Web, as they provide a reusable piece of knowledge about a specific domain. However, those pieces of knowledge are not static, but evolve over time. Domain changes, adaptations to different tasks, or changes in the conceptualization require modifications of the ontology. The evolution of ontologies causes operability problems, which will hamper their effective reuse. A versioning mechanism might help to reduce those problems, as it will make the relations between different revisions of an ontology explicit. This paper will discuss the problem of ontology versioning. Inspired by the work done in database schema versioning and program interface versioning, it will also propose building blocks for the most important aspects of a versioning mechanism, i.e., ontology identification and change specification.
---
paper_title: Evolution Management for Interconnected Ontologies
paper_content:
Mappings between ontologies are easily harmed by changes in the ontologies. In this paper we explain a mechanism to define modular ontologies and mappings in a way that allows for local containment of terminological reasoning. We have also developed a change detection and analysis method that predicts the effect of changes on the concept hierarchy. This method determines whether the changes in one ontology affect the reasoning inside other ontologies or not. Together, these mechanisms allow ontologies to evolve without unpredictable effects on other ontologies. In this paper, we also apply these methods in a case study that is undertaken in a EU IST project.
---
paper_title: Analyzing the evolution of life science ontologies and mappings
paper_content:
Ontologies are heavily developed and used in life sciences and undergo continuous changes. However, the evolution of life science ontologies and references to them (e.g., annotations) is not well understood and has received little attention so far. We therefore propose a generic framework for analyzing both the evolution of ontologies and the evolution of ontology-related mappings, in particular annotations referring to ontologies and similarity (match) mappings between ontologies. We use our framework for an extensive comparative evaluation of evolution measures for 16 life science ontologies. Moreover, we analyze the evolution of annotation mappings and ontology mappings for the Gene Ontology.
---
paper_title: Ontology Evolution Issues in Adaptable Information Management Systems
paper_content:
Currently, ontology-based information management systems have drawn great attention. However, one of the challenges is how to manage the evolution of the ontology. If ontology evolution was not considered beforehand in the modeling phase of the system, it may cause potential inconsistencies among the components of the ontology and the dependent applications. Besides, the evolution of an ontology may interfere with the running of its dependent applications. Hence, in this paper approaches for maintaining consistency and keeping dependent applications running during evolution are analyzed. First, a virtual-space-based framework is put forward in which most of the changes are made in the virtual space. Then two types of property changes are analyzed and solutions for processing them are given in detail. In addition, a register record approach is described for checking the dependency between the applications and the ontology schema.
---
paper_title: Exelixis: evolving ontology-based data integration system
paper_content:
The evolution of ontologies is an undisputed necessity in ontology-based data integration. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. We present Exelixis, a web platform that enables query answering over evolving ontologies without mapping redefinition. This is achieved by rewriting queries among ontology versions. First, changes between ontologies are automatically detected and described using a high level language of changes. Those changes are interpreted as sound global-as-view (GAV) mappings. Then query expansion is applied in order to consider constraints from the ontology and unfolding to apply the GAV mappings. Whenever equivalent rewritings cannot be produced we a) guide query redefinition and/or b) provide the best "over-approximations", i.e. the minimally-containing and minimally-generalized rewritings. For the demonstration we will use four versions of the CIDOC-CRM ontology and real user queries to show the functionality of the system. Then we will allow conference participants to directly interact with the system to test its capabilities.
---
paper_title: Automatic Support for Formative Ontology Evaluation
paper_content:
Just as testing is an integral part of software engineering, so is ontology evaluation an integral part of ontology engineering. We have implemented automated support for formative ontology evaluation based on the two principles of i) checking for compliance with modelling guidelines and ii) reviewing entailed statements in MoKi, a wiki-based ontology engineering environment. These principles exist in state of the art literature and good ontology engineering and evaluation practice, but have not so far been widely integrated into ontology engineering tools.
---
paper_title: Consistent Evolution of OWL Ontologies
paper_content:
Support for ontology evolution is extremely important in ontology engineering and the application of ontologies in dynamic environments. A core aspect of the evolution process is to guarantee the consistency of the ontology when changes occur. In this paper we discuss the consistent evolution of OWL ontologies. We present a model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency. We introduce resolution strategies to ensure that consistency is maintained as the ontology evolves.
---
paper_title: Dynamic Change Evaluation for Ontology Evolution in the Semantic Web
paper_content:
Changes in an ontology may have a disruptive impact on any system using it. This impact may depend on structural changes such as introduction or removal of concept definitions, or it may be related to a change in the expected performance of the reasoning tasks. As the number of systems using ontologies is expected to increase, and given the open nature of the semantic Web, introduction of new ontologies and modifications to existing ones are to be expected. Dynamically handling such changes, without requiring human intervention, becomes crucial. This paper presents a framework that isolates groups of related axioms in an OWL ontology, so that a change in one or more axioms can be automatically localised to a part of the ontology.
---
paper_title: Changing Ontology Breaks Queries
paper_content:
Updating an ontology that is in use may result in inconsistencies between the ontology and the knowledge base, dependent ontologies and applications/services. Current research concentrates on the creation of ontologies and how to manage ontology changes in terms of mapping ontology versions and keeping consistent with the instances. Very little work investigated controlling the impact on dependent applications/services; which is the aim of the system presented in this paper. The approach we propose is to make use of ontology change logs to analyse incoming RDQL queries and amend them as necessary. Revised queries can then be used to query the ontology and knowledge base as requested by the applications and services. We describe our prototype system and discuss related problems and future directions.
---
paper_title: Enabling Active Ontology Change Management within Semantic Web-based Applications
paper_content:
Enabling traceable ontology changes is becoming a critical issue for ontology-based applications. Updating an ontology that is in use may result in inconsistencies between the ontology and the knowledge base, dependent ontologies, and applications/services. Current research concentrates on the creation of ontologies and on managing ontology changes in terms of mapping ontology versions and keeping them consistent with their instances. Very little work has investigated tracking ontology changes on the fly while updates happen (active ontology versioning) and using this information to control the impact on dependent applications and services, which is the aim of the research presented in this thesis. The proposed approach uses ontology change logs as a checkpoint to analyse the changed entities relevant to requested services, via the end user's incoming queries (RDQL/SPARQL), and to amend those queries as necessary to keep the dependent application valid and running. First, Log Ontology I is built as a conceptual structure for organizing change information, and a prototype system demonstrates how the change information retrieved from Log Ontology I can be used to control the impact of ontology changes on dependent applications and services. Then, by analysing the limitations of this prototype when maintaining services affected by more complex ontology changes, the cause is identified as the inability of Log Ontology I to represent complex change information in a semantic fashion. The log ontology is therefore reworked into Log Ontology II, which supports on-the-fly tracking of ontology change information and preserves the semantics of changes from the beginning of the update process. Finally, future directions are discussed in terms of how the improved Log Ontology II enables better service validation and continuity maintenance for applications based on changing ontologies.
---
paper_title: Evolution Management for Interconnected Ontologies
paper_content:
Mappings between ontologies are easily harmed by changes in the ontologies. In this paper we explain a mechanism to define modular ontologies and mappings in a way that allows for local containment of terminological reasoning. We have also developed a change detection and analysis method that predicts the effect of changes on the concept hierarchy. This method determines whether the changes in one ontology affect the reasoning inside other ontologies or not. Together, these mechanisms allow ontologies to evolve without unpredictable effects on other ontologies. In this paper, we also apply these methods in a case study that is undertaken in a EU IST project.
---
paper_title: Analyzing the evolution of life science ontologies and mappings
paper_content:
Ontologies are heavily developed and used in life sciences and undergo continuous changes. However, the evolution of life science ontologies and references to them (e.g., annotations) is not well understood and has received little attention so far. We therefore propose a generic framework for analyzing both the evolution of ontologies and the evolution of ontology-related mappings, in particular annotations referring to ontologies and similarity (match) mappings between ontologies. We use our framework for an extensive comparative evaluation of evolution measures for 16 life science ontologies. Moreover, we analyze the evolution of annotation mappings and ontology mappings for the Gene Ontology.
---
paper_title: Ontology Evolution Issues in Adaptable Information Management Systems
paper_content:
Currently, ontology-based information management systems have drawn great attention. However, one of the challenges is how to manage the evolution of the ontology. If ontology evolution is not considered in the modeling phase of the system, it may cause inconsistencies among the components of the ontology and the dependent applications. In addition, the evolution of an ontology may interfere with the running of its dependent applications. Hence, this paper analyzes approaches for maintaining consistency and keeping dependent applications running during evolution. First, a virtual-space-based framework is put forward in which most of the changes are made in the virtual space. Then, two types of property changes are analyzed and solutions for processing them are given in detail. In addition, a register-record approach is proposed for checking the dependencies between the applications and the ontology schema.
---
paper_title: Exelixis: evolving ontology-based data integration system
paper_content:
The evolution of ontologies is an undisputed necessity in ontology-based data integration. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. We present Exelixis, a web platform that enables query answering over evolving ontologies without mapping redefinition. This is achieved by rewriting queries among ontology versions. First, changes between ontologies are automatically detected and described using a high level language of changes. Those changes are interpreted as sound global-as-view (GAV) mappings. Then query expansion is applied in order to consider constraints from the ontology and unfolding to apply the GAV mappings. Whenever equivalent rewritings cannot be produced we a) guide query redefinition and/or b) provide the best "over-approximations", i.e. the minimally-containing and minimally-generalized rewritings. For the demonstration we will use four versions of the CIDOC-CRM ontology and real user queries to show the functionality of the system. Then we will allow conference participants to directly interact with the system to test its capabilities.
---
paper_title: High-level change detection in RDF(S) KBs
paper_content:
With the increasing use of Web 2.0 to create, disseminate, and consume large volumes of data, more and more information is published and becomes available for potential data consumers, that is, applications/services, individual users and communities, outside their production site. The most representative example of this trend is Linked Open Data (LOD), a set of interlinked data and knowledge bases. The main challenge in this context is data governance within loosely coordinated organizations that are publishing added-value interlinked data on the Web, bringing together issues related to data management and data quality, in order to support the full lifecycle of data production, consumption, and management. In this article, we are interested in curation issues for RDF(S) data, which is the default data model for LOD. In particular, we are addressing change management for RDF(S) data maintained by large communities (scientists, librarians, etc.) which act as curators to ensure high quality of data. Such curated Knowledge Bases (KBs) are constantly evolving for various reasons, such as the inclusion of new experimental evidence or observations, or the correction of erroneous conceptualizations. Managing such changes poses several research problems, including the problem of detecting the changes (delta) between versions of the same KB developed and maintained by different groups of curators, a crucial task for assisting them in understanding the involved changes. This becomes all the more important as curated KBs are interconnected (through copying or referencing) and thus changes need to be propagated from one KB to another either within or across communities. This article addresses this problem by proposing a change language which allows the formulation of concise and intuitive deltas. The language is expressive enough to describe unambiguously any possible change encountered in curated KBs expressed in RDF(S), and can be efficiently and deterministically detected in an automated way. Moreover, we devise a change detection algorithm which is sound and complete with respect to the aforementioned language, and study appropriate semantics for executing the deltas expressed in our language in order to move backwards and forwards in a multiversion repository, using only the corresponding deltas. Finally, we evaluate through experiments the effectiveness and efficiency of our algorithms using real ontologies from the cultural, bioinformatics, and entertainment domains.
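The following toy sketch (not the paper's change language) shows the general shape of the approach: compute the low-level triple delta between two RDF(S) versions, then group related low-level changes into a more intuitive high-level change. The operation names and the single grouping rule are made up for illustration.

```python
RDFS_SUBCLASSOF = "rdfs:subClassOf"

def low_level_delta(v_old, v_new):
    """Triples added to / deleted from the KB between two versions."""
    return v_new - v_old, v_old - v_new

def lift_to_high_level(added, deleted):
    """Group low-level triple changes into more intuitive high-level changes."""
    adds = {(s, o) for (s, p, o) in added if p == RDFS_SUBCLASSOF}
    dels = {(s, o) for (s, p, o) in deleted if p == RDFS_SUBCLASSOF}
    moved = {s for (s, _) in adds} & {s for (s, _) in dels}
    # Toy rule: a subject that both lost and gained a superclass changed its superclass.
    changes = [("Change_SuperClass", s,
                next(o for (x, o) in dels if x == s),
                next(o for (x, o) in adds if x == s)) for s in moved]
    changes += [("Add_SuperClass", s, o) for (s, o) in adds if s not in moved]
    changes += [("Delete_SuperClass", s, o) for (s, o) in dels if s not in moved]
    return changes

old = {("ex:Car", RDFS_SUBCLASSOF, "ex:Vehicle")}
new = {("ex:Car", RDFS_SUBCLASSOF, "ex:MotorVehicle")}
print(lift_to_high_level(*low_level_delta(old, new)))
# [('Change_SuperClass', 'ex:Car', 'ex:Vehicle', 'ex:MotorVehicle')]
```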
---
paper_title: PROMPTDIFF: A Fixed-Point Algorithm for Comparing Ontology Versions
paper_content:
As ontology development becomes a more ubiquitous and collaborative process, the developers face the problem of maintaining versions of ontologies akin to maintaining versions of software code in large software projects. Versioning systems for software code provide mechanisms for tracking versions, checking out versions for editing, comparing different versions, and so on. We can directly reuse many of these mechanisms for ontology versioning. However, version comparison for code is based on comparing text files--an approach that does not work for comparing ontologies. Two ontologies can be identical but have different text representation. We have developed the PROMPTDIFF algorithm, which integrates different heuristic matchers for comparing ontology versions. We combine these matchers in a fixed-point manner, using the results of one matcher as an input for others until the matchers produce no more changes. The current implementation includes ten matchers but the approach is easily extendable to an arbitrary number of matchers. Our evaluation showed that PROMPTDIFF correctly identified 96% of the matches in ontology versions from large projects.
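The fixed-point combination of matchers can be pictured with the toy sketch below; the two matchers and the (name, superclass) frame encoding are invented for illustration and are far simpler than PROMPTDIFF's actual heuristic matchers.

```python
def fixed_point_diff(old_frames, new_frames, matchers):
    """Run every matcher over the current match table until a full pass adds nothing."""
    matches, changed = set(), True
    while changed:
        changed = False
        for matcher in matchers:
            produced = matcher(old_frames, new_frames, matches)
            if not produced <= matches:  # the matcher found something new
                matches |= produced
                changed = True
    return matches

def match_by_name(old_frames, new_frames, matches):
    return {(o, n) for o in old_frames for n in new_frames if o[0] == n[0]}

def match_single_leftover(old_frames, new_frames, matches):
    # Toy structural matcher: if exactly one frame per side is still unmatched,
    # assume they correspond (e.g. a class that was simply renamed).
    left = [o for o in old_frames if not any(o == m[0] for m in matches)]
    right = [n for n in new_frames if not any(n == m[1] for m in matches)]
    return {(left[0], right[0])} if len(left) == 1 == len(right) else set()

old = {("Vehicle", None), ("Car", "Vehicle")}
new = {("Vehicle", None), ("Automobile", "Vehicle")}
print(fixed_point_diff(old, new, [match_by_name, match_single_leftover]))
# Vehicle matches by name; Car/Automobile are paired by the structural matcher.
```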
---
paper_title: On Detecting High-Level Changes in RDF/S KBs
paper_content:
An increasing number of scientific communities rely on Semantic Web ontologies to share and interpret data within and across research domains. These common knowledge representation resources are usually developed and maintained manually and essentially co-evolve along with experimental evidence produced by scientists worldwide. Detecting automatically the differences between (two) versions of the same ontology in order to store or visualize their deltas is a challenging task for e-science. In this paper, we focus on languages allowing the formulation of concise and intuitive deltas, which are expressive enough to describe unambiguously any possible change and that can be effectively and efficiently detected. We propose a specific language that provably exhibits those characteristics and provide a change detection algorithm which is sound and complete with respect to the proposed language. Finally, we provide a promising experimental evaluation of our framework using real ontologies from the cultural and bioinformatics domains.
---
paper_title: Managing ontology changes on the semantic Web
paper_content:
Although the ontology evolution plays a key role in the semantic Web, methods and tools to support it are missing. Thus, this paper proposes a component-based framework for managing ontology changes. The main functionalities of the OntoAnalyzer framework are: (1) to track changes and to formalize them using a language that we propose for representing ontology changes and (2) to identify changes a posteriori to ontology evolution and to analyze their effect on the ontology-based annotation of resources.
---
paper_title: Representation of Change in Controlled Medical Terminologies
paper_content:
Computer-based systems that support health care require large controlled terminologies to manage names and meanings of data elements. These terminologies are not static, because change in health care is inevitable. To share data and applications in health care, we need standards not only for terminologies and concept representation, but also for representing change. To develop a principled approach to managing change, we analyze the requirements of controlled medical terminologies and consider features that frame knowledge-representation systems have to offer. Based on our analysis, we present a concept model, a set of change operations, and a change-documentation model that may be appropriate for controlled terminologies in health care. We are currently implementing our modeling approach within a computational architecture.
---
paper_title: A Framework for Ontology Evolution in Collaborative Environments
paper_content:
With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily.
---
paper_title: A Versioning and Evolution Framework for RDF Knowledge Bases
paper_content:
We present an approach to support the evolution of online, distributed, reusable, and extendable ontologies based on the RDF data model. The approach works on the basis of atomic changes, basically additions or deletions of statements to or from an RDF graph. Such atomic changes are aggregated to compound changes, resulting in a hierarchy of changes, thus facilitating the human reviewing process on various levels of detail. These derived compound changes may be annotated with meta-information and classified as ontology evolution patterns. The introduced ontology evolution patterns in conjunction with appropriate data migration algorithms enable the automatic migration of instance data in distributed environments.
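A minimal sketch of the atomic/compound change idea, assuming the RDF graph is held as a plain Python set of triples; the field names and the example change are illustrative, not the framework's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class AtomicChange:
    op: str        # "add" or "remove"
    triple: tuple  # (subject, predicate, object)

@dataclass
class CompoundChange:
    label: str                       # e.g. "rename class ex:Car -> ex:Automobile"
    parts: list = field(default_factory=list)

    def apply(self, graph):
        for c in self.parts:
            (graph.add if c.op == "add" else graph.discard)(c.triple)

    def revert(self, graph):
        for c in reversed(self.parts):
            (graph.discard if c.op == "add" else graph.add)(c.triple)

g = {("ex:Car", "rdf:type", "rdfs:Class")}
rename = CompoundChange("rename class ex:Car -> ex:Automobile", [
    AtomicChange("remove", ("ex:Car", "rdf:type", "rdfs:Class")),
    AtomicChange("add", ("ex:Automobile", "rdf:type", "rdfs:Class")),
])
rename.apply(g)
print(g)   # {('ex:Automobile', 'rdf:type', 'rdfs:Class')}
rename.revert(g)
print(g)   # {('ex:Car', 'rdf:type', 'rdfs:Class')}
```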
---
paper_title: A pattern-based framework of change operators for ontology evolution
paper_content:
Change operators are the building blocks of ontology evolution. Different layers of change operators have been suggested. In this paper, we present a novel approach to deal with ontology evolution, in particular, change representation as a pattern-based layered operator framework. As a result of an empirical study, we identify four different levels of change operators based on the granularity, domain-specificity and abstraction of changes. The first two layers are based on generic structural change operators, whereas the next two layers are domain-specific change patterns. These layers of change patterns capture the real changes in the selected domains. We discuss identification and integration of the different layers.
---
paper_title: A Component-Based Framework For Ontology Evolution
paper_content:
Support for ontology evolution becomes extremely important in distributed development and use of ontologies. Information about change can be represented in many different ways. We describe these different representations and propose a framework that integrates them. We show how different representations in the framework are related by describing some techniques and heuristics that supplement information in one representation with information from other representations. We present an ontology of change operations, which is the kernel of our framework.
Ontologies are increasing in popularity, and researchers and developers use them in more and more application areas. Ontologies are used as shared vocabularies, to improve information retrieval, or to help data integration. Neither the ontology development itself nor its product—the ontology—is a single-person enterprise. Large standardized ontologies are often developed by several researchers in parallel (e.g. SUO [9], http://suo.ieee.org/); a number of ontologies grow in the context of peer-to-peer applications (e.g. Edutella [5]); other ontologies are constructed dynamically [2]. Successful applications of ontologies in such uncontrolled, de-centralized and distributed environments require substantial support for change management in ontologies and ontology evolution [7]. Given an ontology O and its two versions, Vold and Vnew, complete support for change management in an ontology environment includes support for the following tasks. (Note that Vnew is not necessarily a unique replacement for Vold: there might be several new versions based on the old version, and all of them could exist in parallel; the labels are just used to refer to two versions of an ontology where Vnew has evolved from Vold.) Data Transformation: when an ontology version Vold is changed to Vnew, data described by Vold might need to be translated to bring it in line with Vnew. For example, if we merge two concepts A and B from Vold into C in Vnew, we must combine instances of A and B as well. Data Access: even if data is not being transformed, if there exists data conforming to Vold, we often want to access this data and interpret it correctly via Vnew. That is, we should be able to retrieve all data that was accessible via queries in terms of Vold with queries in terms of Vnew. Furthermore, instances of concepts in Vold should be instances of equivalent concepts in Vnew. This task is a very common one in the context of the Semantic Web, where ontologies describe pieces of data on the web. Ontology Update: when we adapt a remote ontology to specific local needs, and the remote ontology changes, we must propagate the changes in the remote ontology to the adapted local ontology [8]. Consistent Reasoning: ontologies, being formal descriptions, are often used as logical theories. When ontology changes occur, we must analyze the changes to determine whether specific axioms that were valid in Vold are still valid in Vnew. For example, it might be useful to know that a change does not affect the subsumption relationship between two concepts: if A ⊑ B is valid in Vold it is also valid in Vnew. While a change in the logical theory always affects reasoning in general, answers to specific queries may remain unchanged. Verification and Approval: sometimes developers need to verify and approve ontology changes. This situation often happens when several people are developing a centralized ontology, or when developers want to apply changes selectively. There must be a user interface that simplifies such verification and allows developers to accept or reject specific changes, enabling execution of some changes and rolling back of others. This list of tasks is not exhaustive. The tools that exist today support these tasks in isolation. For example, the KAON framework [10] supports evolution strategies, allowing developers to specify strategies for updating data when changes in an ontology occur. The SHOE versioning system specifies which versions of the ontology the current version is backward compatible with [3]. Many ontology-editing environments (e.g., Protege [1]) provide logs of changes between versions. While these tools support some of the ontology-evolution tasks, there is no interaction or sharing of information among the tools. However, many of these tasks require the same elements in the representation of change.
---
paper_title: COnto-Diff: generation of complex evolution mappings for life science ontologies
paper_content:
Life science ontologies evolve frequently to meet new requirements or to better reflect the current domain knowledge. The development and adaptation of large and complex ontologies is typically performed collaboratively by several curators. To effectively manage the evolution of ontologies it is essential to identify the difference (Diff) between ontology versions. Such a Diff supports the synchronization of changes in collaborative curation, the adaptation of dependent data such as annotations, and ontology version management. We propose a novel approach COnto-Diff to determine an expressive and invertible diff evolution mapping between given versions of an ontology. Our approach first matches the ontology versions and determines an initial evolution mapping consisting of basic change operations (insert/update/delete). To semantically enrich the evolution mapping we adopt a rule-based approach to transform the basic change operations into a smaller set of more complex change operations, such as merge, split, or changes of entire subgraphs. The proposed algorithm is customizable in different ways to meet the requirements of diverse ontologies and application scenarios. We evaluate the proposed approach for large life science ontologies including the Gene Ontology and the NCI Thesaurus and compare it with PromptDiff. We further show how the Diff results can be used for version management and annotation migration in collaborative curation.
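One rule pattern of this kind can be illustrated with the toy sketch below (not COnto-Diff's actual rule set): a deleted concept whose children were all re-attached to the same surviving concept is lifted into a single merge operation. The concept names and the basic-operation encoding are invented for illustration.

```python
def lift_merges(basic_ops, old_parent, new_parent):
    """basic_ops : iterable of ("delConcept", c) / ("addConcept", c) basic changes
    old_parent / new_parent : dicts mapping child concept -> parent concept
    A deleted concept whose children all moved to one surviving concept is
    reported as a single merge operation."""
    deleted = {c for (op, c) in basic_ops if op == "delConcept"}
    complex_ops, covered = [], set()
    for c in deleted:
        orphans = [child for child, p in old_parent.items() if p == c]
        targets = {new_parent.get(child) for child in orphans}
        if orphans and len(targets) == 1:
            target = targets.pop()
            if target is not None and target not in deleted:
                complex_ops.append(("merge", c, target))
                covered.add(("delConcept", c))
    remaining = [op for op in basic_ops if op not in covered]
    return complex_ops, remaining

ops = [("delConcept", "heart_disease_old")]
old_p = {"myocarditis": "heart_disease_old", "endocarditis": "heart_disease_old"}
new_p = {"myocarditis": "cardiovascular_disease", "endocarditis": "cardiovascular_disease"}
print(lift_merges(ops, old_p, new_p))
# ([('merge', 'heart_disease_old', 'cardiovascular_disease')], [])
```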
---
paper_title: Understanding ontology evolution: A change detection approach
paper_content:
In this article, we propose a change detection approach in the context of an ontology evolution framework for OWL DL ontologies. The framework allows ontology engineers to request and apply changes to the ontology they manage. Furthermore, the framework assures that the ontology and its depending artifacts remain consistent after changes have been applied. An innovative aspect is that the framework includes a change detection mechanism that allows generating automatically a detailed overview of changes that have occurred based on a set of change definitions. In addition, different users (such as maintainers of depending artifacts) may have their own set of change definitions, which results into different overviews of the changes, each providing a different view on how the ontology has been changed. Using these change definitions, also different levels of abstraction are supported. Both features will enhance the understanding of the evolution of an ontology for different users.
---
paper_title: Semantic Diff as the Basis for Knowledge Base Versioning
paper_content:
13th International Workshop on Non-Monotonic Reasoning (NMR). 14-16 May 2010, Toronto, Canada
---
paper_title: Can you tell the difference between DL-Lite ontologies
paper_content:
We develop a formal framework for comparing different versions of DL-Lite ontologies. Four notions of difference and entailment between ontologies are introduced and their applications in ontology development and maintenance discussed. These notions are obtained by distinguishing between differences that can be observed among concept inclusions, answers to queries over ABoxes, and by taking into account additional context ontologies. We compare these notions, study their meta-properties, and determine the computational complexity of the corresponding reasoning tasks. Moreover, we show that checking difference and entailment can be automated by means of encoding into QBF satisfiability and using off-the-shelf QBF solvers. Finally, we explore the relationship between the notion of forgetting (or uniform interpolation) and our notions of difference between ontologies.
---
paper_title: On the Foundations of Computing Deltas between RDF models
paper_content:
The ability to compute the differences that exist between two RDF models is an important step to cope with the evolving nature of the Semantic Web (SW). In particular, RDF Deltas can be employed to reduce the amount of data that need to be exchanged and managed over the network and hence build advanced SW synchronization and versioning services. By considering Deltas as sets of change operations, in this paper we study various RDF comparison functions in conjunction with the semantics of the underlying change operations and formally analyze their possible combinations in terms of correctness, minimality, semantic identity and redundancy properties.
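The distinction between comparison functions can be made concrete with the sketch below, which contrasts an explicit delta over the stated triples with a "dense" delta computed over inferred closures. The closure here covers only rdfs:subClassOf transitivity, as a stand-in for full RDFS inference, and is not the paper's formal machinery.

```python
SUB = "rdfs:subClassOf"

def closure(graph):
    """Very small stand-in for RDFS inference: transitive closure of rdfs:subClassOf."""
    g = set(graph)
    while True:
        inferred = {(a, SUB, d)
                    for (a, p, b) in g if p == SUB
                    for (c, q, d) in g if q == SUB and b == c}
        if inferred <= g:
            return g
        g |= inferred

def explicit_delta(old, new):
    return new - old, old - new

def dense_delta(old, new):
    return closure(new) - closure(old), closure(old) - closure(new)

old = {("A", SUB, "B"), ("B", SUB, "C")}
new = old | {("A", SUB, "C")}          # the added triple was already entailed
print(explicit_delta(old, new))        # reports one added triple
print(dense_delta(old, new))           # (set(), set()) -- semantically nothing changed
```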
---
paper_title: SemVersion: A Versioning System for RDF and Ontologies
paper_content:
Knowledge domains and their semantic representations via ontologies are typically subject to change in practical applications. Additionally, engineering of ontologies often takes place in distributed settings where multiple independent users interact. Therefore, change management for ontologies becomes a crucial aspect for any kind of ontology management environment. This paper introduces a new RDF-centric versioning approach and an implementation called SemVersion. SemVersion provides structural and semantic versioning for RDF models and RDFbased ontology languages like RDFS. The requirements for our system are derived from a practical scenario in the librarian domain, i.e. the
---
paper_title: Efficient Management of Biomedical Ontology Versions
paper_content:
Ontologies have become very popular in life sciences and other domains. They mostly undergo continuous changes and new ontology versions are frequently released. However, current analysis studies do not consider the ontology changes reflected in different versions but typically limit themselves to a specific ontology version which may quickly become obsolete. To allow applications easy access to different ontology versions we propose a central and uniform management of the versions of different biomedical ontologies. The proposed database approach takes concept and structural changes of succeeding ontology versions into account thereby supporting different kinds of change analysis. Furthermore, it is very space-efficient by avoiding redundant storage of ontology components which remain unchanged in different versions. We evaluate the storage requirements and query performance of the proposed approach for the Gene Ontology.
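A minimal sketch of interval-based version storage, in which each element is stored once together with the range of versions in which it exists, so unchanged elements are not duplicated across versions. The element labels, version numbers and field names are illustrative, not the paper's actual schema.

```python
def store(rows, element, first_version, last_version=None):
    """Record that an ontology element exists from first_version up to last_version
    (None = still present in the latest version)."""
    rows.append({"element": element, "from": first_version, "to": last_version})

def snapshot(rows, version):
    """Reconstruct the set of elements visible in one specific ontology version."""
    return {r["element"] for r in rows
            if r["from"] <= version and (r["to"] is None or version <= r["to"])}

rows = []
store(rows, "concept:cell_division", first_version=1)                   # never removed
store(rows, "concept:obsolete_term", first_version=1, last_version=3)   # removed after v3
store(rows, ("is_a", "concept:mitosis", "concept:cell_division"), first_version=2)
print(snapshot(rows, 2))  # all three elements
print(snapshot(rows, 5))  # the obsolete concept is gone
```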
---
paper_title: Coping with Changing Ontologies in a Distributed Environment
paper_content:
We discuss the problems associated with versioning ontologies in distributed environments. This is an important issue because ontologies can be of great use in structuring and querying internet information, but many of the Internet's characteristics, such as distributed ownership, rapid evolution, and heterogeneity, make ontology management difficult. We present SHOE, a web-based knowledge representation language that supports multiple versions of ontologies. We then discuss the features of SHOE that address ontology versioning, the effects of ontology revision on SHOE web pages, and methods for implementing ontology integration using SHOE's extension and version mechanisms.
---
paper_title: Detecting Different Versions of Ontologies in Large Ontology Repositories
paper_content:
There exist a number of large repositories and search engines collecting ontologies from the web or directly from users. While mechanisms exist to help the authors of these ontologies manage their evolution locally, the links between different versions of the same ontology are often lost when the ontologies are collected by such systems. By inspecting a large collection of ontologies as part of the Watson search engine, we can see that this information is often encoded in the identifier of the ontologies, their URIs, using a variety of conventions and formats. We therefore devise an algorithm, the Ontology Version Detector, which implements a set of rules analyzing and comparing URIs of ontologies to discover versioning relations between ontologies. Through an experiment realized with 7000 ontologies, we show that such a simple and extensible approach actually provides large amounts of useful and relevant results. Indeed, the information derived from this algorithm helps us in understanding how version information is encoded in URIs and how ontologies evolve on the Web, ultimately supporting users in better exploiting the content of large ontology repositories.
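The URI-analysis idea can be sketched as below, assuming a single regular expression for version-looking tokens; the real Ontology Version Detector applies a larger set of rules and naming conventions, and the example URIs are made up.

```python
import re
from collections import defaultdict

# Matches a date-like (2006/03, 2006-03-27) or dotted-number (1.2, 3.4.1) token.
VERSION_TOKEN = re.compile(r"(\d{4}[/-]\d{2}(?:[/-]\d{2})?|\d+(?:\.\d+)+)")

def group_versions(uris):
    """Group URIs that only differ in a version-looking token."""
    groups = defaultdict(list)
    for uri in uris:
        m = VERSION_TOKEN.search(uri)
        key = VERSION_TOKEN.sub("{version}", uri, count=1) if m else uri
        groups[key].append((m.group(1) if m else None, uri))
    return {k: sorted(v) for k, v in groups.items() if len(v) > 1}

uris = ["http://example.org/ontologies/2006/03/wine.owl",
        "http://example.org/ontologies/2007/10/wine.owl",
        "http://example.org/other.owl"]
print(group_versions(uris))
# The two wine URIs are grouped as versions of one ontology; other.owl is filtered out.
```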
---
paper_title: A model theoretic semantics for ontology versioning
paper_content:
We show that the Semantic Web needs a formal semantics for the various kinds of links between ontologies and other documents. We provide a model theoretic semantics that takes into account ontology extension and ontology versioning. Since the Web is the product of a diverse community, as opposed to a single agent, this semantics accommodates different viewpoints by having different entailment relations for different ontology perspectives. We discuss how this theory can be practically applied to RDF and OWL and provide a theorem that shows how to compute perspective-based entailment using existing logical reasoners. We illustrate these concepts using examples and conclude with a discussion of future work.
---
paper_title: Ontology versioning on the Semantic Web
paper_content:
Ontologies are often seen as basic building blocks for the Semantic Web, as they provide a reusable piece of knowledge about a specific domain. However, those pieces of knowledge are not static, but evolve over time. Domain changes, adaptations to different tasks, or changes in the conceptualization require modifications of the ontology. The evolution of ontologies causes operability problems, which will hamper their effective reuse. A versioning mechanism might help to reduce those problems, as it will make the relations between different revisions of an ontology explicit. This paper will discuss the problem of ontology versioning. Inspired by the work done in database schema versioning and program interface versioning, it will also propose building blocks for the most important aspects of a versioning mechanism, i.e., ontology identification and change specification.
---
paper_title: Ontology versioning and change detection on the Web
paper_content:
To effectively use ontologies on the Web, it is essential that changes in ontologies are managed well. This paper analyzes the topic of ontology versioning in the context of the Web by looking at the characteristics of the version relation between ontologies and at the identification of online ontologies. Then, it describes the design of a web-based system that helps users to manage changes in ontologies. The system helps to keep different versions of web-based ontologies interoperable, by maintaining not only the transformations between ontologies, but also the conceptual relations between concepts in different versions. The system allows ontology engineers to compare versions of an ontology and to specify these conceptual relations. For the visualization of differences, it uses an adaptable rule-based mechanism that finds and classifies changes in RDF-based ontologies.
---
paper_title: Managing Change: An Ontology Version Control System
paper_content:
In this paper we present the basic requirements and initial design of a system which manages and facilitates changes to an OWL ontology in a multi-editor environment. This system uses a centralized client-server architecture in which the server maintains the current state and full history of all managed ontologies. Clients can access the current ontology version, all historical revisions, and differences between arbitrary revisions, as well as metadata associated with revisions. This system will be used by many other ontology-based services, such as incremental reasoning, collaborative ontology development, advanced ontology search, and ontology module extraction. Taken holistically, this network of services will provide a rich environment for the development and management of ontology-based information systems. Motivation and requirements: we need a system that manages access to a changing ontology. This requirement is experienced by a variety of applications with different stakeholders. An illustrative use case is presented below. A large distributed organization requires integration and alignment of many heterogeneous data sources and information artifacts. It facilitates such integration by employing one or more expressive OWL ontologies that exist in defined relations to data sources, information artifacts, and an enterprise conceptual model. These ontologies, as critical infrastructure components, have stakeholders throughout the organization and outside its boundaries. Further, they are developed and maintained concurrently by many parties. Individual stakeholders participate in the ontology engineering process in different ways. Some are primarily consumers, but may make detailed edits to areas of the ontologies critical to them. Others are charged with maintaining high-level ontology coherence and use an integrated ontology development environment, such as Protege-OWL, to collaborate with similar editors in real time, leveraging tools to maintain a dynamic view of the ontology. All stakeholders rely on the ontologies being available and consistent across the organization. This use case illustrates a set of requirements. Client performance: the network is a potential bottleneck of any distributed or client-server system, but the critical work of ontology development is
---
paper_title: Dynamic Ontologies on the Web
paper_content:
We discuss the problems associated with managing ontologies in distributed environments such as the Web. The Web poses unique problems for the use of ontologies because of the rapid evolution and autonomy of web sites. We present SHOE, a web-based knowledge representation language that supports multiple versions of ontologies. We describe SHOE in the terms of a logic that separates data from ontologies and allows ontologies to provide different perspectives on the data. We then discuss the features of SHOE that address ontology versioning, the effects of ontology revision on SHOE web pages, and methods for implementing ontology integration using SHOE’s extension and version mechanisms.
---
paper_title: Ontology-based Web agents
paper_content:
This paper describes SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as “I am a graduate student” or “This person is my graduate advisor”. These annotations are expressed in terms of ontological knowledge which can be generated by using or extending standard ontologies available on the Web. This makes it possible to ask Web agent queries such as “Find me all graduate students in Maryland who are working on a project funded by DoD initiative 123-4567”, instead of simplistic keyword searches enabled by current search engines. We have also developed a web-crawling agent, Exposé, which interns SHOE knowledge from web documents, making these kinds of queries a reality.
---
paper_title: The Semantic Web Revisited
paper_content:
The article included many scenarios in which intelligent agents and bots undertook tasks on behalf of their human or corporate owners. Of course, shopbots and auction bots abound on the Web, but these are essentially handcrafted for particular tasks: they have little ability to interact with heterogeneous data and information types. Because we haven't yet delivered large-scale, agent-based mediation, some commentators argue that the semantic Web has failed to deliver. We argue that agents can only flourish when standards are well established and that the Web standards for expressing shared meaning have progressed steadily over the past five years
---
| Title: Ontology evolution: a process-centric survey
Section 1: Introduction
Description 1: Introduce the aim, motivation, and significance of ontology evolution, providing an overview of the paper's objectives and structure.
Section 2: Ontology evolution: definition and process model
Description 2: Define ontology evolution and discuss existing process models, deriving a unified ontology evolution cycle.
Section 3: Detecting the need for evolution
Description 3: Explain methods for detecting the need for ontology evolution by analyzing data and usage patterns.
Section 4: Suggesting ontology changes
Description 4: Detail approaches for suggesting changes to an ontology, focusing on unstructured knowledge from documents and structured knowledge from databases and online sources.
Section 5: Validating ontology changes
Description 5: Discuss methods to validate suggested changes, ensuring consistency and correctness through domain-based and formal properties-based validation.
Section 6: Assessing the impact of evolution
Description 6: Analyze the impact of ontology evolution on dependent applications, other ontologies, and formal criteria such as cost and benefits.
Section 7: Managing ontology changes
Description 7: Explore techniques for recording changes and managing different ontology versions, ensuring traceability and consistency across the ontology lifecycle.
Section 8: Conclusion
Description 8: Summarize the findings, discuss the contributions of the paper, and suggest directions for future research in ontology evolution. |
Routing in Vehicular Ad-hoc Networks: A Survey on Single- and Cross-Layer Design Techniques, and Perspectives | 12 | ---
paper_title: Routing mechanisms and cross-layer design for Vehicular Ad Hoc Networks: A survey
paper_content:
Vehicular Ad-Hoc Networks (VANETs) will pave the way to advance automotive safety and occupant convenience. The potential VANET applications present diverse requirements. VANETs show unique characteristics and present a set of challenges. The proposed VANET applications demand reliable and proficient message dissemination techniques. Routing techniques proposed for Mobile Ad-Hoc Networks (MANETs) do not cater for the characteristics of VANETs. The need for novel routing techniques, exclusively designed for VANETs, has been recognised. This paper analyses different routing techniques proposed specifically for VANETs. The unique characteristics of VANETs pose challenges to the traditional layered architecture, where different layers make independent decisions. Mobility, the absence of a global view of the network, random changes in topology, poor link quality and varied channel conditions have encouraged the paradigm shift to a cross-layer approach. In order to optimise the performance of VANETs, architectures based on a cross-layer approach have been proposed by researchers. The paper also surveys such cross-layer solutions for VANETs and concludes with an analytical summary.
---
paper_title: Vehicular ad hoc networks (VANETs): Current state, challenges, potentials and way forward
paper_content:
Recent advances in wireless communication technologies and the automobile industry have triggered significant research interest in the field of VANETs over the past few years. A VANET consists of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications supported by wireless access technologies such as IEEE 802.11p. This innovation in wireless communication has been envisaged to improve road safety and motor traffic efficiency in the near future through the development of Intelligent Transport Systems (ITS). Hence, government, automobile industries and academia are heavily partnering through several ongoing research projects to establish standards for VANETs. The typical set of VANET application areas, such as vehicle collision warning and traffic information dissemination, has made VANETs an interesting field of wireless communication. This paper provides an overview of the current research state, challenges and potentials of VANETs, as well as the way forward to achieving the long-awaited ITS.
---
paper_title: Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends
paper_content:
Vehicular ad hoc networks (VANETs) have been quite a hot research area in the last few years. Due to their unique characteristics such as high dynamic topology and predictable mobility, VANETs attract so much attention of both academia and industry. In this paper, we provide an overview of the main aspects of VANETs from a research perspective. This paper starts with the basic architecture of networks, then discusses three popular research issues and general research methods, and ends up with the analysis on challenges and future trends of VANETs.
---
paper_title: A survey of cross-layer design for VANETs
paper_content:
Recently, vehicular communication systems have attracted much attention, fueled largely by the growing interest in Intelligent Transportation Systems (ITS). These systems are aimed at addressing critical issues like passenger safety and traffic congestion, by integrating information and communication technologies into transportation infrastructure and vehicles. They are built on top of self-organizing networks, known as Vehicular Ad hoc Networks (VANETs), composed of mobile vehicles connected by wireless links. While solutions based on traditional layered communication system architectures such as the OSI model are readily applicable, they often fail to address the fundamental problems in ad hoc networks, such as dynamic changes in the network topology. Furthermore, many ITS applications impose stringent QoS requirements, which are not met by existing ad hoc networking solutions. The paradigm of cross-layer design has been introduced as an alternative to pure layered design to develop communication protocols. Cross-layer design allows information to be exchanged and shared across layer boundaries in order to enable efficient and robust protocols. There have been several research efforts that validated the importance of cross-layer design in vehicular networks. In this article, a survey of recent work on cross-layer communication solutions for VANETs is presented. Major approaches to cross-layer protocol design are introduced, followed by an overview of corresponding cross-layer protocols. Finally, open research problems in developing efficient cross-layer protocols for next-generation transportation systems are discussed.
---
paper_title: Properties of the MAC layer in safety vehicular Ad Hoc networks
paper_content:
With intervehicle communications becoming a more and more popular research topic recently, the medium access control layer of the vehicular network has also received a considerable amount of attention. However, this increased interest has not always translated into a careful analysis of the properties exhibited by the MAC protocol when used by vehicular safety applications. This article tries to fill this gap by providing a comprehensive discussion on a number of important characteristics of the link layer in vehicular communications.
---
paper_title: OSI Reference Model - The ISO Model of Architecture for Open Systems Interconnection
paper_content:
Considering the urgency of the need for standards which would allow constitution of heterogeneous computer networks, ISO created a new subcommittee for Open Systems Interconnection (ISO/TC97/SC16) in 1977. The first priority of subcommittee 16 was to develop an architecture for open systems interconnection which could serve as a framework for the definition of standard protocols. As a result of 18 months of studies and discussions, SC16 adopted a layered architecture comprising seven layers (Physical, Data Link, Network, Transport, Session, Presentation, and Application). In July 1979 the specifications of this architecture, established by SC16, were passed under the name of OSI Reference Model to Technical Committee 97 Data Processing along with recommendations to start officially, on this basis, a set of protocols standardization projects to cover the most urgent needs. These recommendations were adopted by TC97 at the end of 1979 as the basis for the following development of standards for Open Systems Interconnection within ISO. The OSI Reference Model was also recognized by CCITT Rapporteur's Group on Layered Model for Public Data Network Services. This paper presents the model of architecture for Open Systems Interconnection developed by SC16. Some indications are also given on the initial set of protocols which will likely be developed in this OSI Reference Model.
---
paper_title: Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends
paper_content:
Vehicular ad hoc networks (VANETs) have been a very active research area in the last few years. Due to their unique characteristics, such as highly dynamic topology and predictable mobility, VANETs attract considerable attention from both academia and industry. In this paper, we provide an overview of the main aspects of VANETs from a research perspective. The paper starts with the basic network architecture, then discusses three popular research issues and general research methods, and ends with an analysis of the challenges and future trends of VANETs.
---
paper_title: A Survey of Cross-Layer Designs in Wireless Networks
paper_content:
The strict boundaries between the five layers of the TCP/IP network model provide the information encapsulation that enables the standardization of network communications and makes the implementation of networks convenient in terms of abstract layers. However, this encapsulation results in some side effects, including compromised QoS, latency and extra overhead. Therefore, to mitigate the side effects of encapsulation between the abstract layers of the TCP/IP model, a number of cross-layer designs have been proposed. Cross-layer designs allow information sharing among all five layers in order to improve wireless network functionality, including security, QoS, and mobility. In this article, we classify cross-layer designs in two ways. On the one hand, according to how information is shared among the five layers, cross-layer designs can be classified into two categories: the non-manager method and the manager method. On the other hand, according to the organization of the network, cross-layer designs can be classified into two categories: the centralized method and the distributed method. Furthermore, we summarize the challenges of cross-layer design, including coexistence, signaling, the lack of a universal cross-layer design, and the destruction of the layered architecture.
---
paper_title: A survey of cross-layer design for VANETs
paper_content:
Recently, vehicular communication systems have attracted much attention, fueled largely by the growing interest in Intelligent Transportation Systems (ITS). These systems aim to address critical issues such as passenger safety and traffic congestion by integrating information and communication technologies into transportation infrastructure and vehicles. They are built on top of self-organizing networks, known as Vehicular Ad hoc Networks (VANETs), composed of mobile vehicles connected by wireless links. While solutions based on traditional layered communication architectures such as the OSI model are readily applicable, they often fail to address fundamental problems in ad hoc networks, such as dynamic changes in the network topology. Furthermore, many ITS applications impose stringent QoS requirements that are not met by existing ad hoc networking solutions. The paradigm of cross-layer design has been introduced as an alternative to purely layered design for developing communication protocols. Cross-layer design allows information to be exchanged and shared across layer boundaries in order to enable efficient and robust protocols. Several research efforts have validated the importance of cross-layer design in vehicular networks. In this article, a survey of recent work on cross-layer communication solutions for VANETs is presented. Major approaches to cross-layer protocol design are introduced, followed by an overview of the corresponding cross-layer protocols. Finally, open research problems in developing efficient cross-layer protocols for next-generation transportation systems are discussed.
---
paper_title: Cross-layer design: a survey and the road ahead
paper_content:
Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.
---
paper_title: Vertex-Based Multihop Vehicle-to-Infrastructure Routing for Vehicular Ad Hoc Networks
paper_content:
Multihop data delivery in vehicular ad hoc networks (VANETs) suffers from the fact that vehicles are highly mobile and inter-vehicle links are frequently disconnected. In this paper, we propose a multihop vehicle-to-infrastructure routing protocol named Vertex-Based Predictive Greedy Routing (VPGR), which predicts a sequence of valid vertices (or junctions) from a source vehicle to fixed infrastructure (or a roadside unit) in the area of interest and then forwards data to the fixed infrastructure through the sequence of vertices in urban environments. The well-known predictive directional greedy routing mechanism is used for data forwarding in VPGR. The proposed VPGR leverages the geographic position, velocity, direction and acceleration of vehicles for both the calculation of a sequence of valid vertices and the predictive directional greedy routing. Simulation results show significant performance improvement compared to conventional routing protocols in terms of packet delivery ratio, end-to-end delay and routing overhead.
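To make the forwarding idea above concrete, the following is a minimal Python sketch of a predictive directional greedy decision: a candidate next hop's position is extrapolated from its velocity and acceleration, and the neighbour that best combines proximity to the next target vertex with a heading toward it is chosen. The field names, the prediction horizon and the weighting are illustrative assumptions, not VPGR's actual implementation.

import math

def predict_position(pos, vel, acc, dt):
    # Kinematic extrapolation of a vehicle's position dt seconds ahead.
    return (pos[0] + vel[0] * dt + 0.5 * acc[0] * dt ** 2,
            pos[1] + vel[1] * dt + 0.5 * acc[1] * dt ** 2)

def directional_score(neighbor, target_vertex, dt=1.0, heading_weight=50.0):
    # Reward candidates whose predicted position is near the target vertex
    # and whose heading points toward it (weighting is illustrative only).
    px, py = predict_position(neighbor["pos"], neighbor["vel"], neighbor["acc"], dt)
    tx, ty = target_vertex[0] - px, target_vertex[1] - py
    dist = math.hypot(tx, ty)
    speed = math.hypot(*neighbor["vel"]) or 1e-9
    alignment = (neighbor["vel"][0] * tx + neighbor["vel"][1] * ty) / (speed * (dist or 1e-9))
    return -dist + heading_weight * alignment

def select_next_hop(neighbors, target_vertex):
    # Pick the neighbour with the highest score, or None if there are none.
    return max(neighbors, key=lambda n: directional_score(n, target_vertex), default=None)

if __name__ == "__main__":
    neighbors = [{"pos": (0, 0), "vel": (10, 0), "acc": (0, 0)},
                 {"pos": (20, 5), "vel": (-5, 0), "acc": (0, 0)}]
    print(select_next_hop(neighbors, target_vertex=(100, 0)))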
---
paper_title: Position Based Routing Protocols in VANET: A Survey
paper_content:
In this review article we present a survey of position-based routing protocols. We systematically classify the protocols into two categories: infrastructure-based and infrastructure-less routing protocols. We analyze the operation, architecture and application areas of vehicular ad hoc networks. A comparative study is also performed for each protocol, using different quality parameters, against similar routing protocols of the same category.
---
paper_title: A Review of Information Dissemination Protocols for Vehicular Ad Hoc Networks
paper_content:
With the fast development in ad hoc wireless communications and vehicular technology, it is foreseeable that, in the near future, traffic information will be collected and disseminated in real-time by mobile sensors instead of fixed sensors used in the current infrastructure-based traffic information systems. A distributed network of vehicles such as a vehicular ad hoc network (VANET) can easily turn into an infrastructure-less self-organizing traffic information system, where any vehicle can participate in collecting and reporting useful traffic information such as section travel time, flow rate, and density. Disseminating traffic information relies on broadcasting protocols. Recently, there have been a significant number of broadcasting protocols for VANETs reported in the literature. In this paper, we classify and provide an in-depth review of these protocols.
---
paper_title: Vehicular ad hoc networks (VANETs): Current state, challenges, potentials and way forward
paper_content:
Recent advances in wireless communication technologies and the automobile industry have triggered significant research interest in the field of VANETs over the past few years. A VANET consists of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications supported by wireless access technologies such as IEEE 802.11p. This innovation in wireless communication is envisaged to improve road safety and traffic efficiency in the near future through the development of Intelligent Transport Systems (ITS). Hence, governments, automobile industries and academia are partnering in several ongoing research projects to establish standards for VANETs. Typical VANET application areas, such as vehicle collision warning and traffic information dissemination, have made VANETs a field of great interest in wireless communication. This paper provides an overview of the current research state, challenges and potential of VANETs, as well as the way forward to achieving the long-awaited ITS.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions
paper_content:
Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed.
---
paper_title: Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends
paper_content:
Vehicular ad hoc networks (VANETs) have been a very active research area in the last few years. Due to their unique characteristics, such as highly dynamic topology and predictable mobility, VANETs attract considerable attention from both academia and industry. In this paper, we provide an overview of the main aspects of VANETs from a research perspective. The paper starts with the basic network architecture, then discusses three popular research issues and general research methods, and ends with an analysis of the challenges and future trends of VANETs.
---
paper_title: A survey of cross-layer design for VANETs
paper_content:
Recently, vehicular communication systems have attracted much attention, fueled largely by the growing interest in Intelligent Transportation Systems (ITS). These systems aim to address critical issues such as passenger safety and traffic congestion by integrating information and communication technologies into transportation infrastructure and vehicles. They are built on top of self-organizing networks, known as Vehicular Ad hoc Networks (VANETs), composed of mobile vehicles connected by wireless links. While solutions based on traditional layered communication architectures such as the OSI model are readily applicable, they often fail to address fundamental problems in ad hoc networks, such as dynamic changes in the network topology. Furthermore, many ITS applications impose stringent QoS requirements that are not met by existing ad hoc networking solutions. The paradigm of cross-layer design has been introduced as an alternative to purely layered design for developing communication protocols. Cross-layer design allows information to be exchanged and shared across layer boundaries in order to enable efficient and robust protocols. Several research efforts have validated the importance of cross-layer design in vehicular networks. In this article, a survey of recent work on cross-layer communication solutions for VANETs is presented. Major approaches to cross-layer protocol design are introduced, followed by an overview of the corresponding cross-layer protocols. Finally, open research problems in developing efficient cross-layer protocols for next-generation transportation systems are discussed.
---
paper_title: A survey on position-based routing for vehicular ad hoc networks
paper_content:
Position-based routing is considered to be a very promising routing strategy for communication within vehicular ad hoc networks (VANETs), due to the fact that vehicular nodes can obtain position information from onboard global positioning system receivers and acquire global road layout information from an onboard digital map. Position-based routing protocols, which are based mostly on greedy forwarding, are well-suited to the highly dynamic and rapid-changing network topology of VANETs. In this paper, we outline the background and the latest development in VANETs and survey the state-of-the-art routing protocols previously used in VANETs. We present the pros and cons for each routing protocol, and make a detailed comparison. We also discuss open issues, challenges and future research directions. It is observed that a hybrid routing protocol is the best choice for VANETs in both urban and highway environments.
---
paper_title: Data communication in VANETs: Protocols, applications and challenges
paper_content:
VANETs have emerged as an exciting research and application area. Vehicles are increasingly being equipped with embedded sensors, processing and wireless communication capabilities. This has opened a myriad of possibilities for powerful and potentially life-changing applications on safety, efficiency, comfort, and public collaboration and participation while on the road. Although considered a special case of a Mobile Ad Hoc Network, the high but constrained mobility of vehicles brings new challenges to data communication and application design in VANETs, due to their highly dynamic and intermittently connected topology and the differing QoS requirements of applications. In this work, we survey VANETs focusing on their communication and application challenges. In particular, we discuss the protocol stack of this type of network and provide a qualitative comparison between the most common protocols in the literature. We then present a detailed discussion of different categories of VANET applications. Finally, we discuss open research problems to encourage the design of new VANET solutions.
---
paper_title: Progress and challenges in intelligent vehicle area networks
paper_content:
Vehicle area networks form the backbone of future intelligent transportation systems.
---
paper_title: Vehicular ad hoc networks (VANETs): Current state, challenges, potentials and way forward
paper_content:
Recent advances in wireless communication technologies and the automobile industry have triggered significant research interest in the field of VANETs over the past few years. A VANET consists of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications supported by wireless access technologies such as IEEE 802.11p. This innovation in wireless communication is envisaged to improve road safety and traffic efficiency in the near future through the development of Intelligent Transport Systems (ITS). Hence, governments, automobile industries and academia are partnering in several ongoing research projects to establish standards for VANETs. Typical VANET application areas, such as vehicle collision warning and traffic information dissemination, have made VANETs a field of great interest in wireless communication. This paper provides an overview of the current research state, challenges and potential of VANETs, as well as the way forward to achieving the long-awaited ITS.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions
paper_content:
Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed.
---
paper_title: 1 Communication Patterns in VANETs
paper_content:
Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems.
---
paper_title: Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions
paper_content:
Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed.
---
paper_title: Data communication in VANETs: Protocols, applications and challenges
paper_content:
VANETs have emerged as an exciting research and application area. Vehicles are increasingly being equipped with embedded sensors, processing and wireless communication capabilities. This has opened a myriad of possibilities for powerful and potentially life-changing applications on safety, efficiency, comfort, and public collaboration and participation while on the road. Although considered a special case of a Mobile Ad Hoc Network, the high but constrained mobility of vehicles brings new challenges to data communication and application design in VANETs, due to their highly dynamic and intermittently connected topology and the differing QoS requirements of applications. In this work, we survey VANETs focusing on their communication and application challenges. In particular, we discuss the protocol stack of this type of network and provide a qualitative comparison between the most common protocols in the literature. We then present a detailed discussion of different categories of VANET applications. Finally, we discuss open research problems to encourage the design of new VANET solutions.
---
paper_title: Vehicular ad hoc networks (VANETs): Current state, challenges, potentials and way forward
paper_content:
Recent advances in wireless communication technologies and the automobile industry have triggered significant research interest in the field of VANETs over the past few years. A VANET consists of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications supported by wireless access technologies such as IEEE 802.11p. This innovation in wireless communication is envisaged to improve road safety and traffic efficiency in the near future through the development of Intelligent Transport Systems (ITS). Hence, governments, automobile industries and academia are partnering in several ongoing research projects to establish standards for VANETs. Typical VANET application areas, such as vehicle collision warning and traffic information dissemination, have made VANETs a field of great interest in wireless communication. This paper provides an overview of the current research state, challenges and potential of VANETs, as well as the way forward to achieving the long-awaited ITS.
---
paper_title: Progress and challenges in intelligent vehicle area networks
paper_content:
Vehicle area networks form the backbone of future intelligent transportation systems.
---
paper_title: Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends
paper_content:
Vehicular ad hoc networks (VANETs) have been a very active research area in the last few years. Due to their unique characteristics, such as highly dynamic topology and predictable mobility, VANETs attract considerable attention from both academia and industry. In this paper, we provide an overview of the main aspects of VANETs from a research perspective. The paper starts with the basic network architecture, then discusses three popular research issues and general research methods, and ends with an analysis of the challenges and future trends of VANETs.
---
paper_title: Vehicle-to-Roadside Multihop Data Delivery in 802.11p/WAVE Vehicular Ad Hoc Networks
paper_content:
A wide variety of applications aimed to improve road safety and traffic efficiency or to provide comfort/entertainment to passengers in vehicles are intended to be delivered in VANETs (Vehicular Ad hoc Networks) in the near future. Most of these applications require vehicles to access a remote network. However, due to the high costs needed for deploying and maintaining an ubiquitous roadside infrastructure, just a few stationary access points are expected to be installed, which provide partial network coverage; hence multihop communications must be enforced in order to provide remote connectivity to vehicles on the road. The upcoming IEEE 802.11p/WAVE (Wireless Access for Vehicular Environment) standard relies on a multi-channel architecture to deliver both safety and non-safety applications to vehicles, but it does not explicitly support multihop data delivery. In this paper some main design issues are investigated which deal with the provision of multihop communications in 802.11p/WAVE networks. A set of forwarding schemes compliant to the draft standard is specified and analyzed, which aim to improve data delivery and delay performances, by reducing the overhead incurred for relaying packets.
---
paper_title: IEEE 802.11p: Towards an International Standard for Wireless Access in Vehicular Environments
paper_content:
Vehicular environments impose a set of new requirements on today's wireless communication systems. Vehicular safety communications applications cannot tolerate long connection establishment delays before being enabled to communicate with other vehicles encountered on the road. Similarly, non-safety applications also demand efficient connection setup with roadside stations providing services (e.g. digital map update) because of the limited time it takes for a car to drive through the coverage area. Additionally, the rapidly moving vehicles and complex roadway environment present challenges at the PHY level. The IEEE 802.11 standard body is currently working on a new amendment, IEEE 802.11p, to address these concerns. This document is named wireless access in vehicular environment, also known as WAVE. As of writing, the draft document for IEEE 802.11p is making progress and moving closer towards acceptance by the general IEEE 802.11 working group. It is projected to pass letter ballot in the first half of 2008. This paper provides an overview of the latest draft proposed for IEEE 802.11p. It is intended to provide an insight into the reasoning and approaches behind the document.
---
paper_title: Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions
paper_content:
Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed.
---
paper_title: 1 Communication Patterns in VANETs
paper_content:
Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: VANET routing protocols: Issues and challenges
paper_content:
In recent years, rapid growth in the number of vehicles on the road has increased demand for communication on the move. A new kind of ad hoc network, known as the VANET (Vehicular ad hoc network), is emerging alongside these technological advances. It is a collection of vehicular nodes that act as mobile hosts and establish a transient network without the assistance of any centralized administration or established infrastructure; it is therefore called an autonomous and self-configured network. In VANETs, two kinds of communication support applications such as emergency vehicle warning and safety services: communication between vehicles, known as vehicle-to-vehicle, and communication between vehicles and roadside units, known as vehicle-to-roadside. The performance of such communication depends on the routing protocols employed. We survey a number of recent research results in the routing area. In the following sections we present various existing routing protocols with their merits and demerits.
---
paper_title: 1 Communication Patterns in VANETs
paper_content:
Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: 1 Communication Patterns in VANETs
paper_content:
Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems.
---
paper_title: On Alleviating Beacon Overhead in Routing Protocols for Urban VANETs
paper_content:
Vehicular ad hoc networks (VANETs) have been attracting increasing research interests for the past decade. To address the routing problem, many protocols have been proposed in the past several years. Routing protocols for VANETs, mostly based on the ideas of “Geographical Routing” (or geo-routing for short), typically have nodes periodically broadcast one-hop beacon messages to reveal their positions to neighbors. Nevertheless, packet loss and thus deterioration of routing performance in these protocols are anticipated in urban areas due to high density of vehicles in the network. In this paper, we propose two new VANET routing protocols, namely, Routing Protocol with Beacon Control (RPBC) and Routing Protocol with BeaconLess (RPBL), to alleviate packet losses. In RPBC, each vehicle determines whether to transmit a beacon message based on a new beacon control scheme proposed in this paper, which by minimizing redundant beacon messages reduces transmission overhead significantly. On the other hand, RPBL is a beaconless protocol where a node broadcasts a packet to its neighboring nodes and transmits packet via multiple paths to achieve high delivery ratio. Moreover, as packets in geo-routing protocols include the location of the sender, it can be used for routing without heavily relying on beacons. Accordingly, we propose the idea of virtual beacons and use it to further improve our proposed protocols. We conduct comprehensive experiments by simulation to validate our ideas and evaluate the proposed protocols. The simulation results show that our proposals can achieve high delivery ratios, short delays, and small overhead.
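As an illustration of the beacon-control idea described above, the following Python sketch suppresses a periodic beacon whenever neighbours could still extrapolate the sender's position from its last beacon within a tolerance. The suppression rule, the threshold value and the class interface are assumptions for illustration only, not the RPBC scheme itself.

import math

class BeaconController:
    """Suppress a beacon when neighbours can still predict our position
    accurately from the last beacon they received (illustrative rule only)."""

    def __init__(self, error_threshold_m=5.0):
        self.error_threshold_m = error_threshold_m
        self.last_beacon = None  # (position, velocity, timestamp)

    def should_beacon(self, pos, vel, now):
        if self.last_beacon is None:
            return True
        last_pos, last_vel, t0 = self.last_beacon
        dt = now - t0
        # Position neighbours would extrapolate from our last beacon.
        predicted = (last_pos[0] + last_vel[0] * dt, last_pos[1] + last_vel[1] * dt)
        error = math.hypot(pos[0] - predicted[0], pos[1] - predicted[1])
        return error > self.error_threshold_m

    def record_beacon(self, pos, vel, now):
        self.last_beacon = (pos, vel, now)

if __name__ == "__main__":
    bc = BeaconController()
    if bc.should_beacon((0, 0), (10, 0), now=0.0):
        bc.record_beacon((0, 0), (10, 0), now=0.0)
    # One second later the vehicle is where neighbours expect: beacon suppressed.
    print(bc.should_beacon((10, 0), (10, 0), now=1.0))  # False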
---
paper_title: A distributed beaconless routing protocol for real-time video dissemination in multimedia VANETs
paper_content:
We design a framework for video (and in general high data rate applications) transmission over a VANET. Our framework includes both application and routing layer design. We analyze MAC layer behavior and improve its utilization by solving the Spurious Forwarding problem. We tested our protocols on a real scenario, and considered both QoS and QoE, including MOS experiments in a real car. We grant a high data rate in ad-hoc mode, also for long distance, through our backbone beaconless approach. Vehicular Ad-Hoc Networks (VANETs) will play an important role in Smart Cities and will support the development of not only safety applications, but also car smart video surveillance services. Recent improvements in multimedia over VANETs allow drivers, passengers, and rescue teams to capture, share, and access on-road multimedia services. Vehicles can cooperate with each other to transmit live flows of traffic accidents or disasters and provide drivers, passengers, and rescue teams rich visual information about a monitored area. Since humans will watch the videos, their distribution must be done by considering the provided Quality of Experience (QoE) even in multi-hop, multi-path, and dynamic environments. This article introduces an application framework to handle this kind of services and a routing protocol, the DBD (Distributed Beaconless Dissemination), that enhances the dissemination of live video flows on multimedia highway VANETs. DBD uses a backbone-based approach to create and maintain persistent and high quality routes during the video delivery in opportunistic Vehicle to Vehicle (V2V) scenarios. It also improves the performance of the IEEE 802.11p MAC layer, by solving the Spurious Forwarding (SF) problem, while increasing the packet delivery ratio and reducing the forwarding delay. Performance evaluation results show the benefits of DBD compared to existing works in forwarding videos over VANETs, where main objective and subjective QoE results are measured.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks
paper_content:
Large-volume content dissemination is demanded by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., live road surveillance and video-based overtaking assistance services. Given the highly dynamic vehicular network topology, beacon-less routing protocols have been proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large-volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large-volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is also shown to match the delay estimates of Monte Carlo simulations well.
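A common way to obtain a link-lifetime estimate of the kind LBRP relies on is to project how long two vehicles remain within a fixed radio range given their current positions and velocities. The Python sketch below shows that calculation; it is a simplified stand-in under a straight-line motion assumption, not the paper's semi-Markov model.

import math

def link_lifetime(pos_a, vel_a, pos_b, vel_b, radio_range):
    """Estimate how long two vehicles stay within radio_range of each other,
    assuming both keep their current velocity (straight-line motion)."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dvx, dvy = vel_b[0] - vel_a[0], vel_b[1] - vel_a[1]
    a = dvx ** 2 + dvy ** 2
    b = 2 * (dx * dvx + dy * dvy)
    c = dx ** 2 + dy ** 2 - radio_range ** 2
    if a == 0:                      # no relative motion
        return math.inf if c <= 0 else 0.0
    disc = b ** 2 - 4 * a * c
    if disc < 0:
        return 0.0                  # never (or no longer) within range
    t = (-b + math.sqrt(disc)) / (2 * a)
    return max(t, 0.0)

if __name__ == "__main__":
    # Two vehicles 100 m apart, closing at 5 m/s, with a 250 m radio range.
    print(round(link_lifetime((0, 0), (25, 0), (100, 0), (20, 0), 250.0), 1))  # 70.0 s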
---
paper_title: 1 Communication Patterns in VANETs
paper_content:
Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems.
---
paper_title: Application of Cognitive Techniques to Adaptive Routing for VANETs in City Environments
paper_content:
The evolution of smart vehicles has widened the application opportunities for vehicular ad hoc networks. In this context, the routing issue is still one of the main challenges regarding to the performance of the network. Although there are multiple ad hoc routing proposals, the traditional general-purpose approaches do not fit the distinctive properties of vehicular network environments. New routing strategies must complement the existing protocols to improve their performance in vehicular scenarios. This paper introduces a novel intelligent routing technique that makes decisions in order to adaptively adjust its operation and obtain a global benefit. The nodes sense the network locally and collect information to feed the cognitive module which will select the best routing strategy, without the need of additional protocol message dissemination or convergence mechanism.
---
paper_title: A survey on position-based routing for vehicular ad hoc networks
paper_content:
Position-based routing is considered to be a very promising routing strategy for communication within vehicular ad hoc networks (VANETs), due to the fact that vehicular nodes can obtain position information from onboard global positioning system receivers and acquire global road layout information from an onboard digital map. Position-based routing protocols, which are based mostly on greedy forwarding, are well-suited to the highly dynamic and rapid-changing network topology of VANETs. In this paper, we outline the background and the latest development in VANETs and survey the state-of-the-art routing protocols previously used in VANETs. We present the pros and cons for each routing protocol, and make a detailed comparison. We also discuss open issues, challenges and future research directions. It is observed that a hybrid routing protocol is the best choice for VANETs in both urban and highway environments.
---
paper_title: A Routing Algorithm Based on Dynamic Forecast of Vehicle Speed and Position in VANET
paper_content:
Taking the city road environment as the background and building on the GPSR greedy algorithm and the movement characteristics of vehicle nodes in VANETs, this paper proposes the concepts of the changing-trend angle of the vehicle speed fluctuation curve and the movement domain, and designs an SWF routing algorithm based on the forecasted vehicle speed and the computed changing-trend time. Simulation experiments are carried out using a combination of NS-2 and VanetMobiSim. Comparing the SWF-GPSR protocol with the general GPSR, 2-hop C-GEDIR, GRA and AODV protocols, we find that the SWF algorithm achieves a certain degree of improvement in routing hops, packet delivery ratio, delay performance, and link stability.
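The speed-forecasting step can be illustrated with a simple linear-trend extrapolation over recent speed samples, as in the Python sketch below; the actual SWF algorithm uses its own changing-trend formulation, so this is only an assumed stand-in.

def forecast_speed(speed_samples):
    # Forecast the next speed sample with a simple linear trend over the
    # recent history (an illustrative stand-in for the paper's forecast).
    if len(speed_samples) < 2:
        return speed_samples[-1] if speed_samples else 0.0
    n = len(speed_samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(speed_samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, speed_samples))
    den = sum((x - mean_x) ** 2 for x in xs) or 1e-9
    slope = num / den
    return mean_y + slope * (n - mean_x)       # extrapolate one step ahead

def forecast_position(position_m, speed_samples, horizon_s):
    # Extrapolate a one-dimensional road position using the forecast speed.
    return position_m + forecast_speed(speed_samples) * horizon_s

if __name__ == "__main__":
    history = [10.0, 11.0, 12.0, 13.0]             # m/s, sampled once per second
    print(forecast_speed(history))                  # 14.0
    print(forecast_position(0.0, history, 3.0))     # 42.0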
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: VANET routing protocols: Issues and challenges
paper_content:
In recent years, rapid growth in the number of vehicles on the road has increased demand for communication on the move. A new kind of ad hoc network, known as the VANET (Vehicular ad hoc network), is emerging alongside these technological advances. It is a collection of vehicular nodes that act as mobile hosts and establish a transient network without the assistance of any centralized administration or established infrastructure; it is therefore called an autonomous and self-configured network. In VANETs, two kinds of communication support applications such as emergency vehicle warning and safety services: communication between vehicles, known as vehicle-to-vehicle, and communication between vehicles and roadside units, known as vehicle-to-roadside. The performance of such communication depends on the routing protocols employed. We survey a number of recent research results in the routing area. In the following sections we present various existing routing protocols with their merits and demerits.
---
paper_title: Routing in vehicular ad hoc networks: A survey
paper_content:
Vehicular ad hoc network (VANET) is an emerging new technology integrating ad hoc network, wireless LAN (WLAN) and cellular technology to achieve intelligent inter-vehicle communications and improve road traffic safety and efficiency. VANETs are distinguished from other kinds of ad hoc networks by their hybrid network architectures, node movement characteristics, and new application scenarios. Therefore, VANETs pose many unique networking research challenges, and the design of an efficient routing protocol for VANETs is very crucial. In this article, we discuss the research challenge of routing in VANETs and survey recent routing protocols and related mobility models for VANETs.
---
paper_title: Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions
paper_content:
Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed.
---
paper_title: A Practical Routing Protocol for Vehicle-formed Mobile Ad Hoc Networks on the Roads
paper_content:
An IVC (Inter-vehicle communication) network is a type of mobile ad hoc networks (MANET) in which high-speed vehicles send, receive, and forward packets via other vehicles on the roads. An IVC network can provide useful applications in future Intelligent Transportation Systems. However, due to frequent network topology changes, a routing path in an IVC network breaks easily. As such, a routing protocol proposed for general MANET (e.g., AODV) performs poorly in IVC networks. To address this problem, we designed and implemented an intelligent flooding-based routing protocol and conducted several field trials to evaluate its performance on the roads. Results obtained from field trials show that (1) our protocol outperforms AODV significantly on IVC networks, and (2) our protocol can make many useful services such as email, ftp, web, video conferencing, and video broadcasting applicable on IVC networks for vehicle users.
---
paper_title: VANET routing protocols: Issues and challenges
paper_content:
In recent years, rapid growth in the number of vehicles on the road has increased demand for communication on the move. A new kind of ad hoc network, known as the VANET (Vehicular ad hoc network), is emerging alongside these technological advances. It is a collection of vehicular nodes that act as mobile hosts and establish a transient network without the assistance of any centralized administration or established infrastructure; it is therefore called an autonomous and self-configured network. In VANETs, two kinds of communication support applications such as emergency vehicle warning and safety services: communication between vehicles, known as vehicle-to-vehicle, and communication between vehicles and roadside units, known as vehicle-to-roadside. The performance of such communication depends on the routing protocols employed. We survey a number of recent research results in the routing area. In the following sections we present various existing routing protocols with their merits and demerits.
---
paper_title: Challenges of intervehicle ad hoc networks
paper_content:
Intervehicle communication (IVC) networks, a subclass of mobile ad hoc networks (MANETs), have no fixed infrastructure and instead rely on the nodes themselves to provide network functionality. However, due to mobility constraints, driver behavior, and high mobility, IVC networks exhibit characteristics that are dramatically different from many generic MANETs. This paper elicits these differences through simulations and mathematical models and then explores the impact of the differences on the IVC communication architecture, including important security implications.
---
paper_title: Position Based Routing Protocols in VANET: A Survey
paper_content:
In this review article we present a survey of position-based routing protocols. We systematically classify the protocols into two categories: infrastructure-based and infrastructure-less routing protocols. We analyze the operation, architecture and application areas of vehicular ad hoc networks. A comparative study is also performed for each protocol, using different quality parameters, against similar routing protocols of the same category.
---
paper_title: CLWPR — A novel cross-layer optimized position based routing protocol for VANETs
paper_content:
In this paper, we propose a novel position-based routing protocol designed for the characteristics of an urban VANET environment. The proposed algorithm utilizes predicted node positions and navigation information to improve the efficiency of the routing protocol in a vehicular network. In addition, we use information about link-layer quality, in terms of SNIR and MAC frame error rate, to further improve the efficiency of the proposed routing protocol; this in particular helps to decrease end-to-end delay. Finally, a carry-and-forward mechanism is employed as a repair strategy in sparse networks. It is shown that this technique increases the packet delivery ratio but also increases end-to-end delay, and it is therefore not recommended for QoS-constrained services. Our results suggest that, compared with GPSR, our proposal demonstrates better performance in the urban environment.
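The cross-layer metric can be pictured as a weighted combination of geographic progress, SNIR and MAC frame error rate, as in the hedged Python sketch below; the normalizations, weights and field names are assumptions and do not reproduce CLWPR's exact cost function.

def clwpr_like_weight(progress_m, max_progress_m, snir_db, snir_max_db,
                      frame_error_rate, w_pos=0.5, w_snir=0.3, w_fer=0.2):
    """Combine geographic progress, link SNIR and MAC frame error rate into a
    single next-hop weight (smaller is better here); weights are illustrative."""
    pos_cost = 1.0 - min(progress_m / max_progress_m, 1.0)   # less progress -> higher cost
    snir_cost = 1.0 - min(max(snir_db, 0.0) / snir_max_db, 1.0)
    fer_cost = min(max(frame_error_rate, 0.0), 1.0)
    return w_pos * pos_cost + w_snir * snir_cost + w_fer * fer_cost

def best_next_hop(candidates, max_progress_m=250.0, snir_max_db=30.0):
    # candidates: dicts holding each neighbour's progress toward the destination
    # and its measured link statistics; pick the lowest-cost candidate.
    return min(candidates,
               key=lambda c: clwpr_like_weight(c["progress_m"], max_progress_m,
                                               c["snir_db"], snir_max_db, c["fer"]),
               default=None)

if __name__ == "__main__":
    cands = [{"progress_m": 200, "snir_db": 12, "fer": 0.30},
             {"progress_m": 150, "snir_db": 25, "fer": 0.05}]
    print(best_next_hop(cands))   # the shorter-progress but cleaner link wins here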
---
paper_title: A survey on position-based routing for vehicular ad hoc networks
paper_content:
Position-based routing is considered to be a very promising routing strategy for communication within vehicular ad hoc networks (VANETs), due to the fact that vehicular nodes can obtain position information from onboard global positioning system receivers and acquire global road layout information from an onboard digital map. Position-based routing protocols, which are based mostly on greedy forwarding, are well-suited to the highly dynamic and rapid-changing network topology of VANETs. In this paper, we outline the background and the latest development in VANETs and survey the state-of-the-art routing protocols previously used in VANETs. We present the pros and cons for each routing protocol, and make a detailed comparison. We also discuss open issues, challenges and future research directions. It is observed that a hybrid routing protocol is the best choice for VANETs in both urban and highway environments.
---
paper_title: A Routing Algorithm Based on Dynamic Forecast of Vehicle Speed and Position in VANET
paper_content:
Taking the city road environment as the background and building on the GPSR greedy algorithm and the movement characteristics of vehicle nodes in VANETs, this paper proposes the concepts of the changing-trend angle of the vehicle speed fluctuation curve and the movement domain, and designs an SWF routing algorithm based on the forecasted vehicle speed and the computed changing-trend time. Simulation experiments are carried out using a combination of NS-2 and VanetMobiSim. Comparing the SWF-GPSR protocol with the general GPSR, 2-hop C-GEDIR, GRA and AODV protocols, we find that the SWF algorithm achieves a certain degree of improvement in routing hops, packet delivery ratio, delay performance, and link stability.
---
paper_title: A distributed beaconless routing protocol for real-time video dissemination in multimedia VANETs
paper_content:
We design a framework for video (and in general high data rate applications) transmission over a VANET. Our framework includes both application and routing layer design. We analyze MAC layer behavior and improve its utilization by solving the Spurious Forwarding problem. We tested our protocols on a real scenario, and considered both QoS and QoE, including MOS experiments in a real car. We grant a high data rate in ad-hoc mode, also for long distance, through our backbone beaconless approach. Vehicular Ad-Hoc Networks (VANETs) will play an important role in Smart Cities and will support the development of not only safety applications, but also car smart video surveillance services. Recent improvements in multimedia over VANETs allow drivers, passengers, and rescue teams to capture, share, and access on-road multimedia services. Vehicles can cooperate with each other to transmit live flows of traffic accidents or disasters and provide drivers, passengers, and rescue teams rich visual information about a monitored area. Since humans will watch the videos, their distribution must be done by considering the provided Quality of Experience (QoE) even in multi-hop, multi-path, and dynamic environments. This article introduces an application framework to handle this kind of services and a routing protocol, the DBD (Distributed Beaconless Dissemination), that enhances the dissemination of live video flows on multimedia highway VANETs. DBD uses a backbone-based approach to create and maintain persistent and high quality routes during the video delivery in opportunistic Vehicle to Vehicle (V2V) scenarios. It also improves the performance of the IEEE 802.11p MAC layer, by solving the Spurious Forwarding (SF) problem, while increasing the packet delivery ratio and reducing the forwarding delay. Performance evaluation results show the benefits of DBD compared to existing works in forwarding videos over VANETs, where main objective and subjective QoE results are measured.
---
paper_title: MURU: A Multi-Hop Routing Protocol for Urban Vehicular Ad Hoc Networks
paper_content:
Vehicular ad hoc networks (VANETs) are going to be an important communication infrastructure in our life. Because of high mobility and frequent link disconnection, it becomes quite challenging to establish a robust multi-hop path that helps packet delivery from the source to the destination. This paper presents a multi-hop routing protocol, called MURU, that is able to find robust paths in urban VANETs to achieve high end-to-end packet delivery ratio with low overhead. MURU tries to minimize the probability of path breakage by exploiting mobility information of each vehicle in VANETs. A new metric called expected disconnection degree (EDD) is used to select the most robust path from the source to the destination. MURU is fully distributed and does not incur much overhead, which makes MURU highly scalable for VANETs. The design is sufficiently justified through theoretical analysis and the protocol is evaluated with extensive simulations. Simulation results demonstrate that MURU significantly outperforms existing ad hoc routing protocols in terms of packet delivery ratio, packet delay and control overhead.
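The path-selection idea can be illustrated by scoring each candidate path with a simple disconnection metric and picking the smallest, as sketched below in Python; the per-link break probabilities and the aggregation are illustrative assumptions rather than MURU's actual EDD formula.

def path_disconnection_degree(link_break_probs):
    """Illustrative stand-in for an expected-disconnection-degree metric: the
    probability that at least one link on the path breaks, assuming
    independent per-link break probabilities (not MURU's exact EDD)."""
    survive = 1.0
    for p in link_break_probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def most_robust_path(paths):
    # paths: list of lists of per-link break probabilities; pick the path
    # with the lowest overall disconnection degree.
    return min(paths, key=path_disconnection_degree, default=None)

if __name__ == "__main__":
    paths = [[0.1, 0.1, 0.1], [0.05, 0.3], [0.02, 0.02, 0.02, 0.02]]
    print(most_robust_path(paths))   # the four-hop path with very stable links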
---
paper_title: GPSR: greedy perimeter stateless routing for wireless networks
paper_content:
We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.
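GPSR's greedy mode is concise enough to show directly. The sketch below is an illustrative reimplementation of the greedy next-hop rule described above: forward to the neighbor geographically closest to the destination, and return nothing at a local maximum so that a recovery (perimeter) mode could take over. The Neighbor structure, node names, and coordinates are hypothetical.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Neighbor:
    node_id: str
    pos: Tuple[float, float]  # (x, y) position learned from periodic beacons

def dist(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(self_pos, neighbors: List[Neighbor], dest_pos) -> Optional[Neighbor]:
    """GPSR greedy rule: pick the neighbor strictly closer to the destination
    than we are; if none exists, return None (local maximum -> perimeter mode)."""
    best, best_d = None, dist(self_pos, dest_pos)
    for n in neighbors:
        d = dist(n.pos, dest_pos)
        if d < best_d:
            best, best_d = n, d
    return best  # None signals that recovery (perimeter) forwarding is needed

# Example: a node at (0, 0) routing toward a destination at (100, 0)
nbrs = [Neighbor("a", (30, 10)), Neighbor("b", (20, -5))]
print(greedy_next_hop((0, 0), nbrs, (100, 0)))  # -> neighbor "a"
```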
---
paper_title: Video streaming over vehicular networks by a multiple path solution with error correction
paper_content:
A reliable solution for unicast video streaming over urban VANETs is in great demand. Single-path solutions that address this task are highly prone to collisions at the high data rates necessary for high-quality video. Multipath solutions address this problem by distributing the heavy traffic load over a set of paths. Among the existing multipath works, only 2-path LIAITHON+ takes into consideration the highly dynamic topology of VANETs, the route coupling effect, and path length growth. In this paper, we make several improvements on top of 2-path LIAITHON+. We evaluate the use of more than two paths in this multipath solution. Moreover, the impact of added redundancy on both 2-path and 3-path LIAITHON+ is investigated as a solution for packet loss.
---
paper_title: LIAITHON: A location-aware multipath video streaming scheme for urban vehicular networks
paper_content:
Transmitting video content over Vehicular Ad Hoc Networks (VANETs) faces a great number of challenges caused by strict QoS (Quality of Service) requirements and highly dynamic network topology. In order to tackle these challenges, multipath forwarding schemes can be regarded as potential solutions. However, route coupling will severely impair the performance of multipath schemes. In this work, we present a LocatIon-Aware multIpaTH videO streamiNg (LIAITHON) scheme to address video streaming over urban VANETs. LIAITHON uses location information to discover two relatively short paths with minimum route coupling effect. The performance results have shown it outperforms the underlying single path solution as well as the node-disjoint multipath solution.
---
paper_title: Situation-Aware QoS Routing Algorithm for Vehicular Ad Hoc Networks
paper_content:
A wide range of services has been developed for vehicular ad hoc networks (VANETs), ranging from safety to infotainment applications. An essential requirement for such services is that they are offered with quality of service (QoS) guarantees in terms of service reliability and availability. Searching for feasible routes subject to multiple QoS constraints is, in general, an NP-hard problem. Moreover, routing reliability needs to be paid special attention as communication links frequently break in VANETs. In this paper, we propose employing the situational awareness (SA) concept and an ant colony system (ACS)-based algorithm to develop a situation-aware multiconstrained QoS (SAMQ) routing algorithm for VANETs. SAMQ aims to compute feasible routes between the communicating vehicles subject to multiple QoS constraints and pick the best computed route, if such a route exists. To mitigate the risks inherited from selecting the best computed route that may turn out to fail at any moment, SAMQ utilizes the SA levels and ACS mechanisms to prepare certain countermeasures with the aim of assuring a reliable data transmission. Simulation results demonstrate that SAMQ is capable of achieving a reliable data transmission, as compared with the existing QoS routing algorithms, even when the network topology is highly dynamic.
---
paper_title: RIVER: A reliable inter-vehicular routing protocol for vehicular ad hoc networks
paper_content:
Vehicular Ad hoc NETworks (VANETs), an emerging technology, would allow vehicles on roads to form a self-organized network without the aid of a permanent infrastructure. As a prerequisite to communication in VANETs, an efficient route between communicating nodes in the network must be established, and the routing protocol must adapt to the rapidly changing topology of vehicles in motion. This is one of the goals of VANET routing protocols. In this paper, we present an efficient routing protocol for VANETs, called the Reliable Inter-VEhicular Routing (RIVER) protocol. RIVER utilizes an undirected graph that represents the surrounding street layout where the vertices of the graph are points at which streets curve or intersect, and the graph edges represent the street segments between those vertices. Unlike existing protocols, RIVER performs real-time, active traffic monitoring and uses these data and other data gathered through passive mechanisms to assign a reliability rating to each street edge. The protocol then uses these reliability ratings to select the most reliable route. Control messages are used to identify a node's neighbors, determine the reliability of street edges, and to share street edge reliability information with other nodes.
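RIVER's route selection over a reliability-rated street graph can be approximated with a standard shortest-path computation. In the hedged sketch below, each street edge carries a reliability in (0, 1], and the most reliable route is found by running Dijkstra on -log(reliability), which maximizes the product of edge reliabilities; the graph, ratings, and vertex names are made up, and how RIVER actually derives the ratings from its traffic monitoring is not modeled here.

```python
import heapq
import math
from typing import Dict, List, Tuple

# Hypothetical street graph: vertices are intersections/curve points, and each
# undirected edge (street segment) carries a reliability rating in (0, 1].
Graph = Dict[str, Dict[str, float]]

def most_reliable_route(graph: Graph, src: str, dst: str) -> Tuple[float, List[str]]:
    """Pick the route maximizing the product of edge reliabilities by running
    Dijkstra on the additive cost -log(reliability)."""
    dist = {src: 0.0}
    prev: Dict[str, str] = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, rel in graph[u].items():
            nd = d - math.log(rel)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Rebuild the path and convert the cost back into a route reliability.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return math.exp(-dist[dst]), list(reversed(path))

streets = {
    "A": {"B": 0.9, "C": 0.6},
    "B": {"A": 0.9, "D": 0.8},
    "C": {"A": 0.6, "D": 0.95},
    "D": {"B": 0.8, "C": 0.95},
}
print(most_reliable_route(streets, "A", "D"))  # A-B-D (0.72) beats A-C-D (0.57)
```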
---
paper_title: An Improved Vehicular Ad Hoc Routing Protocol for City Environments
paper_content:
The fundamental component for the success of VANET (vehicular ad hoc networks) applications is routing, since it must efficiently handle rapid topology changes and a fragmented network. Current MANET (mobile ad hoc networks) routing protocols fail to fully address these specific needs, especially in city environments (node distribution, constrained but high mobility patterns, signal transmissions blocked by obstacles, etc.). In our current work, we propose an inter-vehicle ad-hoc routing protocol called GyTAR (improved greedy traffic aware routing protocol) suitable for city environments. GyTAR consists of two modules: (i) dynamic selection of the junctions through which a packet must pass to reach its destination, and (ii) an improved greedy strategy used to forward packets between two junctions. In this paper, we give a detailed description of our approach and present its added value compared to other existing vehicular routing protocols. Simulation results show significant performance improvement in terms of packet delivery ratio, end-to-end delay, and routing overhead.
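The junction-selection module described above can be illustrated with a simple scoring rule. The sketch below ranks candidate junctions by a weighted combination of progress toward the destination and a traffic-density (connectivity) term; the weights ALPHA_DISTANCE and BETA_DENSITY, the data structures, and the example values are assumptions for illustration, not GyTAR's exact formula.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Junction:
    name: str
    pos: Tuple[float, float]
    traffic_density: float  # vehicles on the segment toward this junction (assumed known)

# Assumed weights: GyTAR combines closeness to the destination with traffic
# density, but this particular weighting is illustrative only.
ALPHA_DISTANCE, BETA_DENSITY = 0.6, 0.4

def score_junction(j: Junction, cur: Tuple[float, float], dest: Tuple[float, float],
                   max_density: float) -> float:
    d_cur = math.hypot(dest[0] - cur[0], dest[1] - cur[1])
    d_j = math.hypot(dest[0] - j.pos[0], dest[1] - j.pos[1])
    closeness = max(0.0, (d_cur - d_j) / d_cur)            # progress toward destination
    density = min(1.0, j.traffic_density / max_density)    # connectivity proxy
    return ALPHA_DISTANCE * closeness + BETA_DENSITY * density

def next_junction(candidates: List[Junction], cur, dest) -> Junction:
    max_density = max(j.traffic_density for j in candidates) or 1.0
    return max(candidates, key=lambda j: score_junction(j, cur, dest, max_density))

js = [Junction("J1", (200, 0), 12), Junction("J2", (150, 150), 30)]
print(next_junction(js, (0, 0), (400, 0)).name)  # denser J2 wins despite less progress
```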
---
paper_title: VIRTUS: A resilient location-aware video unicast scheme for vehicular networks
paper_content:
Video streaming capabilities over Vehicular Ad Hoc Networks (VANETs) are crucial to the development of interesting and valuable services. However, VANETs are a challenging environment for this kind of communication due to the dispersion and movement of vehicles. In this work, we present a feasible solution to this problem. The VIdeo Reactive Tracking-based UnicaSt protocol (VIRTUS) is a receiver-based solution that uses vehicles' current and future locations in its policy for selecting relaying nodes. It fulfills video streaming requirements without incurring an excessive number of transmissions. Besides that, it outperforms other baseline solutions.
---
paper_title: Improved Geographical Routing in Vehicular Ad Hoc Networks
paper_content:
Vehicular Ad Hoc Networks (VANETs) have emerged to establish communication between intelligent vehicles. The high mobility of vehicles and the existence of obstacles in urban areas make the communication links between vehicles unreliable. In this environment, most geographical routing protocols do not consider stable and reliable links during packet forwarding towards the destination. Thus, network performance is degraded due to a large number of packet losses and high packet delay. In this paper, we propose an improved geographical routing protocol named IG for VANETs. The proposed IG incorporates the relative direction between the source vehicle and candidate vehicles, the distance between a candidate node and the destination, and the beacon reception rate in order to improve geographical greedy forwarding between intersections. Simulation results show that the proposed routing protocol performs better as compared to the existing routing solution.
---
paper_title: A routing strategy for vehicular ad hoc networks in city environments
paper_content:
Routing of data in a vehicular ad hoc network is a challenging task due to the high dynamics of such a network. Recently, it was shown for the case of highway traffic that position-based routing approaches can very well deal with the high mobility of network nodes. However, baseline position-based routing has difficulties to handle two-dimensional scenarios with obstacles (buildings) and voids as it is the case for city scenarios. In this paper we analyze a position-based routing approach that makes use of the navigational systems of vehicles. By means of simulation we compare this approach with non-position-based ad hoc routing strategies (dynamic source routing and ad-hoc on-demand distance vector routing). The simulation makes use of highly realistic vehicle movement patterns derived from Daimler-Chrysler's Videlio traffic simulator. While DSR's performance is limited due to problems with scalability and handling mobility, both AODV and the position-based approach show good performances with the position-based approach outperforming AODV.
---
paper_title: VADD: Vehicle-Assisted Data Delivery in Vehicular Ad Hoc Networks
paper_content:
Multihop data delivery through vehicular ad hoc networks is complicated by the fact that vehicular networks are highly mobile and frequently disconnected. To address this issue, we adopt the idea of carry and forward, where a moving vehicle carries a packet until a new vehicle moves into its vicinity and forwards the packet. Being different from existing carry and forward solutions, we make use of predictable vehicle mobility, which is limited by traffic pattern and road layout. Based on the existing traffic pattern, a vehicle can find the next road to forward the packet to reduce the delay. We propose several vehicle-assisted data delivery (VADD) protocols to forward the packet to the best road with the lowest data-delivery delay. Experimental results show that the proposed VADD protocols outperform existing solutions in terms of packet-delivery ratio, data packet delay, and protocol overhead. Among the proposed VADD protocols, the hybrid probe (H-VADD) protocol has a much better performance.
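The carry-and-forward decision at the heart of VADD can be sketched as a small predicate. In the illustrative code below, the preference order of outgoing roads (best_roads) is assumed to have been computed already from traffic statistics, which is where VADD's delay model lives; the vehicle hands the packet to a neighbor only if that neighbor is heading onto a preferred road and makes geographic progress, and otherwise keeps carrying it. Names and values are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Neighbor:
    node_id: str
    heading_road: str       # the road the neighbor is moving onto
    dist_to_dest: float     # its geographic distance to the packet destination

def carry_or_forward(best_roads: List[str], my_dist_to_dest: float,
                     neighbors: List[Neighbor]) -> Optional[Neighbor]:
    """Carry-and-forward decision in the spirit of VADD: hand the packet to a
    neighbor only if it is heading onto a lower-delay road (best_roads is the
    preference order computed from traffic statistics) and makes progress;
    otherwise keep carrying the packet."""
    for road in best_roads:                      # most preferred road first
        candidates = [n for n in neighbors
                      if n.heading_road == road and n.dist_to_dest < my_dist_to_dest]
        if candidates:
            return min(candidates, key=lambda n: n.dist_to_dest)
    return None   # no suitable relay: carry the packet and retry later

nbrs = [Neighbor("v1", "road_east", 800.0), Neighbor("v2", "road_north", 500.0)]
print(carry_or_forward(["road_north", "road_east"], 900.0, nbrs))  # -> v2
```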
---
paper_title: VehiHealth: An Emergency Routing Protocol for Vehicular Ad Hoc Network to Support Healthcare System
paper_content:
Survival of a patient depends on effective data communication in a healthcare system. In this paper, an emergency routing protocol for Vehicular Ad hoc Networks (VANETs) is proposed to quickly forward current patient status information from the ambulance to the hospital in order to provide pre-medical treatment. As the ambulance takes time to reach the hospital, the ambulance doctor can provide immediate treatment to the patient in an emergency by sending patient status information to the hospital through the vehicles using vehicular communication. In turn, the experienced doctors respond by quickly sending treatment information back to the ambulance. In this protocol, data is forwarded through the path that suffers the least link breakage between vehicles. This is done by calculating an intersection value I_value for the neighboring intersections from the current traffic information; the data is then forwarded through the intersection with the minimum I_value. Simulation results show that VehiHealth performs better than the P-GEDIR, GyTAR, A-STAR and GSR routing protocols in terms of average end-to-end delay, number of link breakages, path length, and average response time.
---
paper_title: Driving Path Predication Based Routing Protocol in Vehicular Ad hoc Networks
paper_content:
Vehicular mobility is a reflection and extension of human social activity. Since human trajectories show a high degree of temporal and spatial regularity, vehicular driving paths are predictable to a large extent. In this paper, we first analyze the predictability of different types of vehicles and then propose a new driving path prediction based routing protocol (DPPR). By using hello messages to broadcast vehicles' driving path predictions to neighbor vehicles, DPPR can noticeably increase the success ratio of finding proper next-hop vehicles that move toward the optimal expected road in intersection areas. On roads with sparse vehicle density, DPPR utilizes vehicles to carry messages to roads with high vehicle density when the messages' forwarding paths partially coincide with the vehicles' driving paths. Moreover, messages that can tolerate long delays can be carried to their destinations by vehicles whose driving paths will pass the messages' destinations, in order to optimize bandwidth utilization. Simulation results demonstrate the effectiveness of the proposed DPPR protocol.
---
paper_title: An Adaptive Multipath Geographic Routing for Video Transmission in Urban VANETs
paper_content:
Vehicular ad hoc networks (VANETs) have attracted many researchers' attention in recent years. Due to the highly dynamic nature of these networks, providing guaranteed quality-of-service (QoS) video-on-demand (VOD) sessions is a challenging problem. In this paper, a new adaptive geographic routing scheme is proposed for establishing a simplex VOD transmission in urban environments. In this scheme, rather than one route, a number of independent routes are discovered between the source and destination vehicles; the number of routes depends on the volume of the requested video and on the lifetime (the span of time during which a route remains almost fixed) of each route. A closed-form equation is derived for estimating the connectivity probability of a route, which is used to select the best-connected routes. Simulation results show that the QoS parameters improve: the packet loss ratio is decreased by 40.79% and the freezing delay is improved by 25 ms compared with junction-based multipath source routing, at the cost of a 2-ms degradation in end-to-end delay.
---
paper_title: Knowledge-based opportunistic forwarding in vehicular wireless ad hoc networks
paper_content:
When highly mobile nodes are interconnected via wireless links, the resulting network can be used as a transit network to connect other disjoint ad-hoc networks. In this paper, we compare five different opportunistic forwarding schemes, which vary in their overhead, their success rate, and the amount of knowledge about neighboring nodes that they require. In particular, we present the MoVe algorithm, which uses velocity information to make intelligent opportunistic forwarding decisions. Using auxiliary information to make forwarding decisions provides a reasonable trade-off between resource overhead and performance.
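The velocity-based decision used by MoVe can be illustrated with a closest-approach prediction. The sketch below, under a constant-velocity assumption, hands the message to the neighbor whose trajectory is predicted to pass nearest the destination, but only if that beats the carrier's own prediction; the Node structure and example vehicles are invented, and the real MoVe algorithm may weigh additional information.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    node_id: str
    pos: Tuple[float, float]
    vel: Tuple[float, float]

def closest_approach(pos, vel, dest) -> float:
    """Minimum future distance to `dest`, assuming the node keeps its current velocity."""
    rx, ry = dest[0] - pos[0], dest[1] - pos[1]
    vx, vy = vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return math.hypot(rx, ry)
    t = max(0.0, (rx * vx + ry * vy) / v2)   # time of closest approach (not in the past)
    return math.hypot(rx - vx * t, ry - vy * t)

def move_relay(me: Node, neighbors: List[Node], dest) -> Optional[Node]:
    """MoVe-style rule: hand the message to the neighbor whose predicted closest
    approach to the destination beats our own; otherwise keep carrying it."""
    my_best = closest_approach(me.pos, me.vel, dest)
    best = min(neighbors, key=lambda n: closest_approach(n.pos, n.vel, dest), default=None)
    if best and closest_approach(best.pos, best.vel, dest) < my_best:
        return best
    return None

me = Node("me", (0, 0), (0, 10))          # heading north, away from the destination
nbr = Node("bus", (0, 0), (15, 0))        # heading east, toward the destination
print(move_relay(me, [nbr], (1000, 0)))   # -> the bus
```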
---
paper_title: Connectivity-Aware Routing (CAR) in Vehicular Ad-hoc Networks
paper_content:
Vehicular ad hoc networks using WLAN technology have recently received considerable attention. We present a position-based routing scheme called Connectivity-Aware Routing (CAR) designed specifically for inter-vehicle communication in a city and/or highway environment. A distinguishing property of CAR is the ability to not only locate positions of destinations but also to find connected paths between source and destination pairs. These paths are auto-adjusted on the fly, without a new discovery process. "Guards" help to track the current position of a destination, even if it traveled a substantial distance from its initially known location. For the evaluation of the CAR protocol we use realistic mobility traces obtained from a microscopic vehicular traffic simulator that is based on a model of driver behavior and the real road maps of Switzerland.
---
paper_title: Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks
paper_content:
Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks(VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamical vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between the system performance and the control overhead. However, to the authors’ best knowledge, the routing design for large volume content has not been well considered in the previous work, which will introduce new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including the speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms the traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match on the delay estimation with Monte Carlo simulations, as well.
---
paper_title: SCRP: Stable CDS-Based Routing Protocol for Urban Vehicular Ad Hoc Networks
paper_content:
This paper addresses the issue of selecting routing paths with minimum end-to-end delay (E2ED) for nonsafety applications in urban vehicular ad hoc networks (VANETs). Most existing schemes aim at reducing E2ED via greedy-based techniques (i.e., shortest path, connectivity, or number of hops), which make them prone to the local maximum problem and to data congestion, leading to higher E2ED. As a solution, we propose SCRP, which is a distributed routing protocol that computes E2ED for the entire routing path before sending data messages. To do so, SCRP builds stable backbones on road segments and connects them at intersections via bridge nodes. These nodes assign weights to road segments based on the collected information of delay and connectivity. Routes with the lowest aggregated weights are selected to forward data packets. Simulation results show that SCRP outperforms some of the well-known protocols in literature.
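SCRP's choice of the route with the lowest aggregated weight can be shown with a toy weight function. In the sketch below, each road segment's weight is its measured forwarding delay plus an assumed carry-and-forward penalty when the segment lacks a connected backbone, and candidate routes are compared by the sum of their segment weights; the penalty value, route names, and numbers are illustrative, not taken from the paper.

```python
from typing import Dict, List, Tuple

def segment_weight(delay_s: float, connected: bool, carry_penalty_s: float = 30.0) -> float:
    """Weight of one road segment as an expected-delay figure: the measured
    forwarding delay if the segment has a connected backbone, plus an assumed
    carry-and-forward penalty when it does not (the penalty value is illustrative)."""
    return delay_s + (0.0 if connected else carry_penalty_s)

def route_weight(segments: List[Tuple[float, bool]]) -> float:
    return sum(segment_weight(d, c) for d, c in segments)

def pick_route(routes: Dict[str, List[Tuple[float, bool]]]) -> str:
    """SCRP-style choice: the route with the lowest aggregated segment weight."""
    return min(routes, key=lambda name: route_weight(routes[name]))

candidate_routes = {
    "via_main_street": [(2.0, True), (3.5, True), (2.5, True)],
    "via_side_roads":  [(1.0, True), (1.5, False)],          # one sparse segment
}
print(pick_route(candidate_routes))   # -> "via_main_street" (8.0 s vs 32.5 s)
```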
---
paper_title: Contention-based forwarding with multi-hop connectivity awareness in vehicular ad-hoc networks
paper_content:
Recent vehicular routing proposals use real-time road traffic density estimates to dynamically select forwarding paths. Estimating the traffic density in vehicular ad hoc networks requires the transmission of additional dedicated messages increasing the communications load. These proposals are generally based on unicast sender-based forwarding schemes. The greedy nature of sender-based forwarding can result in the selection of forwarders with weak radio links that might compromise the end-to-end performance. To overcome these limitations, this paper presents TOPOCBF, a novel contention-based broadcast forwarding protocol that dynamically selects forwarding paths based on their capability to route packets between anchor points. Such capability is estimated by means of a multi-hop connectivity metric. The obtained results demonstrate that TOPOCBF can provide good packet delivery ratios while reducing the communications load compared to unicast sender-based forwarding schemes using road traffic density estimates.
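Contention-based forwarding relies on receiver-side timers, and that part is easy to sketch. The code below uses the usual CBF timer form, in which receivers making more progress toward the next anchor point wait less before rebroadcasting and thereby suppress the others; the maximum wait, radio range, and the exact timer shape are assumptions, since the abstract does not give TOPOCBF's specific function.

```python
import math
from typing import Tuple

MAX_WAIT_S = 0.02      # assumed maximum contention window (not from the paper)
RADIO_RANGE_M = 250.0  # assumed radio range

def contention_delay(recv_pos: Tuple[float, float], sender_pos: Tuple[float, float],
                     anchor_pos: Tuple[float, float]) -> float:
    """Generic contention-based forwarding timer: receivers that make more
    progress toward the next anchor wait less, so the best-placed vehicle
    rebroadcasts first and suppresses the others."""
    d_sender = math.hypot(anchor_pos[0] - sender_pos[0], anchor_pos[1] - sender_pos[1])
    d_recv = math.hypot(anchor_pos[0] - recv_pos[0], anchor_pos[1] - recv_pos[1])
    progress = max(0.0, min(1.0, (d_sender - d_recv) / RADIO_RANGE_M))
    return MAX_WAIT_S * (1.0 - progress)

# A receiver 200 m closer to the anchor waits far less than one only 20 m closer.
print(contention_delay((200, 0), (0, 0), (1000, 0)))  # ~0.004 s
print(contention_delay((20, 0), (0, 0), (1000, 0)))   # ~0.018 s
```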
---
paper_title: Routing mechanisms and cross-layer design for Vehicular Ad Hoc Networks: A survey
paper_content:
Vehicular Ad-Hoc Network (VANET) will pave the way to advance automotive safety and occupant convenience. The potential VANET applications present diverse requirements. VANET shows unique characteristics and presents a set of challenges. The proposed VANET applications demand reliable and proficient message dissemination techniques. Routing techniques proposed for Mobile Ad-Hoc Network (MANET) do not cater for the characteristics of VANET. The need for novel routing techniques, exclusively designed for VANET has been recognised. This paper analyses different routing techniques proposed specifically for VANET. Unique characteristics of VANET pose challenges to traditional layered architecture where different layers make independent decisions. Mobility, absence of global view of network, random changes in topology, poor link quality and varied channel conditions have encouraged the paradigm shift to cross-layer approach. In order to optimise the performance of VANET, architectures based on cross-layer approach have been proposed by the researchers. The paper also surveys such cross-layer paradigm based solutions for VANET and concludes with an analytical summary.
---
paper_title: Cross layer routing for VANETs
paper_content:
Routing is an important and critical issue for successful transmission in Vehicular Ad-hoc Networks (VANETs). Most traditionally designed routing schemes are based on optimising their parameters individually within the existing VANET architecture. Such approaches may not result in an overall efficient system. Therefore, it is important to consider various parameters from multiple layers, such as PHY and MAC, to optimise routing. In this paper, while presenting a new cross-layer routing scheme, we subdivide the existing OSI model into three main layers. The routing scheme presented in this paper considers parameters from multiple layers simultaneously to achieve the routing objectives. We argue that the proposed routing scheme results in fewer packet drops and a comparatively smaller delay in packet transmission.
---
paper_title: A survey of cross-layer design for VANETs
paper_content:
Recently, vehicular communication systems have attracted much attention, fueled largely by the growing interest in Intelligent Transportation Systems (ITS). These systems are aimed at addressing critical issues like passenger safety and traffic congestion by integrating information and communication technologies into transportation infrastructure and vehicles. They are built on top of self-organizing networks, known as Vehicular Ad hoc Networks (VANETs), composed of mobile vehicles connected by wireless links. While solutions based on traditional layered communication system architectures such as the OSI model are readily applicable, they often fail to address the fundamental problems in ad hoc networks, such as dynamic changes in the network topology. Furthermore, many ITS applications impose stringent QoS requirements, which are not met by existing ad hoc networking solutions. The paradigm of cross-layer design has been introduced as an alternative to pure layered design for developing communication protocols. Cross-layer design allows information to be exchanged and shared across layer boundaries in order to enable efficient and robust protocols. There have been several research efforts that validate the importance of cross-layer design in vehicular networks. In this article, a survey of recent work on cross-layer communication solutions for VANETs is presented. Major approaches to cross-layer protocol design are introduced, followed by an overview of corresponding cross-layer protocols. Finally, open research problems in developing efficient cross-layer protocols for next-generation transportation systems are discussed.
---
paper_title: Congestion Avoidance Based on Lightweight Buffer Management in Sensor Networks
paper_content:
A wireless sensor network is constrained by computation capability, memory space, communication bandwidth, and above all, energy supply. When a critical event triggers a surge of data generated by the sensors, congestion may occur as data packets converge toward a sink. Congestion causes energy waste, throughput reduction, and information loss. However, the important problem of congestion avoidance in sensor networks is largely open. This paper proposes a congestion-avoidance scheme based on lightweight buffer management. We describe simple yet effective approaches that prevent data packets from overflowing the buffer space of the intermediate sensors. These approaches automatically adapt the sensors' forwarding rates to nearly optimal without causing congestion. We discuss how to implement buffer-based congestion avoidance with different MAC protocols. In particular, for CSMA with implicit ACK, our 1/k-buffer solution prevents hidden terminals from causing congestion. We demonstrate how to maintain near-optimal throughput with a small buffer at each sensor and how to achieve congestion-free load balancing when there are multiple routing paths toward multiple sinks
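The buffer-based admission idea can be illustrated with a tiny predicate. The sketch below lets a sensor transmit only when the downstream node's advertised free buffer can absorb one packet from every potentially concurrent (hidden) sender, which is one way to guarantee no overflow; it illustrates the idea in the abstract rather than the paper's exact 1/k-buffer condition, and the field names are invented.

```python
from dataclasses import dataclass

@dataclass
class DownstreamState:
    free_slots: int         # free buffer slots advertised (e.g. piggybacked on an implicit ACK)
    hidden_senders: int     # k: senders that might transmit to this node concurrently

def may_forward(ds: DownstreamState) -> bool:
    """Buffer-based congestion avoidance sketch: transmit only if the downstream
    buffer can absorb one packet from every potentially concurrent (hidden)
    sender, so the buffer cannot overflow between two buffer advertisements.
    This is an illustration of the admission idea, not the paper's exact rule."""
    return ds.free_slots >= ds.hidden_senders

# With 3 possible hidden senders, 2 free slots is not enough; 5 is.
print(may_forward(DownstreamState(free_slots=2, hidden_senders=3)))  # False
print(may_forward(DownstreamState(free_slots=5, hidden_senders=3)))  # True
```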
---
paper_title: Interference-Aware Probabilistic Routing for Wireless Sensor Networks
paper_content:
Wireless communications are prone to interference, so data transmission among the nodes of wireless sensor networks deployed in complex environments exhibits obvious uncertainty. This paper adopts probability theory to extend the existing interference model, gives an interference analysis model, and implements it through a cross-layer method. In addition, the isotonic property of the interference-aware routing metric is proved. Then, a probabilistic routing algorithm is proposed and its correctness and time-space complexity are analyzed. Simulation results show that, compared with the Ad hoc On-Demand Distance Vector (AODV) algorithm, the proposed algorithm achieves a better packet delivery ratio, throughput, jitter, and average delay in dense deployments under different loads, at the expense of a comparable average path length.
---
paper_title: Processing interference at the physical layer to enhance information flow in wireless networks
paper_content:
Interference in wireless networks results in interdependent communication links between the nodes. Therefore, cross-layer design is essential and makes optimization of wireless networks complicated. In this paper, we study the problem of maximizing the information flow for a multicast session over a wireless network. Different scheduling and coding strategies to handle the interference, including the commonly used interference avoidance strategy, are compared. Results in information theory on achievable rate regions for interference networks are incorporated in the flow optimization to achieve significant improvement. Numerical results illustrate that processing interference at the physical layer results in better information flow compared to interference avoidance.
---
paper_title: Measuring and Using the RSSI of IEEE 802.11P
paper_content:
The scalability of intelligent transport systems (ITS) applications is difficult to test in a field operational test (FOT) due to the high number of ITS equipped vehicles required. Therefore, compu ...
---
paper_title: A Cross-Layer AOMDV Routing Protocol for V2V Communication in Urban VANET
paper_content:
Vehicular Ad hoc Network (VANET) is a special class of wireless mobile communication network. For vehicle-to-vehicle (V2V) communication, suitable routing protocols are needed. A routing metric combining hop counts and retransmission counts at MAC layer is proposed with consideration of link quality and delay reduction. Based on the new routing metric, a cross-layer Ad hoc On-demand Multipath Distance Vector with retransmission counts metric (R-AOMDV) routing protocol is designed to make use of advantages of multi-path routing protocol, such as decrease of route discovery frequency. Compared with AOMDV with minimum hop-count metric, simulation results show that R-AOMDV achieves better performance with Pareto On/Off distribution traffic model in urban VANET, no matter in sparse or dense scenarios.
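The combined routing metric described above can be sketched as a simple cost function. Below, a route's cost penalizes both its hop count and the MAC-layer retransmission counts observed on each hop, so a slightly longer route over clean links can beat a shorter route over lossy ones; the weights W_HOPS and W_RETX are assumptions, as the abstract does not state how R-AOMDV combines the two terms.

```python
from typing import List

# Assumed weights for combining the two components; the exact combination used
# by R-AOMDV is not spelled out in the abstract.
W_HOPS, W_RETX = 1.0, 0.5

def raomdv_cost(per_hop_retransmissions: List[int]) -> float:
    """Cross-layer route metric in the spirit of R-AOMDV: penalize both path
    length (hop count) and poor link quality (MAC-layer retransmission counts
    observed on each hop)."""
    hops = len(per_hop_retransmissions)
    return W_HOPS * hops + W_RETX * sum(per_hop_retransmissions)

# A 4-hop route over clean links can beat a 3-hop route over lossy links.
print(raomdv_cost([0, 1, 0, 0]))   # 4.5
print(raomdv_cost([3, 4, 2]))      # 7.5
```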
---
paper_title: Cross-layer multi-hop wireless routing for inter-vehicle communication
paper_content:
Ad-hoc networking provides a cost-effective support structure for inter-vehicle communication. A decentralized peer-to-peer information dissemination architecture is well suited for automotive applications that need to exchange data having local relevance. Routing, however, is challenge in a vehicular scenario because of the associated dynamism in network topology and variations in driving conditions. In this work we present a cross-layer ad-hoc routing approach based on link connectivity assessment in the network topology. We suggest a framework for proactive enhancements to the optimized link state routing (OLSR) protocol and implement the proposed measures within the protocol format. We further deploy an IEEE 802.11b based vehicular network and demonstrate the effectiveness of link-quality assessment based enhancements in improving the performance of inter-vehicle ad-hoc routing. Through actual test-runs, we show that the enhanced protocol is more responsive to variations in network connectivity and can take preemptive actions in choosing stable and durable routes. The routing methodology suggested in this work leverages cross-layer interactions among the networking, data-link, and physical layers, for enhanced adaptability to varying network topology and link states. The main contributions of this work are as follows: introduction of link-quality assessment methodology for enhanced adaptability of ad-hoc routing in a dynamically changing topology, delineation of the framework of a proactive topology-adaptive ad-hoc routing protocol in a vehicular scenario, and demonstration of effectiveness of the proposed routing enhancements in an IEEE 802.11b based vehicular test-bed.
---
paper_title: Communication-aware mobile hosts in ad-hoc wireless network
paper_content:
A mobile, multi-hop wireless computer network, also termed an ad-hoc network, can be envisioned as a collection of routers, equipped with wireless transmitters/receivers, which are free to move about arbitrarily. The basic assumption in an ad-hoc network is that some nodes willing to communicate may be outside the wireless transmission range of each other but may be able to communicate if other nodes in the network are willing to forward packets for them. However, the successful operation of an ad-hoc network will be hampered if an intermediate node, participating in a communication between two nodes, moves out of range suddenly or switches itself off in the middle of a message transfer. The objective of this paper is to introduce a parameter, affinity, that characterizes the strength of the relationship between two nodes, and to propose a distributed routing scheme that finds a set of paths between two nodes which are more stable and less congested in a specific context. Thus, communication in an ad-hoc wireless network can be made effective by making the nodes in the system communication-aware, in the sense that each node should know its affinity with its neighbors and should be aware of the impact of its movement on the communication structure of the underlying network.
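One common way to turn the affinity idea into a number is to extrapolate the trend of a neighbor's received signal strength. The sketch below estimates how many seconds remain before the smoothed signal strength decays below a usability threshold, returning "high" (infinity) when the strength is steady or improving; the sampling period, threshold, and exact formula are illustrative and may differ from the paper's definition.

```python
def affinity_seconds(strength_samples, threshold, sample_period_s=1.0, high=float("inf")):
    """Illustrative affinity estimate: predict how long a neighbor will remain
    usable from the trend of its received signal strength (dB). If the strength
    is steady or improving, affinity is 'high'; if it is decaying, affinity is
    the time left until it falls below the usability threshold."""
    if len(strength_samples) < 2:
        return high
    deltas = [b - a for a, b in zip(strength_samples, strength_samples[1:])]
    avg_rate = sum(deltas) / len(deltas) / sample_period_s   # dB per second
    current = strength_samples[-1]
    if avg_rate >= 0 or current <= threshold:
        return high if current > threshold else 0.0
    return (current - threshold) / (-avg_rate)

# A link decaying by ~2 dB/s with 10 dB of margin has roughly 5 s of affinity left.
print(affinity_seconds([-60, -62, -64, -66], threshold=-76))  # -> 5.0
```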
---
paper_title: Route Construction for Long Lifetime in VANETs
paper_content:
One of the most distinguishing features of vehicular ad hoc networks (VANETs) is the increased mobility of the nodes. This results in the existence of transient communication links, which degrade the performance of developed protocols. Established routes frequently become invalid, and existing communication flows are interrupted, incurring delay and additional overhead. In this paper, we aim to provide a metric to support the design of networks that can proactively adapt to a constantly changing topology. We present a method that produces a link-lifetime-related metric capable of capturing the remaining time for which a link can be used for efficient communication. The metric is intended to be used to optimize route construction with respect to lifetime. We propose a cross-layer approach, which utilizes physical layer information, and formulate the relevant parameter estimation problem. Contrary to existing work, the method does not assume knowledge of the transmission power or the nodes' position and velocity vectors, or adoption of a specific mobility model, whereas the estimates go beyond describing the tendency of link quality. We achieve this by employing a unified model that accurately captures the effect of the radio propagation and the underlying structure of vehicle movement on the temporal dependence of the quality of a wireless mobile link. More specifically, the model takes into account the inherent nonlinearities arising and, most importantly, includes the minimum distance that will be achieved between two vehicles on the course of their movement, which is shown to play a crucial role in the link duration. We present an analytical framework, which quantifies the probability of correctly identifying the longest living link between two given links, based on the estimates. Utilization of the estimates is shown to lead to optimal performance under ideal channel conditions. The proposed scheme outperforms existing affinity-based schemes, achieving to opt for links that last up to 35% longer under the presence of shadow fading. Finally, we discuss the integration of the proposed estimation method in routing and demonstrate that our estimations can be beneficial, leading to the construction of routes that consistently last longer than routes that have been constructed based on the smoothed signal-to-noise ratio metric.
---
paper_title: Route-Lifetime Assessment Based Routing (RABR) Protocol for Mobile Ad-Hoc Networks
paper_content:
A photometric device in which the photographic field is divided into a plurality of photometric regions, photometric outputs are obtained from each region, the difference between the maximum and minimum values among the photometric outputs is compared with the latitude of the film being used, and one of a highlight reference exposure, average exposure and shadow exposure is selected according to the results of the comparison operation. Accordingly, the available latitude of the film being used is employed maximally.
---
paper_title: The Optimized Link State Routing Protocol Evaluation through Experiments and Simulation
paper_content:
In this paper, we describe the Optimized Link State Routing Protocol (OLSR) [1] for Mobile Ad-hoc NETworks (MANETs) and the evaluation of this protocol through experiments and simulations. In particular, we emphasize the practical tests and intensive simulations, which have been used in guiding and evaluating the design of the protocol, and which have been a key to identifying both problems and solutions. OLSR is a proactive link-state routing protocol, employing periodic message exchange for updating topological information in each node in the network. That is, topological information is flooded to all nodes in the network. Conceptually, OLSR contains three elements: Mechanisms for neighbor sensing based on periodic exchange of HELLO messages within a node’s neighborhood. Generic mechanisms for efficient flooding of control traffic into the network employing the concept of multipoint relays (MPRs) [5] for a significant reduction of duplicate retransmissions during the flooding process. And a specification of a set of control-messages providing each node with sufficient topological information to be able to compute an optimal route to each destination in the network using any shortest-path algorithm. Experimental work, running a test-network of laptops with IEEE 802.11 wireless cards, revealed interesting properties. While the protocol, as originally specified, works quite well, it was found that enforcing “jitter” on the interval between the periodic exchange of control messages in OLSR and piggybacking said control messages into a single packet significantly reduced the number of messages lost due to collisions. It was also observed that under certain conditions a “naive” neighbor sensing mechanism was insufficient: a bad link between two nodes (e.g. when two nodes are on the edge of radio range) might on occasion transmit a HELLO message in both directions (hence enabling the link for routing), while not being able to sustain continuous traffic. This would result in “route-flapping” and temporary loss of connectivity. With the experimental results as a basis, we have been deploying simulations to reveal the impact of the various algorithmic improvements described above.
---
paper_title: Mobility Prediction Progressive Routing (MP2R), a Cross-Layer Design for Inter-Vehicle Communication
paper_content:
In this paper we analyze the characteristics of vehicle mobility and propose a novel Mobility Prediction Progressive Routing (MP2R) protocol for Inter-Vehicle Communication (IVC) that is based on cross-layer design. MP2R utilizes the additional gain provided by the directional antennas to improve link quality and connectivity; interference is reduced by the directional transmission. Each node learns its own position and speed and that of other nodes, and performs position prediction. (i) With the predicted progress and link quality, the forwarding decision of a packet is locally made, just before the packet is actually transmitted. In addition the load at the forwarder is considered in order to avoid congestion. (ii) The predicted geographic direction is used to control the beam of the directional antenna. The proposed MP2R protocol is especially suitable for forwarding burst traffic in highly mobile environments. Simulation results show that MP2R effectively reduces Packet Error Ratio (PER) compared with both topology-based routing (AODV [1], FSR [2]) and normal progressive routing (NADV [18]) in the IVC scenarios.
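A hedged sketch of the two ingredients this abstract combines, constant-velocity position prediction and a progress/link-quality forwarding score, is given below; the prediction model, weights, and neighbor table are assumptions for illustration, not MP2R's published algorithm.

```python
# Hedged sketch of position prediction plus progress-based forwarding, loosely in
# the spirit of MP2R. The constant-velocity model, the scoring weights, and the
# sample neighbor table are illustrative assumptions.
import math

def predict(pos, vel, dt):
    """Predict a node's position dt seconds ahead under constant velocity."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def progress(sender, neighbor, dest):
    """Reduction in distance to the destination achieved by using this neighbor."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return d(sender, dest) - d(neighbor, dest)

def choose_forwarder(sender_pos, dest_pos, neighbors, dt=0.5, w_link=10.0):
    """Pick the neighbor maximizing predicted progress plus a link-quality bonus."""
    best, best_score = None, float("-inf")
    for node_id, (pos, vel, link_quality) in neighbors.items():
        pred = predict(pos, vel, dt)                      # where the neighbor will be
        score = progress(sender_pos, pred, dest_pos) + w_link * link_quality
        if score > best_score:
            best, best_score = node_id, score
    return best, best_score

if __name__ == "__main__":
    neighbors = {
        "A": ((50.0, 0.0), (15.0, 0.0), 0.9),   # moving towards the destination, good link
        "B": ((60.0, 5.0), (-5.0, 0.0), 0.4),   # similar position, drifting backwards, weaker link
    }
    print(choose_forwarder((0.0, 0.0), (300.0, 0.0), neighbors))
```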
---
paper_title: Ad hoc on-demand multipath distance vector routing
paper_content:
We present AOMDV, an on-demand multipath distance vector protocol for mobile ad hoc networks. AOMDV is based on a prominent on-demand single path protocol called AODV. AOMDV establishes multiple loop-free and link-disjoint paths. Performance comparison of AOMDV with AODV using ns-2 simulations under varying node speeds shows that AOMDV provides a factor of two improvement in delay and about 20% reduction in routing overhead, while having similar packet delivery fraction.
---
paper_title: A Cross-Layer AOMDV Routing Protocol for V2V Communication in Urban VANET
paper_content:
Vehicular Ad hoc Network (VANET) is a special class of wireless mobile communication network. For vehicle-to-vehicle (V2V) communication, suitable routing protocols are needed. A routing metric combining hop counts and retransmission counts at MAC layer is proposed with consideration of link quality and delay reduction. Based on the new routing metric, a cross-layer Ad hoc On-demand Multipath Distance Vector with retransmission counts metric (R-AOMDV) routing protocol is designed to make use of advantages of multi-path routing protocol, such as decrease of route discovery frequency. Compared with AOMDV with minimum hop-count metric, simulation results show that R-AOMDV achieves better performance with Pareto On/Off distribution traffic model in urban VANET, no matter in sparse or dense scenarios.
---
paper_title: VADD: Vehicle-Assisted Data Delivery in Vehicular Ad Hoc Networks
paper_content:
Multihop data delivery through vehicular ad hoc networks is complicated by the fact that vehicular networks are highly mobile and frequently disconnected. To address this issue, we adopt the idea of carry and forward, where a moving vehicle carries a packet until a new vehicle moves into its vicinity and forwards the packet. Being different from existing carry and forward solutions, we make use of predictable vehicle mobility, which is limited by traffic pattern and road layout. Based on the existing traffic pattern, a vehicle can find the next road to forward the packet to reduce the delay. We propose several vehicle-assisted data delivery (VADD) protocols to forward the packet to the best road with the lowest data-delivery delay. Experimental results show that the proposed VADD protocols outperform existing solutions in terms of packet-delivery ratio, data packet delay, and protocol overhead. Among the proposed VADD protocols, the hybrid probe (H-VADD) protocol has a much better performance.
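The following is a simplified, assumption-laden sketch of the core VADD idea of comparing roads by estimated delivery delay (wireless forwarding when the road is well connected, physical carrying otherwise); the delay model and the numbers are illustrative, not the paper's exact formulation.

```python
# Illustrative estimate of per-road packet delivery delay used to pick the next
# road at an intersection, in the spirit of VADD's carry-and-forward idea.
# The delay model and the sample values below are simplifying assumptions.

def road_delay(length_m, vehicle_density_per_m, radio_range_m=250.0,
               wireless_hop_delay_s=0.02, avg_speed_mps=12.0):
    """Rough expected delay along one road segment.

    If vehicles are dense enough to be within radio range of each other, the
    packet is mostly forwarded wirelessly; otherwise it is carried at vehicle speed."""
    avg_gap = 1.0 / vehicle_density_per_m if vehicle_density_per_m > 0 else float("inf")
    if avg_gap <= radio_range_m:
        hops = max(1, round(length_m / radio_range_m))
        return hops * wireless_hop_delay_s
    return length_m / avg_speed_mps  # carry the packet physically

if __name__ == "__main__":
    candidate_roads = {
        "main_street": (800.0, 0.02),   # 800 m, one vehicle per 50 m: well connected
        "side_street": (500.0, 0.002),  # 500 m, one vehicle per 500 m: sparse
    }
    delays = {r: road_delay(*p) for r, p in candidate_roads.items()}
    print(delays, "-> forward towards", min(delays, key=delays.get))
```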
---
paper_title: Urban multi-hop broadcast protocol for inter-vehicle communication systems
paper_content:
Inter-Vehicle Communication Systems rely on multi-hop broadcast to disseminate information to locations beyond the transmission range of individual nodes. Message dissemination is especially difficult in urban areas crowded with tall buildings because of the line-of-sight problem. In this paper, we propose a new efficient IEEE 802.11 based multi-hop broadcast protocol (UMB) which is designed to address the broadcast storm, hidden node, and reliability problems of multi-hop broadcast in urban areas. This protocol assigns the duty of forwarding and acknowledging the broadcast packet to only one vehicle by dividing the road portion inside the transmission range into segments and choosing the vehicle in the furthest non-empty segment without a priori topology information. When there is an intersection in the path of the message dissemination, new directional broadcasts are initiated by the repeaters located at the intersections. We have shown through simulations that our protocol has a very high success rate and efficient channel utilization when compared with other flooding based protocols.
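A small sketch of the segment-based forwarder selection described above follows; the segment count and vehicle positions are made-up example values.

```python
# Sketch of UMB-style forwarder selection: the road inside the transmission range
# is divided into equal segments and a vehicle in the furthest non-empty segment
# becomes the forwarder. Segment count and positions are illustrative assumptions.

def pick_forwarder(sender_pos, tx_range, vehicle_positions, n_segments=4):
    """Return the position of a vehicle in the furthest non-empty segment ahead."""
    seg_len = tx_range / n_segments
    segments = {}
    for pos in vehicle_positions:
        offset = pos - sender_pos
        if 0 < offset <= tx_range:
            segments.setdefault(int((offset - 1e-9) // seg_len), []).append(pos)
    if not segments:
        return None
    furthest = max(segments)        # index of the furthest non-empty segment
    return max(segments[furthest])  # furthest vehicle inside that segment

if __name__ == "__main__":
    print(pick_forwarder(sender_pos=0.0, tx_range=400.0,
                         vehicle_positions=[35.0, 120.0, 180.0, 310.0, 455.0]))
    # -> 310.0 (the vehicle at 455.0 is outside the transmission range)
```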
---
paper_title: PROMPT: A cross-layer position-based communication protocol for delay-aware vehicular access networks
paper_content:
Vehicular communication systems facilitate communication devices for exchange of information among vehicles and between vehicles and roadside equipment. These systems are used to provide a myriad of services ranging from traffic safety application to convenience applications for drivers and passengers. In this paper, we focus on the design of communication protocols for vehicular access networks where vehicles access a wired backbone network by means of a multi-hop data delivery service. Key challenges in designing protocols for vehicular access networks include quick adaptability to frequent changes in the network topology due to vehicular mobility and delay awareness in data delivery. To address these challenges, we propose a cross-layer position-based delay-aware communication protocol called PROMPT. It adopts a source routing mechanism that relies on positions independent of vehicle movement rather than on specific vehicle addresses. Vehicles monitor information exchange in their reception range to obtain data flow statistics, which are then used in estimating the delay and selecting best available paths. Through a detailed simulation study using ns-2, we empirically show that PROMPT outperforms existing routing protocols proposed for vehicular networks in terms of end-to-end packet delay, packet loss rate, and fairness of service.
---
paper_title: CLWPR — A novel cross-layer optimized position based routing protocol for VANETs
paper_content:
In this paper, we propose a novel position-based routing protocol designed to anticipate the characteristics of an urban VANET environment. The proposed algorithm utilizes the prediction of the node's position and navigation information to improve the efficiency of routing protocol in a vehicular network. In addition, we use the information about link layer quality in terms of SNIR and MAC frame error rate to further improve the efficiency of the proposed routing protocol. This in particular helps to decrease end-to-end delay. Finally, carry-n-forward mechanism is employed as a repair strategy in sparse networks. It is shown that use of this technique increases packet delivery ratio, but increases end-to-end delay as well and is not recommended for QoS constraint services. Our results suggest that compared with GPSR, our proposal demonstrates better performance in the urban environment.
---
paper_title: A multi-hop cross layer decision based routing for VANETs
paper_content:
In recent years, vehicular ad-hoc networks have emerged as a key wireless technology offering countless new services and applications for the transport community. Along with many interesting and useful applications, there have been a number of design challenges to create an efficient and reliable routing scheme. A conventional design approach only optimizes routing schemes without considering the constraints from other network layers. This may result in an under-performing routing mechanism. In this paper we present the design of a multi-hop cross-layer routing scheme that utilises beaconing information at the physical layer as well as queue buffer information at medium access control layer to optimise routing objectives. In particular, the proposed scheme integrates channel quality information and queuing information from other layers to transmit data. Using simulations as well as analytical studies we have presented results of our proposed scheme and have done a thorough comparison with existing approaches in this area. The results highlight better performance of the proposed cross-layer structure as compared to other conventional single layer approaches.
---
paper_title: A Self-Adaptive and Link-Aware Beaconless Forwarding Protocol for VANETs
paper_content:
With the development of networks, Vehicular Ad-hoc Networks (VANETs) which act as the emerging application enhance the potential power of networks on the traffic safety and the entertainment. However, the high mobility and the dynamic nature of VANETs lead to the unreliable link which causes unreachable transmission and degrades the performance of the routing protocol in terms of the quality of experience (QoE). To provide a reliable routing for the data transmission, a self-adaptive and link-aware beaconless forwarding (SLBF) protocol is proposed. Based on the receiver based forwarding (RBF) scheme, SLBF designs a self-adaptive forwarding zone which is used to make the candidate nodes accurate. Furthermore, it proposes a comprehensive algorithm to calculate the waiting time by taking the greedy strategy, link quality, and the traffic load into account. With the NS-2 simulator, the performance of SLBF is demonstrated. The results show that the SLBF makes a great improvement in the delivery ratio, the end-to-end delay, and the average hops.
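As a rough illustration of the timer-based, beaconless forwarding idea, the sketch below computes a waiting time from normalized progress, link quality, and load; the weights and candidate values are assumptions rather than SLBF's actual formula.

```python
# Hedged sketch of a beaconless, timer-based forwarding decision: every receiver
# of a packet computes a waiting time from its greedy progress, link quality, and
# local traffic load, and the node whose timer fires first rebroadcasts, suppressing
# the others. Weights and sample values are assumptions, not the protocol's algorithm.

def waiting_time(progress_frac, link_quality, load_frac, t_max=0.01):
    """Shorter wait for candidates with more progress, better links, less load.

    progress_frac, link_quality, and load_frac are all normalized to [0, 1]."""
    desirability = 0.5 * progress_frac + 0.3 * link_quality + 0.2 * (1.0 - load_frac)
    return t_max * (1.0 - desirability)

if __name__ == "__main__":
    candidates = {
        "C1": (0.9, 0.6, 0.2),   # large progress towards the destination
        "C2": (0.5, 0.9, 0.1),   # shorter progress but excellent link
        "C3": (0.8, 0.3, 0.8),   # good progress, poor link, congested
    }
    timers = {c: waiting_time(*v) for c, v in candidates.items()}
    for c, t in sorted(timers.items(), key=lambda kv: kv[1]):
        print(f"{c}: waits {t * 1000:.2f} ms")
    # The candidate with the smallest timer forwards; the rest cancel on overhearing it.
```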
---
paper_title: GPSR: greedy perimeter stateless routing for wireless networks
paper_content:
We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.
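A minimal sketch of GPSR's greedy mode follows; perimeter-mode recovery is only indicated by returning None at a local maximum, and the coordinates are illustrative.

```python
# Minimal sketch of GPSR's greedy mode: forward to the neighbor geographically
# closest to the destination, and signal a local maximum (where perimeter-mode
# recovery would take over) if no neighbor is closer than the current node.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(self_pos, dest_pos, neighbors):
    """Return the id of the neighbor closest to dest, or None at a local maximum."""
    best_id, best_d = None, dist(self_pos, dest_pos)
    for node_id, pos in neighbors.items():
        d = dist(pos, dest_pos)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id  # None means: switch to perimeter (face) routing for recovery

if __name__ == "__main__":
    neighbors = {"A": (120.0, 40.0), "B": (90.0, -10.0), "C": (60.0, 80.0)}
    print(greedy_next_hop((100.0, 0.0), (500.0, 0.0), neighbors))  # -> "A"
```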
---
paper_title: An Improved Vehicular Ad Hoc Routing Protocol for City Environments
paper_content:
The fundamental component for the success of VANET (vehicular ad hoc networks) applications is routing since it must efficiently handle rapid topology changes and a fragmented network. Current MANET (mobile ad hoc networks) routing protocols fail to fully address these specific needs especially in a city environments (nodes distribution, constrained but high mobility patterns, signal transmissions blocked by obstacles, etc.). In our current work, we propose an inter-vehicle ad-hoc routing protocol called GyTAR (improved greedy traffic aware routing protocol) suitable for city environments. GyTAR consists of two modules: (i) dynamic selection of the junctions through which a packet must pass to reach its destination, and (ii) an improved greedy strategy used to forward packets between two junctions. In this paper, we give detailed description of our approach and present its added value compared to other existing vehicular routing protocols. Simulation results show significant performance improvement in terms of packet delivery ratio, end-to-end delay, and routing overhead.
---
paper_title: A Street-Centric Routing Protocol Based on Microtopology in Vehicular Ad Hoc Networks
paper_content:
In a vehicular ad hoc network (VANET), high mobility and uneven distribution of vehicles are important factors affecting the performance of routing protocols. The high mobility may cause frequent changes of network topology, whereas the uneven distribution of vehicles may lead to routing failures due to network partition; even high density of vehicles may cause severe wireless channel contentions in an urban environment. In this paper, we propose a novel concept called the microtopology (MT), which consists of vehicles and wireless links among vehicles along a street as a basic component of routing paths and even the entire network topology. We abstract the MT model reflecting the dynamic routing-related characteristics in practical urban scenarios along streets, including the effect of mobility of vehicles, signal fading, wireless channel contention, and existing data traffic. We first analyze the endside-to-endside routing performance in an MT as a basis of routing decision. Then, we propose a novel street-centric routing protocol based on MT (SRPMT) along the streets for VANETs. Simulation results show that our proposed SRPMT protocol achieves higher data delivery rate and shorter average end-to-end delay compared with the performance of greedy perimeter stateless routing (GPSR) and greedy traffic-aware routing (GyTAR).
---
paper_title: On Alleviating Beacon Overhead in Routing Protocols for Urban VANETs
paper_content:
Vehicular ad hoc networks (VANETs) have been attracting increasing research interests for the past decade. To address the routing problem, many protocols have been proposed in the past several years. Routing protocols for VANETs, mostly based on the ideas of “Geographical Routing” (or geo-routing for short), typically have nodes periodically broadcast one-hop beacon messages to reveal their positions to neighbors. Nevertheless, packet loss and thus deterioration of routing performance in these protocols are anticipated in urban areas due to high density of vehicles in the network. In this paper, we propose two new VANET routing protocols, namely, Routing Protocol with Beacon Control (RPBC) and Routing Protocol with BeaconLess (RPBL), to alleviate packet losses. In RPBC, each vehicle determines whether to transmit a beacon message based on a new beacon control scheme proposed in this paper, which by minimizing redundant beacon messages reduces transmission overhead significantly. On the other hand, RPBL is a beaconless protocol where a node broadcasts a packet to its neighboring nodes and transmits packet via multiple paths to achieve high delivery ratio. Moreover, as packets in geo-routing protocols include the location of the sender, it can be used for routing without heavily relying on beacons. Accordingly, we propose the idea of virtual beacons and use it to further improve our proposed protocols. We conduct comprehensive experiments by simulation to validate our ideas and evaluate the proposed protocols. The simulation results show that our proposals can achieve high delivery ratios, short delays, and small overhead.
---
paper_title: Application of Cognitive Techniques to Adaptive Routing for VANETs in City Environments
paper_content:
The evolution of smart vehicles has widened the application opportunities for vehicular ad hoc networks. In this context, the routing issue is still one of the main challenges regarding to the performance of the network. Although there are multiple ad hoc routing proposals, the traditional general-purpose approaches do not fit the distinctive properties of vehicular network environments. New routing strategies must complement the existing protocols to improve their performance in vehicular scenarios. This paper introduces a novel intelligent routing technique that makes decisions in order to adaptively adjust its operation and obtain a global benefit. The nodes sense the network locally and collect information to feed the cognitive module which will select the best routing strategy, without the need of additional protocol message dissemination or convergence mechanism.
---
paper_title: Cross-layer multi-hop wireless routing for inter-vehicle communication
paper_content:
Ad-hoc networking provides a cost-effective support structure for inter-vehicle communication. A decentralized peer-to-peer information dissemination architecture is well suited for automotive applications that need to exchange data having local relevance. Routing, however, is a challenge in a vehicular scenario because of the associated dynamism in network topology and variations in driving conditions. In this work we present a cross-layer ad-hoc routing approach based on link connectivity assessment in the network topology. We suggest a framework for proactive enhancements to the optimized link state routing (OLSR) protocol and implement the proposed measures within the protocol format. We further deploy an IEEE 802.11b based vehicular network and demonstrate the effectiveness of link-quality assessment based enhancements in improving the performance of inter-vehicle ad-hoc routing. Through actual test-runs, we show that the enhanced protocol is more responsive to variations in network connectivity and can take preemptive actions in choosing stable and durable routes. The routing methodology suggested in this work leverages cross-layer interactions among the networking, data-link, and physical layers, for enhanced adaptability to varying network topology and link states. The main contributions of this work are as follows: introduction of link-quality assessment methodology for enhanced adaptability of ad-hoc routing in a dynamically changing topology, delineation of the framework of a proactive topology-adaptive ad-hoc routing protocol in a vehicular scenario, and demonstration of effectiveness of the proposed routing enhancements in an IEEE 802.11b based vehicular test-bed.
---
paper_title: CLWPR — A novel cross-layer optimized position based routing protocol for VANETs
paper_content:
In this paper, we propose a novel position-based routing protocol designed to anticipate the characteristics of an urban VANET environment. The proposed algorithm utilizes the prediction of the node's position and navigation information to improve the efficiency of routing protocol in a vehicular network. In addition, we use the information about link layer quality in terms of SNIR and MAC frame error rate to further improve the efficiency of the proposed routing protocol. This in particular helps to decrease end-to-end delay. Finally, carry-n-forward mechanism is employed as a repair strategy in sparse networks. It is shown that use of this technique increases packet delivery ratio, but increases end-to-end delay as well and is not recommended for QoS constraint services. Our results suggest that compared with GPSR, our proposal demonstrates better performance in the urban environment.
---
paper_title: A Cross-Layer AOMDV Routing Protocol for V2V Communication in Urban VANET
paper_content:
Vehicular Ad hoc Network (VANET) is a special class of wireless mobile communication network. For vehicle-to-vehicle (V2V) communication, suitable routing protocols are needed. A routing metric combining hop counts and retransmission counts at MAC layer is proposed with consideration of link quality and delay reduction. Based on the new routing metric, a cross-layer Ad hoc On-demand Multipath Distance Vector with retransmission counts metric (R-AOMDV) routing protocol is designed to make use of advantages of multi-path routing protocol, such as decrease of route discovery frequency. Compared with AOMDV with minimum hop-count metric, simulation results show that R-AOMDV achieves better performance with Pareto On/Off distribution traffic model in urban VANET, no matter in sparse or dense scenarios.
---
paper_title: Cross-Layer Contextual Interactions in Wireless Networks
paper_content:
Future wireless networks should facilitate efficient communication for rapidly growing bandwidth intensive applications. The existing layered protocol stack lacks efficiency in handling such applications. Cross-layer interactions can operate within the existing protocol stack and are a promising solution. In this article we present a survey of cross-layer interactions in wireless networks. We classify the cross-layer solutions based on the nature of the adaptation using a systematic evaluation of existing approaches. We identify critical criteria applicable to generic cross-layer framework design. Further, we analyze the existing generic cross-layer frameworks and qualitatively compare them based on the identified criteria. Context awareness is an essential and important element of future pervasive and wireless technologies. We propose to consider the context awareness as an essential and important aspect in cross-layer interactions and adaptations. We discuss context parameters with respect to adaptations that can be enabled at the layered protocol stack. Finally, we discuss open research challenges and gaps to be filled in cross-layer interactions in wireless networks.
---
paper_title: A Survey of Cross-Layer Designs in Wireless Networks
paper_content:
The strict boundary of the five layers in the TCP/IP network model provides the information encapsulation that enables the standardizing of network communications and makes the implementation of networks convenient in terms of abstract layers. However, the encapsulation results in some side effects, including compromise of QoS, latency, extra overload, etc. Therefore, to mitigate the side effect of the encapsulation between the abstract layers in the TCP/IP model, a number of cross-layer designs have been proposed. Cross-layer designs allow information sharing among all of the five layers in order to improve the wireless network functionality, including security, QoS, and mobility. In this article, we classify cross-layer designs by two ways. On the one hand, by how to share information among the five layers, cross-layer designs can be classified into two categories: non-manager method and manager method. On the other hand, by the organization of the network, cross-layer designs can be classified into two categories: centralized method and distributed method. Furthermore, we summarize the challenges of the cross-layer designs, including coexistence, signaling, the lack of a universal cross-layer design, and the destruction of the layered architecture.
---
paper_title: Wireless Ad Hoc and Sensor Networks: A Cross-Layer Design Perspective
paper_content:
Wireless Ad Hoc and Sensor Networks: A Cross-Layer Design Perspective deals with the emerging design trend that transcends traditional communication layers for performance gains in ad hoc and sensor networks. The author explores the current state of the art in cross-layer approaches for ad hoc and sensor networks, providing a comprehensive design resource. ::: ::: The book offers a structured comparison and analysis of both layered and cross-layer design, providing readers with an overview of the many issues relating to ad hoc and sensor networks. The benefits of these cross-layer approaches are examined through three diverse case studies: a monitoring sensor network using Radio Frequency waves, an ad hoc network that uses Ultra Wide Band Radio, and an acoustic underwater sensor network for environmental monitoring. ::: ::: Wireless Ad Hoc and Sensor Networks: A Cross-Layer Design Perspective is interdisciplinary in character, and should be of value to software engineers, hardware engineers, application developers, network protocol designers, graduate students, communication engineers, systems engineers, and university professors. ::: ::: [From publisher's website]
---
paper_title: Cross-layer design: a survey and the road ahead
paper_content:
Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.
---
paper_title: Harnessing cross-layer-design
paper_content:
Applications and protocols for wireless and mobile systems have to deal with volatile environmental conditions such as interference, packet loss, and mobility. Utilizing cross-layer information from other protocols and system components such as sensors can improve their performance and responsiveness. However, application and protocol developers lack a convenient way of specifying, monitoring, and experimenting with optimizations to evaluate their cross-layer ideas. We present crawler, a novel experimentation architecture for system monitoring and cross-layer-coordination that facilitates evaluation of applications and wireless protocols. It alleviates the problem of complicated access to relevant system information by providing a unified interface to application, protocol and system information. The versatile design of this interface further enables a convenient and declarative way to specify and experiment with compositions of cross-layer optimizations and their adaptions at runtime. crawler also provides the necessary support to detect cross-layer conflicts, and hence prevents performance degradation when multiple optimizations are enabled across the protocol stack. We demonstrate the usability of crawler for system monitoring and cross-layer optimizations with three use cases from different areas of wireless networking.
---
| Title: Routing in Vehicular Ad-hoc Networks: A Survey on Single- and Cross-Layer Design Techniques, and Perspectives
Section 1: INTRODUCTION
Description 1: This section discusses recent advancements in wireless communication technologies, the development of transport safety approaches in intelligent transportation systems (ITS), and the unique characteristics and challenges of VANETs.
Section 2: MOTIVATION
Description 2: This section outlines the motivation for the study of VANETs, the challenges posed by their unique characteristics, and the necessity of cross-layer design to address these challenges.
Section 3: EXISTING WORK
Description 3: This section provides a summary of recent survey work on VANETs, their routing protocols, and the limitations of these surveys in focusing on cross-layer routing protocols.
Section 4: KEY HIGHLIGHTS
Description 4: This section summarizes the key contributions of the article, including overviews of VANETs, routing challenges, classification of single-layer and cross-layer routing protocols, and open research issues.
Section 5: OVERVIEW OF VANETs
Description 5: This section presents an overview of VANETs, including their applications in ITS, system architecture, networking requirements, and unique characteristics and challenges.
Section 6: CHALLENGES OF ROUTING IN VANETs
Description 6: This section highlights the differences between VANET and MANET, followed by a discussion of various routing challenges and design alternatives specific to VANET.
Section 7: SINGLE-LAYER ROUTING IN VANETs
Description 7: This section classifies single-layer routing protocols in VANETs into topology-based and geographic routing, followed by further classification of geographic routing protocols based on routing mechanism and geographic metric.
Section 8: ISSUES IN TRADITIONAL SINGLE-LAYER ROUTING
Description 8: This section outlines the limitations of traditional single-layer routing techniques, such as congestion and interference, and discusses the need for cross-layer routing protocols.
Section 9: CROSS-LAYER ROUTING PARAMETERS
Description 9: This section discusses the benefits of cross-layer routing, including the integration of parameters from the PHY, MAC, and NET layers to enhance routing decisions and network performance.
Section 10: CROSS-LAYER ROUTING PROTOCOLS
Description 10: This section reviews various cross-layer routing protocols, highlighting their mechanisms, routing parameters, and limitations.
Section 11: DISCUSSION AND OPEN ISSUES
Description 11: This section discusses the open research issues related to single-layer and cross-layer routing approaches, including beacon-based vs. beaconless routing, dense vs. sparse network conditions, route establishment vs. hop-by-hop forwarding, and cross-layer design challenges.
Section 12: CONCLUSION
Description 12: This section concludes the survey, reiterating the importance of cross-layer information for routing in VANETs and emphasizing the open research issues that need to be addressed to develop efficient routing protocols. |
A Survey of P2P Network Security | 10 | ---
paper_title: P2P Networks for Content Sharing
paper_content:
Peer-to-peer (P2P) technologies have been widely used for content sharing, popularly called "file-swapping" networks. This chapter gives a broad overview of content sharing P2P technologies. It starts with the fundamental concept of P2P computing followed by the analysis of network topologies used in peer-to-peer systems. Next, three milestone peer-to-peer technologies, Napster, Gnutella, and Fasttrack, are explored in detail and finally compared in a table in the last section.
---
paper_title: Why the Arpanet Was Built
paper_content:
The who, what, when, and how of the Arpanet is usually told in heroic terms-Licklider's vision, the fervor of his disciples, the dedication of computer scientists and engineers, the work of graduate students, and so forth. Told by one of the key actors in this salient part of US and Internet history, this article addresses why the Arpanet was built.
---
paper_title: P2P Networks for Content Sharing
paper_content:
Peer-to-peer (P2P) technologies have been widely used for content sharing, popularly called "file-swapping" networks. This chapter gives a broad overview of content sharing P2P technologies. It starts with the fundamental concept of P2P computing followed by the analysis of network topologies used in peer-to-peer systems. Next, three milestone peer-to-peer technologies, Napster, Gnutella, and Fasttrack, are explored in detail and finally compared in a table in the last section.
---
paper_title: Trusted computing: providing security for peer-to-peer networks
paper_content:
In this paper, we demonstrate the application of trusted computing to securing peer-to-peer (P2P) networks. We identify a central challenge in providing many of the security services within these networks, namely the absence of stable verifiable peer identities. We employ the functionalities provided by trusted computing technology to establish a pseudonymous authentication scheme for peers and extend this scheme to build secure channels between peers for future communications. In support of our work, we illustrate how commands from the trusted computing group (TCG) specifications can be used to implement our approach in P2P networks.
---
paper_title: Research of Trust Model Based on Peer-to-Peer Network Security
paper_content:
A P2P network is dynamic, self-organized, and anonymous, among other features. A P2P network cannot guarantee that all peers provide reliable resources and good service. Some peers even have malicious behavior, such as provision of false information, spread of illegal advertising, and dissemination of Trojans and worms. In order to solve these problems, the trust mechanism is introduced to P2P networks and a trust model is built to establish trust relationships between peers. The popular research directions were introduced. The major sorts and key technologies were analyzed, and the algorithms in recent research were summarized. Based on the analysis of the capabilities of the trust model, its shortcomings were presented. Finally, some directions for future research were discussed.
---
paper_title: Understanding the Properties of the BitTorrent Overlay
paper_content:
In this paper, we conduct extensive simulations to understand the properties of the overlay generated by BitTorrent. We start by analyzing how the overlay properties impact the efficiency of BitTorrent. We focus on the average peer set size (i.e., average number of neighbors), the time for a peer to reach its maximum peer set size, and the diameter of the overlay. In particular, we show that the later a peer arrives in a torrent, the longer it takes to reach its maximum peer set size. Then, we evaluate the impact of the maximum peer set size, the maximum number of outgoing connections per peer, and the number of NATed peers on the overlay properties. We show that BitTorrent generates a robust overlay, but that this overlay is not a random graph. In particular, the connectivity of a peer to its neighbors depends on its arriving order in the torrent. We also show that a large number of NATed peers significantly compromise the robustness of the overlay to attacks. Finally, we evaluate the impact of peer exchange on the overlay properties, and we show that it generates a chain-like overlay with a large diameter, which will adversely impact the efficiency of large torrents.
---
paper_title: Security in Peer-to-Peer Networks: Empiric Model of File Diffusion in BitTorrent
paper_content:
In this work we analyze the propagation of files in the BitTorrent network. The paper covers security problems in peer-to-peer networks and establishes a malware propagation model. We give an overview of existing models and their weaknesses and introduce a propagation (epidemiological) model based on real data and real user behavior in the peer-to-peer network BitTorrent. We describe our empirical epidemiological model in detail and propose some advanced strategies which can help in the fight against malware. Furthermore, we present our empirical model and its application.
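For illustration, a generic discrete-time SIR-style propagation sketch is given below; it is not the paper's fitted empirical model, and the rates and population size are assumptions.

```python
# Generic discrete-time SIR-style sketch of malware spreading through a
# file-sharing swarm. This is NOT the paper's fitted empirical model; the
# infection/removal rates and population size are illustrative assumptions.

def simulate(n_peers=10_000, infected0=10, beta=0.3, gamma=0.05, steps=100):
    """beta: infection rate per contact fraction, gamma: cleanup/removal rate."""
    s, i, r = n_peers - infected0, infected0, 0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / n_peers     # well-mixed contact assumption
        new_removals = gamma * i                    # peers cleaning up or leaving
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    hist = simulate()
    peak_step, (s, i, r) = max(enumerate(hist), key=lambda kv: kv[1][1])
    print(f"infection peaks at step {peak_step} with ~{i:.0f} infected peers")
```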
---
paper_title: Research of Trust Model Based on Peer-to-Peer Network Security
paper_content:
A P2P network is dynamic, self-organized, and anonymous, among other features. A P2P network cannot guarantee that all peers provide reliable resources and good service. Some peers even have malicious behavior, such as provision of false information, spread of illegal advertising, and dissemination of Trojans and worms. In order to solve these problems, the trust mechanism is introduced to P2P networks and a trust model is built to establish trust relationships between peers. The popular research directions were introduced. The major sorts and key technologies were analyzed, and the algorithms in recent research were summarized. Based on the analysis of the capabilities of the trust model, its shortcomings were presented. Finally, some directions for future research were discussed.
---
paper_title: BitTorrent traffic obfuscation: A chase towards semantic traffic identification
paper_content:
With the beginning of the 21st century emerging peer-to-peer networks ushered in a new era of large scale media exchange. Faced with ever increasing volumes of traffic, legal threats by copyright holders, and QoS demands of customers, network service providers are urged to apply traffic classification and shaping techniques. These systems usually are highly integrated to satisfy the harsh restrictions present in network infrastructure. They require constant maintenance and updates. Additionally, they have legal issues and violate both the net neutrality and end-to-end principles.
---
paper_title: Trusted computing: providing security for peer-to-peer networks
paper_content:
In this paper, we demonstrate the application of trusted computing to securing peer-to-peer (P2P) networks. We identify a central challenge in providing many of the security services within these networks, namely the absence of stable verifiable peer identities. We employ the functionalities provided by trusted computing technology to establish a pseudonymous authentication scheme for peers and extend this scheme to build secure channels between peers for future communications. In support of our work, we illustrate how commands from the trusted computing group (TCG) specifications can be used to implement our approach in P2P networks.
---
paper_title: Majority is not enough: bitcoin mining is vulnerable
paper_content:
The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. ::: ::: We show that the Bitcoin mining protocol is not incentive-compatible. We present an attack with which colluding miners' revenue is larger than their fair share. The attack can have significant consequences for Bitcoin: Rational miners will prefer to join the attackers, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency. ::: ::: Unless certain assumptions are made, selfish mining may be feasible for any coalition size of colluding miners. We propose a practical modification to the Bitcoin protocol that protects Bitcoin in the general case. It prohibits selfish mining by a coalition that command less than 1/4 of the resources. This threshold is lower than the wrongly assumed 1/2 bound, but better than the current reality where a coalition of any size can compromise the system.
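A simplified Monte-Carlo sketch of the selfish-mining strategy's revenue accounting is shown below; the state handling is coarse and the hashrate/tie-breaking parameters are illustrative, so it should be read as an approximation of the analysis rather than a reimplementation of it.

```python
# Monte-Carlo sketch of the selfish-mining strategy analyzed by Eyal & Sirer.
# Simplified for illustration: alpha (pool hashrate), gamma (share of honest
# miners that build on the pool's block during a tie), and the trial count are
# assumptions, and end-of-run edge cases are handled coarsely.
import random

def selfish_mining_share(alpha, gamma, blocks=200_000, seed=1):
    rng = random.Random(seed)
    pool_rev = honest_rev = 0
    lead = 0          # private-chain advantage of the selfish pool
    tie = False       # True while two equal-length branches compete
    for _ in range(blocks):
        if rng.random() < alpha:              # selfish pool finds the next block
            if tie:
                pool_rev += 2                 # wins the race: its old block + new one
                tie, lead = False, 0
            else:
                lead += 1                     # keep the new block private
        else:                                 # honest network finds the next block
            if tie:
                if rng.random() < gamma:
                    pool_rev += 1; honest_rev += 1   # honest built on the pool's block
                else:
                    honest_rev += 2                   # pool's block is orphaned
                tie, lead = False, 0
            elif lead == 0:
                honest_rev += 1
            elif lead == 1:
                tie, lead = True, 0           # pool publishes; equal-length race
            elif lead == 2:
                pool_rev += 2                 # publish both; honest block orphaned
                lead = 0
            else:
                pool_rev += 1                 # publish one block; keep the rest private
                lead -= 1
    return pool_rev / (pool_rev + honest_rev)

if __name__ == "__main__":
    alpha = 1 / 3
    print("fair share:", round(alpha, 3),
          "selfish share:", round(selfish_mining_share(alpha, gamma=0.5), 3))
```

Under these assumed parameters the printed selfish share exceeds the fair share, which is the qualitative point of the paper's argument.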
---
paper_title: The cryptoanarchists' answer to cash
paper_content:
There's nothing like a dollar bill for paying a stripper. Anonymous, yet highly personal-wherever you use it, that dollar will fit the occasion. Purveyors of Internet smut, after years of hiding charges on credit cards, or just giving it away for free, recently found their own version of the dollar-a new digital currency called Bitcoin.
---
paper_title: Accomplishing anonymity in peer to peer network
paper_content:
A key weakness of most existing P2P systems is the lack of anonymity. Without anonymity, it is possible for third parties to identify the participants involved. There are three basic anonymity requirements in a P2P system. First, an anonymous P2P system should make it impossible for third parties to identify the participants involved. Second, an anonymous P2P system should guarantee that only the content receiver knows the content. Third, an anonymous P2P system should allow the content publisher to plausibly deny that the content originated from him or her. In this paper, various P2P networking techniques and the design issues associated with their implementation are discussed.
---
paper_title: Security issues in peer-to-peer systems
paper_content:
Decentralization was an architectural principle of the Internet, but over a period of time, the Internet has changed significantly. Some even think the Internet is becoming so centralized that it is fundamentally broken. The term "peer-to-peer" (P2P) refers to a collection of various systems and applications based on such a decentralized architecture, which implies that a variety of security means and tools designed for client-server systems cannot be applied. The high risks involved in P2P systems draw attention to various security issues. The paper introduces various security issues, methods for protection, and suggestions for further enhancements of the security mechanisms in P2P systems.
---
| Title: A Survey of P2P Network Security
Section 1: Introduction
Description 1: Introduce the paper by discussing the background and significance of P2P networks.
Section 2: ARPANET and Early Networks
Description 2: Describe the antecedents to modern P2P networks, including ARPANET and the development of packet switching.
Section 3: Napster
Description 3: Discuss Napster as the first large P2P network application, including its central server model and security drawbacks.
Section 4: Gnutella
Description 4: Explain the decentralized architecture of Gnutella and the security and efficiency issues it faced.
Section 5: BitTorrent
Description 5: Describe the BitTorrent protocol, its decentralized nature, and how it addresses inefficiencies using a seeder and leecher model.
Section 6: Security Issues
Description 6: Enumerate and explain the various security vulnerabilities in P2P networks, such as leechers, social attacks, and DDoS attacks.
Section 7: Encryption Techniques
Description 7: Discuss the encryption methods used to secure P2P networks, including user traffic obfuscation and data encryption between peers.
Section 8: File Spreading Models
Description 8: Describe models used to understand and mitigate the spread of viruses and malware in P2P networks.
Section 9: Bitcoin
Description 9: Explain the decentralized nature of Bitcoin, the blockchain, and the security mechanisms involved, such as mining and digital signatures.
Section 10: Conclusions
Description 10: Summarize the history, security challenges, and future directions of P2P networks, emphasizing the importance of ongoing research in this field. |
Time models and cognitive processes: a review | 15 | ---
paper_title: On the Evolution of Memory: A Time for Clocks
paper_content:
What was the earliest engram? Biology has evolved to encode representations of past events, and in neuroscience, we are attempting to link experience-dependent changes in molecular signaling with cellular processes that ultimately lead to behavioral output. The theory of evolution has guided biological research for decades, and since phylogenetically conserved mechanisms drive circadian rhythms, these processes may serve as common predecessors underlying more complex behavioral phenotypes. For example, the cAMP/MAPK/CREB cascade is interwoven with the clock to trigger circadian output, and is also known to affect memory formation. Time-of-day dependent changes have been observed in long-term potentiation (LTP) within the mammalian suprachiasmatic nucleus and hippocampus, along with light-induced circadian phase resetting and fear conditioning behaviors. Together this suggests during evolution, similar processes underlying metaplasticity in more simple circuits may have been redeployed in higher-order brain regions. Therefore, this notion predicts a model that LTP and metaplasticity may exist in clock-forming circuits of lower-order species, through phylogenetically conserved pathways, leading to several testable hypotheses.
---
paper_title: Self-organized Neural Representation of Time
paper_content:
Time is crucially involved in most of the activities of humans and animals. However, the cognitive mechanisms that support experiencing and processing time remain largely unknown. In the present work we follow a self-organized connectionist modeling approach to study how time may be encoded in a neural network based cognitive system in order to provide suggestions for possible time processing mechanisms in the brain. A particularly interesting feature of our study regards the implementation of a single computational model to accomplish two different robotic behavioral tasks which assume diverse manipulation of time intervals. Examination of the implemented cognitive systems revealed that it is possible to integrate the main theoretical models of time representation existing today into a new and particularly effective theory that can sufficiently explain a series of neuroscientific observations.
---
paper_title: Evolution of temporal order in living organisms
paper_content:
Circadian clocks are believed to have evolved in parallel with the geological history of the earth, and have since been fine-tuned under selection pressures imposed by cyclic factors in the environment. These clocks regulate a wide variety of behavioral and metabolic processes in many life forms. They enhance the fitness of organisms by improving their ability to efficiently anticipate periodic events in their external environments, especially periodic changes in light, temperature and humidity. Circadian clocks provide fitness advantage even to organisms living under constant conditions, such as those prevailing in the depth of oceans or in subterranean caves, perhaps by coordinating several metabolic processes in the internal milieu. Although the issue of adaptive significance of circadian rhythms has always remained central to circadian biology research, it has never been subjected to systematic and rigorous empirical validation. A few studies carried out on free-living animals under field conditions and simulated periodic and aperiodic conditions of the laboratory suggest that circadian rhythms are of adaptive value to their owners. However, most of these studies suffer from a number of drawbacks such as lack of population-level replication, lack of true controls and lack of adequate control on the genetic composition of the populations, which in many ways limits the potential insights gained from the studies. The present review is an effort to critically discuss studies that directly or indirectly touch upon the issue of adaptive significance of circadian rhythms and highlight some shortcomings that should be avoided while designing future experiments.
---
paper_title: Cryptochromes Enabling Plants and Animals to Determine Circadian Time
paper_content:
Cryptochromes are flavin-containing blue light photoreceptors related to photolyases—they are found in both plants and animals and have recently been described for bacteria. In plants, cryptochromes perform a variety of functions including the entrainment of circadian rhythms. They serve a similar role in Drosophila and mammals, where the cryptochromes also perform an additional function as an essential component of the circadian clock.
---
paper_title: Explorations on Artificial Time Perception
paper_content:
In the field of biologically inspired cognitive systems, time perception, a fundamental aspect of natural cognition is not sufficiently explored. The majority of existing works ignore the importance of experiencing the flow of time, and the implemented agents are rarely furnished with time processing capacities. The current work aims at shedding light on this largely unexplored issue, focusing on the perception of temporal duration. Specifically, we investigate a rule switching task that consists of repeating trials with dynamic temporal lengths. An evolutionary process is employed to search for neuronal mechanisms that accomplish the underlying task and self-organize time-processing dynamics. Our repeated simulation experiments showed that the capacity of perceiving duration biases the functionality of neural mechanisms with other cognitive responsibilities and additionally that time perception and ordinary cognitive processes may share the same neural resources in the cognitive system. The obtained results are related with previous brain imaging studies on time perception, and they are used to formulate suggestions for the cortical representation of time in biological agents.
---
paper_title: An integrated theory of prospective time interval estimation: The role of cognition, attention and learning
paper_content:
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary tasks on attention). The authors subsequently used the module as a part of the adaptive control of thought-rational (ACT-R) architecture to model a new experiment that combines attention, learning, dual tasking, and time perception. Finally, the model predicts time estimation, learning, and attention in a new experiment. The model predictions and fits demonstrate that the proposed integrated theory of prospective time interval estimation explains detailed effects of attention and learning during time interval estimation.
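A hedged sketch of the pacemaker-accumulator style of timing mechanism such a module builds on is given below: tick length grows geometrically with noise and a duration is encoded as a tick count; the constants are illustrative assumptions, not the published parameter values.

```python
# Hedged sketch of a noisy pacemaker-accumulator of the kind an integrated
# timing module builds on: successive "ticks" get geometrically longer, with
# noise, and a duration is encoded as the number of ticks accumulated.
# The constants below are illustrative assumptions, not the published values.
import random

def ticks_for_duration(duration_s, t0=0.011, a=1.1, noise_sd=0.015, rng=random):
    """Count how many pacemaker ticks fit into duration_s."""
    t, elapsed, n = t0, 0.0, 0
    while elapsed + t <= duration_s:
        elapsed += t
        n += 1
        # the next tick is a bit longer, with multiplicative noise
        t = a * t * max(0.1, 1.0 + rng.gauss(0.0, noise_sd))
    return n

if __name__ == "__main__":
    random.seed(0)
    for d in (0.3, 0.6, 1.2):
        estimates = [ticks_for_duration(d) for _ in range(1000)]
        print(f"{d:.1f} s -> mean ticks {sum(estimates) / len(estimates):.1f}")
    # Geometric tick growth yields a Weber-like property: doubling the interval
    # adds roughly a constant number of ticks.
```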
---
paper_title: The inner sense of time: how the brain creates a representation of duration
paper_content:
A large number of competing models exist for how the brain creates a representation of time. However, several human and animal studies point to 'climbing neural activation' as a potential neural mechanism for the representation of duration. Neurophysiological recordings in animals have revealed how climbing neural activation that peaks at the end of a timed interval underlies the processing of duration, and, in humans, climbing neural activity in the insular cortex, which is associated with feeling states of the body and emotions, may be related to the cumulative representation of time.
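A minimal sketch of the climbing-activation idea, read as a noisy ramp to threshold whose crossing time marks the end of the interval, is shown below; the ramp rates, noise level, and threshold are illustrative assumptions.

```python
# Minimal sketch of the "climbing neural activation" idea: activity ramps up
# during a timed interval and crosses a threshold at the end of the interval.
# The ramp rates, noise level, and threshold are illustrative assumptions, not
# values taken from the reviewed studies.
import random

def time_to_threshold(ramp_rate, threshold=1.0, dt=0.001, noise_sd=0.02, rng=random):
    """Integrate noisy climbing activity until it reaches the threshold."""
    activity, t = 0.0, 0.0
    while activity < threshold:
        activity += ramp_rate * dt + rng.gauss(0.0, noise_sd) * (dt ** 0.5)
        t += dt
    return t

if __name__ == "__main__":
    random.seed(42)
    # A steeper ramp reproduces a shorter timed interval, a shallow ramp a longer one.
    for rate in (2.0, 1.0, 0.5):
        trials = [time_to_threshold(rate) for _ in range(200)]
        print(f"ramp rate {rate}: mean interval {sum(trials) / len(trials):.2f} s")
```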
---
paper_title: Cortical Networks Underlying Mechanisms of Time Perception
paper_content:
Precise timing of sensory information from multiple sensory streams is essential for many aspects of human perception and action. Animal and human research implicates the basal ganglia and cerebellar systems in timekeeping operations, but investigations into the role of the cerebral cortex have been limited. Individuals with focal left (LHD) or right hemisphere (RHD) lesions and control subjects performed two time perception tasks (duration perception, wherein the standard tone pair interval was 300 or 600 msec) and a frequency perception task, which controlled for deficits in time-independent processes shared by both tasks. When frequency perception deficits were controlled, only patients with RHD showed time perception deficits. Time perception competency was correlated with an independent test of switching nonspatial attention in the RHD but not the LHD patients, despite attention deficits in both groups. Lesion overlays of patients with RHD and impaired timing showed that 100% of the patients with anterior damage had lesions in premotor and prefrontal cortex (Brodmann areas 6, 8, 9, and 46), and 100% with posterior damage had lesions in the inferior parietal cortex. All LHD patients with normal timing had damage in these same regions, whereas few, if any, RHD patients with normal timing had similar lesion distributions. These results implicate a right hemisphere prefrontal–inferior parietal network in timing. Time-dependent attention and working memory functions may contribute to temporal perception deficits observed after damage to this network.
---
paper_title: The neural representation of time
paper_content:
This review summarizes recent investigations of temporal processing. We focus on motor and perceptual tasks in which crucial events span hundreds of milliseconds. One key question concerns whether the representation of temporal information is dependent on a specialized system, distributed across a network of neural regions, or computed in a local task-dependent manner. Consistent with the specialized system framework, the cerebellum is associated with various tasks that require precise timing. Computational models of timing mechanisms within the cerebellar cortex are beginning to motivate physiological studies. Emphasis has also been placed on the basal ganglia as a specialized timing system, particularly for longer intervals. We outline an alternative hypothesis in which this structure is associated with decision processes.
---
paper_title: Spatial–temporal interactions in the human brain
paper_content:
The review summarises current evidence on the cognitive mechanisms for the integration of spatial and temporal representations and of common brain structures to process the where and when of stimuli. Psychophysical experiments document the presence of spatially localised distortions of sub-second time intervals and suggest that visual events are timed by neural mechanisms that are spatially selective. On the other hand, experiments with supra-second intervals suggest that time could be represented on a mental time-line ordered from left-to-right, similar to what is reported for other ordered quantities, such as numbers. Neuroimaging and neuropsychological findings point towards the posterior parietal cortex as the main site where spatial and temporal information converge and interact with each other.
---
paper_title: Emotional moments across time: a possible neural basis for time perception in the anterior insula
paper_content:
A model of awareness based on interoceptive salience is described, which has an endogenous time base that might provide a basis for the human capacity to perceive and estimate time intervals in the range of seconds to subseconds. The model posits that the neural substrate for awareness across time is located in the anterior insular cortex, which fits with recent functional imaging evidence relevant to awareness and time perception. The time base in this model is adaptive and emotional, and thus it offers an explanation for some aspects of the subjective nature of time perception. This model does not describe the mechanism of the time base, but it suggests a possible relationship with interoceptive afferent activity, such as heartbeat-related inputs.
---
paper_title: The inner experience of time
paper_content:
The striking diversity of psychological and neurophysiological models of ‘time perception’ characterizes the debate on how and where in the brain time is processed. In this review, the most prominent models of time perception will be critically discussed. Some of the variation across the proposed models will be explained, namely (i) different processes and regions of the brain are involved depending on the length of the processed time interval, and (ii) different cognitive processes may be involved that are not necessarily part of a core timekeeping system but, nevertheless, influence the experience of time. These cognitive processes are distributed over the brain and are difficult to discern from timing mechanisms. Recent developments in the research on emotional influences on time perception, which succeed decades of studies on the cognition of temporal processing, will be highlighted. Empirical findings on the relationship between affect and time, together with recent conceptualizations of self- and body processes, are integrated by viewing time perception as entailing emotional and interoceptive (within the body) states. To date, specific neurophysiological mechanisms that would account for the representation of human time have not been identified. It will be argued that neural processes in the insular cortex that are related to body signals and feeling states might constitute such a neurophysiological mechanism for the encoding of duration.
---
paper_title: A right hemispheric frontocerebellar network for time discrimination of several hundreds of milliseconds
paper_content:
Debate still surrounds the nature of the role of the dorsolateral prefrontal gyrus (DLPFC) in time perception. This region is frequently associated with working memory and is thus implicated as a so-called “accumulator” within a hypothesized internal clock model. However, we hypothesized that this region may have a more primary role in time perception. To test this hypothesis we used functional magnetic resonance imaging (fMRI) to examine the neural correlates of relatively pure time perception with a temporal discrimination task where intervals of 1 s had to be discriminated from those of 1.3, 1.4, and 1.5 s. Time perception in this particular time domain within the “perceived present” has not previously been investigated using fMRI. By using relatively short time periods to be discriminated and also contrasting activation with an order judgment task, we aimed to minimize the confounding aspects of sustained attention and working memory. In a group of 20 healthy right-handed adult males, neural activation associated with time discrimination was found in a predominantly right hemispheric network of right dorsolateral and inferior prefrontal cortices, right supplementary motor area, and left cerebellum. We conclude that right DLPFC, rather than having a purely working memory function, might be more centrally involved in time perception than previously thought.
---
paper_title: Frontal–striatal circuitry activated by human peak-interval timing in the supra-seconds range
paper_content:
Functional magnetic resonance imaging (fMRI) was used to measure the location and intensity of brain activations when participants time an 11-s signal duration. The experiment evaluated six healthy adult male participants who performed the peak-interval timing procedure in variants of stimulus modality (auditory or visual) and condition (foreground or background: i.e., whether the presence or absence of the stimulus is the signal to be timed). The complete experimental design called for each signal variant to be used across four behavioral tasks presented in the following order: control, timing+motor, timing, and motor. In the control task, participants passively experienced the stimuli. The timing+motor and timing tasks were preceded by five fixed-time training trials in which participants learned the 11-s signal they would subsequently reproduce. In the timing+motor task, participants made two motor responses centered around their subjective estimate of the criterion time. For the timing task, participants were instructed to time internally without making a motor response. The motor task had participants make two cued responses that were not determined by the participant's sense of the passage of time. Neuroimaging data from the timing+motor and timing tasks showed activation of the frontal cortex, striatum and thalamus—none of which was apparent in the control or motor tasks. These results, combined with other peak-interval procedure data from drug and lesion studies in animals as well as behavioral results in human patient populations with striatal damage, support the involvement of frontal–striatal circuitry in human interval timing.
---
paper_title: The evolution of brain activation during temporal processing
paper_content:
Timing is crucial to many aspects of human performance. To better understand its neural underpinnings, we used event-related fMRI to examine the time course of activation associated with different components of a time perception task. We distinguished systems associated with encoding time intervals from those related to comparing intervals and implementing a response. Activation in the basal ganglia occurred early, and was uniquely associated with encoding time intervals, whereas cerebellar activation unfolded late, suggesting an involvement in processes other than explicit timing. Early cortical activation associated with encoding of time intervals was observed in the right inferior parietal cortex and bilateral premotor cortex, implicating these systems in attention and temporary maintenance of intervals. Late activation in the right dorsolateral prefrontal cortex emerged during comparison of time intervals. Our results illustrate a dynamic network of cortical-subcortical activation associated with different components of temporal information processing.
---
paper_title: Time perception: Manipulation of task difficulty dissociates clock functions from other cognitive demands
paper_content:
Previous studies suggest the involvement in timing functions of a surprisingly extensive network of human brain regions. But it is likely that while some of these regions play a fundamental role in timing, others are activated by associated task demands such as memory and decisionmaking. In two experiments, time perception (duration discrimination) was studied under two conditions of task difficulty and neural activation was compared using fMRI. Brain activation during duration discrimination was contrasted with activation evoked in a control condition (colour discrimination) that used identical stimuli. In the first experiment, the control task was slightly easier than the time task. Multiple brain areas were activated, in line with previous studies. These included the prefrontal cortex, cerebellum, inferior parietal lobule and striatum. In the second experiment, the control task was made more difficult than the time task. Much of the differential time-related activity seen in the first experiment disappeared and in some regions (inferior parietal cortex, pre-SMA and parts of prefrontal cortex) it reversed in polarity. This suggests that such activity is not specifically concerned with timing functions, but reflects the relative cognitive demands of the two tasks. However, three areas of time-related activation survived the task-difficulty manipulation: (i) a small region at the confluence of the inferior frontal gyrus and the anterior insula, bilaterally, (ii) a small portion of the left supramarginal gyrus and (iii) the putamen. We argue that the extent of the timing “network” has been significantly over-estimated in the past and that only these three relatively small regions can safely be regarded as being directly concerned with duration judgements.
---
paper_title: Brain activation patterns during measurement of sub- and supra-second intervals
paper_content:
The possibility that different neural systems are used to measure temporal durations at the sub-second and several second ranges has been supported by pharmacological manipulation, psychophysics, and neural network modelling. Here, we add to this literature by using fMRI to isolate differences between the brain networks which measure 0.6 and 3 s in a temporal discrimination task with visual discrimination for control. We observe activity in bilateral insula and dorsolateral prefrontal cortex, and in right hemispheric pre-supplementary motor area, frontal pole, and inferior parietal cortex during measurement of both intervals, suggesting that these regions constitute a system used in temporal discrimination at both ranges. The frontal operculum, left cerebellar hemisphere and middle and superior temporal gyri, all show significantly greater activity during measurement of the shorter interval, supporting the hypotheses that the motor system is preferentially involved in the measurement of sub-second intervals, and that auditory imagery is preferentially used during measurement of the same. Only a few voxels, falling in the left posterior cingulate and inferior parietal lobe, are more active in the 3 s condition. Overall, this study shows that although many brain regions are used for the measurement of both sub- and supra-second temporal durations, there are also differences in activation patterns, suggesting that distinct components are used for the two durations.
---
paper_title: Interval timing as an emergent learning property
paper_content:
Interval timing in operant conditioning is the learned covariation of a temporal dependent measure such as wait time with a temporal independent variable such as fixed-interval duration. The dominant theories of interval timing all incorporate an explicit internal clock, or pacemaker, despite its lack of independent evidence. The authors propose an alternative, pacemaker-free view that demonstrates that temporal discrimination can be explained by using only 2 assumptions: (a) variation and selection of responses through competition between reinforced behavior and all other, elicited, behaviors and (b) modulation of the strength of response competition by the memory for recent reinforcement. The model departs radically from existing timing models: It shows that temporal learning can emerge from a simple dynamic process that lacks a periodic time reference such as a pacemaker.
---
paper_title: Cortico-striatal representation of time in animals and humans
paper_content:
Interval timing in the seconds-to-minutes range is crucial to learning, memory, and decision-making. Recent findings argue for the involvement of cortico-striatal circuits that are optimized by the dopaminergic modulation of oscillatory activity and lateral connectivity at the level of cortico-striatal inputs. Striatal medium spiny neurons are proposed to detect the coincident activity of specific beat patterns of cortical oscillations, thereby permitting the discrimination of supra-second durations based upon the reoccurring patterns of subsecond neural firing. This proposal for the cortico-striatal representation of time is consistent with the observed psychophysical properties of interval timing (e.g. linear time scale and scalar variance) as well as much of the available pharmacological, lesion, patient, electrophysiological, and neuroimaging data from animals and humans (e.g. dopamine-related timing deficits in Huntington's and Parkinson's disease as well as related animal models). The conclusion is that although the striatum serves as a ‘core timer’, it is part of a distributed timing system involving the coordination of large-scale oscillatory networks.
---
paper_title: The reproduction of temporal intervals.
paper_content:
An electronic power varying controller device adapted for a dimmer, etc. and having substantially no mechanically sliding portions, comprising a thyristor connected with a load such as a lamp, a firing phase controller circuit for the thyristor, a field effect transistor for controlling the firing phase controller circuit, a capacitor connected in parallel with the gate circuit of the field effect transistor, a highly insulating switch connected in series to the gate of the field effect transistor, and a switch to be connected with the highly insulating switch for selecting a positive or negative potential.
---
paper_title: A Model of Interval Timing by Neural Integration
paper_content:
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
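The accumulation-to-threshold idea lends itself to a compact simulation. The sketch below is a hypothetical illustration rather than the authors' implementation: to time an interval T the drift is set to threshold/T, and the per-step noise variance is assumed proportional to the drift (a shot-noise-like simplification), which is enough to make the first-passage times roughly scale-invariant. All parameter values are invented for the demo.
```python
import numpy as np

def first_passage_times(T, theta=1.0, c=0.02, dt=0.005, n_trials=500, seed=0):
    """Noisy accumulator drifting toward a threshold theta.

    To time an interval T the drift is set to theta / T, and the per-step noise
    variance is assumed proportional to the drift (a shot-noise-like
    simplification), which keeps the coefficient of variation of the
    first-passage times roughly constant across intervals (scalar timing).
    """
    rng = np.random.default_rng(seed)
    drift = theta / T
    times = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < theta:
            x += drift * dt + rng.normal(0.0, np.sqrt(c * drift * dt))
            t += dt
        times[i] = t
    return times

for T in (1.0, 2.0, 4.0):
    fpt = first_passage_times(T)
    print(f"T={T}: mean={fpt.mean():.2f}s  sd={fpt.std():.2f}s  CV={fpt.std() / fpt.mean():.3f}")
```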
---
paper_title: Cortico-striatal circuits and interval timing: coincidence detection of oscillatory processes
paper_content:
Humans and other animals demonstrate the ability to perceive and respond to temporally relevant information with characteristic behavioral properties. For example, the response time distributions in peak-interval timing tasks are well described by Gaussian functions, and superimpose when scaled by the criterion duration. This superimposition has been referred to as the scalar property and results from the fact that the standard deviation of a temporal estimate is proportional to the duration being timed. Various psychological models have been proposed to account for such responding. These models vary in their success in predicting the temporal control of behavior as well as in the neurobiological feasibility of the mechanisms they postulate. A review of the major interval timing models reveals that no current model is successful on both counts. The neurobiological properties of the basal ganglia, an area known to be necessary for interval timing and motor control, suggests that this set of structures act as a coincidence detector of cortical and thalamic input. The hypothesized functioning of the basal ganglia is similar to the mechanisms proposed in the beat frequency timing model [R.C. Miall, Neural Computation 1 (1989) 359–371], leading to a reevaluation of its capabilities in terms of behavioral prediction. By implementing a probabilistic firing rule, a dynamic response threshold, and adding variance to a number of its components, simulations of the striatal beat frequency model were able to produce output that is functionally equivalent to the expected behavioral response form of peak-interval timing procedures.
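As a rough, hypothetical illustration of the coincidence-detection idea (not the simulations reported in the article), the sketch below resets a bank of cortical oscillators at trial onset, lets a "striatal" readout store the oscillator pattern present at the reinforced time, and on later noisy trials takes the moment of best match as the timed response; phase jitter then spreads the responses around the criterion. The oscillator frequencies, jitter level, and matched-filter readout are all assumptions made for the demo.
```python
import numpy as np

rng = np.random.default_rng(1)
n_osc, dt = 100, 0.002
freqs = rng.uniform(2.0, 12.0, n_osc)        # oscillator frequencies in Hz (illustrative)
criterion = 1.5                               # reinforced interval in seconds
t = np.arange(0.0, 3.0, dt)

def cortical_state(phase_jitter_sd):
    """Oscillators reset at trial onset, with per-trial phase jitter."""
    jitter = rng.normal(0.0, phase_jitter_sd, n_osc)
    return np.cos(2.0 * np.pi * freqs[None, :] * t[:, None] + jitter[None, :])

# "Training": store (Hebbian-style) the cortical pattern present at reinforcement.
weights = cortical_state(0.0)[int(round(criterion / dt))]

# "Testing": on noisy trials the striatal unit's activation is the match between
# the stored pattern and the current state; its peak clusters near the criterion.
peak_times = []
for _ in range(200):
    activation = cortical_state(phase_jitter_sd=0.4) @ weights / n_osc
    peak_times.append(t[np.argmax(activation)])

peak_times = np.array(peak_times)
print(f"criterion={criterion}s  mean peak={peak_times.mean():.2f}s  sd={peak_times.std():.2f}s")
```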
---
paper_title: The Sensory Representation of Time
paper_content:
Time is embedded in many aspects of our sensory experience; sensory events unfold in time and often acquire particular meaning because of their specific temporal structure. The speed of a moving object, the words pronounced by a speaker and the tactile exploration of a texture, are all examples of temporally structured sensory experiences. Despite the ubiquitousness of the temporal dimension of our sensory experience, the understanding of the neural mechanisms underlying the temporal representation of sensory events, that is the capacity to estimate duration in milliseconds/seconds range, remains a controversial and complex issue. The controversy relates to the effective involvement of sensory-specific brain regions in the processing of temporal information. The complexity arises from the neurophysiological mechanisms underlying the representation of time in these areas and the functional interplay between sensory-specific and amodal temporal mechanisms (Harrington et al., 2011).
The idea that we time sensory signals via a single “centralized” and “amodal” clock dominated the field of temporal cognition over the last 30 years. More recently the universality of timing mechanisms has been challenged by new theoretical positions and a growing body of empirical data (Buhusi and Meck, 2005). From a theoretical perspective the challenge comes from “distributed” timing models. This is a broad class of models, which – although different regarding the neurophysiological mechanisms proposed for time processing – collectively share the idea that we have multiple timing mechanisms “distributed” across brain areas or circuits; and that the engagement of each single mechanism depends on the psychophysical task, sensory modality, and lengths of temporal intervals (Ivry and Richardson, 2002; Durstewitz, 2003; Matell and Meck, 2004; Buonomano and Maass, 2009). The idea that sensory-specific timing mechanisms exist is supported by studies showing that the ability to discriminate temporal information depends on the modality of the signals. For example, temporal discrimination thresholds are lower for auditory compared to visual signal durations (Grondin, 1993; Grondin et al., 2005; Merchant et al., 2008); and the capacity to keep in memory multiple intervals improves if the temporal signals belong to different modalities and therefore rely on different memory resources (Gamache and Grondin, 2010). The existence of independent sensory-specific clocks is also suggested by the observation that the perceived duration of a sensory event can be distorted by modality-specific properties of the stimuli such as visual adaptation (Johnston et al., 2006; Ayhan et al., 2009), spatial, and temporal frequency (Kanai et al., 2006; Kaneko and Murakami, 2009); or by the observation that such distortions are limited to a single sensory domain, like in case of saccadic eye movements causing compression of the perceived duration of visual but not of auditory stimuli (Morrone et al., 2005; Burr et al., 2011). From the neurophysiological point of view, electrophysiological recordings in animals as well as neuroimaging and magnetic stimulation studies in humans suggest that both modality-specific and supramodal mechanisms underlie the estimation of temporal intervals (Ghose and Maunsell, 2002; Shuler and Bear, 2006; Bosco et al., 2008; Bueti et al., 2008b; Sadeghi et al., 2011). For example, it has been demonstrated that the extrastriate visual area MT/V5 is necessary for temporal discrimination of visual, but not of auditory durations (Bueti et al., 2008a) and that duration estimation to predict expected visual and auditory events involves secondary as well as primary visual and auditory cortices (Ghose and Maunsell, 2002; Shuler and Bear, 2006; Bueti and Macaluso, 2010; Bueti et al., 2010).
Taken together these behavioral and neurophysiological data highlight the functional contribution of sensory-specific cortices and support the existence of modality-specific timing mechanisms. However, how temporal information is actually represented in these cortices and what is the neurophysiological mechanism behind it, remain unclear. A few interesting theoretical hypotheses have been advanced. “Intrinsic” timing models for example, describe time as a general and inherent property of neural dynamics. A consequence of this assumption is that any area of the brain is in principle able to encode time. Temporal computations according to these models rely on inherent temporal properties of neural networks like short-term synaptic plasticity [i.e., state-dependent networks (SDNs) model; Buonomano and Maass, 2009] or arise either from the overall magnitude of neural activity (Eagleman, 2008) or from the linear ramping of neuronal firing rate (Durstewitz, 2003; Reutimann et al., 2004). “Intrinsic models” of temporal coding are particularly suitable to describe the functional organization of sensory timing mechanisms because they assume that time is encoded by the same circuits encoding other stimulus properties such as color or motion in the visual modality. However the explanatory power of some of these models, like for example the SDNs model, is constrained to durations of a few hundred milliseconds (i.e., <500 ms; Buonomano et al., 2009; Spencer et al., 2009); this is indeed a strong limitation, given that most of the neurophysiological evidence in favor of modality-specific timing mechanisms deals with durations from hundreds of milliseconds to a few seconds. An alternative possibility is that temporal computations in sensory cortices engage wider and specialized temporal circuit(s), where time signals from sensory cortex are sent to “dedicated” timing areas where these signals are integrated and used to guide action for example (Coull et al., 2011). In this latter case the relationship between sensory-specific and sensory independent timing areas needs to be elucidated. Many cortical (parietal, premotor, prefrontal, and insular cortices) and subcortical (basal ganglia and cerebellum) brain structures have indeed been implicated in the processing of temporal information independently from the sensory modality of the stimuli (see Spencer et al., 2003; Coull et al., 2004; Koch et al., 2008; Wiener et al., 2010 for a review; Wittmann et al., 2010). Although there is only a partial agreement regarding the relevance of all these structures to time processing, the challenge is now to explore whether these areas have dissociable or interchangeable/overlapping functional roles and therefore whether these areas support the same or different temporal mechanisms compared to sensory-specific areas. A very special case of multimodal timing area is represented by the auditory cortex, a sensory-specific area.
It has been recently demonstrated indeed that the auditory cortex is important for temporal discrimination not only of auditory but also of somatosensory and visual stimuli (Bolognini et al., 2009; Kanai et al., 2011). The supramodal involvement of auditory areas in temporal tasks has been associated with a strategic use of auditory-based mental representations for time estimation (Franssen et al., 2006). An interesting hypothesis, suggested by Kanai and colleagues, is that given the dominance of the auditory system over vision in temporal tasks (Walker and Scott, 1981; Burr et al., 2009), visual information is converted into an auditory code for temporal computation (Kanai et al., 2011). This hypothesis is interesting because it offers new insight into the relationship between visual and auditory timing systems and highlights a possible link between modality independent and modality-specific temporal mechanisms.
It is therefore clear that the study of the functional architecture of sensory timing mechanisms poses a few more theoretical and experimental challenges. A few important questions are still open. It is, for example, unclear whether the organizational principles that apply to space also apply to time and whether the temporal dimension of visual stimuli is processed by the same or distinct networks compared to those for space. Is time coding in visual cortex retinotopic specific? Do we encode all possible temporal intervals at each retinotopic position? In which context do sensory-specific temporal mechanisms work? Is temporal information encoded in sensory cortices automatically or does it require explicit attention? Are sensory areas engaged only during duration encoding or are they also active during working memory maintenance?
The already complex scenario of the neural representation of time is getting even more intricate. From the idea of a single “amodal” mechanism we moved into the idea of multiple “modality-specific” and “modality independent” temporal mechanisms (Wiener et al., 2011). The challenge is now to find out the functional architecture of these mechanisms as well as the interaction between them. As a concluding remark, I would like to emphasize that the focus of the majority of studies exploring the neural correlates of temporal processing has so far been to identify the key components of internal timing networks (i.e., the “where” of timing mechanisms). The result of this approach has been, for example, an exponential increase of the number of neuroimaging studies on this topic that has led to a substantial disagreement regarding the structures that are relevant to time processing (Wiener et al., 2010 for a review). It is time to adopt new experimental approaches that pose more mechanistically oriented questions about the underlying timing mechanisms while at the same time attempting to link computational models and neurophysiology (Portugal et al., 2011).
---
paper_title: Time And Memory: Towards A Pacemaker-Free Theory Of Interval Timing
paper_content:
A popular view of interval timing in animals is that it is driven by a discrete pacemaker-accumulator mechanism that yields a linear scale for encoded time. But these mechanisms are fundamentally at odds with the Weber law property of interval timing, and experiments that support linear encoded time can be interpreted in other ways. We argue that the dominant pacemaker-accumulator theory, scalar expectancy theory (SET), fails to explain some basic properties of operant behavior on interval-timing procedures and can only accommodate a number of discrepancies by modifications and elaborations that raise questions about the entire theory. We propose an alternative that is based on principles of memory dynamics derived from the multiple-time-scale (MTS) model of habituation. The MTS timing model can account for data from a wide variety of time-related experiments: proportional and Weber law temporal discrimination, transient as well as persistent effects of reinforcement omission and reinforcement magnitude, bisection, the discrimination of relative as well as absolute duration, and the choose-short effect and its analogue in number-discrimination experiments. Resemblances between timing and counting are an automatic consequence of the model. We also argue that the transient and persistent effects of drugs on time estimates can be interpreted as well within MTS theory as in SET. Recent real-time physiological data conform in surprising detail to the assumptions of the MTS habituation model. Comparisons between the two views suggest a number of novel experiments.
---
paper_title: Scalar Timing in Memory
paper_content:
A recent report of ours proposed an information-processing account of temporal generalization. The account posited a clock process, which was the basic time measurement device, and working and reference memory for storing the output of the clock either temporarily or relatively permanently. Records of time intervals in working and reference memory were then compared using a binary decision process, which dictated responding or not responding. The analysis concentrated on a relativistic Weber’s law property of the data from temporal generalization, and the constraints this property imposed on sources of variance in the information-processing stages. Our purpose here is to summarize that work and generalize the model in two ways: First we consider several sources of variance operating simultaneously. The original analysis demonstrated that if only one source of variance is present, it must be a scalar source, that is, it must result in a variable memory for which variance increases with the square of the mean. In the generalized account proposed here, we will develop the conclusion that scalar sources dominate in some time ranges, while other sources may dominate in others. These ideas are then applied to two additional timing tasks with different characteristics.
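The clock-memory-decision chain described here can be sketched in a few lines. In the toy version below, a Poisson pacemaker feeds an accumulator, reference memory holds the count associated with the standard perturbed by multiplicative (scalar) noise, and a ratio-based decision rule determines responding; the pacemaker rate, memory variability, and threshold are illustrative choices, not values taken from the chapter.
```python
import numpy as np

rng = np.random.default_rng(2)

def respond_probability(probe, standard, pacemaker_rate=20.0, memory_cv=0.15,
                        threshold=0.25, n_trials=4000):
    """Probability of responding to a probe duration, given a remembered standard.

    Clock: Poisson pulses accumulated over the probe.  Reference memory: the
    expected count at the standard, perturbed by multiplicative (scalar) noise.
    Decision: respond if the relative discrepancy is below a threshold.
    """
    counts = rng.poisson(pacemaker_rate * probe, n_trials)
    remembered = pacemaker_rate * standard * rng.normal(1.0, memory_cv, n_trials)
    return np.mean(np.abs(counts - remembered) / remembered < threshold)

for standard in (10.0, 20.0):
    probes = standard * np.array([0.5, 0.75, 1.0, 1.25, 1.5])
    gradient = [respond_probability(p, standard) for p in probes]
    print(f"standard={standard:>4.0f}s:",
          "  ".join(f"{p / standard:.2f}->{g:.2f}" for p, g in zip(probes, gradient)))
# The two generalization gradients approximately superimpose on the relative
# (probe/standard) axis, the Weber-law signature the account is built around.
```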
---
paper_title: A Unified Model of Time Perception Accounts for Duration-Based and Beat-Based Timing Mechanisms
paper_content:
Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary, and supplementary motor area, pre-motor cortex as well as higher-level areas such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks that are interconnected with each other and the cerebral cortex through multiple synaptic pathways. Timing in this unified model is proposed to involve serial beat-based striatal activation followed by absolute olivocerebellar timing mechanisms.
---
paper_title: The Storage of Time Intervals Using Oscillating Neurons
paper_content:
A mechanism to store and recall time intervals ranging from hundreds of milliseconds to tens of seconds is described. The principle is based on beat frequencies between oscillating elements; any small group of oscillators codes specifically for an interval equal to the lowest common multiple of their oscillation periods. This mechanism could be realized in the nervous system by an output neuron, excited by a group of pacemaker neurons, and able to select via a Hebbian rule a subgroup of pacemaker cells to encode any given interval, or small number of intervals (for example, a pattern of pulses). Recall could be achieved by resetting the pacemaker cells and setting a threshold for activation of the output unit. A simulation is described and the main features of such an encoding scheme are discussed.
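A minimal sketch of this scheme is given below: a common reset starts a bank of oscillators, the interval is encoded by selecting (as a stand-in for the Hebbian rule) the oscillators that happen to be near their peaks at the target time, and recall consists of waiting until that subgroup is simultaneously near peak again. The number of oscillators, the range of periods, and the "near peak" criterion are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(3)
periods = rng.uniform(0.05, 0.5, 200)      # pacemaker periods in seconds (illustrative)
dt = 0.001
t = np.arange(0.0, 5.0, dt)

def near_peak(times):
    """Which oscillators are within 5% of a period of their activity peak."""
    phase = (times[:, None] / periods[None, :]) % 1.0
    return (phase < 0.05) | (phase > 0.95)

# Encoding: a stand-in for the Hebbian rule selects the oscillators that happen
# to be near their peaks at the interval to be stored (all units reset at t=0).
target = 1.7
selected = near_peak(np.array([target]))[0]
print(f"{selected.sum()} of {len(periods)} oscillators encode the {target}s interval")

# Recall: after a reset, the output unit fires when the whole selected subgroup
# is near peak at the same time; the first such coincidence (ignoring the
# trivial one at the reset itself) approximates the stored interval.
coincidence = near_peak(t)[:, selected].mean(axis=1)
recall_times = t[(coincidence > 0.95) & (t > 0.2)]
print(f"first recalled time: {recall_times[0]:.3f}s")
```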
---
paper_title: Sensory modality and time perception in children and adults
paper_content:
This experiment investigated the effect of signal modality on time perception in 5- and 8-year-old children as well as young adults using a duration bisection task in which auditory and visual signals were presented in the same test session and shared common anchor durations. Durations were judged shorter for visual than for auditory signals by all age groups. However, the magnitude of this modality difference was larger in the children than in the adults. Sensitivity to time was also observed to increase with age for both modalities. Taken together, these two observations suggest that the greater modality effect on duration judgments for the children, for whom attentional abilities are considered limited, is the result of visual signals requiring more attentional resources than are needed for the processing of auditory signals. Within the framework of the information-processing model of Scalar Timing Theory, these effects are consistent with a developmental difference in the operation of the "attentional switch" used to transfer pulses from the pacemaker into the accumulator. Specifically, although timing is more automatic for auditory than visual signals in both children and young adults, children have greater difficulty in keeping the switch in the closed state during the timing of visual signals.
---
paper_title: Timing in the Absence of Clocks: Encoding Time in Neural Network States
paper_content:
Decisions based on the timing of sensory events are fundamental to sensory processing. However, the mechanisms by which the brain measures time over ranges of milliseconds to seconds remain unclear. The dominant model of temporal processing proposes that an oscillator emits events that are integrated to provide a linear metric of time. We examine an alternate model in which cortical networks are inherently able to tell time as a result of time-dependent changes in network state. Using computer simulations we show that within this framework, there is no linear metric of time, and that a given interval is encoded in the context of preceding events. Human psychophysical studies were used to examine the predictions of the model. Our results provide theoretical and experimental evidence that, for short intervals, there is no linear metric of time, and that time may be encoded in the high-dimensional state of local neural networks.
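The sketch below gives a minimal flavour of such state-dependent timing, under assumptions made purely for illustration (a small random tanh network and a nearest-centroid readout): a brief input perturbs the network, and elapsed time is decoded from the evolving hidden state rather than from any counter.
```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
W = rng.normal(0.0, 1.2 / np.sqrt(n), (n, n))   # recurrent weights, illustrative gain
w_in = rng.normal(0.0, 1.0, n)

def run(steps, noise=0.02):
    """Kick the network with a brief input at step 0 and return the state trajectory."""
    x = np.tanh(w_in)
    states = []
    for _ in range(steps):
        x = np.tanh(W @ x + rng.normal(0.0, noise, n))
        states.append(x.copy())
    return np.array(states)

delays = np.array([5, 10, 20, 40])               # elapsed times (in steps) to discriminate

# "Training": average network state at each delay over a handful of noisy trials.
centroids = np.mean([run(delays.max())[delays - 1] for _ in range(20)], axis=0)

# "Testing": decode elapsed time from single-trial states by nearest centroid.
correct, total = 0, 0
for _ in range(100):
    states = run(delays.max())[delays - 1]
    for i, s in enumerate(states):
        correct += int(np.argmin(np.linalg.norm(centroids - s, axis=1)) == i)
        total += 1
print(f"elapsed-time decoding accuracy: {correct / total:.2f} (chance = {1 / len(delays):.2f})")
```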
---
paper_title: The dual klepsydra model of internal time representation and time reproduction
paper_content:
We present a model of the internal representation and reproduction of temporal durations, the ‘dual klepsydra’ model (DKM). Unlike most contemporary models operating on a ‘pacemaker-counter’ scheme, the DKM does not assume an oscillatory process as the internal time-base. It is based on irreversible, dissipative processes in inflow/outflow systems (leaky klepsydrae), whose states are continuously compared; if their states are equal, durations are subjectively perceived as equal. Model-based predictions fit experimental time reproduction data with good accuracy, and show qualitative features not accounted for by other models. The deterministic model is characterized by two parameters, κ (outflow rate coefficient) and η (ratio of inflow rates). A stochastic version of the model (SDKM) assumes randomly fluctuating inflows, involves two more parameters, and accounts for intra-individual variance of reproduced durations. Analysis of the SDKM leads to non-trivial problems in the stochastic theory, briefly sketched here. Methods of parameter estimation for both deterministic and stochastic versions are given. Applying the DKM to the subjective experience of time passage, we show how subjective measure of elapsed time is constituted. Finally, essential features of the model and its possible neurophysiological interpretation are discussed.
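The following is a minimal Euler-integration sketch of one possible reading of the DKM, with the details deliberately simplified: the first leaky unit is filled during the standard interval and then continues to leak, the second is filled during reproduction (inflow ratio η), and reproduction stops when the two states match. The values of κ, η, and the inflow are illustrative, not estimates from the paper.
```python
def reproduce(T, kappa=0.03, eta=1.0, inflow=1.0, dt=0.001):
    """Reproduce a duration T with two leaky accumulators ('klepsydrae').

    One simplified reading of the model: the first unit is filled during the
    standard interval and then keeps leaking; the second is filled during
    reproduction (inflow ratio eta), which ends when the two states match.
    kappa is the outflow rate coefficient; all values are illustrative.
    """
    s1 = 0.0
    for _ in range(int(round(T / dt))):          # presentation of the standard
        s1 += (inflow - kappa * s1) * dt
    s2, r = 0.0, 0.0
    while s2 < s1:                               # reproduction phase
        s1 -= kappa * s1 * dt                    # first unit keeps leaking
        s2 += (eta * inflow - kappa * s2) * dt
        r += dt
    return r

for T in (2.0, 4.0, 8.0, 16.0):
    print(f"standard {T:>5.1f}s -> reproduced {reproduce(T):.2f}s")
# Reproductions fall progressively short of long standards, the kind of
# systematic distortion the klepsydraic comparison is meant to capture.
```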
---
paper_title: Learning the temporal dynamics of behavior.
paper_content:
This study presents a dynamic model of how animals learn to regulate their behavior under time-based reinforcement schedules. The model assumes a serial activation of behavioral states during the interreinforcement interval, an associative process linking the states with the operant response, and a rule mapping the activation of the states and their associative strength onto response rate or probability. The model fits data sets from fixed-interval schedules, the peak procedure, mixed fixed-interval schedules, and the bisection of temporal intervals. The major difficulties of the model came from experiments that suggest that under some conditions animals may time 2 intervals independently and simultaneously.
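The serial-states idea can be caricatured in a few lines of code. In the sketch below, a chain of behavioral states is traversed at a noisy rate, the state active at reinforcement is strengthened while earlier states are weakened, and response strength is read from the weight of the currently active state; the number of states, learning rate, and noise level are invented for the illustration.
```python
import numpy as np

rng = np.random.default_rng(5)
n_states, fi = 25, 30.0               # number of serial states, fixed-interval value (s)
rate = n_states / fi                  # mean state-transition rate (illustrative)
weights = np.full(n_states, 0.5)      # state -> operant response associative strengths
alpha = 0.1                           # learning rate

def active_state(t, speed):
    """Index of the behavioral state active t seconds into the trial."""
    return min(int(t * speed), n_states - 1)

# Training on a fixed-interval schedule: the state active at reinforcement is
# strengthened, states visited earlier in the interval are weakened.
for _ in range(200):
    speed = rate * rng.lognormal(0.0, 0.25)       # trial-to-trial clock variability
    visited = {active_state(t, speed) for t in np.arange(0.0, fi, 0.5)}
    reinforced = active_state(fi, speed)
    for s in visited:
        weights[s] += alpha * ((1.0 if s == reinforced else 0.0) - weights[s])

# Response strength (the weight of the currently active state) rises toward the
# end of the interval, the familiar fixed-interval pattern.
for t in (10, 20, 28, 30):
    print(f"t={t:>2}s  response strength={weights[active_state(t, rate)]:.2f}")
```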
---
paper_title: Simulating autonomous coupling in discrimination of light frequencies
paper_content:
To study the interfacial complexity between an agent and its environment such as the adaptive selection aspects of sensory inputs, we propose a new coupling mechanism, called autonomous coupling, where an agent can spontaneously switch on or off its interaction with the environment. An oscillatory neural system with autonomous coupling sums the sensory inputs and initiates action selection via a sensorimotor coupling. An example task we designed to show dynamical categorization is the classification of light frequencies. An evolved agent selects specified light frequencies by approaching them and avoiding light of other frequencies. Dynamical categorization and active coupling are the key concepts for the understanding of situated and embodied cognitive functions.
---
paper_title: Explorations on Artificial Time Perception
paper_content:
In the field of biologically inspired cognitive systems, time perception, a fundamental aspect of natural cognition, is not sufficiently explored. The majority of existing works ignore the importance of experiencing the flow of time, and the implemented agents are rarely furnished with time processing capacities. The current work aims at shedding light on this largely unexplored issue, focusing on the perception of temporal duration. Specifically, we investigate a rule switching task that consists of repeating trials with dynamic temporal lengths. An evolutionary process is employed to search for neuronal mechanisms that accomplish the underlying task and self-organize time-processing dynamics. Our repeated simulation experiments showed that the capacity to perceive duration biases the functionality of neural mechanisms with other cognitive responsibilities, and additionally that time perception and ordinary cognitive processes may share the same neural resources in the cognitive system. The obtained results are related to previous brain imaging studies on time perception, and they are used to formulate suggestions for the cortical representation of time in biological agents.
---
paper_title: Are We There Yet? Grounding Temporal Concepts in Shared Journeys
paper_content:
An understanding of time and temporal concepts is critical for interacting with the world and with other agents in the world. What does a robot need to know to refer to the temporal aspects of events-could a robot gain a grounded understanding of “a long journey,” or “soon?” Cognitive maps constructed by individual agents from their own journey experiences have been used for grounding spatial concepts in robot languages. In this paper, we test whether a similar methodology can be applied to learning temporal concepts and an associated lexicon to answer the question “how long” did it take to complete a journey. Using evolutionary language games for specific and generic journeys, successful communication was established for concepts based on representations of time, distance, and amount of change. The studies demonstrate that a lexicon for journey duration can be grounded using a variety of concepts. Spatial and temporal terms are not identical, but the studies show that both can be learned using similar language evolution methods, and that time, distance, and change can serve as proxies for each other under noisy conditions. Effective concepts and names for duration provide a first step towards a grounded lexicon for temporal interval logic.
---
paper_title: Learning to perceive time: A connectionist, memory-decay model of the development of interval timing in infants.
paper_content:
We present the first developmental model of interval timing. It is a memory-based connectionist model of how infants learn to perceive time. It has two novel features that are not found in other models. First, it uses the uncertainty of a memory for an event as an index of how long ago that event happened. Secondly, embodiment – specifically, infant motor activity – is crucial to the calibration of time-perception both within and across sensory modalities. We describe the model and present three simulations which show (1) how it uses sensory memory uncertainty and bodily representations to index time, (2) that the scalar property of interval timing (Gibbon, 1977) emerges naturally from this network and (3) that motor activity can synchronize independent timing mechanisms across different sensory modalities.
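The core "memory uncertainty as an index of elapsed time" idea can be sketched as follows, assuming (purely for illustration) an exponentially decaying trace with additive read-out noise: the flatter and noisier the trace, the more variable the recovered time, so estimation error grows with the interval.
```python
import numpy as np

rng = np.random.default_rng(6)

def estimate_elapsed(true_t, decay=0.1, noise_sd=0.02, n_trials=5000):
    """Read elapsed time off a noisy, decaying memory trace.

    The trace decays exponentially from 1.0; its noisy current strength is
    inverted through the assumed decay law to give a time estimate.  Both the
    exponential form and the parameter values are illustrative assumptions.
    """
    trace = np.exp(-decay * true_t) + rng.normal(0.0, noise_sd, n_trials)
    trace = np.clip(trace, 1e-6, None)            # keep the logarithm defined
    return -np.log(trace) / decay

for t in (2.0, 5.0, 10.0, 20.0):
    est = estimate_elapsed(t)
    print(f"true={t:>5.1f}s  mean estimate={est.mean():6.2f}s  sd={est.std():5.2f}s")
# The same read-out noise maps onto a wider range of times as the trace
# flattens, so uncertainty about 'how long ago' grows with the interval.
```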
---
paper_title: A model of episodic memory: Mental time travel along encoded trajectories using grid cells
paper_content:
The definition of episodic memory includes the concept of mental time travel: the ability to re-experience a previously experienced trajectory through continuous dimensions of space and time, and to recall specific events or stimuli along this trajectory. Lesions of the hippocampus and entorhinal cortex impair human episodic memory function and impair rat performance in tasks that could be solved by retrieval of trajectories. Recent physiological data suggests a novel model for encoding and retrieval of trajectories, and for associating specific stimuli with specific positions along the trajectory. During encoding in the model, external input drives the activity of head direction cells. Entorhinal grid cells integrate the head direction input to update an internal representation of location, and drive hippocampal place cells. Trajectories are encoded by Hebbian modification of excitatory synaptic connections between hippocampal place cells and head direction cells driven by external action. Associations are also formed between hippocampal cells and sensory stimuli. During retrieval, a sensory input cue activates hippocampal cells that drive head direction activity via previously modified synapses. Persistent spiking of head direction cells maintains the direction and speed of the action, updating the activity of entorhinal grid cells that thereby further update place cell activity. Additional cells, termed arc length cells, provide coding of trajectory segments based on the one-dimensional arc length from the context of prior actions or states, overcoming ambiguity where the overlap of trajectory segments causes multiple head directions to be associated with one place. These mechanisms allow retrieval of complex, self-crossing trajectories as continuous curves through space and time.
---
paper_title: Mathematical learning theory through time
paper_content:
Stimulus sampling theory (SST: Estes, 1950, Estes, 1955a, Estes, 1955b, Estes, 1959) was the first rigorous mathematical model of learning that posited a central role for an abstract cognitive representation distinct from the stimulus or the response. SST posited that (a) conditioning takes place not on the nominal stimulus presented to the learner, but on a cognitive representation caused by the nominal stimulus, and (b) the cognitive representation caused by a nominal stimulus changes gradually across presentations of that stimulus. Retrieved temporal context models assume that (a) a distributed representation of temporal context changes gradually over time in response to the studied stimuli, and (b) repeating a stimulus can recover a prior state of temporal context. We trace the evolution of these ideas from the early work on SST, and argue that recent neuroscientific evidence provides a physical basis for the abstract models that Estes envisioned more than a half-century ago.
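The "gradually drifting context" assumption has a standard compact form, and the sketch below uses it: the context vector is updated as c_t = ρ c_{t-1} + β c_in with ρ chosen to keep the vector at unit length, and repeating an item reinstates (here, simply replays) the context stored when it was first studied. The dimensionality, β, and the way the retrieved context is blended are illustrative choices made for the demo.
```python
import numpy as np

rng = np.random.default_rng(7)
dim, beta = 50, 0.4

def unit(v):
    return v / np.linalg.norm(v)

def drift(context, c_in):
    """One step of temporal-context drift: c_t = rho * c_{t-1} + beta * c_in,
    with rho chosen so that the updated context keeps unit length."""
    dot = context @ c_in
    rho = np.sqrt(1.0 + beta ** 2 * (dot ** 2 - 1.0)) - beta * dot
    return rho * context + beta * c_in

items = [unit(rng.normal(size=dim)) for _ in range(12)]
context = unit(rng.normal(size=dim))
study_context = []                       # context state bound to each item at study

for item in items:
    study_context.append(context.copy())
    context = drift(context, item)

print(f"similarity of current context to item 0's study context: "
      f"{context @ study_context[0]:.2f}")
# Repeating item 0 reinstates (here: simply replays) the context stored with it,
# pulling the current context back toward that earlier state.
context = drift(context, unit(study_context[0] + items[0]))
print(f"after repeating item 0: {context @ study_context[0]:.2f}")
```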
---
paper_title: Modeling working memory: a computational implementation of the Time-Based Resource-Sharing theory
paper_content:
Working memory is a core concept in cognition, predicting about 50% of the variance in IQ and reasoning tasks. A popular test of working memory is the complex span task, in which encoding of memoranda alternates with processing of distractors. A recent model of complex span performance, the Time-Based-Resource-Sharing (TBRS) model of Barrouillet and colleagues, has seemingly accounted for several crucial findings, in particular the intricate trade-off between deterioration and restoration of memory in the complex span task. According to the TBRS, memory traces decay during processing of the distractors, and they are restored by attentional refreshing during brief pauses in between processing steps. However, to date, the theory has been formulated only at a verbal level, which renders it difficult to test and to be certain of its intuited predictions. We present a computational instantiation of the TBRS and show that it can handle most of the findings on which the verbal model was based. We also show that there are potential challenges to the model that await future resolution. This instantiated model, TBRS*, is the first comprehensive computational model of performance in the complex span paradigm. The Matlab model code is available as a supplementary material of this article.
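A toy version of the decay/refresh trade-off can make the verbal account concrete. In the sketch below, stored activations decay exponentially while attention is captured by processing and are topped up round-robin during free pauses; mean final activation then falls as the proportion of time occupied by processing (cognitive load) grows. The functional forms and parameters are invented for the illustration and are not the TBRS* implementation.
```python
import numpy as np

def final_activations(n_items=4, processing_time=0.6, free_time=0.6,
                      n_episodes=6, decay=0.5, boost=0.15, refresh_dur=0.2):
    """Toy decay/refresh dynamics for one complex-span trial.

    After each memorandum, n_episodes processing steps follow; during each step
    every stored item decays for processing_time (attention is captured), then
    items are refreshed round-robin during the free pause.  Functional forms
    and parameter values are illustrative, not the TBRS* implementation.
    """
    act, ptr = [], 0
    for _ in range(n_items):
        act.append(1.0)                                       # encode a new memorandum
        for _ in range(n_episodes):
            act = [a * np.exp(-decay * processing_time) for a in act]
            for _ in range(int(round(free_time / refresh_dur))):
                act[ptr % len(act)] += boost                  # attentional refreshing
                ptr += 1
    return act

for free_time in (1.2, 0.8, 0.4, 0.2):
    load = 0.6 / (0.6 + free_time)            # proportion of time captured by processing
    print(f"cognitive load={load:.2f}  mean final activation="
          f"{np.mean(final_activations(free_time=free_time)):.2f}")
# Mean final activation (a stand-in for recall) falls as cognitive load rises,
# the signature trade-off of the time-based resource-sharing account.
```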
---
paper_title: Human memory reconsolidation can be explained using the temporal context model
paper_content:
Recent work by Hupbach, Gomez, Hardt, and Nadel (Learning & Memory, 14, 47–53, 2007) and Hupbach, Gomez, and Nadel (Memory, 17, 502–510, 2009) suggests that episodic memory for a previously studied list can be updated to include new items, if participants are reminded of the earlier list just prior to learning a new list. The key finding from the Hupbach studies was an asymmetric pattern of intrusions, whereby participants intruded numerous items from the second list when trying to recall the first list, but not vice versa. Hupbach et al. (2007; 2009) explained this pattern in terms of a cellular reconsolidation process, whereby first-list memory is rendered labile by the reminder and the labile memory is then updated to include items from the second list. Here, we show that the temporal context model of memory, which lacks a cellular reconsolidation process, can account for the asymmetric intrusion effect, using well-established principles of contextual reinstatement and item–context binding.
---
paper_title: Temporal Cognition: A Key Ingredient of Intelligent Systems
paper_content:
Experiencing the flow of time is an important capacity of biological systems that is involved in many ways in the daily activities of humans and animals. However, in the field of robotics, the key role of time in cognition is not adequately considered in contemporary research, with artificial agents focusing mainly on the spatial extent of sensory information, almost always neglecting its temporal dimension. This fact significantly obstructs the development of high level robotic cognitive skills, as well as the autonomous and seamless operation of artificial agents in human environments. Taking inspiration from biological cognition, the present work puts forward time perception as a vital capacity of artificial intelligent systems and contemplates the research path for incorporating temporal cognition in the repertoire of robotic skills.
---
paper_title: Space, time, and number: a Kantian research program
paper_content:
The 24th Attention & Performance meeting on ‘Space, Time, and Number: Cerebral Foundations of Mathematical Intuitions’ was sponsored by Commissariat a l’Energie Atomique et aux Energies Alternatives (CEA), Ecole des Neurosciences de Paris (ENP), European Society for Cognitive Psychology (ESCOP), the Bettencourt-Schueller Foundation, the Fondation de France, the Hugot Foundation of the College de France, the IPSEN Foundation, Institut National de la Sante et de la Recherche Medicale (INSERM), the James D. McDonnell Foundation, National Institute of Child Health & Human Development (NICHD R13HD065378), the National Science Foundation (NSF), and Ministere de l’Enseignement Superieur et de la Recherche. We thank Susana Franck, Laurence Labruna and Giovanna Santoro for their help in organizing the meeting.
---
paper_title: Moving Through Time
paper_content:
---
paper_title: Putting Time in Perspective: A Valid, Reliable Individual-Differences Metric
paper_content:
Time perspective (TP), a fundamental dimension in the construction of psychological time, emerges from cognitive processes partitioning human experience into past, present, and future temporal frames. The authors’ research program proposes that TP is a pervasive and powerful yet largely unrecognized influence on much human behavior. Although TP variations are learned and modified by a variety of personal, social, and institutional influences, TP also functions as an individual-differences variable. Reported is a new measure assessing personal variations in TP profiles and specific TP biases. The five factors of the Zimbardo Time Perspective Inventory were established through exploratory and confirmatory factor analyses and demonstrate acceptable internal and test–retest reliability. Convergent, divergent, discriminant, and predictive validity are shown by correlational and experimental research supplemented by case studies.
---
paper_title: Threaded cognition: An integrated theory of concurrent multitasking
paper_content:
The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking-that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction.
---
paper_title: Habituation revisited: An updated and revised description of the behavioral characteristics of habituation
paper_content:
In the 20th century, great progress was made in understanding the behavioral characteristics of habituation. A landmark paper published by Thompson and Spencer in 1966 clarified the definition of habituation, synthesized the research to date and presented a list of nine behavioral characteristics of habituation that appeared to be common in all organisms studied. The history of habituation and the historical context of Thompson & Spencer's (1966) distillation are reviewed more fully in an article by Thompson (2009) that is included in this issue. This list was repeated and expanded upon by Groves and Thompson in 1970. These two papers are now citation classics and are considered to be the authorities on the characteristics of habituation. In August 2007, a group of 15 researchers (the authors of this review) who study habituation in a wide range of species and paradigms met to revisit these characteristics and refine them based on the 40 years of research since Thompson and Spencer 1966. The descriptions and characteristics from 1966 have held up remarkably well, and the revisions we have made to them were often for clarity rather than content. We made substantial changes to only a few of the characteristics, usually to add new information and expand upon the description rather than to substantially alter the original point. We restricted ourselves to an analysis of habituation; there was insufficient time for detailed discussions of the other form of non-associative learning, "sensitization." Thus this review is restricted to our discussions of habituation and dishabituation (as it relates directly to habituation). Many people will be surprised to learn that, although habituation is termed "the simplest form of learning" and is well studied behaviorally, remarkably little is known about the neural mechanisms underlying habituation. Researchers who work on this form of learning believe that because habituation allows animals to filter out irrelevant stimuli and focus selectively on important stimuli, it is a prerequisite for other forms of learning. Therefore, to fully understand the mechanisms of more complex forms of learning and cognition it is important to understand the basic building blocks of habituation. The objectives of this special issue are to re-ignite interest in studying the mechanisms of habituation and thereby to stimulate efforts to further our understanding of the neural basis of habituation. In this review, we will first define habituation, then review and revise the nine characteristics of habituation that were originally determined by Thompson and Spencer in 1966 and that have been seen across all species studied. In addition, we describe a tenth characteristic that was added at the workshop. Finally we present several issues that were discussed extensively at the meeting and highlight how the view of habituation that arose from our discussions differs from the original characterization by Thompson and Spencer.
---
paper_title: Environmental, seasonal, and social modulations of basal activity in a weakly electric fish
paper_content:
The electric organ discharge (EOD) of weakly electric fish encodes information about species, sex, behavioral, and physiological states throughout the lifetime. Its central command is crucial for sensory-motor coordination, and is therefore the target of plastic mechanisms that adapt fish to environmental and social challenges. The EOD waveform of Brachyhypopomus pinnicaudatus is modulated by environmental factors and the neuroendocrine system. In this study we investigate the effects of water temperature and day-night cycle upon EOD rate in this species during the breeding and non-breeding seasons. During the non-breeding season, EOD rate is a linear function of water temperature and exhibits counterclockwise hysteresis. During breeding, a thermal resistance strategy prevents the decrease of EOD rate to cooling. A nocturnal increase of EOD basal rate independent of water temperature and locomotor activity was demonstrated in isolated non-breeding adults and in male-female dyads all year round. An additional increase of nocturnal EOD rate, probably acting as a social courtship signal, was found in breeding dyads. This additional increase of nocturnal EOD rate could not be fully explained by gonadal maturation and was modulated by social stimuli. This study provides novel data on the complex interactions between environment, reproductive cycle, social behavior, and electromotor output in an advantageous model of the vertebrate central nervous system.
---
paper_title: The inner sense of time: how the brain creates a representation of duration
paper_content:
A large number of competing models exist for how the brain creates a representation of time. However, several human and animal studies point to 'climbing neural activation' as a potential neural mechanism for the representation of duration. Neurophysiological recordings in animals have revealed how climbing neural activation that peaks at the end of a timed interval underlies the processing of duration, and, in humans, climbing neural activity in the insular cortex, which is associated with feeling states of the body and emotions, may be related to the cumulative representation of time.
---
| Title: Time models and cognitive processes: a review
Section 1: INTRODUCTION
Description 1: Write an overview of the importance of time perception in biological systems and its recent integration into cognitive and robotic models.
Section 2: TIME PROCESSING MECHANISMS IN THE BRAIN
Description 2: Discuss various brain areas implicated in time-keeping and their roles in processing temporal information.
Section 3: COMPUTATIONAL MODELS OF TIME PERCEPTION
Description 3: Review the different computational models explaining how time is processed in the brain, differentiating between the dedicated and intrinsic models.
Section 4: COGNITIVE MODELS EXPLOITING SENSE OF TIME
Description 4: Outline existing computational models that integrate time perception with other cognitive functions.
Section 5: TIME IN DECISION MAKING
Description 5: Explore models where robotic agents use time perception for decision-making processes.
Section 6: A GROUNDED TEMPORAL LEXICON
Description 6: Discuss how robots learn terms for space and time and develop a temporal lexicon.
Section 7: INTERVAL TIMING GROUNDED IN MOTOR ACTIVITY
Description 7: Explain models where time perception is grounded in motor activities, particularly in the context of memory-trace decay.
Section 8: REPRESENTATION OF DURATION
Description 8: Investigate alternative representations of duration in cognitive systems through self-organized computational models.
Section 9: TIME PERCEPTION AS A SECONDARY TASK
Description 9: Describe how timing modules in architectures like ACT-R explore the interaction of timing mechanisms with other cognitive functions.
Section 10: PAST, FUTURE PERCEPTION
Description 10: Examine models that discuss the evolution and role of perceiving past and future in cognitive systems.
Section 11: MENTAL TIME TRAVEL
Description 11: Review computational models that explain the brain’s ability to recall past experiences and project future events.
Section 12: LEARNING THROUGH TIME
Description 12: Discuss the role of time in the learning process and contrast different mathematical models for learning over time.
Section 13: FORGETTING
Description 13: Explain computational models of forgetting, particularly how memory decays over time and how it is restored.
Section 14: MEMORY RECONSOLIDATION
Description 14: Review computational models explaining how memory reconsolidation works, including how retrieval changes memory representation over time.
Section 15: DISCUSSION
Description 15: Summarize the multimodal interaction between time perception and various cognitive functions, highlighting future research directions for improving artificial systems' temporal cognition.
Section 16: CONCLUSIONS
Description 16: Provide concluding remarks on the importance of integrating time perception in cognitive models and the future impact of such research on human-robot interaction. |
A Survey on Programmable LDPC Decoders | 18 | ---
paper_title: An efficient logic emulation system
paper_content:
The Realizer, a logic emulation system that automatically configures a network of field-programmable gate arrays (FPGAs) to implement large digital logic designs, is presented. Logic and interconnect are separated to achieve optimum FPGA utilization. Its interconnection architecture, called the partial crossbar, greatly reduces system-level placement and routing complexity, achieves bounded interconnect delay, scales linearly with pin count, and allows hierarchical expansion to systems with hundreds of thousands of FPGA devices in a fast and uniform way. An actual multiboard system has been built, using 42 Xilinx XC3090 FPGAs for logic. Several designs, including a 32-b CPU datapath, have been automatically realized and operated at speed. They demonstrate very good FPGA utilization. The Realizer has applications in logic verification and prototyping, simulation, architecture development, and special-purpose execution.
---
paper_title: Near Shannon limit performance of low density parity check codes
paper_content:
The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes.
---
paper_title: The Density Advantage of Configurable Computing
paper_content:
More and more, field-programmable gate arrays (FPGAs) are accelerating computing applications. The absolute performance achieved by these configurable machines has been impressive-often one to two orders of magnitude greater than processor-based alternatives. Configurable computing is one of the fastest, most economical ways to solve problems such as RSA (Rivest-Shamir-Adelman) decryption, DNA sequence matching, signal processing, emulation, and cryptographic attacks. But questions remain as to why FPGAs have been so much more successful than their microprocessor and DSP counterparts. Do FPGA architectures have inherent advantages? Or are these examples just flukes of technology and market pricing? Will advantages increase, decrease, or remain the same as technology advances? Is there some generalization that accounts for the advantages in these cases? The author attempts to answer these questions and to see how configurable computing fits into the arsenal of structures used to build general, programmable computing platforms.
---
paper_title: Programming Massively Parallel Processors. A Hands-on Approach.
paper_content:
Programming Massively Parallel Processors. A Hands-on Approach David Kirk and Wen-mei Hwu ISBN: 978-0-12-381472-2 Copyright 2010 Introduction This book is designed for graduate/undergraduate students and practitioners from any science and engineering discipline who use computational power to further their field of research. This comprehensive test/reference provides a foundation for the understanding and implementation of parallel programming skills which are needed to achieve breakthrough results by developing parallel applications that perform well on certain classes of Graphic Processor Units (GPUs). The book guides the reader to experience programming by using an extension to C language, in CUDA which is a parallel programming environment supported on NVIDIA GPUs, and emulated on less parallel CPUs. Given the fact that parallel programming on any High Performance Computer is complex and requires knowledge about the underlying hardware in order to write an efficient program, it becomes an advantage of this book over others to be specific toward a particular hardware. The book takes the readers through a series of techniques for writing and optimizing parallel programming for several real-world applications. Such experience opens the door for the reader to learn parallel programming in depth. Outline of the Book Kirk and Hwu effectively organize and link a wide spectrum of parallel programming concepts by focusing on the practical applications in contrast to most general parallel programming texts that are mostly conceptual and theoretical. The authors are both affiliated with NVIDIA; Kirk is an NVIDIA Fellow and Hwu is principle investigator for the first NVIDIA CUDA Center of Excellence at the University of Illinois at Urbana-Champaign. Their coverage in the book can be divided into four sections. The first part (Chapters 1–3) starts by defining GPUs and their modern architectures and later providing a history of Graphics Pipelines and GPU computing. It also covers data parallelism, the basics of CUDA memory/threading models, the CUDA extensions to the C language, and the basic programming/debugging tools. The second part (Chapters 4–7) enhances student programming skills by explaining the CUDA memory model and its types, strategies for reducing global memory traffic, the CUDA threading model and granularity which include thread scheduling and basic latency hiding techniques, GPU hardware performance features, techniques to hide latency in memory accesses, floating point arithmetic, modern computer system architecture, and the common data-parallel programming patterns needed to develop a high-performance parallel application. The third part (Chapters 8–11) provides a broad range of parallel execution models and parallel programming principles, in addition to a brief introduction to OpenCL. They also include a wide range of application case studies, such as advanced MRI reconstruction, molecular visualization and analysis. The last chapter (Chapter 12) discusses the great potential for future architectures of GPUs. It provides commentary on the evolution of memory architecture, Kernel Execution Control Evolution, and programming environments. Summary In general, this book is well-written and well-organized. A lot of difficult concepts related to parallel computing areas are easily explained, from which beginners or even advanced parallel programmers will benefit greatly. It provides a good starting point for beginning parallel programmers who can access a Tesla GPU. 
The book targets specific hardware and evaluates performance based on this specific hardware. As mentioned in this book, approximately 200 million CUDA-capable GPUs have been actively in use. Therefore, the chances are that a lot of beginning parallel programmers can have access to a Tesla GPU. Also, this book gives clear descriptions of the Tesla GPU architecture, which lays a solid foundation for both beginning parallel programmers and experienced parallel programmers. The book can also serve as a good reference book for advanced parallel computing courses. Jie Cheng, University of Hawaii Hilo
---
paper_title: Area, throughput, and energy-efficiency trade-offs in the VLSI implementation of LDPC decoders
paper_content:
Low-density parity-check (LDPC) codes are key ingredients for improving reliability of modern communication systems and storage devices. On the implementation side however, the design of energy-efficient and high-speed LDPC decoders with a sufficient degree of reconfigurability to meet the flexibility demands of recent standards remains challenging. This survey paper provides an overview of the state-of-the-art in the design of LDPC decoders using digital integrated circuits. To this end, we summarize available algorithms and characterize the design space. We analyze the different architectures and their connection to different codes and requirements. The advantages and disadvantages of the various choices are illustrated by comparing state-of-the-art LDPC decoder designs.
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs up to 170 Mbps were achieved on a single core of an Intel Core i7 processor when executing 20 layered decoding iterations. Throughputs reach up to 560 Mbps on four Intel Core i7 cores. Experimental results show that the proposed implementations achieve similar BER correction performance to previous works. Moreover, much higher throughputs have been achieved in comparison with all previous GPU and CPU works, ranging from 1.4x to 8x relative to recent GPU works.
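For reference, the offset min-sum (OMS) and normalized min-sum (NMS) variants named in this abstract are usually written as the following textbook check-node update; the correction constants alpha and beta below are generic placeholders, not values reported by the authors.
```latex
% Check-node update for min-sum and its corrected variants
% (textbook formulation; exact parameters are implementation-specific)
\[
L_{m \to n} \;=\;
\Bigl( \prod_{n' \in N(m)\setminus\{n\}} \operatorname{sign}\bigl(L_{n' \to m}\bigr) \Bigr)
\cdot \min_{n' \in N(m)\setminus\{n\}} \bigl| L_{n' \to m} \bigr|
\]
\[
\text{NMS: } \; \tilde{L}_{m \to n} = \alpha \, L_{m \to n}, \qquad 0 < \alpha \le 1
\]
\[
\text{OMS: } \; \tilde{L}_{m \to n} =
\operatorname{sign}(L_{m \to n}) \cdot \max\bigl( |L_{m \to n}| - \beta,\, 0 \bigr), \qquad \beta > 0
\]
```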
---
paper_title: A Scalable LDPC Decoder on GPU
paper_content:
A flexible and scalable approach for LDPC decoding on CUDA-based Graphics Processing Units (GPUs) is presented in this paper. Layered decoding is a popular method for LDPC decoding and is known for its fast convergence. However, efficient implementation of the layered decoding algorithm on GPU is challenging due to the limited amount of data-parallelism available in this algorithm. To overcome this problem, a kernel execution configuration that can decode multiple codewords simultaneously on GPU is developed. This paper proposes a compact data packing scheme to reduce the number of global memory accesses and a parity-check matrix representation to reduce constant memory latency. Global memory bandwidth efficiency is improved by coalescing simultaneous memory accesses of threads in a half-warp into a single memory transaction. Asynchronous data transfers are used to hide host memory latency by overlapping kernel execution with data transfers between CPU and GPU. The proposed implementation of the LDPC decoder on GPU performs two orders of magnitude faster than the LDPC decoder on a CPU and four times faster than the previously reported LDPC decoder on GPU. This implementation achieves a throughput of 160 Mbps, which is comparable to dedicated hardware solutions.
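A minimal CUDA sketch of the multi-codeword layout and asynchronous transfer style described above; the interleaved indexing, batch size and the trivial kernel body are illustrative assumptions, not the authors' actual data structures or code.
```cuda
// Sketch only: messages of CW codewords that share one Tanner graph are interleaved
// so threads working on the same edge of different codewords touch consecutive
// addresses, which makes the global-memory accesses coalesced.
#include <cuda_runtime.h>

#define CW 32          // codewords decoded together (assumption)

// llr is laid out as llr[edge * CW + cw]
__global__ void scale_messages(float *llr, int num_edges, float alpha)
{
    int cw   = threadIdx.x;                      // codeword index inside a warp
    int edge = blockIdx.x * blockDim.y + threadIdx.y;
    if (edge < num_edges && cw < CW)
        llr[edge * CW + cw] *= alpha;            // placeholder update; consecutive cw -> coalesced
}

int main(void)
{
    const int num_edges = 1024;                  // edges in the Tanner graph (assumption)
    size_t bytes = (size_t)num_edges * CW * sizeof(float);

    float *h_llr, *d_llr;
    cudaMallocHost(&h_llr, bytes);               // pinned memory enables async copies
    cudaMalloc(&d_llr, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // overlap host<->device transfers with kernel execution on the same stream
    cudaMemcpyAsync(d_llr, h_llr, bytes, cudaMemcpyHostToDevice, stream);
    dim3 block(CW, 8), grid((num_edges + 7) / 8);
    scale_messages<<<grid, block, 0, stream>>>(d_llr, num_edges, 0.8f);
    cudaMemcpyAsync(h_llr, d_llr, bytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaFree(d_llr);
    cudaFreeHost(h_llr);
    cudaStreamDestroy(stream);
    return 0;
}
```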
---
paper_title: Configurable high-throughput decoder architecture for quasi-cyclic LDPC codes
paper_content:
We describe a fully reconfigurable low-density parity check (LDPC) decoder for quasi-cyclic (QC) codes. The proposed hardware architecture is able to decode virtually any QC-LDPC code that fits into the allocated memories while achieving high decoding throughput. Our VLSI implementation has been optimized for the IEEE 802.11n standard and achieves a throughput of 780 Mbit/s with a core area of 3.39 mm² in 0.18 µm CMOS technology.
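To make the QC structure concrete, the sketch below expands a small base matrix of circulant shift values into an explicit edge list, which is essentially what a reconfigurable QC decoder does when it loads a new code; the toy base matrix and expansion factor are assumptions for illustration only.
```cuda
// Illustrative sketch: a QC-LDPC parity-check matrix is stored as a small base
// matrix of circulant shifts (-1 = all-zero submatrix) and expanded on demand,
// which is what lets one decoder support any QC code that fits in memory.
#include <cstdio>
#include <vector>

struct Edge { int row, col; };

// base[mb][nb] holds the cyclic shift of each Z x Z circulant.
std::vector<Edge> expand_qc(const std::vector<std::vector<int>> &base, int Z)
{
    std::vector<Edge> edges;
    for (size_t mb = 0; mb < base.size(); ++mb)
        for (size_t nb = 0; nb < base[mb].size(); ++nb) {
            int s = base[mb][nb];
            if (s < 0) continue;                       // all-zero submatrix
            for (int k = 0; k < Z; ++k)                // shifted identity
                edges.push_back({ (int)mb * Z + k,
                                  (int)nb * Z + (k + s) % Z });
        }
    return edges;
}

int main()
{
    // toy 2 x 4 base matrix with Z = 4 (assumed example, not a standard code)
    std::vector<std::vector<int>> base = { { 0, 1, -1, 2 },
                                           { 3, -1, 0, 1 } };
    std::vector<Edge> edges = expand_qc(base, 4);
    printf("expanded to %zu edges\n", edges.size());
    return 0;
}
```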
---
paper_title: High coded data rate and multicodeword WiMAX LDPC decoding on Cell/BE
paper_content:
A novel, flexible and scalable parallel LDPC decoding approach for the WiMAX wireless broadband standard (IEEE 802.16e) in the multicore Cell broadband engine architecture is proposed. A multicodeword LDPC decoder performing the simultaneous decoding of 96 codewords is presented. The coded data rate achieved a range of 72–80 Mbit/s, which compares well with VLSI-based decoders and is superior to the maximum coded data rate required by the WiMAX standard performing in worst case conditions. The 8-bit precision arithmetic adopted shows additional advantages over traditional 6-bit precision dedicated VLSI-based solutions, allowing better error floors and BER performance.
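As a rough illustration of why 8-bit message arithmetic needs care, the snippet below shows a saturating LLR addition that clamps to the representable range instead of wrapping; the exact quantization and saturation rules used on the Cell/BE are not given in the abstract, so this is only an assumed, generic form.
```cuda
#include <cstdint>
#include <cstdio>

// Saturating addition of two 8-bit log-likelihood ratios: the result is clamped
// to [-127, 127] so that repeated accumulation cannot wrap around.
__host__ __device__ inline int8_t llr_add_sat(int8_t a, int8_t b)
{
    int s = (int)a + (int)b;
    if (s >  127) s =  127;
    if (s < -127) s = -127;
    return (int8_t)s;
}

int main()
{
    printf("%d\n", llr_add_sat(100, 60));   // prints 127 instead of wrapping
    return 0;
}
```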
---
paper_title: Reconfigurable processing: the solution to low-power programmable DSP
paper_content:
One of the most compelling issues in the design of wireless communication components is to keep power dissipation between bounds. While low-power solutions are readily achieved in an application-specific approach, doing so in a programmable environment is a substantially harder problem. This paper presents an approach to low-power programmable DSP that is based on the dynamic reconfiguration of hardware modules. This technique has shown to yield at least an order of magnitude of power reduction compared to traditional instruction-based engines for problems in the area of wireless communication.
---
paper_title: Reconfigurable Computing Architectures
paper_content:
Reconfigurable architectures can bring unique capabilities to computational tasks. They offer the performance and energy efficiency of hardware with the flexibility of software. In some domains, they are the only way to achieve the required, real-time performance without fabricating custom integrated circuits. Their functionality can be upgraded and repaired during their operational lifecycle and specialized to the particular instance of a task. We survey the field of reconfigurable computing, providing a guide to the body-of-knowledge accumulated in architecture, compute models, tools, run-time reconfiguration, and applications.
---
paper_title: ARM System-on-Chip Architecture
paper_content:
Preface. 1. An Introduction to Processor Design. 2. The ARM Architecture. 3. ARM Assembly Language Programming. 4. ARM Organization and Implementation. 5. The ARM Instruction Set. 6. Architectural Support for High-Level Languages. 7. The Thumb Instruction Set. 8. Architectural Support for System Development. 9. ARM Processor Cores. 10. Memory Hierarchy. 11. Architectural Support for Operating Systems. 12. ARM CPU Cores. 13. Embedded ARM Applications. 14. The AMULET Asynchronous ARM Processors. Appendix: Computer Logic. Glossary. Bibliography. Index.
---
paper_title: Design of ion-implanted MOSFET's with very small physical dimensions
paper_content:
This paper considers the design, fabrication, and characterization of very small MOSFET switching devices suitable for digital integrated circuits using dimensions of the order of 1μ. Scaling relationships are presented which show how a conventional MOSFET can be reduced in size. An improved small device structure is presented that uses ion implantation to provide shallow source and drain regions and a nonuniform substrate doping profile. One-dimensional models are used to predict the substrate doping profile and the corresponding threshold voltage versus source voltage characteristic. A two-dimensional current transport model is used to predict the relative degree of short-channel effects for different device parameter combinations. Polysilicon-gate MOSFETs with channel lengths as short as 0.5μ were fabricated, and the device characteristics measured and compared with predicted values. The performance improvement expected from using these very small devices in highly miniaturized integrated circuits is projected. [Reprinted from the IEEE Journal of Solid-State Circuits, Vol. SC-9, October 1974, pp. 256-268.]
---
paper_title: Data Mapping for Unreliable Memories
paper_content:
Future digital signal processing (DSP) systems must provide robustness on algorithm and application level to the presence of reliability issues that come along with corresponding implementations in modern semiconductor process technologies. In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. In particular, we propose a novel framework to characterize the effects of unreliable memories, which enables us to devise novel methods to mitigate the associated performance loss. We propose to deploy specifically designed data representations, which have the capability of substantially improving the system reliability compared to that realized by conventional data representations used in digital integrated circuits, such as 2's-complement or sign-magnitude number formats. To demonstrate the efficacy of the proposed framework, we analyze the impact of unreliable memories on coded communication systems, and we show that the deployment of optimized data representations substantially improves the error-rate performance of such systems.
---
paper_title: Digital Integrated Circuit Design: From VLSI Architectures to CMOS Fabrication
paper_content:
VLSI circuits are ubiquitous in the modern world, and designing them efficiently is becoming increasingly challenging with the development of ever smaller chips. This practically oriented textbook covers the important aspects of VLSI design using a top-down approach, reflecting the way digital circuits are actually designed. Using practical hints and tips, case studies and checklists, this comprehensive guide to how and when to design VLSI circuits, covers the advances, challenges and past mistakes in design, acting as an introduction to graduate students and a reference for practising electronic engineers.
---
paper_title: Memory Access Optimized Implementation of Cyclic and Quasi-Cyclic LDPC Codes on a GPGPU
paper_content:
Software-based decoding of low-density parity-check (LDPC) codes frequently takes a very long time, thus general-purpose graphics processing units (GPGPUs) that support massively parallel processing can be very useful for speeding up the simulation. In LDPC decoding, the parity-check matrix H needs to be accessed at every node updating process, and the size of the matrix is often larger than that of the GPU on-chip memory, especially when the code length is long or the weight is high. In this work, the parity-check matrix of cyclic or quasi-cyclic (QC) LDPC codes is greatly compressed by exploiting the periodic property of the matrix. Also, vacant elements are eliminated from the sparse message arrays to utilize the coalesced access of global memory supported by GPGPUs. Regular projective geometry (PG) and irregular QC LDPC codes are used for sum-product algorithm based decoding with the GTX-285 NVIDIA graphics processing unit (GPU), and considerable speed-up results are obtained.
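The kind of compressed, vacancy-free storage described above is commonly realized with a CSR-like layout; the sketch below is an assumed, minimal example of such a layout (toy matrix, hypothetical kernel), not the paper's actual data structure.
```cuda
// Illustrative CSR-style storage of H: only the non-vacant entries are kept, so
// message arrays stay dense and can be read with consecutive (coalescable) indices.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void check_degree(const int *row_ptr, int num_checks, int *deg)
{
    int m = blockIdx.x * blockDim.x + threadIdx.x;
    if (m < num_checks)
        deg[m] = row_ptr[m + 1] - row_ptr[m];   // number of neighbors of check node m
}

int main()
{
    // toy H with 3 check nodes and 8 edges (assumed example)
    int h_row_ptr[4] = { 0, 3, 6, 8 };
    int h_col_idx[8] = { 0, 2, 5, 1, 3, 4, 2, 5 };  // variable-node indices per edge
    (void)h_col_idx;                                // kept for illustration only

    int *d_row_ptr, *d_deg, h_deg[3];
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_deg, sizeof(h_deg));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);

    check_degree<<<1, 32>>>(d_row_ptr, 3, d_deg);
    cudaMemcpy(h_deg, d_deg, sizeof(h_deg), cudaMemcpyDeviceToHost);
    printf("check degrees: %d %d %d\n", h_deg[0], h_deg[1], h_deg[2]);

    cudaFree(d_row_ptr);
    cudaFree(d_deg);
    return 0;
}
```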
---
paper_title: Tridimensional block multiword LDPC decoding on GPUs
paper_content:
In this paper, we describe a parallel algorithm for LDPC (Low Density Parity Check codes) decoding on a GPU (Graphics Processing Unit) using CUDA (Compute Unified Device Architecture). The strategy of the kernel grid and block design is shown and the multiword decoding operation is described using tridimensional blocks. The performance (speedup) of the proposed parallel algorithm is slightly better than the performance found in the literature when this is relatively good, and shows a great improvement in those cases with previously reported moderate or bad performance.
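A minimal sketch of what a tridimensional block configuration can look like in CUDA: threadIdx.x walks the nodes of one codeword while threadIdx.y and threadIdx.z select the codeword, so one launch covers a whole multiword batch. Block and grid sizes here are assumptions, and the kernel body is a placeholder rather than the paper's update rule.
```cuda
#include <cuda_runtime.h>

// x dimension: nodes of the Tanner graph; y and z dimensions: codeword in the batch.
__global__ void node_update(float *llr, int nodes, int words)
{
    int node = blockIdx.x * blockDim.x + threadIdx.x;
    int word = threadIdx.y + blockDim.y * threadIdx.z;   // codeword inside the block
    if (node < nodes && word < words)
        llr[node * words + word] += 0.0f;                // placeholder for the update rule
}

int main()
{
    const int nodes = 2048, words = 16;                  // assumed sizes
    float *d_llr;
    cudaMalloc(&d_llr, (size_t)nodes * words * sizeof(float));

    dim3 block(64, 4, 4);                                // 64 nodes x 16 codewords per block
    dim3 grid((nodes + block.x - 1) / block.x);
    node_update<<<grid, block>>>(d_llr, nodes, words);
    cudaDeviceSynchronize();

    cudaFree(d_llr);
    return 0;
}
```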
---
paper_title: Real-time DVB-S2 LDPC decoding on many-core GPU accelerators
paper_content:
It is well known that LDPC decoding is computationally demanding and one of the hardest signal operations to parallelize. Beyond data dependencies that restrict the decoding of a single word, it requires a large number of memory accesses. In this paper we propose parallel algorithms for performing in GPUs the most demanding case of irregular and long length LDPC codes adopted in the Digital Video Broadcasting - Satellite 2 (DVB-S2) standard used in data communications. By performing simultaneous multicodeword decoding and adopting special data structures, experimental results show that throughputs superior to 90 Mbps can be achieved when LDPC decoders for the DVB-S2 are implemented in the current GPUs.
---
paper_title: Massively LDPC Decoding on Multicore Architectures
paper_content:
Unlike usual VLSI approaches necessary for the computation of intensive Low-Density Parity-Check (LDPC) code decoders, this paper presents flexible software-based LDPC decoders. Algorithms and data structures suitable for parallel computing are proposed in this paper to perform LDPC decoding on multicore architectures. To evaluate the efficiency of the proposed parallel algorithms, LDPC decoders were developed on recent multicores, such as off-the-shelf general-purpose x86 processors, Graphics Processing Units (GPUs), and the CELL Broadband Engine (CELL/B.E.). Challenging restrictions, such as memory access conflicts, latency, coalescence, or unknown behavior of thread and block schedulers, were unraveled and worked out. Experimental results for different code lengths show throughputs in the order of 1 ~ 2 Mbps on the general-purpose multicores, and ranging from 40 Mbps on the GPU to nearly 70 Mbps on the CELL/B.E. The analysis of the obtained results allows to conclude that the CELL/B.E. performs better for short to medium length codes, while the GPU achieves superior throughputs with larger codes. They achieve throughputs that in some cases approach very well those obtained with VLSI decoders. From the analysis of the results, we can predict a throughput increase with the rise of the number of cores.
---
paper_title: Flexible Parallel Architecture for DVB-S2 LDPC Decoders
paper_content:
State-of-the-art decoders for Low-Density Parity-Check (LDPC) codes adopted by the DVB-S2 standard exploit the M = 360 periodicity of the selected special LDPC-IRA codes. This paper addresses the generalization of a well-known M-kernel parallel hardware structure and proposes an efficient partitioning by any factor of M, without memory addressing overhead and keeping unchanged the efficient message mapping scheme. The method provides a simple and efficient way to reduce the decoder complexity. Synthesizing the proposed decoder architecture for N = {45,90,180} parallel processing units using an FPGA family from Xilinx shows a minimum throughput above the minimal 90 Mbps.
---
paper_title: DVB-S2: The Second Generation Standard for Satellite Broad-Band Services
paper_content:
DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user, dependent on path conditions. Backward-compatible modes are also available, allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.
---
paper_title: High Throughput LDPC Decoder on GPU
paper_content:
The available Low-Density Parity-Check (LDPC) decoders on Graphics Processing Units (GPUs) do not simultaneously read and write contiguous data blocks in memory because of the random nature of LDPC codes. One of these two operations has to be performed using noncontiguous accesses, resulting in long access time. To overcome this issue, we designed a multi-codeword parallel decoder with fully coalesced memory access. To test the performance of the method, we applied it using an 8-bit compact data representation. The experimental results demonstrated that the method achieved more than 550 Mbps throughput on a Compute Unified Device Architecture (CUDA) enabled GPU.
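One common way to obtain fully coalesced accesses with 8-bit data is to pack four codewords' LLRs into a single 32-bit word; the sketch below illustrates that idea with CUDA's char4 type, and is an assumption about the general technique rather than a reproduction of this decoder.
```cuda
// Illustrative sketch: four 8-bit LLRs belonging to four different codewords are
// packed into one 32-bit word, so a single coalesced load serves four codewords.
#include <cuda_runtime.h>

__global__ void negate_packed(char4 *msg, int num_entries)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_entries) {
        char4 v = msg[i];            // one 32-bit transaction, four codewords
        v.x = -v.x; v.y = -v.y; v.z = -v.z; v.w = -v.w;  // placeholder operation
        msg[i] = v;
    }
}

int main()
{
    const int num_entries = 4096;    // edges * (codewords / 4), assumed size
    char4 *d_msg;
    cudaMalloc(&d_msg, num_entries * sizeof(char4));
    negate_packed<<<(num_entries + 255) / 256, 256>>>(d_msg, num_entries);
    cudaDeviceSynchronize();
    cudaFree(d_msg);
    return 0;
}
```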
---
paper_title: Implementation of Decoders for LDPC Block Codes and LDPC Convolutional Codes Based on GPUs
paper_content:
In this paper, efficient LDPC block-code decoders/simulators which run on graphics processing units (GPUs) are proposed. We also implement the decoder for the LDPC convolutional code (LDPCCC). The LDPCCC is derived from a predesigned quasi-cyclic LDPC block code with good error performance. Compared to the decoder based on the randomly constructed LDPCCC code, the complexity of the proposed LDPCCC decoder is reduced due to the periodicity of the derived LDPCCC and the properties of the quasicyclic structure. In our proposed decoder architecture, Γ (Γ is a multiple of a warp) codewords are decoded together, and hence, the messages of Γ codewords are also processed together. Since all the Γ codewords share the same Tanner graph, messages of the Γ distinct codewords corresponding to the same edge can be grouped into one package and stored linearly. By optimizing the data structures of the messages used in the decoding process, both the read and write processes can be performed in a highly parallel manner by the GPUs. In addition, a thread hierarchy minimizing the divergence of the threads is deployed, and it can maximize the efficiency of the parallel execution. With the use of a large number of cores in the GPU to perform the simple computations simultaneously, our GPU-based LDPC decoder can obtain hundreds of times speedup compared with a serial CPU-based simulator and over 40 times speedup compared with an eight-thread CPU-based simulator.
---
paper_title: A Comprehensive Performance Comparison of CUDA and OpenCL
paper_content:
This paper presents a comprehensive performance comparison between CUDA and OpenCL. We have selected 16 benchmarks ranging from synthetic applications to real-world ones. We make an extensive analysis of the performance gaps taking into account programming models, optimization strategies, architectural details, and underlying compilers. Our results show that, for most applications, CUDA performs at most 30% better than OpenCL. We also show that this difference is due to unfair comparisons: in fact, OpenCL can achieve similar performance to CUDA under a fair comparison. Therefore, we define a fair comparison of the two types of applications, providing guidelines for more potential analyses. We also investigate OpenCL's portability by running the benchmarks on other prevailing platforms with minor modifications. Overall, we conclude that OpenCL's portability does not fundamentally affect its performance, and OpenCL can be a good alternative to CUDA.
---
paper_title: Parallel LDPC decoding using CUDA and OpenMP
paper_content:
Digital mobile communication technologies, such as next generation mobile communication and mobile TV, are rapidly advancing. Hardware designs to provide baseband processing of new protocol standards are being actively attempted, because of concurrently emerging multiple standards and diverse needs on device functions, hardware-only implementation may have reached a limit. To overcome this challenge, digital communication system designs are adopting software solutions that use central processing units or graphics processing units (GPUs) to implement communication protocols. In this article we propose a parallel software implementation of low density parity check decoding algorithms, and we use a multi-core processor and a GPU to achieve both flexibility and high performance. Specifically, we use OpenMP for parallelizing software on a multi-core processor and Compute Unified Device Architecture (CUDA) for parallel software running on a GPU. We process information on H-matrices using OpenMP pragmas on a multi-core processor and execute decoding algorithms in parallel using CUDA on a GPU. We evaluated the performance of the proposed implementation with respect to two different code rates for the China Multimedia Mobile Broadcasting (CMMB) standard, and we verified that the proposed implementation satisfies the CMMB bandwidth requirement.
---
paper_title: Parallel LDPC Decoding on GPUs Using a Stream-Based Computing Approach
paper_content:
Low-Density Parity-Check (LDPC) codes are powerful error correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase of the computational power of commodity graphics processing units (GPUs), new opportunities have arisen to develop general purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, because of irregular memory access patterns, memory bandwidth and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing the kernels' execution regardless of the GPU manufacturer and operating system. Moreover, to relatively assess the obtained results, we have also implemented LDPC decoders on general purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the solution proposed here efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
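For context, the core of such a decoder is the per-check-node message update; the sketch below shows it over a compact CSR-style edge list, using the min-sum approximation for brevity even though the paper itself implements the sum-product algorithm. The toy graph and array names are assumptions.
```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <math.h>

// Check-node update over a CSR-style edge list: one thread per check node.
// Min-sum is used here instead of the full sum-product rule to keep the sketch short.
__global__ void check_node_update(const int *row_ptr, const float *var_to_chk,
                                  float *chk_to_var, int num_checks)
{
    int m = blockIdx.x * blockDim.x + threadIdx.x;
    if (m >= num_checks) return;

    for (int e = row_ptr[m]; e < row_ptr[m + 1]; ++e) {
        float sign = 1.0f, min_abs = 1e30f;
        for (int k = row_ptr[m]; k < row_ptr[m + 1]; ++k) {
            if (k == e) continue;                    // exclude the destination edge
            float v = var_to_chk[k];
            sign   *= (v < 0.0f) ? -1.0f : 1.0f;
            min_abs = fminf(min_abs, fabsf(v));
        }
        chk_to_var[e] = sign * min_abs;
    }
}

int main()
{
    // toy graph: 2 check nodes, 5 edges (assumed example)
    int   h_row_ptr[3] = { 0, 3, 5 };
    float h_v2c[5]     = { 1.5f, -0.4f, 2.0f, -0.9f, 0.7f };
    float h_c2v[5];

    int *d_row_ptr; float *d_v2c, *d_c2v;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_v2c, sizeof(h_v2c));
    cudaMalloc(&d_c2v, sizeof(h_c2v));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_v2c, h_v2c, sizeof(h_v2c), cudaMemcpyHostToDevice);

    check_node_update<<<1, 32>>>(d_row_ptr, d_v2c, d_c2v, 2);
    cudaMemcpy(h_c2v, d_c2v, sizeof(h_c2v), cudaMemcpyDeviceToHost);
    for (int e = 0; e < 5; ++e) printf("%.2f ", h_c2v[e]);
    printf("\n");

    cudaFree(d_row_ptr); cudaFree(d_v2c); cudaFree(d_c2v);
    return 0;
}
```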
---
paper_title: LDPC error correction for Gbit/s QKD
paper_content:
Low Density Parity Check (LDPC) error correction is a one-way algorithm that has become popular for quantum key distribution (QKD) post-processing. Graphic processing units (GPUs) provide an interesting attached platform that may deliver high rates of error correction performance for QKD. We present the details of our various LDPC GPU implementations and both error correction and execution throughput performance that each achieves. We also discuss the potential for implementation on a GPU platform to achieve Gbit/s throughput.
---
paper_title: Cell Processor Based LDPC Encoder/Decoder for WiMAX Applications
paper_content:
Encoder and decoder are the two most important and complex components of a wireless transceiver. Traditionally, dedicated hardware solutions are used because of their computational intensive algorithms. This paper presents an alternative software-based solution that has several advantages over dedicated hardware solutions. LDPC codes are chosen for their excellent error correcting performance and cell processor is chosen for its tremendous computational power. Sparse and structural properties of LDPC codes are exploited to reduce computation and memory requirements. Several optimization techniques suitable to cell processor architecture such as multi-threading, vectorization, loop unrolling are used to improve performance. The proposed solution achieved significant performance improvement over existing software and dedicated hardware solutions.
---
paper_title: A multi-standard efficient column-layered LDPC decoder for Software Defined Radio on GPUs
paper_content:
In this paper, we propose a multi-standard high-throughput column-layered (CL) low-density parity-check (LDPC) decoder for Software-Defined Radio (SDR) on a Graphics Processing Unit (GPU) platform. Multiple columns in the sub-matrix of a quasi-cyclic LDPC (QC-LDPC) code are processed in parallel inside a block, while multiple codewords are simultaneously decoded among many blocks on the GPU. Several optimization methods are employed to enhance the throughput, such as a compressed matrix structure, memory optimization, a codeword packing scheme, a two-dimensional thread configuration and asynchronous data transfer. The experiments show that our decoder has a low bit error ratio and a peak throughput of 712 Mbps, which is about two orders of magnitude faster than that of a CPU implementation and comparable to dedicated hardware solutions. Compared to the existing fastest GPU-based implementation, the presented decoder achieves a performance improvement of 3.0x.
---
paper_title: Caravela: A Novel Stream-Based Distributed Computing Environment
paper_content:
Distributed computing implies sharing computation, data, and network resources around the world. The Caravela environment applies a proposed flow model for stream computing on graphics processing units that encapsulates a program to be executed in local or remote computers and directly collects the data through the memory or the network.
---
paper_title: A Parallel IRRWBF LDPC Decoder Based on Stream-Based Processor
paper_content:
Low-density parity check (LDPC) codes have gained much attention due to their use of various belief-propagation (BP) decoding algorithms to impart excellent error-correcting capability. The BP decoders are quite simple; however, their computation-intensive and repetitive process prohibits their use in energy-sensitive applications such as sensor networks. Bit flipping-based decoding algorithms, especially implementation-efficient, reliability ratio-based, weighted bit-flipping (IRRWBF) decoding, have shown an excellent tradeoff between error-correction performance and implementation cost. In this paper, we show that with IRRWBF, iterative re-computation can be replaced by iterative selective updating. When compared with the original algorithm, simulation results show that decoding speed can be increased by 200 to 600 percent as the number of decoding iterations is increased from 5 to 1,000. The decoding steps are broken down into various stages such that the update operations are mostly of the single-instruction, multiple-data (SIMD) type. We show that by using Intel Wireless MMX 2 acceleration technology in the proposed algorithm, the speed is increased by 500 to 1,500 percent. The results of implementing the proposed scheme using an Intel/Marvell PXA320 (806 MHz) CPU are presented. The proposed scheme can be used effectively for real-time LDPC decoding in energy-sensitive mobile devices.
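To make the bit-flipping family concrete, the sketch below shows a generic weighted bit-flipping skeleton (compute the syndrome, score each bit, flip the most suspicious one); the actual IRRWBF reliability-ratio metric and the selective-update bookkeeping of the paper are not reproduced here.
```cuda
// Generic bit-flipping skeleton (host code, assumed toy example). The flip metric
// here is simply the number of violated checks a bit participates in, which only
// illustrates the iterate -> score -> flip structure of WBF-style decoders.
#include <cstdio>
#include <vector>

int main()
{
    // toy code: 3 checks x 6 bits (assumed example, not a real LDPC code)
    int H[3][6] = { {1,1,0,1,0,0}, {0,1,1,0,1,0}, {1,0,1,0,0,1} };
    std::vector<int> bits = { 1, 0, 1, 0, 1, 0 };        // hard decisions from the channel

    for (int iter = 0; iter < 10; ++iter) {
        // syndrome: which parity checks are currently violated
        int syn[3];
        bool clean = true;
        for (int m = 0; m < 3; ++m) {
            int s = 0;
            for (int n = 0; n < 6; ++n) s ^= H[m][n] & bits[n];
            syn[m] = s;
            clean = clean && (s == 0);
        }
        if (clean) break;                                // valid codeword found

        // flip metric: count of violated checks each bit participates in
        int best = 0, best_score = -1;
        for (int n = 0; n < 6; ++n) {
            int score = 0;
            for (int m = 0; m < 3; ++m) score += H[m][n] & syn[m];
            if (score > best_score) { best_score = score; best = n; }
        }
        bits[best] ^= 1;                                 // flip the most suspicious bit
    }
    for (int b : bits) printf("%d", b);
    printf("\n");
    return 0;
}
```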
---
paper_title: Real-time decoding for LDPC based distributed video coding
paper_content:
Wyner-Ziv (WZ) video coding -- a particular case of distributed video coding (DVC), is well known for its low-complexity encoding and high-complexity decoding characteristics. Although some works have been made in recent years, especially for improving the coding efficiency, most reported WZ codecs have high time delay in the decoder, which hinders its practical values for applications with critical timing constraint. In this paper, a fully parallelized sum-product algorithm (SPA) for low density parity check accumulate (LDPCA) codes is proposed and realized through Compute Unified Device Architecture (CUDA) based on General-Purpose Graphics Processing Unit (GPGPU). Simulation results show that, through our work, QCIF (surveillance) videos can be decoded in real-time with extremely high quality and without rate-distortion (RD) performance loss.
---
paper_title: Performance evaluation of LDPC decoding on a general purpose mobile CPU
paper_content:
This paper explores using a mobile platform for performing the calculations required for the building blocks of telecommunication systems. The building block analyzed in this paper is LDPC (low-density parity-check) channel decoding, performed on the LDPC design used in the DVB-T2 standard. Implementation details are given, and a performance analysis on a mobile CPU is performed. The implementation is compared against a very similar implementation running on a desktop computer CPU, as well as a GPU (graphics processing unit) implementation. The results give indications of the current state of typical mobile platforms of today.
---
paper_title: Efficient GPU and CPU-based LDPC decoders for long codewords
paper_content:
The next generation DVB-T2, DVB-S2, and DVB-C2 standards for digital television broadcasting specify the use of low-density parity-check (LDPC) codes with codeword lengths of up to 64800 bits. The real-time decoding of these codes on general purpose computing hardware is useful for completely software defined receivers, as well as for testing and simulation purposes. Modern graphics processing units (GPUs) are capable of massively parallel computation, and can in some cases, given carefully designed algorithms, outperform general purpose CPUs (central processing units) by an order of magnitude or more. The main problem in decoding LDPC codes on GPU hardware is that LDPC decoding generates irregular memory accesses, which tend to carry heavy performance penalties (in terms of efficiency) on GPUs. Memory accesses can be efficiently parallelized by decoding several codewords in parallel, as well as by using appropriate data structures. In this article we present the algorithms and data structures used to make log-domain decoding of the long LDPC codes specified by the DVB-T2 standard—at the high data rates required for television broadcasting—possible on a modern GPU. Furthermore, we also describe a similar decoder implemented on a general purpose CPU, and show that high performance LDPC decoders are also possible on modern multi-core CPUs.
---
paper_title: Accelerating Regular LDPC Code Decoders on GPUs
paper_content:
Modern active and passive satellite and airborne sensors with higher temporal, spectral and spatial resolutions for Earth remote sensing result in a significant increase in data volume. This poses a challenge for data transmission over error-prone wireless links to a ground receiving station. Low-density parity-check (LDPC) codes have been adopted in modern communication systems for robust error correction. Demands for LDPC decoders at a ground receiving station for efficient and flexible data communication links have inspired the usage of a cost-effective high-performance computing device. In this paper we propose a graphics-processing-unit (GPU)-based regular LDPC decoder using the log sum-product iterative decoding algorithm (log-SPA). The GPU code was written to run on NVIDIA GPUs using the compute unified device architecture (CUDA) language with a novel implementation of asynchronous data transfer for LDPC decoding. Experimental results showed that the proposed GPU-based high-throughput regular LDPC decoder achieved a significant 271x speedup compared to its CPU-based single-threaded counterpart written in the C language.
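For reference, the log-SPA check-node update mentioned above is the standard "tanh rule"; the form below is the textbook expression rather than anything specific to this implementation.
```latex
% Standard log-domain SPA check-node update (the "tanh rule")
\[
L_{m \to n} \;=\; 2 \tanh^{-1}\!\Bigl(
  \prod_{n' \in N(m)\setminus\{n\}} \tanh\!\Bigl(\frac{L_{n' \to m}}{2}\Bigr)
\Bigr)
\]
```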
---
paper_title: Simulation of LDPC convolutional decoders with CPU and GPU
paper_content:
In this paper, the Sum Product Algorithm (SPA) and the Min-Sum Algorithm (MSA) are used for decoding low-density parity-check convolutional codes (LDPC-CCs). The two algorithms have been implemented and run on three different computing environments. The first environment is a single-threading Central Processing Unit (CPU); the second one is the multi-threading CPU based on OpenMP (Open Multi-Processing); and the third one is the multi-threading Graphics Processing Unit (GPU). The error performance of the LDPC-CCs and the simulation time taken under the three specific computing environments and the two decoding algorithms are evaluated and compared. It is found that the different computing environments produce very similar error results. It is also concluded that using the GPU computing platform can reduce the simulation time substantially.
---
paper_title: LDPC decoding for CMMB utilizing OpenMP and CUDA parallelization
paper_content:
As the 4G mobile communication systems require high transmission rate with reliability, the demand for efficient error correcting code increases. In this paper, a novel LDPC (Low Density Parity Check) decoding method is introduced. We address a parallel software implementation of LDPC decoding for CMMB (China Multimedia Mobile Broadcasting) standard. LDPC codes for CMMB employ a regular H-matrix which has the fixed row and column weights. While effectively utilizing the regularity of the H-matrix, we process information on H-matrices for multiple code rates using OpenMP pragmas on a multi-core processor and execute the decoding algorithm in parallel using CUDA (Compute Unified Device Architecture) on a GPU (Graphics Processing Unit). We evaluated the performance of the proposed implementation with respect to two different code rates, and verified that the proposed implementation satisfies the bandwidth requirement for CMMB.
---
paper_title: Shortening Design Time through Multiplatform Simulations with a Portable OpenCL Golden-model: The LDPC Decoder Case
paper_content:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bit width and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target those three different platforms. We show that, depending on the design parameters to be explored in the simulation, and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3× faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
---
paper_title: Programming Massively Parallel Processors. A Hands-on Approach.
paper_content:
Programming Massively Parallel Processors. A Hands-on Approach David Kirk and Wen-mei Hwu ISBN: 978-0-12-381472-2 Copyright 2010 Introduction This book is designed for graduate/undergraduate students and practitioners from any science and engineering discipline who use computational power to further their field of research. This comprehensive text/reference provides a foundation for the understanding and implementation of parallel programming skills which are needed to achieve breakthrough results by developing parallel applications that perform well on certain classes of Graphics Processing Units (GPUs). The book guides the reader to experience programming by using CUDA, an extension to the C language and a parallel programming environment supported on NVIDIA GPUs and emulated on less parallel CPUs. Given the fact that parallel programming on any high-performance computer is complex and requires knowledge about the underlying hardware in order to write an efficient program, being specific to particular hardware becomes an advantage of this book over others. The book takes the reader through a series of techniques for writing and optimizing parallel programs for several real-world applications. Such experience opens the door for the reader to learn parallel programming in depth. Outline of the Book Kirk and Hwu effectively organize and link a wide spectrum of parallel programming concepts by focusing on practical applications, in contrast to most general parallel programming texts that are mostly conceptual and theoretical. The authors are both affiliated with NVIDIA; Kirk is an NVIDIA Fellow and Hwu is principal investigator for the first NVIDIA CUDA Center of Excellence at the University of Illinois at Urbana-Champaign. Their coverage in the book can be divided into four sections. The first part (Chapters 1–3) starts by defining GPUs and their modern architectures and later provides a history of graphics pipelines and GPU computing. It also covers data parallelism, the basics of CUDA memory/threading models, the CUDA extensions to the C language, and the basic programming/debugging tools. The second part (Chapters 4–7) enhances student programming skills by explaining the CUDA memory model and its types, strategies for reducing global memory traffic, the CUDA threading model and granularity (including thread scheduling and basic latency hiding techniques), GPU hardware performance features, techniques to hide latency in memory accesses, floating point arithmetic, modern computer system architecture, and the common data-parallel programming patterns needed to develop a high-performance parallel application. The third part (Chapters 8–11) provides a broad range of parallel execution models and parallel programming principles, in addition to a brief introduction to OpenCL. It also includes a wide range of application case studies, such as advanced MRI reconstruction and molecular visualization and analysis. The last chapter (Chapter 12) discusses the great potential of future GPU architectures. It provides commentary on the evolution of the memory architecture, kernel execution control, and programming environments. Summary In general, this book is well-written and well-organized. A lot of difficult concepts related to parallel computing are easily explained, from which beginners or even advanced parallel programmers will benefit greatly. It provides a good starting point for beginning parallel programmers who can access a Tesla GPU.
The book targets specific hardware and evaluates performance based on this specific hardware. As mentioned in the book, approximately 200 million CUDA-capable GPUs have been actively in use. Therefore, the chances are that a lot of beginning parallel programmers have access to a Tesla GPU. Also, this book gives clear descriptions of the Tesla GPU architecture, which lays a solid foundation for both beginning and experienced parallel programmers. The book can also serve as a good reference for advanced parallel computing courses. Jie Cheng, University of Hawaii Hilo
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using the Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs of up to 170 Mbps were achieved on a single core of an Intel Core i7 processor when executing 20 layered decoding iterations, and throughputs reach up to 560 Mbps on four Intel Core i7 cores. Experimental results show that the proposed implementations achieve BER correction performance similar to previous works. Moreover, much higher throughputs are achieved in comparison with all previous GPU and CPU works, with speedups ranging from 1.4x to 8x over recent GPU works.
---
paper_title: Accelerating Regular LDPC Code Decoders on GPUs
paper_content:
Modern active and passive satellite and airborne sensors with higher temporal, spectral and spatial resolutions for Earth remote sensing result in a significant increase in data volume. This poses a challenge for data transmission over error-prone wireless links to a ground receiving station. Low-density parity-check (LDPC) codes have been adopted in modern communication systems for robust error correction. Demands for LDPC decoders at a ground receiving station for efficient and flexible data communication links have inspired the usage of a cost-effective high-performance computing device. In this paper we propose a graphics-processing-unit (GPU)-based regular LDPC decoder with the log sum-product iterative decoding algorithm (log-SPA). The GPU code was written to run on NVIDIA GPUs using the compute unified device architecture (CUDA) language with a novel implementation of asynchronous data transfer for LDPC decoding. Experimental results showed that the proposed GPU-based high-throughput regular LDPC decoder achieved a significant 271x speedup compared to its CPU-based single-threaded counterpart written in the C language.
---
paper_title: Memory Access Optimized Implementation of Cyclic and Quasi-Cyclic LDPC Codes on a GPGPU
paper_content:
Software based decoding of low-density parity-check (LDPC) codes frequently takes very long time, thus the general purpose graphics processing units (GPGPUs) that support massively parallel processing can be very useful for speeding up the simulation. In LDPC decoding, the parity-check matrix H needs to be accessed at every node updating process, and the size of the matrix is often larger than that of GPU on-chip memory especially when the code length is long or the weight is high. In this work, the parity-check matrix of cyclic or quasi-cyclic (QC) LDPC codes is greatly compressed by exploiting the periodic property of the matrix. Also, vacant elements are eliminated from the sparse message arrays to utilize the coalesced access of global memory supported by GPGPUs. Regular projective geometry (PG) and irregular QC LDPC codes are used for sum-product algorithm based decoding with the GTX-285 NVIDIA graphics processing unit (GPU), and considerable speed-up results are obtained.
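Editor's note: a minimal sketch of the kind of compressed parity-check representation the abstract describes for cyclic/QC codes, exploiting the fact that each Z x Z circulant block is fully determined by a single shift value. The structure and names below (QCMatrix, shift, qc_col_index) are assumptions for illustration, not the paper's exact format.

#include <cuda_runtime.h>

// A QC-LDPC parity-check matrix stored only as its base matrix of circulant
// shifts: block (i, j) with shift s >= 0 has one nonzero per row, at column
// (r + s) mod Z inside the block; s == -1 marks an all-zero block.
struct QCMatrix {
    int rows_b, cols_b;   // base-matrix dimensions
    int Z;                // circulant (expansion) size
    const int *shift;     // rows_b * cols_b shift values, -1 for zero blocks
};

// Column index in the expanded H of the nonzero located in expanded row r of
// base block (i, j); returns -1 when the block is all-zero.
__host__ __device__ inline int qc_col_index(QCMatrix h, int i, int j, int r)
{
    int s = h.shift[i * h.cols_b + j];
    return (s < 0) ? -1 : j * h.Z + (r + s) % h.Z;
}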
---
paper_title: Parallel LDPC decoding using CUDA and OpenMP
paper_content:
Digital mobile communication technologies, such as next generation mobile communication and mobile TV, are rapidly advancing. Hardware designs to provide baseband processing of new protocol standards are being actively attempted; however, because of concurrently emerging multiple standards and diverse needs regarding device functions, hardware-only implementations may have reached a limit. To overcome this challenge, digital communication system designs are adopting software solutions that use central processing units or graphics processing units (GPUs) to implement communication protocols. In this article we propose a parallel software implementation of low density parity check decoding algorithms, and we use a multi-core processor and a GPU to achieve both flexibility and high performance. Specifically, we use OpenMP for parallelizing software on a multi-core processor and Compute Unified Device Architecture (CUDA) for parallel software running on a GPU. We process information on H-matrices using OpenMP pragmas on a multi-core processor and execute decoding algorithms in parallel using CUDA on a GPU. We evaluated the performance of the proposed implementation with respect to two different code rates for the China Multimedia Mobile Broadcasting (CMMB) standard, and we verified that the proposed implementation satisfies the CMMB bandwidth requirement.
---
paper_title: LDPC error correction for Gbit/s QKD
paper_content:
Low Density Parity Check (LDPC) error correction is a one-way algorithm that has become popular for quantum key distribution (QKD) post-processing. Graphic processing units (GPUs) provide an interesting attached platform that may deliver high rates of error correction performance for QKD. We present the details of our various LDPC GPU implementations and both the error correction and execution throughput performance that each achieves. We also discuss the potential for implementation on a GPU platform to achieve Gbit/s throughput.
---
paper_title: Information reconciliation for QKD with rate-compatible non-binary LDPC codes
paper_content:
We study the information reconciliation (IR) scheme for quantum key distribution (QKD) protocols. The IR for QKD can be seen as the asymmetric Slepian-Wolf problem, which low-density parity-check (LDPC) codes can solve with efficient algorithms, i.e., belief propagation. However, the LDPC codes need to be chosen properly from a collection of codes optimized for multiple key rates, which leads to complex decoder devices and performance degradation for unoptimized key rates. Therefore, it is desirable to establish an IR scheme with a single LDPC code which supports multiple rates. To this end, in this paper, we propose an IR scheme with a rate-compatible non-binary LDPC code. Numerical results show the proposed scheme achieves IR efficiency comparable to the best known conventional IR scheme, with lower decoding error rates.
---
paper_title: Real-time DVB-S2 LDPC decoding on many-core GPU accelerators
paper_content:
It is well known that LDPC decoding is computationally demanding and one of the hardest signal operations to parallelize. Beyond data dependencies that restrict the decoding of a single word, it requires a large number of memory accesses. In this paper we propose parallel algorithms for performing in GPUs the most demanding case of irregular and long length LDPC codes adopted in the Digital Video Broadcasting - Satellite 2 (DVB-S2) standard used in data communications. By performing simultaneous multicodeword decoding and adopting special data structures, experimental results show that throughputs superior to 90 Mbps can be achieved when LDPC decoders for the DVB-S2 are implemented in the current GPUs.
---
paper_title: Massively parallel implementation of cyclic LDPC codes on a general purpose graphics processing unit
paper_content:
Simulation of low-density parity-check (LDPC) codes frequently takes several days; thus, the use of general purpose graphics processing units (GPGPUs) is very promising. However, GPGPUs are designed for compute-intensive applications, and they are not optimized for data caching or control management. In LDPC decoding, the parity check matrix H needs to be accessed at every node updating process, and the size of the H matrix is often larger than that of GPU on-chip memory, especially when the code length is long or the weight is high. In this work, the parity check matrix of cyclic or quasi-cyclic LDPC codes is greatly compressed by exploiting the periodic property of the matrix. In our experiments, the Compute Unified Device Architecture (CUDA) of Nvidia is used. With the (1057, 813) and (4161, 3431) projective geometry (PG) LDPC codes, the execution speed of the proposed method is more than twice that of the reference implementations that do not exploit the cyclic property of the parity check matrices.
---
paper_title: Massively LDPC Decoding on Multicore Architectures
paper_content:
Unlike the usual VLSI approaches necessary for computationally intensive Low-Density Parity-Check (LDPC) code decoders, this paper presents flexible software-based LDPC decoders. Algorithms and data structures suitable for parallel computing are proposed in this paper to perform LDPC decoding on multicore architectures. To evaluate the efficiency of the proposed parallel algorithms, LDPC decoders were developed on recent multicores, such as off-the-shelf general-purpose x86 processors, Graphics Processing Units (GPUs), and the CELL Broadband Engine (CELL/B.E.). Challenging restrictions, such as memory access conflicts, latency, coalescence, or unknown behavior of thread and block schedulers, were unraveled and worked out. Experimental results for different code lengths show throughputs in the order of 1-2 Mbps on the general-purpose multicores, ranging from 40 Mbps on the GPU to nearly 70 Mbps on the CELL/B.E. The analysis of the obtained results allows us to conclude that the CELL/B.E. performs better for short to medium length codes, while the GPU achieves superior throughputs with larger codes. Both achieve throughputs that in some cases closely approach those obtained with VLSI decoders. From the analysis of the results, we can predict a throughput increase with the rise of the number of cores.
---
paper_title: High speed decoding of non-binary irregular LDPC codes using GPUs
paper_content:
Low-Density Parity-Check (LDPC) codes are very powerful channel coding schemes with a broad range of applications. The existence of low complexity (i.e., linear time) iterative message passing decoders with close to optimum error correction performance is one of the main strengths of LDPC codes. It has been shown that the performance of these decoders can be further enhanced if the LDPC codes are extended to higher order Galois fields, yielding so-called non-binary LDPC codes. However, this performance gain comes at the cost of rapidly increasing decoding complexity. To deal with this increased complexity, we present an efficient implementation of a signed-log domain FFT decoder for non-binary irregular LDPC codes that exploits the inherent massive parallelization capabilities of message passing decoders. We employ Nvidia's Compute Unified Device Architecture (CUDA) to harness the available processing power of state-of-the-art Graphics Processing Units (GPUs).
---
paper_title: Real-time decoding for LDPC based distributed video coding
paper_content:
Wyner-Ziv (WZ) video coding, a particular case of distributed video coding (DVC), is well known for its low-complexity encoding and high-complexity decoding characteristics. Although some progress has been made in recent years, especially in improving the coding efficiency, most reported WZ codecs have a high time delay in the decoder, which hinders their practical value for applications with critical timing constraints. In this paper, a fully parallelized sum-product algorithm (SPA) for low density parity check accumulate (LDPCA) codes is proposed and realized through Compute Unified Device Architecture (CUDA) based on a General-Purpose Graphics Processing Unit (GPGPU). Simulation results show that, through our work, QCIF (surveillance) videos can be decoded in real-time with extremely high quality and without rate-distortion (RD) performance loss.
---
paper_title: Fast decoding for LDPC based distributed video coding
paper_content:
Distributed video coding (DVC) is a new coding paradigm targeting applications with the need for low-complexity encoding at the cost of a higher decoding complexity. In the DVC architecture based on a feedback channel, the high decoding complexity is mainly due to the request-decode operation with repetitively fixed step size (induced by Slepian-Wolf decoding). In this paper, a parallel message-passing decoding algorithm for low density parity check (LDPC) syndromes is applied through Compute Unified Device Architecture (CUDA) based on a General-Purpose Graphics Processing Unit (GPGPU). Furthermore, we propose an approach to reduce the number of requests, dubbed Ladder Request Step Size (LRSS), which leads to further speedup gains. Experimental results show that, through our work, a significant speedup in decoding time is achieved with negligible loss in rate-distortion (RD) performance.
---
paper_title: Efficient simulation of QC LDPC decoding on GPU platform by CUDA
paper_content:
An efficient parallel simulation scheme for quasi-cyclic (QC) low-density parity-check (LDPC) decoding is proposed to greatly improve simulation efficiency. It employs multiple threads on the multiprocessors of a graphics processing unit (GPU) to perform the simulation of LDPC decoding in parallel. Unlike fully hardware-based LDPC decoding, it offers low cost and low programming complexity by using compute unified device architecture (CUDA) techniques. CUDA also provides parallel computing on the GPU with efficient multi-thread computation and very high memory bandwidth. Based on the proposed scheme, all bit nodes or check nodes can be updated simultaneously in an LDPC decoding iteration. Therefore, it provides an efficient and fast approach to QC LDPC decoding.
---
paper_title: Area, throughput, and energy-efficiency trade-offs in the VLSI implementation of LDPC decoders
paper_content:
Low-density parity-check (LDPC) codes are key ingredients for improving reliability of modern communication systems and storage devices. On the implementation side however, the design of energy-efficient and high-speed LDPC decoders with a sufficient degree of reconfigurability to meet the flexibility demands of recent standards remains challenging. This survey paper provides an overview of the state-of-the-art in the design of LDPC decoders using digital integrated circuits. To this end, we summarize available algorithms and characterize the design space. We analyze the different architectures and their connection to different codes and requirements. The advantages and disadvantages of the various choices are illustrated by comparing state-of-the-art LDPC decoder designs.
---
paper_title: LDPC decoding for CMMB utilizing OpenMP and CUDA parallelization
paper_content:
As 4G mobile communication systems require high transmission rates with reliability, the demand for efficient error correcting codes increases. In this paper, a novel LDPC (Low Density Parity Check) decoding method is introduced. We address a parallel software implementation of LDPC decoding for the CMMB (China Multimedia Mobile Broadcasting) standard. LDPC codes for CMMB employ a regular H-matrix which has fixed row and column weights. While effectively utilizing the regularity of the H-matrix, we process information on H-matrices for multiple code rates using OpenMP pragmas on a multi-core processor and execute the decoding algorithm in parallel using CUDA (Compute Unified Device Architecture) on a GPU (Graphics Processing Unit). We evaluated the performance of the proposed implementation with respect to two different code rates, and verified that the proposed implementation satisfies the bandwidth requirement for CMMB.
---
paper_title: High efficient distributed video coding with parallelized design for LDPCA decoding on CUDA based GPGPU
paper_content:
Distributed video coding (DVC) is a new coding paradigm targeting applications that need low-complexity and/or low-power encoding at the cost of high-complexity decoding. In DVC architectures based on Error Control Codes (ECCs) with a feedback channel, the high decoding complexity comes from the decode-check-request iterations between the ECC encoder and the ECC decoder. In this paper, a parallel message-passing decoding algorithm for computing low density parity check (LDPC) syndromes is applied through the Compute Unified Device Architecture (CUDA) based on a General Purpose Graphics Processing Unit (GPGPU). Furthermore, we propose a novel rate control mechanism, dubbed the Ladder Step Size Request (LSSR), to reduce the number of requests, which leads to a large speedup gain. Experimental results show that, through our work, the overall DVC decoding speedup gain can reach 46.52 with only 0.2 dB of rate-distortion performance loss.
---
paper_title: Massive parallel LDPC decoding on GPU
paper_content:
Low-Density Parity-Check (LDPC) codes are powerful error correcting codes (ECC). They have recently been adopted by several data communication standards such as DVB-S2 and WiMAX. LDPCs are represented by bipartite graphs, also called Tanner graphs, and their decoding demands very intensive computation. For that reason, dedicated VLSI architectures have been investigated and developed over the last few years. This paper proposes a new approach for LDPC decoding on graphics processing units (GPUs). Efficient data structures and a new algorithm are proposed to represent the Tanner graph and to perform LDPC decoding according to the stream-based computing model. GPUs were programmed to efficiently implement the proposed algorithms by applying data-parallel intensive computing. Experimental results show that GPUs perform LDPC decoding nearly three orders of magnitude faster than modern CPUs. Moreover, they lead to the conclusion that GPUs, with their tremendous processing power, can be considered a consistent alternative to state-of-the-art hardware LDPC decoders.
---
paper_title: Programming Massively Parallel Processors. A Hands-on Approach.
paper_content:
Programming Massively Parallel Processors. A Hands-on Approach David Kirk and Wen-mei Hwu ISBN: 978-0-12-381472-2 Copyright 2010 Introduction This book is designed for graduate/undergraduate students and practitioners from any science and engineering discipline who use computational power to further their field of research. This comprehensive text/reference provides a foundation for the understanding and implementation of parallel programming skills which are needed to achieve breakthrough results by developing parallel applications that perform well on certain classes of Graphics Processing Units (GPUs). The book guides the reader to experience programming by using CUDA, an extension to the C language and a parallel programming environment supported on NVIDIA GPUs and emulated on less parallel CPUs. Given the fact that parallel programming on any high-performance computer is complex and requires knowledge about the underlying hardware in order to write an efficient program, being specific to particular hardware becomes an advantage of this book over others. The book takes the reader through a series of techniques for writing and optimizing parallel programs for several real-world applications. Such experience opens the door for the reader to learn parallel programming in depth. Outline of the Book Kirk and Hwu effectively organize and link a wide spectrum of parallel programming concepts by focusing on practical applications, in contrast to most general parallel programming texts that are mostly conceptual and theoretical. The authors are both affiliated with NVIDIA; Kirk is an NVIDIA Fellow and Hwu is principal investigator for the first NVIDIA CUDA Center of Excellence at the University of Illinois at Urbana-Champaign. Their coverage in the book can be divided into four sections. The first part (Chapters 1–3) starts by defining GPUs and their modern architectures and later provides a history of graphics pipelines and GPU computing. It also covers data parallelism, the basics of CUDA memory/threading models, the CUDA extensions to the C language, and the basic programming/debugging tools. The second part (Chapters 4–7) enhances student programming skills by explaining the CUDA memory model and its types, strategies for reducing global memory traffic, the CUDA threading model and granularity (including thread scheduling and basic latency hiding techniques), GPU hardware performance features, techniques to hide latency in memory accesses, floating point arithmetic, modern computer system architecture, and the common data-parallel programming patterns needed to develop a high-performance parallel application. The third part (Chapters 8–11) provides a broad range of parallel execution models and parallel programming principles, in addition to a brief introduction to OpenCL. It also includes a wide range of application case studies, such as advanced MRI reconstruction and molecular visualization and analysis. The last chapter (Chapter 12) discusses the great potential of future GPU architectures. It provides commentary on the evolution of the memory architecture, kernel execution control, and programming environments. Summary In general, this book is well-written and well-organized. A lot of difficult concepts related to parallel computing are easily explained, from which beginners or even advanced parallel programmers will benefit greatly. It provides a good starting point for beginning parallel programmers who can access a Tesla GPU.
The book targets specific hardware and evaluates performance based on this specific hardware. As mentioned in the book, approximately 200 million CUDA-capable GPUs have been actively in use. Therefore, the chances are that a lot of beginning parallel programmers have access to a Tesla GPU. Also, this book gives clear descriptions of the Tesla GPU architecture, which lays a solid foundation for both beginning and experienced parallel programmers. The book can also serve as a good reference for advanced parallel computing courses. Jie Cheng, University of Hawaii Hilo
---
paper_title: Systematic construction and verification methodology for LDPC codes
paper_content:
In this paper, a novel and systematic LDPC codeword construction and verification methodology is proposed. The methodology is composed of a simulated annealing based LDPC codeword constructor, a GPU based high-speed codeword selector and an ant colony optimization based pipeline scheduler. Compared to traditional approaches, this methodology enables us to construct both decoding-performance-aware and hardware-efficiency-aware LDPC codewords in a short time. Simulation results show that the generated codewords have far fewer cycles (length-6 cycles eliminated) and memory conflicts (75% reduction in idle clocks), while having no BER performance loss compared to WiMAX codewords. Additionally, the simulation speeds up by 490 times under float precision compared with a CPU, and a net throughput of 24.5 Mbps is achieved.
---
paper_title: A Scalable LDPC Decoder on GPU
paper_content:
A flexible and scalable approach for LDPC decoding on a CUDA-based Graphics Processing Unit (GPU) is presented in this paper. Layered decoding is a popular method for LDPC decoding and is known for its fast convergence. However, efficient implementation of the layered decoding algorithm on a GPU is challenging due to the limited amount of data-parallelism available in this algorithm. To overcome this problem, a kernel execution configuration that can decode multiple codewords simultaneously on the GPU is developed. This paper proposes a compact data packing scheme to reduce the number of global memory accesses and a parity-check matrix representation to reduce constant memory latency. Global memory bandwidth efficiency is improved by coalescing simultaneous memory accesses of threads in a half-warp into a single memory transaction. Asynchronous data transfers are used to hide host memory latency by overlapping kernel execution with data transfers between the CPU and GPU. The proposed implementation of the LDPC decoder on a GPU performs two orders of magnitude faster than the LDPC decoder on a CPU and four times faster than the previously reported LDPC decoder on a GPU. This implementation achieves a throughput of 160 Mbps, which is comparable to dedicated hardware solutions.
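Editor's note: a minimal sketch of the asynchronous-transfer overlap mentioned above, using two CUDA streams in a ping-pong fashion so that the copy of one batch of LLRs overlaps the decoding of the previous one. The kernel body, buffer names and batch layout are placeholder assumptions, not the authors' code; for the copies to actually overlap, the host buffers must be allocated as pinned memory (cudaHostAlloc).

#include <cuda_runtime.h>

__global__ void decode_kernel(const float *llr, unsigned char *bits, int n)
{
    // Placeholder for the real layered decoder: here only a hard decision.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) bits[i] = (llr[i] < 0.0f) ? 1 : 0;
}

void decode_batches(const float *h_llr, unsigned char *h_bits, int n, int batches)
{
    cudaStream_t s[2];
    float *d_llr[2];
    unsigned char *d_bits[2];
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&s[i]);
        cudaMalloc(&d_llr[i], n * sizeof(float));
        cudaMalloc(&d_bits[i], n);
    }
    for (int b = 0; b < batches; ++b) {
        int i = b & 1;                            // ping-pong between two streams
        cudaMemcpyAsync(d_llr[i], h_llr + (size_t)b * n, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);
        decode_kernel<<<(n + 255) / 256, 256, 0, s[i]>>>(d_llr[i], d_bits[i], n);
        cudaMemcpyAsync(h_bits + (size_t)b * n, d_bits[i], n,
                        cudaMemcpyDeviceToHost, s[i]);
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < 2; ++i) {
        cudaFree(d_llr[i]); cudaFree(d_bits[i]); cudaStreamDestroy(s[i]);
    }
}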
---
paper_title: Fourier domain decoding algorithm of non-binary LDPC codes for parallel implementation
paper_content:
For decoding non-binary low-density parity-check (LDPC) codes, logarithm-domain sum-product (Log-SP) algorithms were proposed for reducing quantization effects of SP algorithm in conjunction with FFT. Since FFT is not applicable in the logarithm domain, the computations required at check nodes in the Log-SP algorithms are computationally intensive. What is worse, check nodes usually have higher degree than variable nodes. As a result, most of the time for decoding is used for check node computations, which leads to a bottleneck effect. In this paper, we propose a Log-SP algorithm in the Fourier domain. With this algorithm, the role of variable nodes and check nodes are switched. The intensive computations are spread over lower-degree variable nodes, which can be efficiently calculated in parallel. Furthermore, we develop a fast calculation method for the estimated bits and syndromes in the Fourier domain.
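Editor's note: for reference, the generic FFT-domain form of the SPA check-node update that this line of work builds on can be written as

\[
m_{c\to v} \;=\; \mathcal{F}^{-1}\!\Big(\prod_{v'\in N(c)\setminus\{v\}} \mathcal{F}\big(m_{v'\to c}\big)\Big),
\]

where each message is a length-q probability vector over GF(q) (already permuted by the corresponding nonzero entry of H) and \(\mathcal{F}\) is the Fourier transform over the additive group of the field (the Walsh-Hadamard transform for GF(2^p)). This is a generic textbook formulation, not the authors' exact notation; the proposal above keeps messages in the Fourier domain so that the heavy product computations move from the high-degree check nodes to the lower-degree variable nodes.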
---
paper_title: Sequential decoding of non-binary LDPC codes on graphics processing units
paper_content:
Non-binary low-density parity-check (LDPC) codes have been shown to attain near capacity error correcting performance in noisy wireless communication channels. It is well known that these codes require a very large number of operations per-bit to decode. This high computational complexity along with a parallel decoder structure makes graphics processing units (GPUs) an attractive platform for acceleration of the decoding algorithm. The seemingly random memory access patterns associated with decoding are generally beneficial to error-correcting performance but present a challenge to designers who want to leverage the computational capabilities of the GPU. In this paper we describe the design of an efficient decoder implementation based on GPUs and a corresponding set of powerful non-binary LDPC codes. Using the belief propagation algorithm with a sequential message updating scheme it is shown that we are able to exploit parallelism inherent in the decoding algorithm while decreasing the number of decoding iterations required for convergence.
---
paper_title: Extremely fast simulator for decoding LDPC codes
paper_content:
Decoding low-density parity-check (LDPC) codes requires a lot of computation time, particularly when bit error rates as low as 10^-9 are needed. In this paper, we improve the simulation speed by making use of an inexpensive graphics processing unit (GPU). A dedicated program is written to utilize the hardware resources in the GPU to decode LDPC codes in a parallel manner. Codes with rate 1/2 and lengths 2,304 and 10,008 are simulated on both a GPU and a central processing unit (CPU). We also show the average iteration time when LDPC codes with lengths 15,000 and 20,000 are simulated.
---
paper_title: Cell Processor Based LDPC Encoder/Decoder for WiMAX Applications
paper_content:
The encoder and decoder are the two most important and complex components of a wireless transceiver. Traditionally, dedicated hardware solutions are used because of the computationally intensive algorithms involved. This paper presents an alternative software-based solution that has several advantages over dedicated hardware solutions. LDPC codes are chosen for their excellent error correcting performance, and the Cell processor is chosen for its tremendous computational power. Sparse and structural properties of LDPC codes are exploited to reduce computation and memory requirements. Several optimization techniques suited to the Cell processor architecture, such as multi-threading, vectorization and loop unrolling, are used to improve performance. The proposed solution achieved significant performance improvements over existing software and dedicated hardware solutions.
---
paper_title: Programming graphics processing units for the decoding of low-density parity-check codes
paper_content:
Simulating the error performance of low-density parity-check (LDPC) codes usually takes a lot of computation time. Inexpensive graphics processing units (GPUs) have recently been used to accelerate the decoding process by allowing the smaller and identical tasks to be performed in a highly parallel manner. In this paper, we propose decoding a number of LDPC codes at the same time using GPUs. We show that the decoding speed is improved with our proposed method.
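Editor's note: a minimal sketch of the multi-codeword thread mapping such decoders use, under the assumption that data are stored "codeword-major" (the values for one variable node across all C codewords of the batch are adjacent), so consecutive threads access consecutive addresses. Names and layout are illustrative, not the paper's.

// Each thread handles one (variable node, codeword) pair of the batch; all
// codewords share the same Tanner graph, only the data differ.
__global__ void hard_decision_multi(const float *app,        // n * C a-posteriori LLRs
                                    unsigned char *bits, int n, int C)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n * C) return;
    int node = t / C;                        // variable-node index (shared graph)
    int cw   = t % C;                        // codeword index within the batch
    bits[node * C + cw] = (app[node * C + cw] < 0.0f) ? 1 : 0;   // coalesced access
}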
---
paper_title: Quasi-cyclic low-density parity-check convolutional code
paper_content:
This paper proposes a novel quasi-cyclic low-density parity-check convolutional code, and a two-stage construction algorithm with a modified progressive edge growth (PEG) method is provided. We propose both encoder and decoder implementation architectures for this code. The quasi-cyclic form provides parallelism for the encoder and decoder, which can increase the throughput and decrease the delay significantly. The proposed modified min-sum decoding algorithm can speed up convergence and reduce the hardware complexity. We also designed a GPU based simulation platform, about 200 times faster than a CPU, to verify the code performance. Simulation results show the proposed code can obtain a 0.5-1 dB coding gain and a lower error floor compared with the LDPC codes in the WiMAX standard with the same code length, while the decoder only uses 20 iterations.
---
paper_title: High Throughput LDPC Decoder on GPU
paper_content:
The available Low-Density Parity-Check (LDPC) decoders on Graphics Processing Units (GPUs) do not simultaneously read and write contiguous data blocks in memory because of the random nature of LDPC codes. One of these two operations has to be performed using noncontiguous accesses, resulting in long access times. To overcome this issue, we designed a multi-codeword parallel decoder with fully coalesced memory access. To test the performance of the method, we applied it using an 8-bit compact data representation. The experimental results demonstrated that the method achieved more than 550 Mbps throughput on a Compute Unified Device Architecture (CUDA) enabled GPU.
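Editor's note: a small sketch of the 8-bit compact data idea, assuming four codewords' values are packed into one char4 so that a single 32-bit transaction serves four codewords and accesses stay coalesced. The saturating-add kernel below is only an illustrative fragment (e.g. of an a-posteriori accumulation step), not the paper's decoder.

#include <cuda_runtime.h>

__global__ void saturating_add4(const char4 *a, const char4 *b, char4 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    char4 x = a[i], y = b[i];                     // one 32-bit load covers 4 codewords
    char4 r;
    r.x = (char)min(max(x.x + y.x, -128), 127);   // saturate each lane to 8 bits
    r.y = (char)min(max(x.y + y.y, -128), 127);
    r.z = (char)min(max(x.z + y.z, -128), 127);
    r.w = (char)min(max(x.w + y.w, -128), 127);
    out[i] = r;
}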
---
paper_title: Embedded multicore architectures for LDPC decoding
paper_content:
Recently, the development of Low-Density Parity-Check (LDPC) decoding solutions has been proposed for a vast set of architectures, ranging from dedicated hardware to fully programmable ones (e.g. Cell/B.E. and graphics processing units). In this paper we propose efficient embedded programmable multicore architectures for achieving real-time LDPC decoding. The proposed multicore architectures allow to exploit data parallelism by decoding in parallel multiple codewords on the provided cores, with enough local memory capacity to store all data corresponding to the Tanner graph. Therefore, with this distributed memory and local computing approach, only a single shared bus is required to communicate the codewords. The proposed class of architectures can be prototyped on field programmable gate arrays or implemented on application specific integrated circuits, and it is validated by using the popular Cell processor, which relates very closely with the one here proposed. Finally, we discuss the related art of dedicated and programmable LDPC decoders, and discuss the advantages and disadvantages regarding the proposed solution.
---
paper_title: Massively LDPC Decoding on Multicore Architectures
paper_content:
Unlike the usual VLSI approaches necessary for computationally intensive Low-Density Parity-Check (LDPC) code decoders, this paper presents flexible software-based LDPC decoders. Algorithms and data structures suitable for parallel computing are proposed in this paper to perform LDPC decoding on multicore architectures. To evaluate the efficiency of the proposed parallel algorithms, LDPC decoders were developed on recent multicores, such as off-the-shelf general-purpose x86 processors, Graphics Processing Units (GPUs), and the CELL Broadband Engine (CELL/B.E.). Challenging restrictions, such as memory access conflicts, latency, coalescence, or unknown behavior of thread and block schedulers, were unraveled and worked out. Experimental results for different code lengths show throughputs in the order of 1-2 Mbps on the general-purpose multicores, ranging from 40 Mbps on the GPU to nearly 70 Mbps on the CELL/B.E. The analysis of the obtained results allows us to conclude that the CELL/B.E. performs better for short to medium length codes, while the GPU achieves superior throughputs with larger codes. Both achieve throughputs that in some cases closely approach those obtained with VLSI decoders. From the analysis of the results, we can predict a throughput increase with the rise of the number of cores.
---
paper_title: Performance evaluation of LDPC decoding on a general purpose mobile CPU
paper_content:
This paper explores using a mobile platform for performing the calculations required for the building blocks of telecommunication systems. The building block analyzed in this paper is LDPC (low-density parity-check) channel decoding, performed on the LDPC design used in the DVB-T2 standard. Implementation details are given, and a performance analysis on a mobile CPU is performed. The implementation is compared against a very similar implementation running on a desktop computer CPU, as well as a GPU (graphics processing unit) implementation. The results give indications of the current state of typical mobile platforms of today.
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using the Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs of up to 170 Mbps were achieved on a single core of an Intel Core i7 processor when executing 20 layered decoding iterations, and throughputs reach up to 560 Mbps on four Intel Core i7 cores. Experimental results show that the proposed implementations achieve BER correction performance similar to previous works. Moreover, much higher throughputs are achieved in comparison with all previous GPU and CPU works, with speedups ranging from 1.4x to 8x over recent GPU works.
---
paper_title: High coded data rate and multicodeword WiMAX LDPC decoding on Cell/BE
paper_content:
A novel, flexible and scalable parallel LDPC decoding approach for the WiMAX wireless broadband standard (IEEE 802.16e) on the multicore Cell broadband engine architecture is proposed. A multicodeword LDPC decoder performing the simultaneous decoding of 96 codewords is presented. The achieved coded data rate ranges from 72 to 80 Mbit/s, which compares well with VLSI-based decoders and exceeds the maximum coded data rate required by the WiMAX standard under worst-case conditions. The 8-bit precision arithmetic adopted shows additional advantages over traditional 6-bit precision dedicated VLSI-based solutions, allowing better error floors and BER performance.
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using the Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs of up to 170 Mbps were achieved on a single core of an Intel Core i7 processor when executing 20 layered decoding iterations, and throughputs reach up to 560 Mbps on four Intel Core i7 cores. Experimental results show that the proposed implementations achieve BER correction performance similar to previous works. Moreover, much higher throughputs are achieved in comparison with all previous GPU and CPU works, with speedups ranging from 1.4x to 8x over recent GPU works.
---
paper_title: A Comprehensive Performance Comparison of CUDA and OpenCL
paper_content:
This paper presents a comprehensive performance comparison between CUDA and OpenCL. We have selected 16 benchmarks ranging from synthetic applications to real-world ones. We make an extensive analysis of the performance gaps taking into account programming models, optimization strategies, architectural details, and underlying compilers. Our results show that, for most applications, CUDA performs at most 30% better than OpenCL. We also show that this difference is due to unfair comparisons: in fact, OpenCL can achieve similar performance to CUDA under a fair comparison. Therefore, we define a fair comparison of the two types of applications, providing guidelines for more potential analyses. We also investigate OpenCL's portability by running the benchmarks on other prevailing platforms with minor modifications. Overall, we conclude that OpenCL's portability does not fundamentally affect its performance, and OpenCL can be a good alternative to CUDA.
---
paper_title: Reduced complexity iterative decoding of low-density parity check codes based on belief propagation
paper_content:
Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly simplifies the decoding complexity of belief propagation in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge about the channel characteristics. Both algorithms yield a good performance-complexity trade-off and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.
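Editor's note: the best-known form of such an addition-only simplification is the min-sum approximation of the check-node update; in the log-likelihood-ratio domain (a generic textbook statement, not necessarily the exact variants of this paper),

\[
m_{c\to v} \;=\; 2\operatorname{atanh}\!\Big(\prod_{v'\in N(c)\setminus\{v\}}\tanh\big(m_{v'\to c}/2\big)\Big)
\;\approx\;
\Big(\prod_{v'\in N(c)\setminus\{v\}}\operatorname{sign}\big(m_{v'\to c}\big)\Big)\,
\min_{v'\in N(c)\setminus\{v\}}\big|m_{v'\to c}\big|,
\]

which leaves only sign manipulations, comparisons and the additions performed at the variable nodes; its hard decisions are insensitive to a common scaling of the input LLRs, which is why no knowledge of the channel characteristics is needed.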
---
paper_title: Implementation of Decoders for LDPC Block Codes and LDPC Convolutional Codes Based on GPUs
paper_content:
In this paper, efficient LDPC block-code decoders/simulators which run on graphics processing units (GPUs) are proposed. We also implement the decoder for the LDPC convolutional code (LDPCCC). The LDPCCC is derived from a predesigned quasi-cyclic LDPC block code with good error performance. Compared to a decoder based on a randomly constructed LDPCCC, the complexity of the proposed LDPCCC decoder is reduced due to the periodicity of the derived LDPCCC and the properties of the quasi-cyclic structure. In our proposed decoder architecture, Γ (a multiple of the warp size) codewords are decoded together, and hence the messages of Γ codewords are also processed together. Since all Γ codewords share the same Tanner graph, the messages of the Γ distinct codewords corresponding to the same edge can be grouped into one package and stored linearly. By optimizing the data structures of the messages used in the decoding process, both the read and write processes can be performed in a highly parallel manner by the GPUs. In addition, a thread hierarchy minimizing the divergence of the threads is deployed, and it can maximize the efficiency of the parallel execution. With the use of a large number of cores in the GPU to perform the simple computations simultaneously, our GPU-based LDPC decoder can obtain a speedup of hundreds of times compared with a serial CPU-based simulator and over 40 times compared with an eight-thread CPU-based simulator.
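Editor's note: a minimal sketch of the message packing described above, with GAMMA codewords per package stored contiguously for each Tanner-graph edge (an "edge-major, codeword-minor" layout); GAMMA, the names and the trivial scaling kernel are assumptions used only to show how this indexing keeps reads and writes coalesced.

#include <cuda_runtime.h>

#define GAMMA 32   // assumed package size, a multiple of the warp size

// Messages of the GAMMA codewords that share Tanner-graph edge e are adjacent.
__host__ __device__ inline size_t msg_index(size_t edge, int cw)
{
    return edge * GAMMA + cw;                      // edge-major, codeword-minor
}

__global__ void scale_edge_messages(float *msg, size_t num_edges, float alpha)
{
    size_t t  = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    int    cw = (int)(t % GAMMA);                  // codeword handled by this thread
    size_t e  = t / GAMMA;                         // edge shared by a group of GAMMA threads
    if (e < num_edges)
        msg[msg_index(e, cw)] *= alpha;            // coalesced read-modify-write
}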
---
paper_title: A multi-standard efficient column-layered LDPC decoder for Software Defined Radio on GPUs
paper_content:
In this paper, we propose a multi-standard high-throughput column-layered (CL) low-density parity-check (LDPC) decoder for Software-Defined Radio (SDR) on a Graphics Processing Unit (GPU) platform. Multiple columns in the sub-matrix of a quasi-cyclic LDPC (QC-LDPC) code are processed in parallel inside a block, while multiple codewords are simultaneously decoded among many blocks on the GPU. Several optimization methods are employed to enhance the throughput, such as a compressed matrix structure, memory optimization, a codeword packing scheme, a two-dimensional thread configuration and asynchronous data transfer. The experiments show that our decoder has a low bit error ratio and the peak throughput is 712 Mbps, which is about two orders of magnitude faster than that of a CPU implementation and comparable to dedicated hardware solutions. Compared to the existing fastest GPU-based implementation, the presented decoder achieves a performance improvement of 3.0x.
---
paper_title: Programming graphics processing units for the decoding of low-density parity-check codes
paper_content:
Simulating the error performance of low-density parity-check (LDPC) codes usually takes a lot of computation time. Inexpensive graphics processing units (GPUs) have recently been used to accelerate the decoding process by allowing the smaller and identical tasks to be performed in a highly parallel manner. In this paper, we propose decoding a number of LDPC codes at the same time using GPUs. We show that the decoding speed is improved with our proposed method.
---
paper_title: Parallel LDPC decoder implementation on GPU based on unbalanced memory coalescing
paper_content:
We consider flexible decoder implementation of low density parity check (LDPC) codes via compute-unified-device-architecture (CUDA) programming on graphics processing units (GPUs), a research subject of considerable recent interest. An important issue in LDPC decoder design based on CUDA-GPU is realizing coalesced memory access, a technique that reduces memory transaction time considerably. In previous works along this direction, it has not been possible to achieve coalesced memory access in both the read and write operations due to the asymmetric nature of the bipartite graph describing the LDPC code structure. In this paper, a new algorithm is proposed that enables coalesced memory access in both the read and write operations for one half of the decoding process, either the bit-to-check or the check-to-bit message passing. For the remaining half of the decoding step, our scheme requires address transformation in both the read and write operations, but one translating array is sufficient. We also describe the use of on-chip shared memory and the texture cache. Overall, experimental results show that the proposed GPU-based LDPC decoder achieves more than a 234× speedup compared to CPU-based LDPC decoders and also outperforms existing GPU-based decoders by a significant margin.
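Editor's note: a minimal sketch of the single translating array mentioned above (perm is an assumed name, not the paper's): messages are kept in check-node order, and only the variable-node half of the iteration goes through a precomputed permutation, so the direct half enjoys fully coalesced reads and writes.

__global__ void gather_via_perm(const float *msgs_check_order,   // edge messages, check-node order
                                float *msgs_var_order,           // same messages, variable-node order
                                const int *perm, int num_edges)  // perm[e]: location of edge e in the other order
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= num_edges) return;
    // The write below is coalesced (consecutive e); only the read is indirect.
    msgs_var_order[e] = msgs_check_order[perm[e]];
}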
---
paper_title: Performance evaluation of LDPC decoding on a general purpose mobile CPU
paper_content:
This paper explores using a mobile platform for performing the calculations required for the building blocks of telecommunication systems. The building block analyzed in this paper is LDPC (low-density parity-check) channel decoding, performed on the LDPC design used in the DVB-T2 standard. Implementation details are given, and a performance analysis on a mobile CPU is performed. The implementation is compared against a very similar implementation running on a desktop computer CPU, as well as a GPU (graphics processing unit) implementation. The results give indications of the current state of typical mobile platforms of today.
---
paper_title: Shortening Design Time through Multiplatform Simulations with a Portable OpenCL Golden-model: The LDPC Decoder Case
paper_content:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bit width and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3× faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using the Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs of up to 170 Mbps were achieved on a single core of an Intel Core i7 processor when executing 20 layered decoding iterations, and throughputs reach up to 560 Mbps on four Intel Core i7 cores. Experimental results show that the proposed implementations achieve BER correction performance similar to previous works. Moreover, much higher throughputs are achieved in comparison with all previous GPU and CPU works, with speedups ranging from 1.4x to 8x over recent GPU works.
---
paper_title: Massively LDPC Decoding on Multicore Architectures
paper_content:
Unlike the usual VLSI approaches necessary for computationally intensive Low-Density Parity-Check (LDPC) code decoders, this paper presents flexible software-based LDPC decoders. Algorithms and data structures suitable for parallel computing are proposed in this paper to perform LDPC decoding on multicore architectures. To evaluate the efficiency of the proposed parallel algorithms, LDPC decoders were developed on recent multicores, such as off-the-shelf general-purpose x86 processors, Graphics Processing Units (GPUs), and the CELL Broadband Engine (CELL/B.E.). Challenging restrictions, such as memory access conflicts, latency, coalescence, or unknown behavior of thread and block schedulers, were unraveled and worked out. Experimental results for different code lengths show throughputs in the order of 1-2 Mbps on the general-purpose multicores, ranging from 40 Mbps on the GPU to nearly 70 Mbps on the CELL/B.E. The analysis of the obtained results allows us to conclude that the CELL/B.E. performs better for short to medium length codes, while the GPU achieves superior throughputs with larger codes. Both achieve throughputs that in some cases closely approach those obtained with VLSI decoders. From the analysis of the results, we can predict a throughput increase with the rise of the number of cores.
---
paper_title: A Parallel IRRWBF LDPC Decoder Based on Stream-Based Processor
paper_content:
Low-density parity-check (LDPC) codes have gained much attention due to their use of various belief-propagation (BP) decoding algorithms to impart excellent error-correcting capability. The BP decoders are quite simple; however, their computation-intensive and repetitive process prohibits their use in energy-sensitive applications such as sensor networks. Bit-flipping-based decoding algorithms, especially the implementation-efficient, reliability-ratio-based weighted bit-flipping (IRRWBF) algorithm, have shown an excellent tradeoff between error-correction performance and implementation cost. In this paper, we show that with IRRWBF, iterative re-computation can be replaced by iterative selective updating. Compared with the original algorithm, simulation results show that decoding speed can be increased by 200 to 600 percent as the number of decoding iterations is increased from 5 to 1,000. The decoding steps are broken down into various stages such that the update operations are mostly of the single-instruction, multiple-data (SIMD) type. We show that by using Intel Wireless MMX 2 accelerating technology in the proposed algorithm, the speed is increased by 500 to 1,500 percent. The results of implementing the proposed scheme using an Intel/Marvell PXA320 (806 MHz) CPU are presented. The proposed scheme can be used effectively in real-time LDPC decoding for energy-sensitive mobile devices.
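Editor's note: a host-side sketch of the selective-updating idea, under the assumption that the decoder keeps one inversion-function value E[v] per bit; after flipping bit b, only bits that share a parity check with b can see their value change, so only those entries are recomputed instead of the whole vector. The structure and names are illustrative, not the paper's code.

#include <vector>
#include <functional>

struct TannerGraph {                                  // assumed adjacency lists of H
    std::vector<std::vector<int>> checks_of_bit;      // checks incident to each bit
    std::vector<std::vector<int>> bits_of_check;      // bits incident to each check
};

// Recompute only the inversion functions that the flip of 'flipped_bit' can affect.
void selective_update(const TannerGraph &g, int flipped_bit,
                      std::vector<double> &E,
                      const std::function<double(int)> &inversion_fn)
{
    for (int c : g.checks_of_bit[flipped_bit])        // parity checks touched by the flip
        for (int v : g.bits_of_check[c])              // bits sharing those checks
            E[v] = inversion_fn(v);                   // selective re-evaluation
}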
---
paper_title: Cell Processor Based LDPC Encoder/Decoder for WiMAX Applications
paper_content:
The encoder and decoder are the two most important and complex components of a wireless transceiver. Traditionally, dedicated hardware solutions are used because of the computationally intensive algorithms involved. This paper presents an alternative software-based solution that has several advantages over dedicated hardware solutions. LDPC codes are chosen for their excellent error correcting performance, and the Cell processor is chosen for its tremendous computational power. Sparse and structural properties of LDPC codes are exploited to reduce computation and memory requirements. Several optimization techniques suited to the Cell processor architecture, such as multi-threading, vectorization and loop unrolling, are used to improve performance. The proposed solution achieved significant performance improvements over existing software and dedicated hardware solutions.
---
paper_title: A multi-standard efficient column-layered LDPC decoder for Software Defined Radio on GPUs
paper_content:
In this paper, we propose a multi-standard high-throughput column-layered (CL) low-density parity-check (LDPC) decoder for Software-Defined Radio (SDR) on a Graphics Processing Unit (GPU) platform. Multiple columns in the sub-matrix of a quasi-cyclic LDPC (QC-LDPC) code are processed in parallel inside a block, while multiple codewords are simultaneously decoded among many blocks on the GPU. Several optimization methods are employed to enhance the throughput, such as a compressed matrix structure, memory optimization, a codeword packing scheme, a two-dimensional thread configuration and asynchronous data transfer. The experiments show that our decoder has a low bit error ratio and the peak throughput is 712 Mbps, which is about two orders of magnitude faster than that of a CPU implementation and comparable to dedicated hardware solutions. Compared to the existing fastest GPU-based implementation, the presented decoder achieves a performance improvement of 3.0x.
---
paper_title: Sequential decoding of non-binary LDPC codes on graphics processing units
paper_content:
Non-binary low-density parity-check (LDPC) codes have been shown to attain near capacity error correcting performance in noisy wireless communication channels. It is well known that these codes require a very large number of operations per-bit to decode. This high computational complexity along with a parallel decoder structure makes graphics processing units (GPUs) an attractive platform for acceleration of the decoding algorithm. The seemingly random memory access patterns associated with decoding are generally beneficial to error-correcting performance but present a challenge to designers who want to leverage the computational capabilities of the GPU. In this paper we describe the design of an efficient decoder implementation based on GPUs and a corresponding set of powerful non-binary LDPC codes. Using the belief propagation algorithm with a sequential message updating scheme it is shown that we are able to exploit parallelism inherent in the decoding algorithm while decreasing the number of decoding iterations required for convergence.
---
paper_title: Decoding of a quasi-cyclic LDPC code on a stream processor
paper_content:
The TDMP layered belief-propagation algorithm is investigated for decoding a quasi-cyclic low-density parity-check code on a stream processor using fixed-point arithmetic. The effect of the processor's fixed-point resolution on the decoder performance is determined, and a simple technique is described for minimizing the performance penalty incurred when using the (highest throughput) lowest-resolution arithmetic mode of the processor. A reordering of the decoder schedule and a modification of the parity checks are also considered which permit increased software pipelining and improved latency hiding, with a corresponding increase in the data throughput. The fixed-point Storm-1 stream processor is used for comparative throughput results.
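Studies of fixed-point resolution such as this one typically start from a saturating uniform quantizer applied to the channel LLRs and the exchanged messages. The sketch below only illustrates that starting point; the 6-bit word length and the number of fractional bits are arbitrary choices, not the resolutions examined in the paper.

```python
import numpy as np

def quantize(llr, total_bits=6, frac_bits=2):
    """Saturating uniform quantizer for LLR values (illustrative parameters)."""
    step = 2.0 ** (-frac_bits)
    max_q = 2 ** (total_bits - 1) - 1          # e.g. +/-31 for a 6-bit word
    q = np.clip(np.round(llr / step), -max_q, max_q)
    return q * step                            # back to real-valued, quantized LLRs

llrs = np.array([-7.3, 0.4, 12.9, -0.05])
print(quantize(llrs))                          # [-7.25  0.5   7.75 -0.  ]
```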
---
paper_title: Scheduling parity checks for increased throughput in early-termination, layered decoding of QC-LDPC codes on a stream processor
paper_content:
A stream processor is a power-efficient, high-level-language programmable option for embedded applications that are computation intensive and admit high levels of data parallelism. Many signal-processing algorithms for communications are well matched to stream-processor architectures, including partially parallel implementations of layered decoding algorithms such as the turbo-decoding message-passing (TDMP) algorithm. Communication among clusters of functional units in the stream processor imposes a latency cost during both the message-passing phase and the parity-check phase of the TDMP algorithm with early termination; the inter-cluster communications latency is a significant factor in limiting the throughput of the decoder. We consider two modifications of the schedule for the TDMP algorithm with early termination; each halves the communication required between functional-unit clusters of the stream processor in each iteration. We show that these can provide a substantial increase in the information throughput of the decoder without increasing the probability of error.
---
paper_title: Systematic construction and verification methodology for LDPC codes
paper_content:
In this paper, a novel and systematic LDPC codeword construction and verification methodology is proposed. The methodology is composed of the simulated annealing based LDPC codeword constructor, the GPU based high-speed codeword selector and the ant colony optimization based pipeline scheduler. Compared to traditional approaches, this methodology enables us to construct both decoding-performance-aware and hardware-efficiency-aware LDPC codewords in a short time. Simulation results show that the generated codewords have far fewer cycles (length-6 cycles eliminated) and memory conflicts (75% reduction in idle clocks), while having no BER performance loss compared to WiMAX codewords. Additionally, the simulation speeds up by 490 times under float precision compared to a CPU, and a net throughput of 24.5 Mbps is achieved.
---
paper_title: A Turbo-Decoding Message-Passing Algorithm for Sparse Parity-Check Matrix Codes
paper_content:
A turbo-decoding message-passing (TDMP) algorithm for sparse parity-check matrix (SPCM) codes such as low-density parity-check, repeat-accumulate, and turbo-like codes is presented. The main advantages of the proposed algorithm over the standard decoding algorithm are 1) its faster convergence speed by a factor of two in terms of decoding iterations, 2) improvement in coding gain by an order of magnitude at high signal-to-noise ratio (SNR), 3) reduced memory requirements, and 4) reduced decoder complexity. In addition, an efficient algorithm for message computation using simple "max" operations is also presented. Analysis using EXIT charts shows that the TDMP algorithm offers a better performance-complexity tradeoff when the number of decoding iterations is small, which is attractive for high-speed applications. A parallel version of the TDMP algorithm in conjunction with architecture-aware (AA) SPCM codes, which have embedded structure that enables efficient high-throughput decoder implementation, are presented. Design examples of AA-SPCM codes based on graphs with large girth demonstrate that AA-SPCM codes have very good error-correcting capability using the TDMP algorithm
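For orientation, the check-node message computation that dominates decoders in this family is usually written, in the widely used min-sum approximation (a standard simplification of the sum-product rule, shown here for context rather than as this paper's exact recursion), as

```latex
\Lambda_{c \to v} \;=\; \Big( \prod_{v' \in N(c)\setminus\{v\}} \operatorname{sign}\big(\lambda_{v' \to c}\big) \Big)
\cdot \min_{v' \in N(c)\setminus\{v\}} \big| \lambda_{v' \to c} \big|
```

where \lambda_{v' \to c} are the incoming variable-to-check messages and N(c) is the set of variable nodes participating in check c. The exact computation can instead be built from repeated Jacobian-logarithm operations, max*(a, b) = max(a, b) + log(1 + e^{-|a - b|}), which is presumably the kind of "simple max operation" the abstract refers to.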
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs up to 170 Mbps were achieved on a single core of an INTEL Core i7 processor when executing 20 layered-based decoding iterations. Throughputs reach up to 560 Mbps on four INTEL Core i7 cores. Experimental results show that the proposed implementations achieved similar BER correction performance to previous works. Moreover, much higher throughputs have been achieved in comparison with all previous GPU and CPU works, ranging from 1.4x to 8x relative to recent GPU works.
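The SIMD/SPMD strategy sketched in this abstract maps naturally onto array programming: instead of vectorizing within one codeword, several codewords are interleaved so that the same message-update arithmetic is applied to a whole vector of frames at once. The NumPy sketch below illustrates the idea for a variable-node a-posteriori update only; the shapes, names and the set of layer bits are illustrative assumptions, not the authors' data layout.

```python
import numpy as np

B = 8            # codewords decoded in parallel (SIMD width in frames)
N = 1296         # codeword length (illustrative)

# Channel LLRs and one layer's check-to-variable messages, stored so that the
# batch dimension is contiguous: element [v, b] belongs to bit v of codeword b,
# so one row is processed with a single vector operation.
llr_channel = np.random.randn(N, B).astype(np.float32)
msg_c2v     = np.zeros((N, B), dtype=np.float32)

# Variable-node (a-posteriori) update for the bits touched by one layer:
# the same arithmetic is applied to all B codewords at once.
layer_bits = np.array([3, 57, 411, 902])              # illustrative column indices
app = llr_channel[layer_bits] + msg_c2v[layer_bits]   # shape (4, B)
hard_decisions = (app < 0).astype(np.uint8)           # per-codeword bit decisions
```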
---
paper_title: A Scalable LDPC Decoder on GPU
paper_content:
A flexible and scalable approach for LDPC decoding on a CUDA-based Graphics Processing Unit (GPU) is presented in this paper. Layered decoding is a popular method for LDPC decoding and is known for its fast convergence. However, efficient implementation of the layered decoding algorithm on GPU is challenging due to the limited amount of data-parallelism available in this algorithm. To overcome this problem, a kernel execution configuration that can decode multiple codewords simultaneously on the GPU is developed. This paper proposes a compact data packing scheme to reduce the number of global memory accesses and a compact parity-check matrix representation to reduce constant memory latency. Global memory bandwidth efficiency is improved by coalescing simultaneous memory accesses of threads in a half-warp into a single memory transaction. Asynchronous data transfers are used to hide host memory latency by overlapping kernel execution with data transfers between CPU and GPU. The proposed implementation of the LDPC decoder on GPU performs two orders of magnitude faster than the LDPC decoder on a CPU and four times faster than the previously reported LDPC decoder on GPU. This implementation achieves a throughput of 160 Mbps, which is comparable to dedicated hardware solutions.
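A "compact data packing scheme" of the kind mentioned here typically amounts to storing several short fixed-point messages inside one 32-bit word, so that a single coalesced memory transaction serves several values. The NumPy snippet below is a hypothetical illustration of packing four 8-bit messages per 32-bit word, not the paper's actual layout.

```python
import numpy as np

# Four 8-bit quantized messages per 32-bit word (illustrative packing only).
msgs = np.array([-3, 17, -128, 64], dtype=np.int8)

packed = msgs.view(np.uint8).astype(np.uint32)
word = packed[0] | (packed[1] << 8) | (packed[2] << 16) | (packed[3] << 24)

# Unpacking recovers the original signed messages.
unpacked = np.array([(word >> (8 * i)) & 0xFF for i in range(4)],
                    dtype=np.uint8).view(np.int8)
assert (unpacked == msgs).all()
```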
---
paper_title: Highly Parallel FPGA Emulation for LDPC Error Floor Characterization in Perpendicular Magnetic Recording Channel
paper_content:
Low-density parity-check (LDPC) codes offer a promising error correction approach for high-density magnetic recording systems due to their near-Shannon limit error-correcting performance. However, evaluation of LDPC codes at the extremely low bit error rates (BER) required by hard disk drive systems, typically around 10^-12 to 10^-15, cannot be carried out on high-performance workstations using conventional Monte Carlo techniques in a tractable amount of time. Even field-programmable gate array (FPGA) emulation platforms take a few weeks to reach BER between 10^-11 and 10^-12. Thus, we implemented a highly parallel FPGA processing cluster to emulate a perpendicular magnetic recording channel, which enabled us to accelerate the emulation by more than 100 times over the fastest reported emulation. This increased throughput enabled us to characterize the performance of LDPC code BER down to near 10^-14 and investigate its error floor.
---
paper_title: Optimal Overlapped Message Passing Decoding of Quasi-Cyclic LDPC Codes
paper_content:
Efficient hardware implementation of low-density parity-check (LDPC) codes is of great interest since LDPC codes are being considered for a wide range of applications. Recently, overlapped message passing (OMP) decoding has been proposed to improve the throughput and hardware utilization efficiency (HUE) of decoder architectures for LDPC codes. In this paper, we first study the scheduling for the OMP decoding of LDPC codes, and show that maximizing the throughput gain amounts to minimizing the intra- and inter-iteration waiting times. We then focus on the OMP decoding of quasi-cyclic (QC) LDPC codes. We propose a partly parallel OMP decoder architecture and implement it using FPGA. For any QC LDPC code, our OMP decoder achieves the maximum throughput gain and HUE due to overlapping, hence has higher throughput and HUE than previously proposed OMP decoders while maintaining the same hardware requirements. We also show that the maximum throughput gain and HUE achieved by our OMP decoder are ultimately determined by the given code. Thus, we propose a coset-based construction method, which results in QC LDPC codes that allow our optimal OMP decoder to achieve higher throughput and HUE.
---
paper_title: Reconfigurable Computing Architectures
paper_content:
Reconfigurable architectures can bring unique capabilities to computational tasks. They offer the performance and energy efficiency of hardware with the flexibility of software. In some domains, they are the only way to achieve the required, real-time performance without fabricating custom integrated circuits. Their functionality can be upgraded and repaired during their operational lifecycle and specialized to the particular instance of a task. We survey the field of reconfigurable computing, providing a guide to the body-of-knowledge accumulated in architecture, compute models, tools, run-time reconfiguration, and applications.
---
paper_title: Synthesis of Platform Architectures from OpenCL Programs
paper_content:
The problem of automatically generating hardware modules from a high level representation of an application has been at the research forefront in the last few years. In this paper, we use OpenCL, an industry supported standard for writing programs that execute on multicore platforms and accelerators such as GPUs. Our architectural synthesis tool, SOpenCL (Silicon-OpenCL), adapts OpenCL into a novel hardware design flow which efficiently maps coarse and fine-grained parallelism of an application onto an FPGA reconfigurable fabric. SOpenCL is based on a source-to-source code transformation step that coarsens the OpenCL fine-grained parallelism into a series of nested loops, and on a template-based hardware generation back-end that configures the accelerator based on the functionality and the application performance and area requirements. Our experimentation with a variety of OpenCL and C kernel benchmarks reveals that area, throughput and frequency optimized hardware implementations are attainable using SOpenCL.
---
paper_title: Shortening Design Time through Multiplatform Simulations with a Portable OpenCL Golden-model: The LDPC Decoder Case
paper_content:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bit width and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL for mapping the simulations into FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target those three different platforms. We show that, depending on the design parameters to be explored in the simulation, on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3× faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
---
paper_title: Maximum Performance Computing with Dataflow Engines
paper_content:
Multidisciplinary dataflow computing is a powerful approach to scientific computing that has led to orders-of-magnitude performance improvements for a wide range of applications.
---
paper_title: High throughput low latency LDPC decoding on GPU for SDR systems
paper_content:
In this paper, we present a high throughput and low latency LDPC (low-density parity-check) decoder implementation on GPUs (graphics processing units). The existing GPU-based LDPC decoder implementations suffer from low throughput and long latency, which prevent them from being used in practical SDR (software-defined radio) systems. To overcome this problem, we present optimization techniques for a parallel LDPC decoder including algorithm optimization, fully coalesced memory access, asynchronous data transfer and multi-stream concurrent kernel execution for modern GPU architectures. Experimental results demonstrate that the proposed LDPC decoder achieves 316 Mbps (at 10 iterations) peak throughput on a single GPU. The decoding latency, which is much lower than that of the state of the art, varies from 0.207 ms to 1.266 ms for different throughput requirements from 62.5 Mbps to 304.16 Mbps. When using four GPUs concurrently, we achieve an aggregate peak throughput of 1.25 Gbps (at 10 iterations).
---
paper_title: High-Throughput Multi-Core LDPC Decoders Based on x86 Processor
paper_content:
Low-Density Parity-Check (LDPC) codes are an efficient way to correct transmission errors in digital communication systems. Although initially targeted strictly at ASICs due to their computational complexity, LDPC decoders have recently been ported to multicore and many-core systems. Most works focused on taking advantage of GPU devices. In this paper, we propose an alternative solution based on a layered OMS/NMS LDPC decoding algorithm that can be efficiently implemented on a multi-core device using Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD) programming models. Several experiments were performed on an x86 processor target. Throughputs up to 170 Mbps were achieved on a single core of an INTEL Core i7 processor when executing 20 layered-based decoding iterations. Throughputs reach up to 560 Mbps on four INTEL Core i7 cores. Experimental results show that the proposed implementations achieved similar BER correction performance to previous works. Moreover, much higher throughputs have been achieved in comparison with all previous GPU and CPU works, ranging from 1.4x to 8x relative to recent GPU works.
---
paper_title: Shortening Design Time through Multiplatform Simulations with a Portable OpenCL Golden-model: The LDPC Decoder Case
paper_content:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bit width and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL for mapping the simulations into FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target those three different platforms. We show that, depending on the design parameters to be explored in the simulation, on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3× faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
---
paper_title: Reconfigurable processing: the solution to low-power programmable DSP
paper_content:
One of the most compelling issues in the design of wireless communication components is to keep power dissipation within bounds. While low-power solutions are readily achieved in an application-specific approach, doing so in a programmable environment is a substantially harder problem. This paper presents an approach to low-power programmable DSP that is based on the dynamic reconfiguration of hardware modules. This technique has been shown to yield at least an order of magnitude of power reduction compared to traditional instruction-based engines for problems in the area of wireless communication.
---
paper_title: High-Level Synthesis: from Algorithm to Digital Circuit
paper_content:
This book presents an excellent collection of contributions addressing different aspects of high-level synthesis from both industry and academia. High-Level Synthesis: from Algorithm to Digital Circuit should be on each designer's and CAD developer's shelf, as well as on those of project managers who will soon embrace high-level design and synthesis for all aspects of digital system design.
---
| Title: A Survey on Programmable LDPC Decoders
Section 1: INTRODUCTION
Description 1: Provide a brief background on LDPC codes, their historical context, and recent advancements in their application and computational exploitation.
Section 2: THE PROBLEM
Description 2: Discuss the main challenges and requirements in designing efficient LDPC decoders in modern ECC systems, particularly in the context of VLSI technology.
Section 3: MOTIVATION
Description 3: Outline the motivations behind developing LDPC decoders for programmable architectures, including trends in semiconductor manufacturing and multicore processing.
Section 4: NOTATION
Description 4: Explain the notation and symbols used throughout the paper to describe LDPC decoding solutions and algorithms.
Section 5: EVALUATION
Description 5: Summarize the evaluation criteria and challenges for benchmarking LDPC decoding solutions, emphasizing computational complexity and memory layout considerations.
Section 6: FIGURES OF MERIT
Description 6: Discuss the key performance metrics used to evaluate LDPC decoders, such as decoding throughput and latency.
Section 7: DECODING ON PROGRAMMABLE ARCHITECTURES
Description 7: Provide a detailed survey of LDPC decoders implemented on various programmable architectures such as GPUs, CPUs, and streaming accelerators, highlighting key characteristics and performance figures.
Section 8: PROGRAMMABLE LDPC DECODER MAPPING
Description 8: Describe the techniques for mapping LDPC decoders onto programmable architectures, including memory layout and computational resource optimization strategies.
Section 9: TANNER GRAPH INDEXING SCHEMES
Description 9: Outline different methods for indexing Tanner graph connections in memory, comparing their effectiveness for various code structures.
Section 10: PROGRAMMING MODELS
Description 10: Discuss the programming models used for developing LDPC decoders, including OpenMP, POSIX threads, CUDA, OpenCL, and SIMD instructions.
Section 11: THREAD-LEVEL PARALLELISM
Description 11: Examine the strategies for exploiting thread-level parallelism in LDPC decoders, categorizing them by granularity and performance impact.
Section 12: DATA-LEVEL PARALLELISM
Description 12: Explain the approaches to data-level parallelism in LDPC decoders, including single codeword, codeword batch, padded, and interleaved codeword batches.
Section 13: DECODING ALGORITHMS
Description 13: Survey the different decoding algorithms employed in LDPC decoders, with a focus on those optimized for programmable hardware.
Section 14: DECODING SCHEDULES
Description 14: Describe the scheduling approaches for LDPC decoding, particularly TPMP and TDMP, and their impact on performance.
Section 15: FUTURE DIRECTIONS: RECONFIGURABLE ARCHITECTURES USING HIGH-LEVEL SYNTHESIS
Description 15: Explore the potential of using high-level synthesis for developing LDPC decoders on reconfigurable architectures like FPGAs, including emerging trends and tools.
Section 16: PROGRAMMING MODELS
Description 16: Detail the high-level synthesis programming models used for FPGA-based LDPC decoders, such as OpenCL for FPGAs and Vivado HLS.
Section 17: PARALLELISM
Description 17: Discuss how parallelism is implemented in reconfigurable architectures using high-level synthesis, highlighting loop annotation and wide-pipeline strategies.
Section 18: SUMMARY
Description 18: Summarize the key insights from the survey, emphasizing the performance trade-offs and future prospects for LDPC decoders on programmable and reconfigurable hardware. |
Intelligent Notification Systems: A Survey of the State of the Art and Research Challenges | 12 | ---
paper_title: I've got 99 problems, but vibration ain't one: a survey of smartphone users' concerns
paper_content:
Smartphone operating systems warn users when third-party applications try to access sensitive functions or data. However, all of the major smartphone platforms warn users about different application actions. To our knowledge, their selection of warnings was not grounded in user research; past research on mobile privacy has focused exclusively on the risks pertained to sharing location. To expand the scope of smartphone security and privacy research, we surveyed 3,115 smartphone users about 99 risks associated with 54 smartphone privileges. We asked participants to rate how upset they would be if given risks occurred and used this data to rank risks by levels of user concern. We then asked 41 smartphone users to discuss the risks in their own words; their responses confirmed that people find the lowest-ranked risks merely annoying but might seek legal or financial retribution for the highest-ranked risks. In order to determine the relative frequency of risks, we also surveyed the 3,115 users about experiences with "misbehaving" applications. Our ranking and frequency data can be used to guide the selection of warnings on smartphone platforms.
---
paper_title: Effects of Content and Time of Delivery on Receptivity to Mobile Interruptions
paper_content:
In this paper we investigate effects of the content of interruptions and of the time of interruption delivery on mobile phones. We review related work and report on a naturalistic quasi-experiment using experience-sampling that showed that the receptivity to an interruption is influenced by its content rather than by its time of delivery in the employed modality of delivery - SMS. We also examined the underlying variables that increase the perceived quality of content and found that the factors interest, entertainment, relevance and actionability influence people's receptivity significantly. Our findings inform system design that seeks to provide context-sensitive information or to predict interruptibility and suggest the consideration of receptivity as an extension to the way we think and reason about interruptibility.
---
paper_title: Designing content-driven intelligent notification mechanisms for mobile applications
paper_content:
An increasing number of notifications demanding the smartphone user's attention often arrive at an inappropriate moment or carry irrelevant content. In this paper we present a study of mobile user interruptibility with respect to notification content, its sender, and the context in which a notification is received. In a real-world study we collect around 70,000 instances of notifications from 35 users. We group notifications according to the applications that initiated them, and the social relationship between the sender and the receiver. Then, by considering both content and context information, such as the current activity of a user, we discuss the design of classifiers for learning the most opportune moment for the delivery of a notification carrying a specific type of information. Our results show that such classifiers lead to a more accurate prediction of users' interruptibility than an alternative approach based on user-defined rules of their own interruptibility.
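A classifier of the kind described here can be prototyped directly from logged notification features. The scikit-learn sketch below is a hypothetical illustration: the feature names (app category, sender relationship, user activity, hour of day), the synthetic log, and the labelling of "opportune" moments are assumptions made for the example, not the study's actual feature set or model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical log: one row per delivered notification.
df = pd.DataFrame({
    "app_category":  ["messenger", "email", "social", "messenger", "system"] * 40,
    "sender_tie":    ["family", "work", "acquaintance", "friend", "none"] * 40,
    "user_activity": ["still", "walking", "in_vehicle", "still", "still"] * 40,
    "hour_of_day":   [9, 14, 19, 22, 8] * 40,
    "opportune":     [1, 0, 0, 1, 0] * 40,   # 1 = user attended promptly
})

X = pd.get_dummies(df.drop(columns="opportune"))   # one-hot encode categoricals
y = df["opportune"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```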
---
paper_title: My Phone and Me: Understanding People's Receptivity to Mobile Notifications
paper_content:
Notifications are extremely beneficial to users, but they often demand their attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
---
paper_title: Large-scale assessment of mobile notifications
paper_content:
Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.
---
paper_title: The Scope and Importance of Human Interruption in Human-Computer Interaction Design
paper_content:
At first glance it seems absurd that busy people doing important jobs should want their computers to interrupt them. Interruptions are disruptive and people need to concentrate to make good decisions. However, successful job performance also frequently depends on people's abilities to (a) constantly monitor their dynamically changing information environments, (b) collaborate and communicate with other people in the system, and (c) supervise background autonomous services. These critical abilities can require people to simultaneously query a large set of information sources, continuously monitor for important events, and respond to and communicate with other human operators. Automated monitoring and alerting systems minimize the need to constantly monitor, but they induce alerts that may interrupt other activities. Such interrupting technologies are already widespread and include concurrent multitasking; mixed-initiative interaction; support for delegation and supervisory control of automation, including intelligent agents; and other distributed, background services and technologies that increase human-human communication. People do not perform sustained, simultaneous, multichannel sampling well; however, they have great capacity to manage concurrent activities when given specific kinds of interface support. Literature from many domains shows deleterious consequences of human performance in interrupt-laden situations when interfaces do not support this aspect of the task environment. This article identifies why human interruption is an important human-computer interaction problem, and why it will continue to grow in ubiquity and importance. We provide examples of this problem in real-world systems, and we review theoretical tools for understanding human interruption. Based on interdisciplinary scientific results, we suggest potential approaches to user-interface design to help people effectively manage interruptions.
---
paper_title: A model for notification systems evaluation—assessing user goals for multitasking activity
paper_content:
Addressing the need to tailor usability evaluation methods (UEMs) and promote effective reuse of HCI knowledge for computing activities undertaken in divided-attention situations, we present the foundations of a unifying model that can guide evaluation efforts for notification systems. Often implemented as ubiquitous systems or within a small portion of the traditional desktop, notification systems typically deliver information of interest in a parallel, multitasking approach, extraneous or supplemental to a user's attention priority. Such systems represent a difficult challenge to evaluate meaningfully. We introduce a design model of user goals based on blends of three critical parameters---interruption, reaction, and comprehension. Categorization possibilities form a logical, descriptive design space for notification systems, rooted in human information processing theory. This model allows conceptualization of distinct action models for at least eight classes of notification systems, which we describe and analyze with a human information processing model. System classification regions immediately suggest useful empirical and analytical evaluation metrics from related literature. We present a case study that demonstrates how these techniques can assist an evaluator in adapting traditional UEMs for notification and other multitasking systems. We explain why using the design model categorization scheme enabled us to generate evaluation results that are more relevant for the system redesign than the results of the original exploration done by the system's designers.
---
paper_title: Attention, Intentions, And The Structure Of Discourse
paper_content:
In this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discourse. In this theory, discourse structure is composed of three separate but interrelated components: the structure of the sequence of utterances (called the linguistic structure), a structure of purposes (called the intentional structure), and the state of focus of attention (called the attentional state). The linguistic structure consists of segments of the discourse into which the utterances naturally aggregate. The intentional structure captures the discourse-relevant purposes, expressed in each of the linguistic segments as well as relationships among them. The attentional state is an abstraction of the focus of attention of the participants as the discourse unfolds. The attentional state, being dynamic, records the objects, properties, and relations that are salient at each point of the discourse. The distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases, referring expressions, and interruptions. The theory of attention, intention, and aggregation of utterances is illustrated in the paper with a number of example discourses. Various properties of discourse are described, and explanations for the behavior of cue phrases, referring expressions, and interruptions are explored. This theory provides a framework for describing the processing of utterances in a discourse. Discourse processing requires recognizing how the utterances of the discourse aggregate into segments, recognizing the intentions expressed in the discourse and the relationships among intentions, and tracking the discourse through the operation of the mechanisms associated with attentional state. This processing description specifies in these recognition tasks the role of information from the discourse and from the participants' knowledge of the domain.
---
paper_title: A simplest systematics for the organization of turn-taking for conversation
paper_content:
Publisher Summary Turn taking is used for the ordering of moves in games, for allocating political office, for regulating traffic at intersections, for the servicing of customers at business establishments, and for talking in interviews, meetings, debates, ceremonies, conversations. This chapter discusses the turn-taking system for conversation. On the basis of research using audio recordings of naturally occurring conversations, the chapter highlights the organization of turn taking for conversation and extracts some of the interest that organization has. The turn-taking system for conversation can be described in terms of two components and a set of rules. These two components are turn-constructional component and turn-constructional component. Turn-allocational techniques are distributed into two groups: (1) those in which next turn is allocated by current speaker selecting a next speaker and (2) those in which next turn is allocated by self-selection. The turn-taking rule-set provides for the localization of gap and overlap possibilities at transition-relevance places and their immediate environment, cleansing the rest of a turn's space of systematic bases for their possibility.
---
paper_title: Contributing to Discourse
paper_content:
For people to contribute to discourse, they must do more than utter the right sentence at the right time. The basic requirement is that they add to their common ground in an orderly way. To do this, we argue, they try to establish for each utterance the mutual belief that the addressees have understood what the speaker meant well enough for current purposes. This is accomplished by the collective actions of the current contributor and his or her partners, and these result in units of conversation called contributions. We present a model of contributions and show how it accounts for a variety of features of everyday conversations. People take part in conversation in order to plan, debate, discuss, gossip, and carry out other social processes. When they do take part, they could be said to contribute to the discourse. But how do they contribute? At first the answer seems obvious. A discourse is a sequence of utterances produced as the participants proceed turn by turn. All that participants have to do to contribute is utter the right sentence at the right time. They may make errors, but once they have corrected them, they are done. The other participants have merely to listen and understand. This is the view subscribed to in most discourse theories in psychology, linguistics, philosophy, and artificial intelligence. A closer look at actual conversations, however, suggests that they are much more than sequences of utterances produced turn by turn. They are highly coordinated activities in which the current speaker tries to make sure he or she is being attended to, heard, and understood by the other participants, and they in turn try to let the speaker know when he or she has succeeded. Contributing to a discourse, then, appears to require more than just uttering the right words at the right time. It seems to consist of collective acts performed by the participants working together. In this paper we describe a model of contributions as parts of collective acts. We first describe the need for such a model, next present the model
---
paper_title: Bluetooth and WAP push based location-aware mobile advertising system
paper_content:
Advertising on mobile devices has large potential due to the very personal and intimate nature of the devices and high targeting possibilities. We introduce a novel B-MAD system for delivering permission-based location-aware mobile advertisements to mobile phones using Bluetooth positioning and Wireless Application Protocol (WAP) Push. We present a thorough quantitative evaluation of the system in a laboratory environment and qualitative user evaluation in form of a field trial in the real environment of use. Experimental results show that the system provides a viable solution for realizing permission-based mobile advertising.
---
paper_title: The Scope and Importance of Human Interruption in Human-Computer Interaction Design
paper_content:
At first glance it seems absurd that busy people doing important jobs should want their computers to interrupt them. Interruptions are disruptive and people need to concentrate to make good decisions. However, successful job performance also frequently depends on people's abilities to (a) constantly monitor their dynamically changing information environments, (b) collaborate and communicate with other people in the system, and (c) supervise background autonomous services. These critical abilities can require people to simultaneously query a large set of information sources, continuously monitor for important events, and respond to and communicate with other human operators. Automated monitoring and alerting systems minimize the need to constantly monitor, but they induce alerts that may interrupt other activities. Such interrupting technologies are already widespread and include concurrent multitasking; mixed-initiative interaction; support for delegation and supervisory control of automation, including intelligent agents; and other distributed, background services and technologies that increase human-human communication. People do not perform sustained, simultaneous, multichannel sampling well; however, they have great capacity to manage concurrent activities when given specific kinds of interface support. Literature from many domains shows deleterious consequences of human performance in interrupt-laden situations when interfaces do not support this aspect of the task environment. This article identifies why human interruption is an important human-computer interaction problem, and why it will continue to grow in ubiquity and importance. We provide examples of this problem in real-world systems, and we review theoretical tools for understanding human interruption. Based on interdisciplinary scientific results, we suggest potential approaches to user-interface design to help people effectively manage interruptions.
---
paper_title: Interruption of People in Human-Computer Interaction: A General Unifying Definition of Human Interruption and Taxonomy
paper_content:
User-interruption in human-computer interaction (HCI) is an increasingly important problem. Many of the useful advances in intelligent and multitasking computer systems have the significant side effect of greatly increasing user-interruption. This previously innocuous HCI problem has become critical to the successful function of many kinds of modern computer systems. Unfortunately, no HCI design guidelines exist for solving this problem. In fact, theoretical tools do not yet exist for investigating the HCI problem of user-interruption in a comprehensive and generalizable way. This report asserts that a single unifying definition of user-interruption and the accompanying practical taxonomy would be useful theoretical tools for driving effective investigation of this crucial HCI problem. These theoretical tools are constructed here. A comprehensive analysis is conducted through the existing literature. Theoretical constructs from several relevant but diverse fields are identified and discussed. A unifying definition of user-interruption is synthesized. This new definition is supported with an array of postulates, assertions, and a taxonomy of human interruption to facilitate its practical application.
---
paper_title: Investigating Interruptions: Implications for Flightdeck Performance
paper_content:
A fundamental aspect of multiple task management is attending to new stimuli and integrating associated task requirements into an ongoing task set; this is "interruption management" (IM). Anecdotal evidence and field studies indicate the frequency and consequences of interruptions, however experimental investigations of mechanisms influencing IM are scarce. Interruptions on commercial flightdecks are numerous, of various forms, and have been cited as contributing factors in many aviation incident and accident reports. This research grounds an experimental investigation of flight deck interruptions in a proposed IM stage model. This model organizes basic research, identifies influencing mechanisms, and suggests appropriate dependent measures for IM. Fourteen airline pilots participated in a flightdeck simulation experiment to investigate the general effects of performing an interrupting task and interrupted procedure, and the effects of specific task factors: (1) modality; (2) embeddedness, or goal-level, of an interruption; (3) strength of association, or coupling-strength, between interrupted tasks; (4) semantic similarity; and (5) environmental stress. General effects of interruptions were extremely robust. All individual task factors significantly affected interruption management, except "similarity." Results extend the Interruption Management model, and are interpreted for their implications for interrupted flightdeck performance and intervention strategies for mitigating their effects on the flightdeck.
---
paper_title: Task Interruption and its Effects on Memory
paper_content:
We investigated the recovery from memory of a primary task after an interruption. If the primary task lacked associative support among its task components, recovery was more difficult following an interruption that overlapped either completely or partially in the amount of information shared with the primary task (an interruption-similarity effect). In addition, memory for completed actions was superior to memory for impending unfinished actions. However, if the primary task had associative support among its task components, there was no adverse effect of interruption similarity, and completed and unfinished actions were recalled equally well. We explore possible explanations and implications of these results.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Measuring the effects of interruptions on task performance in the user interface
paper_content:
As users continue offloading more control and responsibility to the computer, coordinating the asynchronous interactions between the user and computer is becoming increasingly important. Without proper coordination, an application attempting to gain the user's attention risks interrupting the user in the midst of performing another task. To justify why an application should avoid interrupting the user whenever possible, we designed an experiment measuring the disruptive effect of an interruption on a user's task performance. The experiment utilized six Web-based task categories and two categories of interruption tasks. The results of the experiment demonstrate that: (i) a user performs slower on an interrupted task than a non-interrupted task, (ii) the disruptive effect of an interruption differs as a function of the task category, and (iii) different interruption tasks cause similar disruptive effects on task performance. These results empirically validate the need to better coordinate user interactions among applications that are competing for the user's attention.
---
paper_title: Creativity and sensory gating indexed by the P50: Selective versus leaky sensory gating in divergent thinkers and creative achievers
paper_content:
Creativity has previously been linked with atypical attention, but it is not clear what aspects of attention, or what types of creativity are associated. Here we investigated specific neural markers of a very early form of attention, namely sensory gating, indexed by the P50 ERP, and how it relates to two measures of creativity: divergent thinking and real-world creative achievement. Data from 84 participants revealed that divergent thinking (assessed with the Torrance Test of Creative Thinking) was associated with selective sensory gating, whereas real-world creative achievement was associated with "leaky" sensory gating, both in zero-order correlations and when controlling for academic test scores in a regression. Thus both creativity measures related to sensory gating, but in opposite directions. Additionally, divergent thinking and real-world creative achievement did not interact in predicting P50 sensory gating, suggesting that these two creativity measures orthogonally relate to P50 sensory gating. Finally, the ERP effect was specific to the P50 - neither divergent thinking nor creative achievement were related to later components, such as the N100 and P200. Overall results suggest that leaky sensory gating may help people integrate ideas that are outside of focus of attention, leading to creativity in the real world; whereas divergent thinking, measured by divergent thinking tests which emphasize numerous responses within a limited time, may require selective sensory processing more than previously thought.
---
paper_title: Task Interruption and its Effects on Memory
paper_content:
We investigated the recovery from memory of a primary task after an interruption. If the primary task lacked associative support among its task components, recovery was more difficult following an interruption that overlapped either completely or partially in the amount of information shared with the primary task (an interruption-similarity effect). In addition, memory for completed actions was superior to memory for impending unfinished actions. However, if the primary task had associative support among its task components, there was no adverse effect of interruption similarity, and completed and unfinished actions were recalled equally well. We explore possible explanations and implications of these results.
---
paper_title: The psychology of memory
paper_content:
In this chapter I will try to provide a brief overview of the concepts and techniques that are most widely used in the psychology of memory. Although it may not appear to be the case from sampling the literature, there is in fact a great deal of agreement as to what constitutes the psychology of memory, much of it developed through the interaction of the study of normal memory in the laboratory and of its breakdown in brain-damaged patients. A somewhat more detailed account can be found in Parkin & Leng (1993) and Baddeley (1999), while a more extensive overview is given by Baddeley (1997), and within the various chapters comprising the Handbook of Memory (Tulving & Craik, 2000). THE FRACTIONATION OF MEMORY The concept of human memory as a unitary faculty began to be seriously eroded in the 1960s with the proposal that long-term memory (LTM) and short-term memory (STM) represent separate systems. Among the strongest evidence for this dissociation was the contrast between two types of neuropsychological patient. Patients with the classic amnesic syndrome, typically associated with damage to the temporal lobes and hippocampi, appeared to have a quite general problem in learning and remembering new material, whether verbal or visual (Milner, 1966). They did, however, appear to have normal short-term memory (STM), as measured for example by digit span, the capacity to hear and immediately repeat back an unfamiliar sequence of numbers. Shallice & Warrington (1970) identified an exactly opposite pattern of deficit in patients with damage to the perisylvian region of the left hemisphere. Such patients had a digit span limited to one or two, but apparently normal LTM. By the late 1960s, the evidence seemed to be pointing clearly to a two-component memory system. Figure 1.1 shows the representation of such a system from an influential model of the time, that of Atkinson & Shiffrin (1968). Information is assumed to flow from the environment through a series of very brief sensory memories, that are perhaps best regarded as part of the perceptual system, into a limited capacity short-term store. They proposed that the longer an item resides in this store, the greater the probability of its transfer to LTM. Amnesic patients were assumed to have a deficit in the LTM system, and STM patients in the short-term store.
---
paper_title: Instant Messaging and Interruption : Influence of Task Type on Performance
paper_content:
We describe research on the effects of instant messaging (IM) on ongoing computing tasks. We present a study that builds on earlier work exploring the influence of sending notifications at different times and the kinds of tasks that are particularly susceptible to interruption. This work investigates alternative hypotheses about the nature of disruption for a list evaluation task, an activity we had identified as being particularly costly to interrupt. Our findings replicate earlier work, showing the generally harmful effects of IM, and further show that notifications are more disruptive for fast, stimulus-driven search tasks than for slower, more effortful semantic-based search tasks.
---
paper_title: Interruption as a test of the user-computer interface
paper_content:
In order to study the effects different logic systems might have on interrupted operation, an algebraic calculator and a reverse polish notation calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when the users were not interrupted. Causes and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic systems and control/display systems, and that interruption resistance be adopted as a specific design criterion for such designs.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Measuring the effects of interruptions on task performance in the user interface
paper_content:
As users continue offloading more control and responsibility to the computer, coordinating the asynchronous interactions between the user and computer is becoming increasingly important. Without proper coordination, an application attempting to gain the user's attention risks interrupting the user in the midst of performing another task. To justify why an application should avoid interrupting the user whenever possible, we designed an experiment measuring the disruptive effect of an interruption on a user's task performance. The experiment utilized six Web-based task categories and two categories of interruption tasks. The results of the experiment demonstrate that: (i) a user performs slower on an interrupted task than a non-interrupted task, (ii) the disruptive effect of an interruption differs as a function of the task category, and (iii) different interruption tasks cause similar disruptive effects on task performance. These results empirically validate the need to better coordinate user interactions among applications that are competing for the user's attention.
---
paper_title: If not now, when?: the effects of interruption at different moments within task execution
paper_content:
User attention is a scarce resource, and users are susceptible to interruption overload. Systems do not reason about the effects of interrupting a user during a task sequence. In this study, we measure effects of interrupting a user at different moments within task execution in terms of task performance, emotional state, and social attribution. Task models were developed using event perception techniques, and the resulting models were used to identify interruption timings based on a user's predicted cognitive load. Our results show that different interruption moments have different impacts on user emotional state and positive social attribution, and suggest that a system could enable a user to maintain a high level of awareness while mitigating the disruptive effects of interruption. We discuss implications of these results for the design of an attention manager.
---
paper_title: "Silence Your Phones": Smartphone Notifications Increase Inattention and Hyperactivity Symptoms
paper_content:
As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.
---
paper_title: Task Interruption and its Effects on Memory
paper_content:
We investigated the recovery from memory of a primary task after an interruption. If the primary task lacked associative support among its task components, recovery was more difficult following an interruption that overlapped either completely or partially in the amount of information shared with the primary task (an interruption-similarity effect). In addition, memory for completed actions was superior to memory for impending unfinished actions. However, if the primary task had associative support among its task components, there was no adverse effect of interruption similarity, and completed and unfinished actions were recalled equally well. We explore possible explanations and implications of these results.
---
paper_title: Interruption as a test of the user-computer interface
paper_content:
In order to study the effects different logic systems might have on interrupted operation, an algebraic calculator and a reverse Polish notation calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when the users were not interrupted. Causes of and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic and control/display systems, and that interruption resistance be adopted as a specific design criterion for such systems.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Human Computer Interaction
paper_content:
Contents Foreword Preface to the third edition Preface to the second edition Preface to the first edition Introduction Part 1 Foundations Chapter 1 The human 1.1 Introduction 1.2 Input-output channels Design Focus: Getting noticed Design Focus: Where's the middle? 1.3 Human memory Design Focus: Cashing in Design Focus: 7 +- 2 revisited 1.4 Thinking: reasoning and problem solving Design Focus: Human error and false memories 1.5 Emotion 1.6 Individual differences 1.7 Psychology and the design of interactive systems 1.8 Summary Exercises Recommended reading Chapter 2 The computer 2.1 Introduction Design Focus: Numeric keypads 2.2 Text entry devices 2.3 Positioning, pointing and drawing 2.4 Display devices Design Focus: Hermes: a situated display 2.5 Devices for virtual reality and 3D interaction 2.6 Physical controls, sensors and special devices Design Focus: Feeling the road Design Focus: Smart-Its - making sensors easy 2.7 Paper: printing and scanning Design Focus: Readability of text 2.8 Memory 2.9 Processing and networks Design Focus: The myth of the infinitely fast machine 2.10 Summary Exercises Recommended reading Chapter 3 The interaction 3.1 Introduction 3.2 Models of interaction Design Focus: Video recorder 3.3 Frameworks and HCI 3.4 Ergonomics Design Focus: Industrial interfaces 3.5 Interaction styles Design Focus: Navigation in 3D and 2D 3.6 Elements of the WIMP interface Design Focus: Learning toolbars 3.7 Interactivity 3.8 The context of the interaction Design Focus: Half the picture? 3.9 Experience, engagement and fun 3.10 Summary Exercises Recommended reading Chapter 4 Paradigms 4.1 Introduction 4.2 Paradigms for interaction 4.3 Summary Exercises Recommended reading Part 2 Design process Chapter 5 Interaction design basics 5.1 Introduction 5.2 What is design? 5.3 The process of design 5.4 User focus Design Focus: Cultural probes 5.5 Scenarios 5.6 Navigation design Design Focus: Beware the big button trap Design Focus: Modes 5.7 Screen design and layout Design Focus: Alignment and layout matter Design Focus: Checking screen colors 5.8 Iteration and prototyping 5.9 Summary Exercises Recommended reading Chapter 6 HCI in the software process 6.1 Introduction 6.2 The software life cycle 6.3 Usability engineering 6.4 Iterative design and prototyping Design Focus: Prototyping in practice 6.5 Design rationale 6.6 Summary Exercises Recommended reading Chapter 7 Design rules 7.1 Introduction 7.2 Principles to support usability 7.3 Standards 7.4 Guidelines 7.5 Golden rules and heuristics 7.6 HCI patterns 7.7 Summary Exercises Recommended reading Chapter 8 Implementation support 8.1 Introduction 8.2 Elements of windowing systems 8.3 Programming the application Design Focus: Going with the grain 8.4 Using toolkits Design Focus: Java and AWT 8.5 User interface management systems 8.6 Summary Exercises Recommended reading Chapter 9 Evaluation techniques 9.1 What is evaluation? 
9.2 Goals of evaluation 9.3 Evaluation through expert analysis 9.4 Evaluation through user participation 9.5 Choosing an evaluation method 9.6 Summary Exercises Recommended reading Chapter 10 Universal design 10.1 Introduction 10.2 Universal design principles 10.3 Multi-modal interaction Design Focus: Designing websites for screen readers Design Focus: Choosing the right kind of speech Design Focus: Apple Newton 10.4 Designing for diversity Design Focus: Mathematics for the blind 10.5 Summary Exercises Recommended reading Chapter 11 User support 11.1 Introduction 11.2 Requirements of user support 11.3 Approaches to user support 11.4 Adaptive help systems Design Focus: It's good to talk - help from real people 11.5 Designing user support systems 11.6 Summary Exercises Recommended reading Part 3 Models and theories Chapter 12 Cognitive models 12.1 Introduction 12.2 Goal and task hierarchies Design Focus: GOMS saves money 12.3 Linguistic models 12.4 The challenge of display-based systems 12.5 Physical and device models 12.6 Cognitive architectures 12.7 Summary Exercises Recommended reading Chapter 13 Socio-organizational issues and stakeholder requirements 13.1 Introduction 13.2 Organizational issues Design Focus: Implementing workflow in Lotus Notes 13.3 Capturing requirements Design Focus: Tomorrow's hospital - using participatory design 13.4 Summary Exercises Recommended reading Chapter 14 Communication and collaboration models 14.1 Introduction 14.2 Face-to-face communication Design Focus: Looking real - Avatar Conference 14.3 Conversation 14.4 Text-based communication 14.5 Group working 14.6 Summary Exercises Recommended reading Chapter 15 Task analysis 15.1 Introduction 15.2 Differences between task analysis and other techniques 15.3 Task decomposition 15.4 Knowledge-based analysis 15.5 Entity-relationship-based techniques 15.6 Sources of information and data collection 15.7 Uses of task analysis 15.8 Summary Exercises Recommended reading Chapter 16 Dialog notations and design 16.1 What is dialog? 
16.2 Dialog design notations 16.3 Diagrammatic notations Design Focus: Using STNs in prototyping Design Focus: Digital watch - documentation and analysis 16.4 Textual dialog notations 16.5 Dialog semantics 16.6 Dialog analysis and design 16.7 Summary Exercises Recommended reading Chapter 17 Models of the system 17.1 Introduction 17.2 Standard formalisms 17.3 Interaction models 17.4 Continuous behavior 17.5 Summary Exercises Recommended reading Chapter 18 Modeling rich interaction 18.1 Introduction 18.2 Status-event analysis 18.3 Rich contexts 18.4 Low intention and sensor-based interaction Design Focus: Designing a car courtesy light 18.5 Summary Exercises Recommended reading Part 4 Outside the box Chapter 19 Groupware 19.1 Introduction 19.2 Groupware systems 19.3 Computer-mediated communication Design Focus: SMS in action 19.4 Meeting and decision support systems 19.5 Shared applications and artifacts 19.6 Frameworks for groupware Design Focus: TOWER - workspace awareness Exercises Recommended reading Chapter 20 Ubiquitous computing and augmented realities 20.1 Introduction 20.2 Ubiquitous computing applications research Design Focus: Ambient Wood - augmenting the physical Design Focus: Classroom 2000/eClass - deploying and evaluating ubicomp 20.3 Virtual and augmented reality Design Focus: Shared experience Design Focus: Applications of augmented reality 20.4 Information and data visualization Design Focus: Getting the size right 20.5 Summary Exercises Recommended reading Chapter 21 Hypertext, multimedia and the world wide web 21.1 Introduction 21.2 Understanding hypertext 21.3 Finding things 21.4 Web technology and issues 21.5 Static web content 21.6 Dynamic web content 21.7 Summary Exercises Recommended reading References Index
---
paper_title: Instant Messaging and Interruption: Influence of Task Type on Performance
paper_content:
We describe research on the effects of instant messaging (IM) on ongoing computing tasks. We present a study that builds on earlier work exploring the influence of sending notifications at different times and the kinds of tasks that are particularly susceptible to interruption. This work investigates alternative hypotheses about the nature of disruption for a list evaluation task, an activity we had identified as being particularly costly to interrupt. Our findings replicate earlier work, showing the generally harmful effects of IM, and further show that notifications are more disruptive for fast, stimulus-driven search tasks than for slower, more effortful semantic-based search tasks.
---
paper_title: Interruption as a test of the user-computer interface
paper_content:
In order to study the effects different logic systems might have on interrupted operation, an algebraic calculator and a reverse Polish notation calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when the users were not interrupted. Causes of and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic and control/display systems, and that interruption resistance be adopted as a specific design criterion for such systems.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Measuring the effects of interruptions on task performance in the user interface
paper_content:
As users continue offloading more control and responsibility to the computer, coordinating the asynchronous interactions between the user and computer is becoming increasingly important. Without proper coordination, an application attempting to gain the user's attention risks interrupting the user in the midst of performing another task. To justify why an application should avoid interrupting the user whenever possible, we designed an experiment measuring the disruptive effect of an interruption on a user's task performance. The experiment utilized six Web-based task categories and two categories of interruption tasks. The results of the experiment demonstrate that: (i) a user performs slower on an interrupted task than a non-interrupted task, (ii) the disruptive effect of an interruption differs as a function of the task category, and (iii) different interruption tasks cause similar disruptive effects on task performance. These results empirically validate the need to better coordinate user interactions among applications that are competing for the user's attention.
---
paper_title: If not now, when?: the effects of interruption at different moments within task execution
paper_content:
User attention is a scarce resource, and users are susceptible to interruption overload. Systems do not reason about the effects of interrupting a user during a task sequence. In this study, we measure effects of interrupting a user at different moments within task execution in terms of task performance, emotional state, and social attribution. Task models were developed using event perception techniques, and the resulting models were used to identify interruption timings based on a user's predicted cognitive load. Our results show that different interruption moments have different impacts on user emotional state and positive social attribution, and suggest that a system could enable a user to maintain a high level of awareness while mitigating the disruptive effects of interruption. We discuss implications of these results for the design of an attention manager.
---
paper_title: "Silence Your Phones": Smartphone Notifications Increase Inattention and Hyperactivity Symptoms
paper_content:
As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.
---
paper_title: Time-sharing revisited: test of a componential model for the assessment of individual differences
paper_content:
Time-sharing ability as an individual differences variable in dual task performance was examined using a componential model. Five proposed components were assessed: (1) serial processing ability; (2) an internal model of the system dynamics; (3) performing heterogeneous operations; (4) adaptation to rapidly changing dynamic conditions; (5) parallel processing ability. The approach combined methodologies from experimental psychology and from individual differences research. Forty subjects were given four single task pretests, and then performed a compensatory tracking task in various dual task combinations administered during six sessions over a period of three days. At the conclusion of the experiment the subjects had to perform three different dual task transfer tasks. The results of a factor analysis and a series of stepwise multiple-regression analyses revealed two important dimensions of individual differences in dual task performance: (1) individual differences in cognitive style linked to t...
---
paper_title: Interruption of a Monotonous Activity with Complex Tasks: Effects of Individual Differences
paper_content:
The fluctuations of vigilance and performance for operators working in monotonous conditions were studied in the laboratory. Three experimental designs were carried out with 20 subjects: one reference condition with a vigilance task of 3 hours 30 minutes during the daytime; one experimental condition with interruptions of monotony during the vigilance task by a sustained task, during the same daytime period; and the same experimental condition during the night. The purpose was to analyse the effect of breakdowns in monotony on arousal and human performance and to look for individual differences in human performance. Physiological data were collected in order to study the variation of arousal. Response times and omissions were used as performance indices. Individual differences can be observed for performance and vigilance. Two kinds of behavior are defined: stable subjects and subjects characterized by fluctuations in both arousal and performance during the task. During the daytime period, breakdown of monotony has a positive effect...
---
paper_title: Towards an index of opportunity: understanding changes in mental workload during task execution
paper_content:
To contribute to systems that reason about human attention, our work empirically demonstrates how a user's mental workload changes during task execution. We conducted a study where users performed interactive, hierarchical tasks while mental workload was measured through the use of pupil size. Results show that (i) different types of subtasks impose different mental workload, (ii) workload decreases at subtask boundaries, (iii) workload decreases more at boundaries higher in a task model and less at boundaries lower in the model, (iv) workload changes among subtask boundaries within the same level of a task model, and (v) effective understanding of why changes in workload occur requires that the measure be tightly coupled to a validated task model. From the results, we show how to map mental workload onto a computational Index of Opportunity that systems can use to better reason about human attention.
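As an editorial illustration of the mapping described above, the short Python sketch below turns a pupil-derived workload trace and a set of task-boundary annotations into opportunity scores. The abstract does not publish a formula for the Index of Opportunity, so the function name, the weighting, and the normalization of workload into [0, 1] are all illustrative assumptions: opportunity is simply taken to rise as momentary workload falls and as the boundary sits higher in the task model.

    def index_of_opportunity(workload, boundaries, max_level):
        """workload: pupil-based workload samples normalized to [0, 1];
        boundaries: (sample_index, level) pairs, level 0 being the top of
        the task model and larger values deeper subtask boundaries."""
        scores = []
        for sample_index, level in boundaries:
            height = 1.0 - level / max_level      # higher boundary -> larger height
            load = workload[sample_index]         # momentary mental workload
            scores.append((sample_index, (1.0 - load) * (0.5 + 0.5 * height)))
        return scores

    # Hypothetical trace and two boundaries (one deep, one top-level).
    trace = [0.8, 0.7, 0.4, 0.6, 0.9, 0.3, 0.5]
    marks = [(2, 1), (5, 0)]
    print(index_of_opportunity(trace, marks, max_level=2))

A system reasoning about attention could then prefer to deliver notifications at the boundary with the highest score, in the spirit of findings (ii)-(iv) above.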
---
paper_title: Effects of Content and Time of Delivery on Receptivity to Mobile Interruptions
paper_content:
In this paper we investigate effects of the content of interruptions and of the time of interruption delivery on mobile phones. We review related work and report on a naturalistic quasi-experiment using experience-sampling that showed that the receptivity to an interruption is influenced by its content rather than by its time of delivery in the employed modality of delivery - SMS. We also examined the underlying variables that increase the perceived quality of content and found that the factors interest, entertainment, relevance and actionability influence people's receptivity significantly. Our findings inform system design that seeks to provide context-sensitive information or to predict interruptibility and suggest the consideration of receptivity as an extension to the way we think and reason about interruptibility.
---
paper_title: Didn't you see my message?: predicting attentiveness to mobile instant messages
paper_content:
Mobile instant messaging (e.g., via SMS or WhatsApp) often goes along with an expectation of high attentiveness, i.e., that the receiver will notice and read the message within a few minutes. Hence, existing instant messaging services for mobile phones share indicators of availability, such as the last time the user has been online. However, in this paper we not only provide evidence that these cues create social pressure, but that they are also weak predictors of attentiveness. As a remedy, we propose to share a machine-computed prediction of whether the user will view a message within the next few minutes or not. For two weeks, we collected behavioral data from 24 users of mobile instant messaging services. By means of machine-learning techniques, we identified that simple features extracted from the phone, such as the user's interaction with the notification center, the screen activity, the proximity sensor, and the ringer mode, are strong predictors of how quickly the user will attend to the messages. With seven automatically selected features, our model predicts whether a phone user will view a message within a few minutes with 70.6% accuracy and a precision for fast attendance of 81.2%.
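As an editorial sketch of this kind of predictor, the Python fragment below trains a generic classifier on a few phone-derived features and outputs a probability that a message will be viewed quickly. The feature names, the toy labels, and the choice of a random forest are assumptions; the paper's actual feature selection and model are not reproduced here.

    from sklearn.ensemble import RandomForestClassifier

    FEATURES = ["screen_on", "ringer_silent", "proximity_covered",
                "recent_notification_drawer_use"]        # hypothetical subset

    def make_vector(sample):
        return [float(sample[name]) for name in FEATURES]

    # Toy training data: context snapshots labeled with whether the message
    # was viewed within a few minutes (1) or not (0).
    snapshots = [
        {"screen_on": 1, "ringer_silent": 0, "proximity_covered": 0,
         "recent_notification_drawer_use": 1, "viewed_quickly": 1},
        {"screen_on": 0, "ringer_silent": 1, "proximity_covered": 1,
         "recent_notification_drawer_use": 0, "viewed_quickly": 0},
        {"screen_on": 1, "ringer_silent": 0, "proximity_covered": 0,
         "recent_notification_drawer_use": 0, "viewed_quickly": 1},
        {"screen_on": 0, "ringer_silent": 0, "proximity_covered": 1,
         "recent_notification_drawer_use": 0, "viewed_quickly": 0},
    ]
    X = [make_vector(s) for s in snapshots]
    y = [s["viewed_quickly"] for s in snapshots]

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    now = {"screen_on": 1, "ringer_silent": 0, "proximity_covered": 0,
           "recent_notification_drawer_use": 1}
    # Probability that the message would be attended to within a few minutes.
    print(model.predict_proba([make_vector(now)])[0][1])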
---
paper_title: Predicting human interruptibility with sensors: a Wizard of Oz feasibility study
paper_content:
A person seeking someone else's attention is normally able to quickly assess how interruptible they are. This assessment allows for behavior we perceive as natural, socially appropriate, or simply polite. On the other hand, today's computer systems are almost entirely oblivious to the human world they operate in, and typically have no way to take into account the interruptibility of the user. This paper presents a Wizard of Oz study exploring whether, and how, robust sensor-based predictions of interruptibility might be constructed, which sensors might be most useful to such predictions, and how simple such sensors might be.The study simulates a range of possible sensors through human coding of audio and video recordings. Experience sampling is used to simultaneously collect randomly distributed self-reports of interruptibility. Based on these simulated sensors, we construct statistical models predicting human interruptibility and compare their predictions with the collected self-report data. The results of these models, although covering a demographically limited sample, are very promising, with the overall accuracy of several models reaching about 78%. Additionally, a model tuned to avoiding unwanted interruptions does so for 90% of its predictions, while retaining 75% overall accuracy.
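The abstract's idea of tuning a model toward avoiding unwanted interruptions can be illustrated with a simple threshold sweep. The selection rule below (minimize the share of "busy" self-reports that the model would still allow to be interrupted, subject to a floor on overall accuracy) and the toy scores are editorial assumptions, not the authors' procedure.

    def pick_threshold(scores, busy_labels, min_accuracy=0.75):
        """scores: model scores where higher means 'more likely busy';
        busy_labels: 1 if the self-report said non-interruptible, else 0."""
        best = None
        for cut in sorted(set(scores)):
            predicted_busy = [s >= cut for s in scores]
            correct = sum(int(p == bool(b)) for p, b in zip(predicted_busy, busy_labels))
            accuracy = correct / len(scores)
            # Of the moments the model would allow an interruption, how many
            # were actually busy (i.e., unwanted interruptions)?
            allowed = [(p, b) for p, b in zip(predicted_busy, busy_labels) if not p]
            unwanted = sum(b for _, b in allowed) / len(allowed) if allowed else 0.0
            if accuracy >= min_accuracy and (best is None or unwanted < best[1]):
                best = (cut, unwanted, accuracy)
        return best

    scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3]   # toy model outputs
    labels = [1,   0,   1,   0,   1,   0,   0,   0]     # experience-sampled reports
    print(pick_threshold(scores, labels))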
---
paper_title: Models of attention in computing and communication: from principles to applications
paper_content:
Creating computing and communication systems that sense and reason about human attention by fusing together information from multiple streams.
---
paper_title: Examining the robustness of sensor-based statistical models of human interruptibility
paper_content:
Current systems often create socially awkward interruptions or unduly demand attention because they have no way of knowing if a person is busy and should not be interrupted. Previous work has examined the feasibility of using sensors and statistical models to estimate human interruptibility in an office environment, but left open some questions about the robustness of such an approach. This paper examines several dimensions of robustness in sensor-based statistical models of human interruptibility. We show that real sensors can be constructed with sufficient accuracy to drive the predictive models. We also create statistical models for a much broader group of people than was studied in prior work. Finally, we examine the effects of training data quantity on the accuracy of these models and consider tradeoffs associated with different combinations of sensors. As a whole, our analyses demonstrate that sensor-based statistical models of human interruptibility can provide robust estimates for a variety of office workers in a range of circumstances, and can do so with accuracy as good as or better than people. Integrating these models into systems could support a variety of advances in human computer interaction and computer-mediated communication.
---
paper_title: Attention-Sensitive Alerting
paper_content:
We introduce utility-directed procedures for mediating the flow of potentially distracting alerts and communications to computer users. We present models and inference procedures that balance the context-sensitive costs of deferring alerts with the cost of interruption. We describe the challenge of reasoning about such costs under uncertainty via an analysis of user activity and the content of notifications. After introducing principles of attention-sensitive alerting, we focus on the problem of guiding alerts about email messages. We dwell on the problem of inferring the expected criticality of email and discuss work on the PRIORITIES system, centering on prioritizing email by criticality and modulating the communication of notifications to users about the presence and nature of incoming email.
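The core decision described above can be summarized as an expected-cost comparison: deliver an alert now only if the expected cost of deferring it exceeds the cost of interrupting the user. The sketch below is a minimal reading of that idea, assuming a single inferred probability that the message is critical and hand-picked costs; the PRIORITIES system's actual inference models are richer than this.

    def should_alert_now(p_critical, cost_deferral_if_critical,
                         cost_deferral_if_routine, cost_of_interruption):
        # Expected cost of holding the alert, given uncertainty about criticality.
        expected_deferral_cost = (p_critical * cost_deferral_if_critical +
                                  (1.0 - p_critical) * cost_deferral_if_routine)
        return expected_deferral_cost > cost_of_interruption

    # A message inferred to be 30% likely critical, while the user is in a
    # state with a relatively high inferred cost of interruption:
    print(should_alert_now(p_critical=0.3, cost_deferral_if_critical=10.0,
                           cost_deferral_if_routine=0.5, cost_of_interruption=4.0))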
---
paper_title: If not now, when?: the effects of interruption at different moments within task execution
paper_content:
User attention is a scarce resource, and users are susceptible to interruption overload. Systems do not reason about the effects of interrupting a user during a task sequence. In this study, we measure effects of interrupting a user at different moments within task execution in terms of task performance, emotional state, and social attribution. Task models were developed using event perception techniques, and the resulting models were used to identify interruption timings based on a user's predicted cognitive load. Our results show that different interruption moments have different impacts on user emotional state and positive social attribution, and suggest that a system could enable a user to maintain a high level of awareness while mitigating the disruptive effects of interruption. We discuss implications of these results for the design of an attention manager.
---
paper_title: Predicting human interruptibility with sensors
paper_content:
A person seeking another person's attention is normally able to quickly assess how interruptible the other person currently is. Such assessments allow behavior that we consider natural, socially appropriate, or simply polite. This is in sharp contrast to current computer and communication systems, which are largely unaware of the social situations surrounding their usage and the impact that their actions have on these situations. If systems could model human interruptibility, they could use this information to negotiate interruptions at appropriate times, thus improving human computer interaction.This article presents a series of studies that quantitatively demonstrate that simple sensors can support the construction of models that estimate human interruptibility as well as people do. These models can be constructed without using complex sensors, such as vision-based techniques, and therefore their use in everyday office environments is both practical and affordable. Although currently based on a demographically limited sample, our results indicate a substantial opportunity for future research to validate these results over larger groups of office workers. Our results also motivate the development of systems that use these models to negotiate interruptions at socially appropriate times.
---
paper_title: Examining task engagement in sensor-based statistical models of human interruptibility
paper_content:
The computer and communication systems that office workers currently use tend to interrupt at inappropriate times or unduly demand attention because they have no way to determine when an interruption is appropriate. Sensor-based statistical models of human interruptibility offer a potential solution to this problem. Prior work to examine such models has primarily reported results related to social engagement, but it seems that task engagement is also important. Using an approach developed in our prior work on sensor-based statistical models of human interruptibility, we examine task engagement by studying programmers working on a realistic programming task. After examining many potential sensors, we implement a system to log low-level input events in a development environment. We then automatically extract features from these low-level event logs and build a statistical model of interruptibility. By correctly identifying situations in which programmers are non-interruptible and minimizing cases where the model incorrectly estimates that a programmer is non-interruptible, we can support a reduction in costly interruptions while still allowing systems to convey notifications in a timely manner.
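To illustrate the kind of pipeline described (logging low-level input events and turning them into features for a statistical model), here is a small sketch that aggregates a toy event log into fixed windows. The event kinds, the window size, and the feature names are assumptions rather than the study's feature set.

    from collections import Counter

    def windowed_features(events, window_seconds=60):
        """events: list of (timestamp_seconds, kind) with kinds such as
        'keystroke', 'mouse', 'edit', 'build'."""
        windows = {}
        for t, kind in events:
            windows.setdefault(int(t // window_seconds), Counter())[kind] += 1
        return [
            {"window": w,
             "keystrokes": c["keystroke"],
             "mouse_events": c["mouse"],
             "edits": c["edit"],
             "any_build": int(c["build"] > 0)}
            for w, c in sorted(windows.items())
        ]

    log = [(3, "keystroke"), (10, "keystroke"), (15, "edit"),
           (70, "mouse"), (95, "build"), (130, "keystroke")]
    print(windowed_features(log))

Feature rows of this form could then be paired with interruptibility labels to train the statistical model mentioned above.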
---
paper_title: Predicting human interruptibility with sensors: a Wizard of Oz feasibility study
paper_content:
A person seeking someone else's attention is normally able to quickly assess how interruptible they are. This assessment allows for behavior we perceive as natural, socially appropriate, or simply polite. On the other hand, today's computer systems are almost entirely oblivious to the human world they operate in, and typically have no way to take into account the interruptibility of the user. This paper presents a Wizard of Oz study exploring whether, and how, robust sensor-based predictions of interruptibility might be constructed, which sensors might be most useful to such predictions, and how simple such sensors might be.The study simulates a range of possible sensors through human coding of audio and video recordings. Experience sampling is used to simultaneously collect randomly distributed self-reports of interruptibility. Based on these simulated sensors, we construct statistical models predicting human interruptibility and compare their predictions with the collected self-report data. The results of these models, although covering a demographically limited sample, are very promising, with the overall accuracy of several models reaching about 78%. Additionally, a model tuned to avoiding unwanted interruptions does so for 90% of its predictions, while retaining 75% overall accuracy.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Examining the robustness of sensor-based statistical models of human interruptibility
paper_content:
Current systems often create socially awkward interruptions or unduly demand attention because they have no way of knowing if a person is busy and should not be interrupted. Previous work has examined the feasibility of using sensors and statistical models to estimate human interruptibility in an office environment, but left open some questions about the robustness of such an approach. This paper examines several dimensions of robustness in sensor-based statistical models of human interruptibility. We show that real sensors can be constructed with sufficient accuracy to drive the predictive models. We also create statistical models for a much broader group of people than was studied in prior work. Finally, we examine the effects of training data quantity on the accuracy of these models and consider tradeoffs associated with different combinations of sensors. As a whole, our analyses demonstrate that sensor-based statistical models of human interruptibility can provide robust estimates for a variety of office workers in a range of circumstances, and can do so with accuracy as good as or better than people. Integrating these models into systems could support a variety of advances in human computer interaction and computer-mediated communication.
---
paper_title: Attention-Sensitive Alerting
paper_content:
We introduce utility-directed procedures for mediating the flow of potentially distracting alerts and communications to computer users. We present models and inference procedures that balance the context-sensitive costs of deferring alerts with the cost of interruption. We describe the challenge of reasoning about such costs under uncertainty via an analysis of user activity and the content of notifications. After introducing principles of attention-sensitive alerting, we focus on the problem of guiding alerts about email messages. We dwell on the problem of inferring the expected criticality of email and discuss work on the PRIORITIES system, centering on prioritizing email by criticality and modulating the communication of notifications to users about the presence and nature of incoming email.
---
paper_title: Predicting human interruptibility with sensors
paper_content:
A person seeking another person's attention is normally able to quickly assess how interruptible the other person currently is. Such assessments allow behavior that we consider natural, socially appropriate, or simply polite. This is in sharp contrast to current computer and communication systems, which are largely unaware of the social situations surrounding their usage and the impact that their actions have on these situations. If systems could model human interruptibility, they could use this information to negotiate interruptions at appropriate times, thus improving human computer interaction.This article presents a series of studies that quantitatively demonstrate that simple sensors can support the construction of models that estimate human interruptibility as well as people do. These models can be constructed without using complex sensors, such as vision-based techniques, and therefore their use in everyday office environments is both practical and affordable. Although currently based on a demographically limited sample, our results indicate a substantial opportunity for future research to validate these results over larger groups of office workers. Our results also motivate the development of systems that use these models to negotiate interruptions at socially appropriate times.
---
paper_title: Examining task engagement in sensor-based statistical models of human interruptibility
paper_content:
The computer and communication systems that office workers currently use tend to interrupt at inappropriate times or unduly demand attention because they have no way to determine when an interruption is appropriate. Sensor-based statistical models of human interruptibility offer a potential solution to this problem. Prior work to examine such models has primarily reported results related to social engagement, but it seems that task engagement is also important. Using an approach developed in our prior work on sensor-based statistical models of human interruptibility, we examine task engagement by studying programmers working on a realistic programming task. After examining many potential sensors, we implement a system to log low-level input events in a development environment. We then automatically extract features from these low-level event logs and build a statistical model of interruptibility. By correctly identifying situations in which programmers are non-interruptible and minimizing cases where the model incorrectly estimates that a programmer is non-interruptible, we can support a reduction in costly interruptions while still allowing systems to convey notifications in a timely manner.
---
paper_title: The act of task difficulty and eye-movement frequency for the 'Oculo-motor indices'
paper_content:
The oculo-motor system reflects the viewer's ability to process visual information. This paper examines whether the oculo-motor system was affected by two factors: firstly, task difficulty and, secondly, eye-movement frequency. In this paper, oculo-motor indices were defined as measurements of pupil size, blink, and eye movement. For the purposes of this study, two experiments were designed based on previous subsequential ocular tasks where subjects were required to solve a series of mathematical problems and to orally report their calculations. The results of this experiment found that pupil size and blink rate increased in response to task difficulty in the oral calculation group. In contrast, however, both the saccade occurrence rate and saccade length were found to decrease with the increased difficulty of the task. The results suggest that oculo-motor indices respond to task difficulty. Secondly, eye-movement frequencies were elicited by the switching frequency of a visual target. Pupil size and saccade time were found to increase with the frequency; however, blink and gazing time were found to decrease in response to the frequency. There was a negative correlation between blinking and gazing time. Additionally, a correlation between blinking and saccade time appeared at the higher frequencies. These results indicate that the oculo-motor indices are affected by both task difficulty and eye-movement frequency. Furthermore, eye-movement frequency appears to play a different role than that of task difficulty.
---
paper_title: Leveraging characteristics of task structure to predict the cost of interruption
paper_content:
A challenge in building interruption reasoning systems is to compute an accurate cost of interruption (COI). Prior work has used interface events and other cues to predict COI, but ignores characteristics related to the structure of a task. This work investigates how well characteristics of task structure can predict COI, as objectively measured by resumption lag. In an experiment, users were interrupted during task execution at various boundaries to collect a large sample of resumption lag values. Statistical methods were employed to create a parsimonious model that uses characteristics of task structure to predict COI. A subsequent experiment with different tasks showed that the model can predict COI with reasonably high accuracy. Our model can be expediently applied to many goal-directed tasks, allowing systems to make more effective decisions about when to interrupt.
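A minimal editorial sketch of the modeling step, under the assumption that two task-structure characteristics (depth of the interrupted boundary and number of pending subtasks) are regressed onto observed resumption lag; the paper's actual parsimonious model and its predictors are not reproduced here.

    import numpy as np

    # Each row: [boundary_depth, pending_subtasks]; target: resumption lag (s).
    X = np.array([[1, 2], [3, 5], [2, 1], [4, 6], [1, 1], [3, 3]], dtype=float)
    lag = np.array([2.1, 6.8, 3.0, 8.9, 1.7, 5.4])

    # Add an intercept column and solve the least-squares problem.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, lag, rcond=None)

    def predicted_cost_of_interruption(boundary_depth, pending_subtasks):
        return float(coef[0] * boundary_depth + coef[1] * pending_subtasks + coef[2])

    print(predicted_cost_of_interruption(2, 4))

A system could compare such predicted costs across candidate moments and defer a notification to the moment with the lowest predicted resumption lag.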
---
paper_title: Pupillary dilation as a measure of attention: a quantitative system analysis
paper_content:
It has long been known that the pupil dilates as a consequence of attentional effort. But the function that relates attentional input to pupillary output has never been the subject of quantitative analysis. We present a system analysis of the pupillary response to attentional input. Attentional input is modeled as a string of attentional pulses. We show that the system is linear; the effects of input pulses on the pupillary response are additive. The impulse response has essentially a gamma distribution with two free parameters. These parameters are estimated; they are fairly constant over tasks and subjects. The paper presents a method of estimating the string of attentional input pulses, given some average pupillary output. The method involves the technique of deconvolution; it can be implemented with a public-domain software package, Pupil.
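The linear-system view summarized above lends itself to a short numerical illustration: a pulse train of attentional inputs convolved with a gamma-shaped impulse response yields a synthetic pupil trace. The parameter values and sampling rate below are arbitrary editorial choices, and the Pupil package itself is not used.

    import numpy as np

    def gamma_impulse_response(t, shape=2.0, scale=0.5):
        # Unnormalized gamma-shaped kernel; the paper estimates two
        # parameters of this form, roughly constant over tasks and subjects.
        h = t ** (shape - 1.0) * np.exp(-t / scale)
        return h / h.sum()

    dt = 0.05                                  # 50 ms sampling, assumed
    t = np.arange(0, 4, dt)
    kernel = gamma_impulse_response(t)

    pulses = np.zeros_like(t)                  # attentional input as a pulse train
    pulses[[10, 30, 45]] = 1.0

    pupil = np.convolve(pulses, kernel)[:len(t)]   # additive, linear response
    print(pupil.max(), pupil.argmax() * dt)

Recovering the pulse train from a measured pupil trace is then the inverse (deconvolution) problem mentioned in the abstract.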
---
paper_title: Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources
paper_content:
A physiological measure of processing load or "mental effort" required to perform a cognitive task should accurately reflect within-task, between-task, and between-individual variations in processing demands. This article reviews all available experimental data and concludes that the task-evoked pupillary response fulfills these criteria. Alternative explanations are considered and rejected. Some implications for neurophysiological and cognitive theories of processing resources are discussed.
---
paper_title: Task-evoked pupillary response to mental workload in human-computer interaction
paper_content:
Accurate assessment of a user's mental workload will be critical for developing systems that manage user attention (interruptions) in the user interface. Empirical evidence suggests that an interruption is much less disruptive when it occurs during a period of lower mental workload. To provide a measure of mental workload for interactive tasks, we investigated the use of task-evoked pupillary response. Results show that a more difficult task demands longer processing time, induces higher subjective ratings of mental workload, and reliably evokes greater pupillary response at salient subtasks. We discuss the findings and their implications for the design of an attention manager.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Towards an index of opportunity: understanding changes in mental workload during task execution
paper_content:
To contribute to systems that reason about human attention, our work empirically demonstrates how a user's mental workload changes during task execution. We conducted a study where users performed interactive, hierarchical tasks while mental workload was measured through the use of pupil size. Results show that (i) different types of subtasks impose different mental workload, (ii) workload decreases at subtask boundaries, (iii) workload decreases more at boundaries higher in a task model and less at boundaries lower in the model, (iv) workload changes among subtask boundaries within the same level of a task model, and (v) effective understanding of why changes in workload occur requires that the measure be tightly coupled to a validated task model. From the results, we show how to map mental workload onto a computational Index of Opportunity that systems can use to better reason about human attention.
---
paper_title: Balancing Awareness and Interruption: Investigation of Notification Deferral Policies
paper_content:
We review experiments with bounded deferral, a method aimed at reducing the disruptiveness of incoming messages and alerts in return for bounded delays in receiving information. Bounded deferral provides users with a means for balancing awareness about potentially urgent information with the cost of interruption.
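Bounded deferral can be sketched as a tiny scheduling rule: hold an alert while the user appears busy, but never beyond a fixed bound. The busy-interval representation and the timings below are made-up inputs for illustration only.

    def deliver_time(arrival, bound, busy_intervals):
        """busy_intervals: list of (start, end) times during which the user is busy."""
        t = arrival
        deadline = arrival + bound
        while t < deadline:
            interval = next(((s, e) for s, e in busy_intervals if s <= t < e), None)
            if interval is None:
                return t              # free moment found before the bound
            t = interval[1]           # wait until the busy interval ends
        return deadline               # bound reached: deliver anyway

    busy = [(0, 40), (55, 300)]
    print(deliver_time(arrival=10, bound=120, busy_intervals=busy))   # -> 40
    print(deliver_time(arrival=60, bound=120, busy_intervals=busy))   # -> 180

The bound is what keeps awareness of potentially urgent information within an acceptable delay while still avoiding most mid-task interruptions.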
---
paper_title: Examining the robustness of sensor-based statistical models of human interruptibility
paper_content:
Current systems often create socially awkward interruptions or unduly demand attention because they have no way of knowing if a person is busy and should not be interrupted. Previous work has examined the feasibility of using sensors and statistical models to estimate human interruptibility in an office environment, but left open some questions about the robustness of such an approach. This paper examines several dimensions of robustness in sensor-based statistical models of human interruptibility. We show that real sensors can be constructed with sufficient accuracy to drive the predictive models. We also create statistical models for a much broader group of people than was studied in prior work. Finally, we examine the effects of training data quantity on the accuracy of these models and consider tradeoffs associated with different combinations of sensors. As a whole, our analyses demonstrate that sensor-based statistical models of human interruptibility can provide robust estimates for a variety of office workers in a range of circumstances, and can do so with accuracy as good as or better than people. Integrating these models into systems could support a variety of advances in human computer interaction and computer-mediated communication.
---
paper_title: Examining task engagement in sensor-based statistical models of human interruptibility
paper_content:
The computer and communication systems that office workers currently use tend to interrupt at inappropriate times or unduly demand attention because they have no way to determine when an interruption is appropriate. Sensor-based statistical models of human interruptibility offer a potential solution to this problem. Prior work to examine such models has primarily reported results related to social engagement, but it seems that task engagement is also important. Using an approach developed in our prior work on sensor-based statistical models of human interruptibility, we examine task engagement by studying programmers working on a realistic programming task. After examining many potential sensors, we implement a system to log low-level input events in a development environment. We then automatically extract features from these low-level event logs and build a statistical model of interruptibility. By correctly identifying situations in which programmers are non-interruptible and minimizing cases where the model incorrectly estimates that a programmer is non-interruptible, we can support a reduction in costly interruptions while still allowing systems to convey notifications in a timely manner.
---
paper_title: The act of task difficulty and eye-movement frequency for the 'Oculo-motor indices'
paper_content:
The oculo-motor system reflects the viewer's ability to process visual information. This paper examines whether the oculo-motor system was affected by two factors: firstly, task difficulty and, secondly, eye-movement frequency. In this paper, oculo-motor indices were defined as measurements of pupil size, blink, and eye movement. For the purposes of this study, two experiments were designed based on previous subsequential ocular tasks where subjects were required to solve a series of mathematical problems and to orally report their calculations. The results of this experiment found that pupil size and blink rate increased in response to task difficulty in the oral calculation group. In contrast, however, both the saccade occurrence rate and saccade length were found to decrease with the increased difficulty of the task. The results suggest that oculo-motor indices respond to task difficulty. Secondly, eye-movement frequencies were elicited by the switching frequency of a visual target. Pupil size and saccade time were found to increase with the frequency; however, blink and gazing time were found to decrease in response to the frequency. There was a negative correlation between blinking and gazing time. Additionally, a correlation between blinking and saccade time appeared at the higher frequencies. These results indicate that the oculo-motor indices are affected by both task difficulty and eye-movement frequency. Furthermore, eye-movement frequency appears to play a different role than that of task difficulty.
---
paper_title: Lilsys: Sensing Unavailability
paper_content:
As communications systems increasingly gather and propagate information about people's reachability or "presence", users need better tools to minimize undesired interruptions while allowing desired ones. We review the salient elements of presence and availability that people use when initiating face-to-face communication. We discuss problems with current strategies for managing one's availability in telecommunication media. We describe a prototype system called Lilsys which passively collects availability cues gathered from users' actions and environment using ambient sensors and provides machine inferencing of unavailability. We discuss observations and design implications from deploying Lilsys.
---
paper_title: Leveraging characteristics of task structure to predict the cost of interruption
paper_content:
A challenge in building interruption reasoning systems is to compute an accurate cost of interruption (COI). Prior work has used interface events and other cues to predict COI, but ignores characteristics related to the structure of a task. This work investigates how well characteristics of task structure can predict COI, as objectively measured by resumption lag. In an experiment, users were interrupted during task execution at various boundaries to collect a large sample of resumption lag values. Statistical methods were employed to create a parsimonious model that uses characteristics of task structure to predict COI. A subsequent experiment with different tasks showed that the model can predict COI with reasonably high accuracy. Our model can be expediently applied to many goal-directed tasks, allowing systems to make more effective decisions about when to interrupt.
---
paper_title: Pupillary dilation as a measure of attention: a quantitative system analysis
paper_content:
It has long been known that the pupil dilates as a consequence of attentional effort. But the function that relates attentional input to pupillary output has never been the subject of quantitative analysis. We present a system analysis of the pupillary response to attentional input. Attentional input is modeled as a string of attentional pulses. We show that the system is linear; the effects of input pulses on the pupillary response are additive. The impulse response has essentially a gamma distribution with two free parameters. These parameters are estimated; they are fairly constant over tasks and subjects. The paper presents a method of estimating the string of attentional input pulses, given some average pupillary output. The method involves the technique of deconvolution; it can be implemented with a public-domain software package, Pupil.
---
paper_title: Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources
paper_content:
A physiological measure of processing load or "mental effort" required to perform a cognitive task should accurately reflect within-task, between-task, and between-individual variations in processing demands. This article reviews all available experimental data and concludes that the task-evoked pupillary response fulfills these criteria. Alternative explanations are considered and rejected. Some implications for neurophysiological and cognitive theories of processing resources are discussed.
---
paper_title: Task-evoked pupillary response to mental workload in human-computer interaction
paper_content:
Accurate assessment of a user's mental workload will be critical for developing systems that manage user attention (interruptions) in the user interface. Empirical evidence suggests that an interruption is much less disruptive when it occurs during a period of lower mental workload. To provide a measure of mental workload for interactive tasks, we investigated the use of task-evoked pupillary response. Results show that a more difficult task demands longer processing time, induces higher subjective ratings of mental workload, and reliably evokes greater pupillary response at salient subtasks. We discuss the findings and their implications for the design of an attention manager.
---
paper_title: BusyBody: creating and fielding personalized models of the cost of interruption
paper_content:
Interest has been growing in opportunities to build and deploy statistical models that can infer a computer user's current interruptability from computer activity and relevant contextual information. We describe a system that intermittently asks users to assess their perceived interruptability during a training phase and that builds decision-theoretic models with the ability to predict the cost of interrupting the user. The models are used at run-time to compute the expected cost of interruptions, providing a mediator for incoming notifications, based on a consideration of a user's current and recent history of computer activity, meeting status, location, time of day, and whether a conversation is detected.
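A compressed editorial sketch of the loop described above: intermittent probes pair context snapshots with reported interruptibility during a training phase, and a mediator later compares a predicted cost of interruption against the value of an incoming notification. The context features, cost scale, and nearest-context estimator are stand-ins for BusyBody's decision-theoretic models, not a reimplementation of them.

    import random

    training = []   # (context_tuple, reported_cost) pairs gathered during probes

    def probe_user(context):
        # Stand-in for the intermittent dialog asking "how interruptible are you?"
        reported_cost = random.choice([1.0, 3.0, 9.0])
        training.append((context, reported_cost))

    def expected_cost(context):
        # Tiny nearest-context estimator standing in for the learned model.
        matches = [c for ctx, c in training if ctx == context]
        if matches:
            return sum(matches) / len(matches)
        return sum(c for _, c in training) / max(len(training), 1)

    def mediate(context, notification_value):
        return "deliver" if notification_value >= expected_cost(context) else "defer"

    random.seed(0)
    for ctx in [("typing", "no_meeting"), ("idle", "no_meeting"), ("typing", "meeting")]:
        probe_user(ctx)
    print(mediate(("idle", "no_meeting"), notification_value=2.0))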
---
paper_title: Towards an index of opportunity: understanding changes in mental workload during task execution
paper_content:
To contribute to systems that reason about human attention, our work empirically demonstrates how a user's mental workload changes during task execution. We conducted a study where users performed interactive, hierarchical tasks while mental workload was measured through the use of pupil size. Results show that (i) different types of subtasks impose different mental workload, (ii) workload decreases at subtask boundaries, (iii) workload decreases more at boundaries higher in a task model and less at boundaries lower in the model, (iv) workload changes among subtask boundaries within the same level of a task model, and (v) effective understanding of why changes in workload occur requires that the measure be tightly coupled to a validated task model. From the results, we show how to map mental workload onto a computational Index of Opportunity that systems can use to better reason about human attention.
---
paper_title: Balancing Awareness and Interruption: Investigation of Notification Deferral Policies
paper_content:
We review experiments with bounded deferral, a method aimed at reducing the disruptiveness of incoming messages and alerts in return for bounded delays in receiving information. Bounded deferral provides users with a means for balancing awareness about potentially urgent information with the cost of interruption.
---
paper_title: Examining the robustness of sensor-based statistical models of human interruptibility
paper_content:
Current systems often create socially awkward interruptions or unduly demand attention because they have no way of knowing if a person is busy and should not be interrupted. Previous work has examined the feasibility of using sensors and statistical models to estimate human interruptibility in an office environment, but left open some questions about the robustness of such an approach. This paper examines several dimensions of robustness in sensor-based statistical models of human interruptibility. We show that real sensors can be constructed with sufficient accuracy to drive the predictive models. We also create statistical models for a much broader group of people than was studied in prior work. Finally, we examine the effects of training data quantity on the accuracy of these models and consider tradeoffs associated with different combinations of sensors. As a whole, our analyses demonstrate that sensor-based statistical models of human interruptibility can provide robust estimates for a variety of office workers in a range of circumstances, and can do so with accuracy as good as or better than people. Integrating these models into systems could support a variety of advances in human computer interaction and computer-mediated communication.
---
paper_title: If not now, when?: the effects of interruption at different moments within task execution
paper_content:
User attention is a scarce resource, and users are susceptible to interruption overload. Systems do not reason about the effects of interrupting a user during a task sequence. In this study, we measure effects of interrupting a user at different moments within task execution in terms of task performance, emotional state, and social attribution. Task models were developed using event perception techniques, and the resulting models were used to identify interruption timings based on a user's predicted cognitive load. Our results show that different interruption moments have different impacts on user emotional state and positive social attribution, and suggest that a system could enable a user to maintain a high level of awareness while mitigating the disruptive effects of interruption. We discuss implications of these results for the design of an attention manager.
---
paper_title: Understanding and developing models for detecting and differentiating breakpoints during interactive tasks
paper_content:
The ability to detect and differentiate breakpoints during task execution is critical for enabling defer-to-breakpoint policies within interruption management. In this work, we examine the feasibility of building statistical models that can detect and differentiate three granularities (types) of perceptually meaningful breakpoints during task execution, without having to recognize the underlying tasks. We collected ecological samples of task execution data, and asked observers to review the interaction in the collected videos and identify any perceived breakpoints and their type. Statistical methods were applied to learn models that map features of the interaction to each type of breakpoint. Results showed that the models were able to detect and differentiate breakpoints with reasonably high accuracy across tasks. Among many uses, our resulting models can enable interruption management systems to better realize defer-to-breakpoint policies for interactive, free-form tasks.
---
paper_title: A method, system, and tools for intelligent interruption management
paper_content:
Interrupting users engaged in tasks typically has negative effects on their task completion time, error rate, and affective state. Empirical research has shown that these negative effects can be mitigated by deferring interruptions until more opportune moments in a user's task sequence. However, existing systems that reason about when to interrupt do not have access to task models that would allow for such finer-grained temporal reasoning. We outline our method of finding opportune moments that links a physiological measure of workload with task modeling techniques and theories of attention. We describe the design and implementation of our interruption management system, showing how it can be used to specify and monitor practical, representative user tasks. We discuss our ongoing empirical work in this area, and how the use of our framework may enable attention aware systems to consider a user's position in a task when reasoning about when to interrupt.
---
paper_title: Examining task engagement in sensor-based statistical models of human interruptibility
paper_content:
The computer and communication systems that office workers currently use tend to interrupt at inappropriate times or unduly demand attention because they have no way to determine when an interruption is appropriate. Sensor-based statistical models of human interruptibility offer a potential solution to this problem. Prior work to examine such models has primarily reported results related to social engagement, but it seems that task engagement is also important. Using an approach developed in our prior work on sensor-based statistical models of human interruptibility, we examine task engagement by studying programmers working on a realistic programming task. After examining many potential sensors, we implement a system to log low-level input events in a development environment. We then automatically extract features from these low-level event logs and build a statistical model of interruptibility. By correctly identifying situations in which programmers are non-interruptible and minimizing cases where the model incorrectly estimates that a programmer is non-interruptible, we can support a reduction in costly interruptions while still allowing systems to convey notifications in a timely manner.
---
paper_title: I've got 99 problems, but vibration ain't one: a survey of smartphone users' concerns
paper_content:
Smartphone operating systems warn users when third-party applications try to access sensitive functions or data. However, all of the major smartphone platforms warn users about different application actions. To our knowledge, their selection of warnings was not grounded in user research; past research on mobile privacy has focused exclusively on the risks pertaining to sharing location. To expand the scope of smartphone security and privacy research, we surveyed 3,115 smartphone users about 99 risks associated with 54 smartphone privileges. We asked participants to rate how upset they would be if the given risks occurred and used this data to rank risks by levels of user concern. We then asked 41 smartphone users to discuss the risks in their own words; their responses confirmed that people find the lowest-ranked risks merely annoying but might seek legal or financial retribution for the highest-ranked risks. In order to determine the relative frequency of risks, we also surveyed the 3,115 users about experiences with "misbehaving" applications. Our ranking and frequency data can be used to guide the selection of warnings on smartphone platforms.
---
paper_title: Understanding the Role of Places and Activities on Mobile Phone Interaction and Usage Patterns
paper_content:
User interaction patterns with mobile apps and notifications are generally complex due to the many factors involved. However, a deep understanding of what influences them can lead to more acceptable applications that are able to deliver information at the right time. In this paper, we present for the first time an in-depth analysis of interaction behavior with notifications in relation to the location and activity of users. We conducted an in-situ study for a period of two weeks to collect more than 36,000 notifications, 17,000 instances of application usage, 77,000 location samples, and 487 days of daily activity entries from 26 students at a UK university. Our results show that users' attention towards new notifications and willingness to accept them are strongly linked to the location they are in and, to a lesser extent, to their current activity. We consider both users' receptivity and attentiveness, and we show that different response behaviors are associated with different locations. These findings are fundamental from a design perspective since they allow us to understand how certain types of places are linked to specific types of interaction behavior. This information can be used as a basis for the development of novel intelligent mobile applications and services.
---
paper_title: An in-situ study of mobile phone notifications
paper_content:
Notifications on mobile phones alert users about new messages, emails, social network updates, and other events. However, little is understood about the nature and effect of such notifications on the daily lives of mobile users. We report from a one-week, in-situ study involving 15 mobile phone users, where we collected real-world notifications through a smartphone logging application alongside subjective perceptions of those notifications through an online diary. We found that our participants had to deal with 63.5 notifications on average per day, mostly from messengers and email. Whether the phone was in silent mode or not, notifications were typically viewed within minutes. Social pressure in personal communication was amongst the main reasons given. While an increasing number of notifications was associated with an increase in negative emotions, receiving more messages and social network updates also made our participants feel more connected with others. Our findings imply that avoiding interruptions from notifications may be viable for professional communication, while in personal communication, approaches should focus on managing expectations.
---
paper_title: Designing content-driven intelligent notification mechanisms for mobile applications
paper_content:
An increasing number of notifications demanding the smartphone user's attention often arrive at an inappropriate moment or carry irrelevant content. In this paper we present a study of mobile user interruptibility with respect to notification content, its sender, and the context in which a notification is received. In a real-world study we collect around 70,000 instances of notifications from 35 users. We group notifications according to the applications that initiated them, and the social relationship between the sender and the receiver. Then, by considering both content and context information, such as the current activity of a user, we discuss the design of classifiers for learning the most opportune moment for the delivery of a notification carrying a specific type of information. Our results show that such classifiers lead to a more accurate prediction of users' interruptibility than an alternative approach based on user-defined rules of their own interruptibility.
---
paper_title: Notifications and awareness: a field study of alert usage and preferences
paper_content:
Desktop notifications are designed to provide awareness of information while a user is attending to a primary task. Unfortunately the awareness can come with the price of disruption to the focal task. We review results of a field study on the use and perceived value of email notifications in the workplace. We recorded users' interactions with software applications for two weeks and studied how notifications or their forced absence influenced users' quest for awareness of new email arrival, as well as the impact of notifications on their overall task focus. Results showed that users view notifications as a mechanism to provide passive awareness rather than a trigger to switch tasks. Turning off notifications causes some users to self-interrupt more to explicitly monitor email arrival, while others appear to be able to better focus on their tasks. Users acknowledge notifications as disruptive, yet opt for them because of their perceived value in providing awareness.
---
paper_title: My Phone and Me: Understanding People's Receptivity to Mobile Notifications
paper_content:
Notifications are extremely beneficial to users, but they often demand their attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
---
paper_title: Large-scale assessment of mobile notifications
paper_content:
Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.
---
paper_title: Managing Smartphone Interruptions through Adaptive Modes and Modulation of Notifications
paper_content:
Smartphones are capable of alerting their users to different kinds of digital interruption using different modalities and with varying modulation. Smart notification is the capability of a smartphone for selecting the user's preferred kind of alert in particular situations using the full vocabulary of notification modalities and modulations. It therefore goes well beyond attempts to predict if or when to silence a ringing phone call. We demonstrate smart notification for messages received from a document retrieval system while the user is attending a meeting. The notification manager learns about users' notification preferences from their judgements about videos of meetings. It takes account of the relevance of the interruption to the meeting, whether the user is busy, and the sensed location of the smartphone. Through repeated training, the notification manager learns to reliably predict the preferred notification modes for users, and this learning continues to improve with use.
---
paper_title: Investigating Mobile Users' Ringer Mode Usage and Attentiveness and Responsiveness to Communication
paper_content:
Smartphones are considered to be "always on, always connected" but mobile users are not always attentive and responsive to incoming communication. We present a mixed methods study investigating how mobile users use ringer modes for managing interruption by and awareness of incoming communication, and how these practices and locales affect their attentiveness and responsiveness. We show that mobile users have diverse ringer mode usage, but they switch ringer modes mainly for three purposes: avoiding interruption, preventing the phone from disrupting the environment, and noticing important notifications. In addition, without signals of notifications, users are less likely to immediately attend to notifications, but they are not less responsive to those they have attended. Finally, ringer mode switches, attentiveness, and responsiveness are all correlated with certain locales. We discuss implications from these findings, and suggest how future CMC tools and notification services take different purposes for using ringer modes and locales into consideration.
---
paper_title: MyTraces: Investigating Correlation and Causation between Users’ Emotional States and Mobile Phone Interaction
paper_content:
Most of the existing work concerning the analysis of emotional states and mobile phone interaction has been based on correlation analysis. In this paper, for the first time, we carry out a causality study to investigate the causal links between users' emotional states and their interaction with mobile phones, which could provide valuable information to practitioners and researchers. The analysis is based on a dataset collected in-the-wild. We recorded 5,118 mood reports from 28 users over a period of 20 days. Our results show that users' emotions have a causal impact on different aspects of mobile phone interaction. On the other hand, we can observe a causal impact of the use of specific applications, reflecting the users' external context, such as socializing and traveling, on happiness and stress level. This study has profound implications for the design of interactive mobile systems since it identifies the dimensions that have causal effects on users' interaction with mobile phones and vice versa. These findings might lead to the design of more effective computing systems and services that rely on the analysis of the emotional state of users, for example for marketing and digital health applications.
---
paper_title: Assessing the Relationship between Technical Affinity, Stress and Notifications on Smartphones
paper_content:
Smartphones have become an indispensable part of everyday life. By this time, push notifications are at the core of many apps, proactively pushing new content to users. These notifications may raise awareness, but also have the downside of being disruptive. In this paper we present a laboratory study investigating users' attitudes towards notifications and how they deal with notification settings on their smartphones. Permission requests for sending push notifications on iOS do not inform the user about the nature of the app's notifications, leaving the user to make a rather uninformed choice on whether to accept or deny. We show that requests including explanations are significantly more likely to be accepted. Our results further indicate that apart from being disruptive, notifications may create stress due to information overload. Notification settings, once assigned a preset, are rarely changed, although they do not necessarily match the favored one.
---
paper_title: Bayesphone: Precomputation of context-sensitive policies for inquiry and action in mobile devices
paper_content:
Inference and decision making with probabilistic user models may be infeasible on portable devices such as cell phones. We highlight the opportunity for storing and using precomputed inferences about ideal actions for future situations, based on offline learning and reasoning with the user models. As a motivating example, we focus on the precomputation of call-handling policies for cell phones. The methods hinge on the learning of Bayesian user models for predicting whether users will attend meetings on their calendar and the cost of being interrupted by incoming calls should a meeting be attended.
---
paper_title: Using Decision-Theoretic Experience Sampling to Build Personalized Mobile Phone Interruption Models
paper_content:
We contribute a method for approximating users' interruptibility costs to use for experience sampling and validate the method in an application that learns when to automatically turn off and on the phone volume to avoid embarrassing phone interruptions. We demonstrate that users have varying costs associated with interruptions which indicates the need for personalized cost approximations. We compare different experience sampling techniques to learn users' volume preferences and show those that ask when our cost approximation is low reduce the number of embarrassing interruptions and result in more accurate volume classifiers when deployed for long-term use.
---
paper_title: Using context-aware computing to reduce the perceived burden of interruptions from mobile devices
paper_content:
The potential for sensor-enabled mobile devices to proactively present information when and where users need it ranks among the greatest promises of ubiquitous computing. Unfortunately, mobile phones, PDAs, and other computing devices that compete for the user's attention can contribute to interruption irritability and feelings of information overload. Designers of mobile computing interfaces, therefore, require strategies for minimizing the perceived interruption burden of proactively delivered messages. In this work, a context-aware mobile computing device was developed that automatically detects postural and ambulatory activity transitions in real time using wireless accelerometers. This device was used to experimentally measure the receptivity to interruptions delivered at activity transitions relative to those delivered at random times. Messages delivered at activity transitions were found to be better received, thereby suggesting a viable strategy for context-aware message delivery in sensor-enabled mobile computing devices.
---
paper_title: Towards attention-aware adaptive notification on smart phones
paper_content:
As the amount of information delivered to users increases with the growing number of devices, applications, and web services, the new bottleneck in computing is human attention. To minimize users' attentional overload, we propose a novel middleware, Attelia, that detects breakpoints in users' mobile interactions to deliver notifications adaptively. Attelia detects such timings in real-time, using only users' phones, without any external sensors, and without any modifications to applications. Our extensive evaluation proved Attelia's effectiveness. An in-the-wild user study with 30 participants for 16 days showed that, specifically for users with greater sensitivity to interruptive notification timings, notification scheduling at Attelia's breakpoint timings reduced users' frustration by 28% in their real smartphone environments.
---
paper_title: Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications
paper_content:
We investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction (making voice calls or receiving SMS), based on the assumption that these endings collocate with naturally occurring breakpoints in the user's primary task. Testing this with a naturalistic experiment, we find that interruptions (notifications) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition, supporting the potential utility of this notification strategy. We also find that the workload and situational appropriateness of the secondary interruption task significantly affect the subsequent delay and completion rate of the tasks. In situ self-reports and interviews reveal complexities in the subjective experience of the interruption, which suggest that a more nuanced classification of the particular call or SMS and its relationship to the primary task(s) would be desirable.
---
paper_title: Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research
paper_content:
The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors is combined to derive a sensitive and reliable estimate of workload.
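For reference, the weighted NASA-TLX score combines the six subscale ratings (each on a 0-100 scale) with weights obtained from 15 pairwise comparisons between the factors, so the weights sum to 15 and the overall score is a weighted average. A small sketch of that computation with purely illustrative ratings:

```python
# Weighted NASA-TLX: six subscale ratings (0-100) combined with weights from
# 15 pairwise comparisons (weights sum to 15). The numbers are illustrative.

def nasa_tlx(ratings, weights):
    assert set(ratings) == set(weights) and sum(weights.values()) == 15
    return sum(ratings[f] * weights[f] for f in ratings) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(nasa_tlx(ratings, weights))  # overall workload on a 0-100 scale
```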
---
paper_title: Towards unobtrusive emotion recognition for affective social communication
paper_content:
Awareness of the emotion of those who communicate with others is a fundamental challenge in building affective intelligent systems. Emotion is a complex state of the mind influenced by external events, physiological changes, or relationships with others. Because emotions can represent a user's internal context or intention, researchers have suggested various methods to measure the user's emotions from analysis of physiological signals, facial expressions, or voice. However, existing methods have practical limitations when used with consumer devices, such as smartphones; they may cause inconvenience to users and require special equipment such as a skin conductance sensor. Our approach is to recognize emotions of the user by inconspicuously collecting and analyzing user-generated data from different types of sensors on the smartphone. To achieve this, we adopted a machine learning approach to gather, analyze and classify device usage patterns, and developed a social network service client for Android smartphones which unobtrusively finds various behavioral patterns and the current context of users. Also, we conducted a pilot study to gather real-world data reflecting various behaviors and situations of a participant in her/his everyday life. From these data, we extracted 10 features and applied them to build a Bayesian Network classifier for emotion recognition. Experimental results show that our system can classify user emotions into 7 classes such as happiness, surprise, anger, disgust, sadness, fear, and neutral with a surprisingly high accuracy. The proposed system applied to a smartphone demonstrated the feasibility of an unobtrusive emotion recognition approach and a user scenario for emotion-oriented social communication between users.
---
paper_title: InterruptMe: designing intelligent prompting mechanisms for pervasive applications
paper_content:
The mobile phone represents a unique platform for interactive applications that can harness the opportunity of an immediate contact with a user in order to increase the impact of the delivered information. However, this accessibility does not necessarily translate to reachability, as recipients might refuse an initiated contact or disfavor a message that comes at an inappropriate moment. In this paper we seek to answer whether, and how, suitable moments for interruption can be identified and utilized in a mobile system. We gather and analyze a real-world smartphone data trace and show that users' broader context, including their activity, location, time of day, emotions and engagement, determines different aspects of interruptibility. We then design and implement InterruptMe, an interruption management library for Android smartphones. An extensive experiment shows that, compared to a context-unaware approach, interruptions elicited through our library result in increased user satisfaction and shorter response times.
---
paper_title: Didn't you see my message?: predicting attentiveness to mobile instant messages
paper_content:
Mobile instant messaging (e.g., via SMS or WhatsApp) often goes along with an expectation of high attentiveness, i.e., that the receiver will notice and read the message within a few minutes. Hence, existing instant messaging services for mobile phones share indicators of availability, such as the last time the user has been online. However, in this paper we not only provide evidence that these cues create social pressure, but also that they are weak predictors of attentiveness. As a remedy, we propose to share a machine-computed prediction of whether the user will view a message within the next few minutes or not. For two weeks, we collected behavioral data from 24 users of mobile instant messaging services. By means of machine-learning techniques, we identified that simple features extracted from the phone, such as the user's interaction with the notification center, the screen activity, the proximity sensor, and the ringer mode, are strong predictors of how quickly the user will attend to the messages. With seven automatically selected features our model predicts whether a phone user will view a message within a few minutes with 70.6% accuracy and a precision for fast attendance of 81.2%.
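A model of this kind can be prototyped with any off-the-shelf classifier over simple phone features. The sketch below uses scikit-learn on synthetic data; the feature set only loosely mirrors the one reported in the paper, and the numbers carry no empirical meaning.

```python
# Illustrative attentiveness predictor: will a message be viewed within a few
# minutes? Features and labels below are synthetic toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Per notification: [screen_on, ringer_mode (0=silent,1=vibrate,2=normal),
#                    proximity_sensor_covered, minutes_since_last_unlock]
X = np.array([[1, 2, 0, 1], [0, 0, 1, 45], [1, 1, 0, 3],
              [0, 0, 1, 120], [1, 2, 0, 0], [0, 1, 1, 60]] * 10)
y = np.array([1, 0, 1, 0, 1, 0] * 10)   # 1 = attended within a few minutes

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # rough accuracy on toy data
```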
---
paper_title: A survey of mobile phone sensing
paper_content:
Mobile phones or smartphones are rapidly becoming the central computer and communication device in people's lives. Application delivery channels such as the Apple AppStore are transforming mobile phones into App Phones, capable of downloading a myriad of applications in an instant. Importantly, today's smartphones are programmable and come with a growing set of cheap powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera, which are enabling the emergence of personal, group, and community-scale sensing applications. We believe that sensor-equipped mobile phones will revolutionize many sectors of our economy, including business, healthcare, social networks, environmental monitoring, and transportation. In this article we survey existing mobile phone sensing algorithms, applications, and systems. We discuss the emerging sensing paradigms, and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.
---
paper_title: Towards multi-modal anticipatory monitoring of depressive states through the analysis of human-smartphone interaction
paper_content:
Remarkable advances in smartphone technology, especially in terms of passive sensing, have enabled researchers to passively monitor user behavior in real-time and at a granularity that was not possible just a few years ago. Recently, different approaches have been proposed to investigate the use of different sensing and phone interaction features, including location, call, SMS and overall application usage logs, to infer the depressive state of users. In this paper, we propose an approach for monitoring of depressive states using multi-modal sensing via smartphones. Through a brief literature review we show the sensing modalities that have been exploited in the past studies for monitoring depression. We then present the initial results of an ongoing study to demonstrate the association of depressive states with the smartphone interaction features. Finally, we discuss the challenges in predicting depression through multimodal mobile sensing.
---
paper_title: EmotionSense: a mobile phones based adaptive platform for experimental social psychology research
paper_content:
Today's mobile phones represent a rich and powerful computing platform, given their sensing, processing and communication capabilities. Phones are also part of the everyday life of billions of people, and therefore represent an exceptionally suitable tool for conducting social and psychological experiments in an unobtrusive way. We present EmotionSense, a mobile phone based adaptive platform for experimental social psychology research, which provides the ability to sense individual emotions as well as activities, verbal and proximity interactions among members of social groups. Moreover, the system is programmable by means of a declarative language that can be used to express adaptive rules to improve power saving. We evaluate a system prototype on Nokia Symbian phones by means of several small-scale experiments aimed at testing performance in terms of accuracy and power consumption. Finally, we present the results of a real deployment where we study participants' emotions and interactions. We cross-validate our measurements with the results obtained through questionnaires filled in by the users, and the results presented in social psychological studies using traditional methods. In particular, we show how speakers and participants' emotions can be automatically detected by means of classifiers running locally on off-the-shelf mobile phones, and how speaking and interactions can be correlated with activity and location measures.
---
paper_title: Enabling large-scale human activity inference on smartphones using community similarity networks (csn)
paper_content:
Sensor-enabled smartphones are opening a new frontier in the development of mobile sensing applications. The recognition of human activities and context from sensor-data using classification models underpins these emerging applications. However, conventional approaches to training classifiers struggle to cope with the diverse user populations routinely found in large-scale popular mobile applications. Differences between users (e.g., age, sex, behavioral patterns, lifestyle) confuse classifiers, which assume everyone is the same. To address this, we propose Community Similarity Networks (CSN), which incorporates inter-person similarity measurements into the classifier training process. Under CSN every user has a unique classifier that is tuned to their own characteristics. CSN exploits crowd-sourced sensor-data to personalize classifiers with data contributed from other similar users. This process is guided by similarity networks that measure different dimensions of inter-person similarity. Our experiments show CSN outperforms existing approaches to classifier training under the presence of population diversity.
---
paper_title: SenSocial: a middleware for integrating online social networks and mobile sensing data streams
paper_content:
Smartphone sensing enables inference of physical context, while online social networks (OSNs) allow mobile applications to harness users' interpersonal relationships. However, OSNs and smartphone sensing remain disconnected, since obstacles, including the synchronization of mobile sensing and OSN monitoring, inefficiency of smartphone sensors, and privacy concerns, stand in the way of merging the information from these two sources. In this paper we present the design, implementation and evaluation of SenSocial, a middleware that automates the process of obtaining and joining OSN and physical context data streams for the development of ubiquitous computing applications. SenSocial enables instantiation, management and aggregation of context streams from multiple remote devices. Through micro-benchmarks we show that SenSocial successfully and efficiently captures OSN and mobile sensed data streams. We developed two prototype applications in order to evaluate our middleware and we demonstrate that SenSocial significantly reduces the amount of programming effort needed for building social sensing applications.
---
paper_title: I'll be there for you: Quantifying Attentiveness towards Mobile Messaging
paper_content:
Social norm has it that people are expected to respond to mobile phone messages quickly. We investigate how attentive people really are and how timely they actually check and triage new messages throughout the day. By collecting more than 55,000 messages from 42 mobile phone users over the course of two weeks, we were able to predict people's attentiveness through their mobile phone usage with close to 80% accuracy. We found that people were attentive to messages 12.1 hours a day, i.e. 84.8 hours per week, and provide statistical evidence of how short-lived people's inattentiveness is: in 75% of the cases mobile phone users return to their attentive state within 5 minutes. In this paper, we present a comprehensive analysis of attentiveness throughout each hour of the day and show that intelligent notification delivery services, such as bounded deferral, can assume that inattentiveness will be rare and subside quickly.
---
paper_title: Large-scale evaluation of call-availability prediction
paper_content:
We contribute evidence on the extent to which sensor and contextual information available on mobile phones allows predicting whether a user will pick up a call or not. Using an app publicly available for Android phones, we logged anonymous data from 31,311 calls of 418 different users. The data shows that information easily available in mobile phones, such as the time since the last call, the time since the last ringer mode change, or the device posture, can predict call availability with an accuracy of 83.2% (Kappa = .646). Personalized models can increase the accuracy to 87% on average. Features related to when the user was last active turned out to be strong predictors. This shows that simple contextual cues approximating user activity are worthwhile investigating when designing context-aware ubiquitous communication systems.
---
paper_title: Trajectories of depression: unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis
paper_content:
One of the most interesting applications of mobile sensing is monitoring of individual behavior, especially in the area of mental health care. Most existing systems require an interaction with the device, for example they may require the user to input his/her mood state at regular intervals. In this paper we seek to answer whether mobile phones can be used to unobtrusively monitor individuals affected by depressive mood disorders by analyzing only their mobility patterns from GPS traces. In order to get ground-truth measurements, we have developed a smartphone application that periodically collects the locations of the users and the answers to daily questionnaires that quantify their depressive mood. We demonstrate that there exists a significant correlation between mobility trace characteristics and the depressive moods. Finally, we present the design of models that are able to successfully predict changes in the depressive mood of individuals by analyzing their movements.
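Analyses of this kind usually start from simple summary features of a day's GPS trace; two commonly used ones, location variance and total distance covered, are sketched below. The feature definitions are generic illustrations and are not taken from the cited study.

```python
# Illustrative mobility features from one day's GPS samples (lat, lon in degrees).
import math
import statistics

def location_variance(points):
    """Log of the summed variance of the latitude and longitude samples."""
    lats, lons = zip(*points)
    return math.log(statistics.pvariance(lats) + statistics.pvariance(lons) + 1e-12)

def total_distance_km(points):
    """Rough distance covered, using an equirectangular approximation."""
    dist = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(points, points[1:]):
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        dist += 6371.0 * math.hypot(x, y)   # Earth radius in km
    return dist

day = [(51.5007, -0.1246), (51.5014, -0.1419), (51.5033, -0.1195)]
print(location_variance(day), total_distance_km(day))
```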
---
paper_title: StressSense: detecting stress in unconstrained acoustic environments using smartphones
paper_content:
Stress can have long-term adverse effects on individuals' physical and mental well-being. Changes in the speech production process are among the many physiological changes that happen during stress. Microphones, embedded in mobile phones and carried ubiquitously by people, provide the opportunity to continuously and non-invasively monitor stress in real-life situations. We propose StressSense for unobtrusively recognizing stress from human voice using smartphones. We investigate methods for adapting a one-size-fits-all stress model to individual speakers and scenarios. We demonstrate that the StressSense classifier can robustly identify stress across multiple individuals in diverse acoustic environments: using model adaptation StressSense achieves 81% and 76% accuracy for indoor and outdoor environments, respectively. We show that StressSense can be implemented on commodity Android phones and run in real-time. To the best of our knowledge, StressSense represents the first system to consider voice based stress detection and model adaptation in diverse real-life conversational situations using smartphones.
---
paper_title: Bewell: A smartphone application to monitor, model and promote wellbeing
paper_content:
A key challenge for mobile health is to develop new technology that can assist individuals in maintaining a healthy lifestyle by keeping track of their everyday behaviors. Smartphones embedded with a wide variety of sensors are enabling a new generation of personal health applications that can actively monitor, model and promote wellbeing. Automated wellbeing tracking systems available so far have focused on physical fitness and sleep and often require external, non-phone-based sensors. In this work, we take a step towards a more comprehensive smartphone-based system that can track activities that impact physical, social, and mental wellbeing, namely sleep, physical activity, and social interactions, and that provides intelligent feedback to promote better health. We present the design, implementation and evaluation of BeWell, an automated wellbeing app for Android smartphones, and demonstrate its feasibility in monitoring multi-dimensional wellbeing. By providing a more complete picture of health, BeWell has the potential to empower individuals to improve their overall wellbeing and identify any early signs of decline.
---
paper_title: Conceptualizing Interpersonal Interruption Management: A Theoretical Framework and Research Program
paper_content:
Previous research exploring interpersonal-technology-mediated interruptions has focused on understanding how the knowledge of an individual's local context can be utilized to reduce unwanted intrusions by employing sensor and agent technology to detect and manage their interruptions. However, this approach has produced limited benefit for users because it fails to take into account who the interruption is from or what it is about. To address this deficiency a theoretical framework and associated research program is presented to provide a fresh perspective on design of interruption management tools.
---
paper_title: Effects of Content and Time of Delivery on Receptivity to Mobile Interruptions
paper_content:
In this paper we investigate effects of the content of interruptions and of the time of interruption delivery on mobile phones. We review related work and report on a naturalistic quasi-experiment using experience-sampling that showed that the receptivity to an interruption is influenced by its content rather than by its time of delivery in the employed modality of delivery - SMS. We also examined the underlying variables that increase the perceived quality of content and found that the factors interest, entertainment, relevance and actionability influence people's receptivity significantly. Our findings inform system design that seeks to provide context-sensitive information or to predict interruptibility and suggest the consideration of receptivity as an extension to the way we think and reason about interruptibility.
---
paper_title: Oasis: A framework for linking notification delivery to the perceptual structure of goal-directed tasks
paper_content:
A notification represents the proactive delivery of information to a user and reduces the need to visually scan or repeatedly check an external information source. At the same time, notifications often interrupt user tasks at inopportune moments, decreasing productivity and increasing frustration. Controlled studies have shown that linking notification delivery to the perceptual structure of a user's tasks can reduce these interruption costs. However, in these studies, the scheduling was always performed manually, and it was not clear whether it would be possible for a system to mimic similar techniques. This article contributes the design and implementation of a novel system called Oasis that aligns notification scheduling with the perceptual structure of user tasks. We describe the architecture of the system, how it detects task structure on the fly without explicit knowledge of the task itself, and how it layers flexible notification scheduling policies on top of this detection mechanism. The system also includes an offline tool for creating customized statistical models for detecting task structure. The value of our system is that it intelligently schedules notifications, enabling the reductions in interruption costs shown within prior controlled studies to now be realized by users in everyday desktop computing tasks. It also provides a test bed for experimenting with how notification management policies and other system functionalities can be linked to task structure.
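A defer-to-breakpoint policy of the kind layered on top of breakpoint detection can be sketched as a small queue that is flushed whenever the detector fires; the detector itself is assumed to exist elsewhere, and the class below is only an illustrative outline, not the Oasis implementation.

```python
from collections import deque

class DeferToBreakpointScheduler:
    """Illustrative sketch: hold notifications and release them at detected breakpoints."""

    def __init__(self, deliver):
        self.pending = deque()
        self.deliver = deliver             # callback that actually shows the notification

    def on_notification(self, notification):
        self.pending.append(notification)  # defer instead of delivering immediately

    def on_breakpoint(self, granularity):
        # Called by an (assumed) breakpoint detector; a finer-grained policy could
        # release only some notification types depending on the granularity.
        while self.pending:
            self.deliver(self.pending.popleft())

scheduler = DeferToBreakpointScheduler(deliver=print)
scheduler.on_notification("2 new emails")
scheduler.on_breakpoint("coarse")          # -> prints "2 new emails"
```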
---
paper_title: PrefMiner: mining user's preferences for intelligent mobile notification management
paper_content:
Mobile notifications are increasingly used by a variety of applications to inform users about events, news or just to send alerts and reminders to them. However, many notifications are neither useful nor relevant to users' interests and, also for this reason, they are considered disruptive and potentially annoying. In this paper we present the design, implementation and evaluation of PrefMiner, a novel interruptibility management solution that learns users' preferences for receiving notifications based on automatic extraction of rules by mining their interaction with mobile phones. The goal is to build a system that is intelligible for users, i.e., not just a "black-box" solution. Rules are shown to users who might decide to accept or discard them at run-time. The design of PrefMiner is based on a large scale mobile notification dataset and its effectiveness is evaluated by means of an in-the-wild deployment.
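The flavor of rule mining described here can be approximated by counting how often notifications with a given attribute combination are dismissed and keeping only high-support, high-confidence patterns, which can then be shown to the user to accept or discard. The sketch below is a simplified illustration and not the PrefMiner algorithm itself; all attribute names and thresholds are assumptions.

```python
from collections import Counter

# Each logged interaction: (app, sender_type, action), action in {"clicked", "dismissed"}.
log = [("news", "service", "dismissed"), ("news", "service", "dismissed"),
       ("chat", "friend", "clicked"), ("news", "service", "dismissed"),
       ("chat", "friend", "clicked"), ("news", "service", "clicked")]

def mine_dismissal_rules(log, min_support=3, min_confidence=0.7):
    """Return rules 'IF (app, sender_type) THEN suggest hiding the notification'."""
    total = Counter((app, sender) for app, sender, _ in log)
    dismissed = Counter((app, sender) for app, sender, act in log if act == "dismissed")
    rules = []
    for key, n in total.items():
        confidence = dismissed[key] / n
        if n >= min_support and confidence >= min_confidence:
            rules.append((key, round(confidence, 2)))
    return rules

print(mine_dismissal_rules(log))   # -> [(('news', 'service'), 0.75)]
```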
---
paper_title: Large-scale assessment of mobile notifications
paper_content:
Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.
---
paper_title: I've got 99 problems, but vibration ain't one: a survey of smartphone users' concerns
paper_content:
Smartphone operating systems warn users when third-party applications try to access sensitive functions or data. However, all of the major smartphone platforms warn users about different application actions. To our knowledge, their selection of warnings was not grounded in user research; past research on mobile privacy has focused exclusively on the risks pertained to sharing location. To expand the scope of smartphone security and privacy research, we surveyed 3,115 smartphone users about 99 risks associated with 54 smartphone privileges. We asked participants to rate how upset they would be if given risks occurred and used this data to rank risks by levels of user concern. We then asked 41 smartphone users to discuss the risks in their own words; their responses confirmed that people find the lowest-ranked risks merely annoying but might seek legal or financial retribution for the highest-ranked risks. In order to determine the relative frequency of risks, we also surveyed the 3,115 users about experiences with "misbehaving" applications. Our ranking and frequency data can be used to guide the selection of warnings on smartphone platforms.
---
paper_title: Effects of Content and Time of Delivery on Receptivity to Mobile Interruptions
paper_content:
In this paper we investigate effects of the content of interruptions and of the time of interruption delivery on mobile phones. We review related work and report on a naturalistic quasi-experiment using experience-sampling that showed that the receptivity to an interruption is influenced by its content rather than by its time of delivery in the employed modality of delivery - SMS. We also examined the underlying variables that increase the perceived quality of content and found that the factors interest, entertainment, relevance and actionability influence people's receptivity significantly. Our findings inform system design that seeks to provide context-sensitive information or to predict interruptibility and suggest the consideration of receptivity as an extension to the way we think and reason about interruptibility.
---
paper_title: An in-situ study of mobile phone notifications
paper_content:
Notifications on mobile phones alert users about new messages, emails, social network updates, and other events. However, little is understood about the nature and effect of such notifications on the daily lives of mobile users. We report from a one-week, in-situ study involving 15 mobile phones users, where we collected real-world notifications through a smartphone logging application alongside subjective perceptions of those notifications through an online diary. We found that our participants had to deal with 63.5 notifications on average per day, mostly from messengers and email. Whether the phone is in silent mode or not, notifications were typically viewed within minutes. Social pressure in personal communication was amongst the main reasons given. While an increasing number of notifications was associated with an increase in negative emotions, receiving more messages and social network updates also made our participants feel more connected with others. Our findings imply that avoiding interruptions from notifications may be viable for professional communication, while in personal communication, approaches should focus on managing expectations.
---
paper_title: Notifications and awareness: a field study of alert usage and preferences
paper_content:
Desktop notifications are designed to provide awareness of information while a user is attending to a primary task. Unfortunately the awareness can come with the price of disruption to the focal task. We review results of a field study on the use and perceived value of email notifications in the workplace. We recorded users' interactions with software applications for two weeks and studied how notifications or their forced absence influenced users' quest for awareness of new email arrival, as well as the impact of notifications on their overall task focus. Results showed that users view notifications as a mechanism to provide passive awareness rather than a trigger to switch tasks. Turing off notifications cause some users to self interrupt more to explicitly monitor email arrival, while others appear to be able to better focus on their tasks. Users acknowledge notifications as disruptive, yet opt for them because of their perceived value in providing awareness.
---
paper_title: My Phone and Me: Understanding People's Receptivity to Mobile Notifications
paper_content:
Notifications are extremely beneficial to users, but they often demand their attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
---
paper_title: Large-scale assessment of mobile notifications
paper_content:
Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.
---
paper_title: Managing Smartphone Interruptions through Adaptive Modes and Modulation of Notifications
paper_content:
Smartphones are capable of alerting their users to different kinds of digital interruption using different modalities and with varying modulation. Smart notification is the capability of a smartphone for selecting the user's preferred kind of alert in particular situations using the full vocabulary of notification modalities and modulations. It therefore goes well beyond attempts to predict if or when to silence a ringing phone call. We demonstrate smart notification for messages received from a document retrieval system while the user is attending a meeting. The notification manager learns about their notification preferences from users' judgements about videos of meetings. It takes account of the relevance of the interruption to the meeting, whether the user is busy and the sensed location of the smartphone. Through repeated training, the notification manager learns to reliably predict the preferred notification modes for users and this learning continues to improve with use.
---
paper_title: Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications
paper_content:
We investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction (making voice calls or receiving SMS) based on the assumption that the endings collocate with naturally occurring breakpoint in the user's primary task. Testing this with a naturalistic experiment we find that interruptions (notifications) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition, supporting the potential utility of this notification strategy. We also find that the workload and situational appropriateness of the secondary interruption task significantly affect subsequent delay and completion rate of the tasks. In situ self-reports and interviews reveal complexities in the subjective experience of the interruption, which suggest that a more nuanced classification of the particular call or SMS and its relationship to the primary task(s) would be desirable.
---
paper_title: Investigating Mobile Users' Ringer Mode Usage and Attentiveness and Responsiveness to Communication
paper_content:
Smartphones are considered to be "always on, always connected" but mobile users are not always attentive and responsive to incoming communication. We present a mixed methods study investigating how mobile users use ringer modes for managing interruption by and awareness of incoming communication, and how these practices and locales affect their attentiveness and responsiveness. We show that mobile users have diverse ringer mode usage, but they switch ringer modes mainly for three purposes: avoiding interruption, preventing the phone from disrupting the environment, and noticing important notifications. In addition, without signals of notifications, users are less likely to immediately attend to notifications, but they are not less responsive to those they have attended. Finally, ringer mode switches, attentiveness, and responsiveness are all correlated with certain locales. We discuss implications from these findings, and suggest how future CMC tools and notification services take different purposes for using ringer modes and locales into consideration.
---
paper_title: MyTraces: Investigating Correlation and Causation between Users’ Emotional States and Mobile Phone Interaction
paper_content:
Most of the existing work concerning the analysis of emotional states and mobile phone interaction has been based on correlation analysis. In this paper, for the first time, we carry out a causality study to investigate the causal links between users’ emotional states and their interaction with mobile phones, which could provide valuable information to practitioners and researchers. The analysis is based on a dataset collected in-the-wild. We recorded 5,118 mood reports from 28 users over a period of 20 days. Our results show that users’ emotions have a causal impact on different aspects of mobile phone interaction. On the other hand, we can observe a causal impact of the use of specific applications, reflecting the external users’ context, such as socializing and traveling, on happiness and stress level. This study has profound implications for the design of interactive mobile systems since it identifies the dimensions that have causal effects on users’ interaction with mobile phones and vice versa. These findings might lead to the design of more effective computing systems and services that rely on the analysis of the emotional state of users, for example for marketing and digital health applications.
---
paper_title: Assessing the Relationship between Technical Affinity, Stress and Notifications on Smartphones
paper_content:
Smartphones have become an indispensable part of everyday life. By this time, push notifications are at the core of many apps, proactively pushing new content to users. These notifications may raise awareness, but also have the downside of being disruptive. In this paper we present a laboratory study investigating users' attitudes towards notifications and how they deal with notification settings on their smartphones. Permission requests for sending push notifications on iOS don't inform the user about the nature of notifications of this app, leaving the user to make a rather uninformed choice on whether to accept or deny. We show that requests including explanations are significantly more likely to be accepted. Our results further indicate that apart from being disruptive, notifications may create stress due to information overload. Notification settings, once assigned a preset, are rarely changed, although not necessarily matching the favored one.
---
paper_title: I've got 99 problems, but vibration ain't one: a survey of smartphone users' concerns
paper_content:
Smartphone operating systems warn users when third-party applications try to access sensitive functions or data. However, all of the major smartphone platforms warn users about different application actions. To our knowledge, their selection of warnings was not grounded in user research; past research on mobile privacy has focused exclusively on the risks pertaining to sharing location. To expand the scope of smartphone security and privacy research, we surveyed 3,115 smartphone users about 99 risks associated with 54 smartphone privileges. We asked participants to rate how upset they would be if given risks occurred and used this data to rank risks by levels of user concern. We then asked 41 smartphone users to discuss the risks in their own words; their responses confirmed that people find the lowest-ranked risks merely annoying but might seek legal or financial retribution for the highest-ranked risks. In order to determine the relative frequency of risks, we also surveyed the 3,115 users about experiences with "misbehaving" applications. Our ranking and frequency data can be used to guide the selection of warnings on smartphone platforms.
---
paper_title: Effects of Content and Time of Delivery on Receptivity to Mobile Interruptions
paper_content:
In this paper we investigate effects of the content of interruptions and of the time of interruption delivery on mobile phones. We review related work and report on a naturalistic quasi-experiment using experience-sampling that showed that the receptivity to an interruption is influenced by its content rather than by its time of delivery in the employed modality of delivery - SMS. We also examined the underlying variables that increase the perceived quality of content and found that the factors interest, entertainment, relevance and actionability influence people's receptivity significantly. Our findings inform system design that seeks to provide context-sensitive information or to predict interruptibility and suggest the consideration of receptivity as an extension to the way we think and reason about interruptibility.
---
paper_title: An in-situ study of mobile phone notifications
paper_content:
Notifications on mobile phones alert users about new messages, emails, social network updates, and other events. However, little is understood about the nature and effect of such notifications on the daily lives of mobile users. We report from a one-week, in-situ study involving 15 mobile phones users, where we collected real-world notifications through a smartphone logging application alongside subjective perceptions of those notifications through an online diary. We found that our participants had to deal with 63.5 notifications on average per day, mostly from messengers and email. Whether the phone is in silent mode or not, notifications were typically viewed within minutes. Social pressure in personal communication was amongst the main reasons given. While an increasing number of notifications was associated with an increase in negative emotions, receiving more messages and social network updates also made our participants feel more connected with others. Our findings imply that avoiding interruptions from notifications may be viable for professional communication, while in personal communication, approaches should focus on managing expectations.
---
paper_title: Designing content-driven intelligent notification mechanisms for mobile applications
paper_content:
An increasing number of notifications demanding the smartphone user's attention often arrive at an inappropriate moment or carry irrelevant content. In this paper we present a study of mobile user interruptibility with respect to notification content, its sender, and the context in which a notification is received. In a real-world study we collect around 70,000 instances of notifications from 35 users. We group notifications according to the applications that initiated them, and the social relationship between the sender and the receiver. Then, by considering both content and context information, such as the current activity of a user, we discuss the design of classifiers for learning the most opportune moment for the delivery of a notification carrying a specific type of information. Our results show that such classifiers lead to a more accurate prediction of users' interruptibility than an alternative approach based on user-defined rules of their own interruptibility.
---
paper_title: Notifications and awareness: a field study of alert usage and preferences
paper_content:
Desktop notifications are designed to provide awareness of information while a user is attending to a primary task. Unfortunately, the awareness can come with the price of disruption to the focal task. We review results of a field study on the use and perceived value of email notifications in the workplace. We recorded users' interactions with software applications for two weeks and studied how notifications or their forced absence influenced users' quest for awareness of new email arrival, as well as the impact of notifications on their overall task focus. Results showed that users view notifications as a mechanism to provide passive awareness rather than a trigger to switch tasks. Turning off notifications causes some users to self-interrupt more to explicitly monitor email arrival, while others appear to be able to better focus on their tasks. Users acknowledge notifications as disruptive, yet opt for them because of their perceived value in providing awareness.
---
paper_title: PrefMiner: mining user's preferences for intelligent mobile notification management
paper_content:
Mobile notifications are increasingly used by a variety of applications to inform users about events, news or just to send alerts and reminders to them. However, many notifications are neither useful nor relevant to users' interests and, also for this reason, they are considered disruptive and potentially annoying. In this paper we present the design, implementation and evaluation of PrefMiner, a novel interruptibility management solution that learns users' preferences for receiving notifications based on automatic extraction of rules by mining their interaction with mobile phones. The goal is to build a system that is intelligible for users, i.e., not just a "black-box" solution. Rules are shown to users who might decide to accept or discard them at run-time. The design of PrefMiner is based on a large scale mobile notification dataset and its effectiveness is evaluated by means of an in-the-wild deployment.
---
paper_title: Interpretable Machine Learning for Mobile Notification Management: An Overview of PrefMiner
paper_content:
Mobile notifications are increasingly used by a variety of applications to inform users about events, news or just to send alerts and reminders to them. However, many notifications are neither useful nor relevant to users' interests and, for this reason, they are considered disruptive and potentially annoying, as well. PrefMiner is a novel interruptibility management solution that learns users' preferences for receiving notifications based on automatic extraction of rules by mining their interaction with mobile phones. PrefMiner aims at being intelligible and interpretable for users, i.e., not just a "black box" solution, by suggesting rules to users who might decide to accept or discard them at run-time. The design of PrefMiner is based on a large scale mobile notification dataset and its effectiveness is evaluated by means of an in-the-wild deployment.
---
paper_title: My Phone and Me: Understanding People's Receptivity to Mobile Notifications
paper_content:
Notifications are extremely beneficial to users, but they often demand their attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.
---
paper_title: Large-scale assessment of mobile notifications
paper_content:
Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.
---
paper_title: InterruptMe: designing intelligent prompting mechanisms for pervasive applications
paper_content:
The mobile phone represents a unique platform for interactive applications that can harness the opportunity of an immediate contact with a user in order to increase the impact of the delivered information. However, this accessibility does not necessarily translate to reachability, as recipients might refuse an initiated contact or disfavor a message that comes in an inappropriate moment. In this paper we seek to answer whether, and how, suitable moments for interruption can be identified and utilized in a mobile system. We gather and analyze a real-world smartphone data trace and show that users' broader context, including their activity, location, time of day, emotions and engagement, determine different aspects of interruptibility. We then design and implement InterruptMe, an interruption management library for Android smartphones. An extensive experiment shows that, compared to a context-unaware approach, interruptions elicited through our library result in increased user satisfaction and shorter response times.
---
paper_title: Didn't you see my message?: predicting attentiveness to mobile instant messages
paper_content:
Mobile instant messaging (e.g., via SMS or WhatsApp) often goes along with an expectation of high attentiveness, i.e., that the receiver will notice and read the message within a few minutes. Hence, existing instant messaging services for mobile phones share indicators of availability, such as the last time the user has been online. However, in this paper we not only provide evidence that these cues create social pressure, but that they are also weak predictors of attentiveness. As a remedy, we propose to share a machine-computed prediction of whether the user will view a message within the next few minutes or not. For two weeks, we collected behavioral data from 24 users of mobile instant messaging services. By means of machine-learning techniques, we identified that simple features extracted from the phone, such as the user's interaction with the notification center, the screen activity, the proximity sensor, and the ringer mode, are strong predictors of how quickly the user will attend to the messages. With seven automatically selected features our model predicts whether a phone user will view a message within a few minutes with 70.6% accuracy and a precision for fast attendance of 81.2%.
---
paper_title: Ask, but don't interrupt: the case for interruptibility-aware mobile experience sampling
paper_content:
The mobile phone-based Experience Sampling Method (ESM) enables in situ recording of human behaviour and experience by querying users, via their smartphones, anywhere and anytime. Sampling can happen on a previously unimaginable scale, and across a diverse pool of participants. Therefore, mobile ESM is not limited to capturing users' manual responses, as the surrounding context can be automatically captured by mobile sensors. However, obtaining high quality data with ESM is challenging, as users may fail to respond honestly, or may even ignore the questionnaire prompts if they perceive the study as too burdensome. In this paper, we discuss the potential of using interruptibility prediction models to deliver mobile ESM questionnaires at opportune moments, and thus improve the effectiveness of a study. We examine context prediction and interruptibility inference, which are fundamental challenges that we need to overcome in order to make mobile ESMs better aligned with a user's lifestyle, and consequently paint a truthful picture of a user's behaviour.
---
paper_title: Understanding the Role of Places and Activities on Mobile Phone Interaction and Usage Patterns
paper_content:
User interaction patterns with mobile apps and notifications are generally complex due to the many factors involved. However a deep understanding of what influences them can lead to more acceptable applications that are able to deliver information at the right time. In this paper, we present for the first time an in-depth analysis of interaction behavior with notifications in relation to the location and activity of users. We conducted an in-situ study for a period of two weeks to collect more than 36,000 notifications, 17,000 instances of application usage, 77,000 location samples, and 487 days of daily activity entries from 26 students at a UK university. Our results show that users’ attention towards new notifications and willingness to accept them are strongly linked to the location they are in and in minor part to their current activity. We consider both users’ receptivity and attentiveness, and we show that different response behaviors are associated to different locations. These findings are fundamental from a design perspective since they allow us to understand how certain types of places are linked to specific types of interaction behavior. This information can be used as a basis for the development of novel intelligent mobile applications and services.
---
paper_title: Instant Messaging and Interruption: Influence of Task Type on Performance
paper_content:
We describe research on the effects of instant messaging (IM) on ongoing computing tasks. We present a study that builds on earlier work exploring the influence of sending notifications at different times and the kinds of tasks that are particularly susceptible to interruption. This work investigates alternative hypotheses about the nature of disruption for a list evaluation task, an activity we had identified as being particularly costly to interrupt. Our findings replicate earlier work, showing the generally harmful effects of IM, and further show that notifications are more disruptive for fast, stimulus-driven search tasks than for slower, more effortful semantic-based search tasks.
---
paper_title: Leveraging characteristics of task structure to predict the cost of interruption
paper_content:
A challenge in building interruption reasoning systems is to compute an accurate cost of interruption (COI). Prior work has used interface events and other cues to predict COI, but ignores characteristics related to the structure of a task. This work investigates how well characteristics of task structure can predict COI, as objectively measured by resumption lag. In an experiment, users were interrupted during task execution at various boundaries to collect a large sample of resumption lag values. Statistical methods were employed to create a parsimonious model that uses characteristics of task structure to predict COI. A subsequent experiment with different tasks showed that the model can predict COI with reasonably high accuracy. Our model can be expediently applied to many goal-directed tasks, allowing systems to make more effective decisions about when to interrupt.
---
paper_title: Designing content-driven intelligent notification mechanisms for mobile applications
paper_content:
An increasing number of notifications demanding the smartphone user's attention often arrive at an inappropriate moment or carry irrelevant content. In this paper we present a study of mobile user interruptibility with respect to notification content, its sender, and the context in which a notification is received. In a real-world study we collect around 70,000 instances of notifications from 35 users. We group notifications according to the applications that initiated them, and the social relationship between the sender and the receiver. Then, by considering both content and context information, such as the current activity of a user, we discuss the design of classifiers for learning the most opportune moment for the delivery of a notification carrying a specific type of information. Our results show that such classifiers lead to a more accurate prediction of users' interruptibility than an alternative approach based on user-defined rules of their own interruptibility.
---
paper_title: Notifications and awareness: a field study of alert usage and preferences
paper_content:
Desktop notifications are designed to provide awareness of information while a user is attending to a primary task. Unfortunately, the awareness can come with the price of disruption to the focal task. We review results of a field study on the use and perceived value of email notifications in the workplace. We recorded users' interactions with software applications for two weeks and studied how notifications or their forced absence influenced users' quest for awareness of new email arrival, as well as the impact of notifications on their overall task focus. Results showed that users view notifications as a mechanism to provide passive awareness rather than a trigger to switch tasks. Turning off notifications causes some users to self-interrupt more to explicitly monitor email arrival, while others appear to be able to better focus on their tasks. Users acknowledge notifications as disruptive, yet opt for them because of their perceived value in providing awareness.
---
paper_title: NextPlace: A Spatio-Temporal Prediction Framework for Pervasive Systems
paper_content:
Accurate and fine-grained prediction of future user location and geographical profile has interesting and promising applications including targeted content service, advertisement dissemination for mobile users, and recreational social networking tools for smart-phones. Existing techniques based on linear and probabilistic models are not able to provide accurate prediction of the location patterns from a spatio-temporal perspective, especially for long-term estimation. More specifically, they are able to only forecast the next location of a user, but not his/her arrival time and residence time, i.e., the interval of time spent in that location. Moreover, these techniques are often based on prediction models that are not able to extend predictions further in the future. In this paper we present NextPlace, a novel approach to location prediction based on nonlinear time series analysis of the arrival and residence times of users in relevant places. NextPlace focuses on the predictability of single users when they visit their most important places, rather than on the transitions between different locations. We report about our evaluation using four different datasets and we compare our forecasting results to those obtained by means of the prediction techniques proposed in the literature. We show how we achieve higher performance compared to other predictors and also more stability over time, with an overall prediction precision of up to 90% and a performance increment of at least 50% with respect to the state of the art.
---
paper_title: BusyBody: creating and fielding personalized models of the cost of interruption
paper_content:
Interest has been growing in opportunities to build and deploy statistical models that can infer a computer user's current interruptability from computer activity and relevant contextual information. We describe a system that intermittently asks users to assess their perceived interruptability during a training phase and that builds decision-theoretic models with the ability to predict the cost of interrupting the user. The models are used at run-time to compute the expected cost of interruptions, providing a mediator for incoming notifications, based on a consideration of a user's current and recent history of computer activity, meeting status, location, time of day, and whether a conversation is detected.
---
paper_title: Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
paper_content:
We describe a study on the influence of instant messaging (IM) on ongoing computing tasks. The study both replicates and extends earlier work on the cost of sending notifications at different times and the sensitivity of different tasks to interruption. We investigate alternative hypotheses about the nature of disruption for a list evaluation task, an activity identified as being particularly costly to interrupt. Our findings once again show the generally disruptive effects of IM, especially during fast, stimulus-driven search tasks. In addition, we show that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on. These findings have implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.
---
paper_title: Using context-aware computing to reduce the perceived burden of interruptions from mobile devices
paper_content:
The potential for sensor-enabled mobile devices to proactively present information when and where users need it ranks among the greatest promises of ubiquitous computing. Unfortunately, mobile phones, PDAs, and other computing devices that compete for the user's attention can contribute to interruption irritability and feelings of information overload. Designers of mobile computing interfaces, therefore, require strategies for minimizing the perceived interruption burden of proactively delivered messages. In this work, a context-aware mobile computing device was developed that automatically detects postural and ambulatory activity transitions in real time using wireless accelerometers. This device was used to experimentally measure the receptivity to interruptions delivered at activity transitions relative to those delivered at random times. Messages delivered at activity transitions were found to be better received, thereby suggesting a viable strategy for context-aware message delivery in sensor-enabled mobile computing devices.
---
paper_title: PrefMiner: mining user's preferences for intelligent mobile notification management
paper_content:
Mobile notifications are increasingly used by a variety of applications to inform users about events, news or just to send alerts and reminders to them. However, many notifications are neither useful nor relevant to users' interests and, also for this reason, they are considered disruptive and potentially annoying. In this paper we present the design, implementation and evaluation of PrefMiner, a novel interruptibility management solution that learns users' preferences for receiving notifications based on automatic extraction of rules by mining their interaction with mobile phones. The goal is to build a system that is intelligible for users, i.e., not just a "black-box" solution. Rules are shown to users who might decide to accept or discard them at run-time. The design of PrefMiner is based on a large scale mobile notification dataset and its effectiveness is evaluated by means of an in-the-wild deployment.
---
paper_title: In-situ investigation of notifications in multi-device environments
paper_content:
Smart devices have arrived in our everyday lives. Being able to notify the user about events is a core feature of these devices. Related work investigated interruptions caused by notifications on single devices. In this paper, we investigate notifications in multi-device environments by analyzing the results of a week-long in-situ study with 16 participants. We used the Experience Sampling Method (ESM) and recorded the participants' interaction with smartphones, smartwatches, tablets and PCs. Disregarding the type or content of notifications, we found that the smartphone is the preferred device on which to be notified. Further, we found that the proximity to the device, whether it is currently being used and the user's current location can be used to predict if the user wants to receive notifications on a device. The findings can be used to design future multi-device aware smart notification systems.
---
paper_title: I'll be there for you: Quantifying Attentiveness towards Mobile Messaging
paper_content:
Social norm has it that people are expected to respond to mobile phone messages quickly. We investigate how attentive people really are and how timely they actually check and triage new messages throughout the day. By collecting more than 55,000 messages from 42 mobile phone users over the course of two weeks, we were able to predict people's attentiveness through their mobile phone usage with close to 80% accuracy. We found that people were attentive to messages 12.1 hours a day, i.e. 84.8 hours per week, and provide statistical evidence how very short people's inattentiveness lasts: in 75% of the cases mobile phone users return to their attentive state within 5 minutes. In this paper, we present a comprehensive analysis of attentiveness throughout each hour of the day and show that intelligent notification delivery services, such as bounded deferral, can assume that inattentiveness will be rare and subside quickly.
---
paper_title: Towards attention-aware adaptive notification on smart phones
paper_content:
As the amount of information to users increases with the trend of an increasing number of devices, applications, and web services, the new bottleneck in computing is human attention. To minimize users' attentional overload, we propose a novel middleware "Attelia" that detects breakpoints of user's mobile interactions to deliver notifications adaptively. Attelia detects such timings in real-time, using only users' phones, without any external sensors, and without any modifications to applications. Our extensive evaluation proved Attelia's effectiveness. An in-the-wild user study with 30 participants for 16 days showed that, specifically for the users with greater sensitivity for interruptive notification timings, notification scheduling in Attelia's breakpoint timing reduced users' frustration by 28% in users' real smart phone environments.
---
paper_title: Attention-Sensitive Alerting
paper_content:
We introduce utility-directed procedures for mediating the flow of potentially distracting alerts and communications to computer users. We present models and inference procedures that balance the context-sensitive costs of deferring alerts with the cost of interruption. We describe the challenge of reasoning about such costs under uncertainty via an analysis of user activity and the content of notifications. After introducing principles of attention-sensitive alerting, we focus on the problem of guiding alerts about email messages. We dwell on the problem of inferring the expected criticality of email and discuss work on the PRIORITIES system, centering on prioritizing email by criticality and modulating the communication of notifications to users about the presence and nature of incoming email.
---
paper_title: Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications
paper_content:
We investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction (making voice calls or receiving SMS) based on the assumption that the endings collocate with naturally occurring breakpoints in the user's primary task. Testing this with a naturalistic experiment, we find that interruptions (notifications) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition, supporting the potential utility of this notification strategy. We also find that the workload and situational appropriateness of the secondary interruption task significantly affect subsequent delay and completion rate of the tasks. In situ self-reports and interviews reveal complexities in the subjective experience of the interruption, which suggest that a more nuanced classification of the particular call or SMS and its relationship to the primary task(s) would be desirable.
---
paper_title: If not now, when?: the effects of interruption at different moments within task execution
paper_content:
User attention is a scarce resource, and users are susceptible to interruption overload. Systems do not reason about the effects of interrupting a user during a task sequence. In this study, we measure effects of interrupting a user at different moments within task execution in terms of task performance, emotional state, and social attribution. Task models were developed using event perception techniques, and the resulting models were used to identify interruption timings based on a user's predicted cognitive load. Our results show that different interruption moments have different impacts on user emotional state and positive social attribution, and suggest that a system could enable a user to maintain a high level of awareness while mitigating the disruptive effects of interruption. We discuss implications of these results for the design of an attention manager.
---
paper_title: Mobile User Research: A Practical Guide
paper_content:
This book will give you a practical overview of several methods and approaches for designing mobile technologies and conducting mobile user research, including how to understand behavior and evaluate how such technologies are being (or may be) used out in the world. Each chapter includes case studies from our own work and highlights advantages, limitations, and very practical steps that should be taken to increase the validity of the studies you conduct and the data you collect. This book is intended as a practical guide for conducting mobile research focused on the user and their experience. We hope that the depth and breadth of case studies presented, as well as specific best practices, will help you to design the best technologies possible and choose appropriate methods to gather ethical, reliable, and generalizable data to explore the use of mobile technologies out in the world.
---
paper_title: A method, system, and tools for intelligent interruption management
paper_content:
Interrupting users engaged in tasks typically has negative effects on their task completion time, error rate, and affective state. Empirical research has shown that these negative effects can be mitigated by deferring interruptions until more opportune moments in a user's task sequence. However, existing systems that reason about when to interrupt do not have access to task models that would allow for such finer-grained temporal reasoning. We outline our method of finding opportune moments that links a physiological measure of workload with task modeling techniques and theories of attention. We describe the design and implementation of our interruption management system, showing how it can be used to specify and monitor practical, representative user tasks. We discuss our ongoing empirical work in this area, and how the use of our framework may enable attention aware systems to consider a user's position in a task when reasoning about when to interrupt.
---
paper_title: The attentional costs of interrupting task performance at various stages
paper_content:
The visual occlusion technique has received considerable attention in recent years as a method for measuring the interruptible aspects of in-vehicle information system (IVIS) task performance. Because the visual occlusion technique lacks a loading task during "occluded" periods, an alternate method was adopted to provide increased sensitivity to the attentional costs of interruptions on IVIS-style task performance. Participants alternated between performing a VCR programming task and a simple tracking task. Results indicate that it does matter at which point the VCR task is interrupted in terms of time to resume the VCR task. Specifically, the resumption time, or lag, was lowest right before beginning a new task stage such as entering the show end-time, or when performing a repetitive scrolling task. The results suggest that it might be appropriate to include measures of resumption lag when testing the interruptability of IVIS-style tasks.
---
paper_title: On the need for attention-aware systems: Measuring effects of interruption on task performance, error rate, and affective state
paper_content:
This paper reports results from a controlled experiment (N = 50) measuring effects of interruption on task completion time, error rate, annoyance, and anxiety. The experiment used a sample of primary and peripheral tasks representative of those often performed by users. Our experiment differs from prior interruption experiments because it measures effects of interrupting a user's tasks along both performance and affective dimensions and controls for task workload by manipulating only the time at which peripheral tasks were displayed – between vs. during the execution of primary tasks. Results show that when peripheral tasks interrupt the execution of primary tasks, users require from 3% to 27% more time to complete the tasks, commit twice the number of errors across tasks, experience from 31% to 106% more annoyance, and experience twice the increase in anxiety than when those same peripheral tasks are presented at the boundary between primary tasks. An important implication of our work is that attention-aware systems could mitigate effects of interruption by deferring presentation of peripheral information until coarse boundaries are reached during task execution. As our results show, deferring presentation for a short time, i.e. just a few seconds, can lead to a large mitigation of disruption.
---
| Title: Intelligent Notification Systems: A Survey of the State of the Art and Research Challenges
Section 1: INTRODUCTION
Description 1: This section introduces the role of mobile phones in daily life, the importance of notifications, and the key issues around unwanted interruptions and notification management.
Section 2: Definitions of Interruptions from Different Research Fields
Description 2: This section discusses various definitions and interpretations of interruptions from different disciplines, including linguistics, psychology, and computer science.
Section 3: Types of Interruptions
Description 3: This section categorizes and explains different types of interruptions, distinguishing between internal and external interruptions and further classifying external ones into implicit and explicit interruptions.
Section 4: Definition of Interruptions in Context of this Survey
Description 4: This section provides a definition of interruptions tailored to the survey's focus on external interruptions in desktop and mobile environments.
Section 5: Sources of Interruptions
Description 5: This section discusses various sources of interruptions in both human-human discourse and human-computer interaction.
Section 6: COST OF INTERRUPTION
Description 6: This section explores the detrimental effects of interruptions on memory, task completion, error rates, emotional state, and user experience.
Section 7: INDIVIDUAL DIFFERENCES IN PERCEIVING INTERRUPTIONS
Description 7: This section delves into individual variability in handling interruptions and multitasking, highlighting factors like motivation, anxiety, and cognitive style.
Section 8: INTERRUPTIBILITY MANAGEMENT
Description 8: This section defines interruptibility management, discusses attentiveness and receptivity to interruptions, and presents an overview of methods to build interruptibility models.
Section 9: INTERRUPTIBILITY MANAGEMENT IN DESKTOP ENVIRONMENTS
Description 9: This section reviews research on managing interruptions in desktop settings, including using sensor data, task phases, and real-time inferences.
Section 10: INTERRUPTIBILITY MANAGEMENT IN MOBILE ENVIRONMENTS
Description 10: This section discusses interruptibility management for mobile devices, focusing on user perception, filtering irrelevant information, and utilizing contextual data.
Section 11: LIMITATIONS OF THE STATE OF THE ART AND OPEN CHALLENGES
Description 11: This section highlights the limitations of current interruptibility studies and outlines open research challenges, such as deferring notifications, monitoring cognitive context, and modeling for multiple devices.
Section 12: SUMMARY
Description 12: This section summarizes the survey's discussions, emphasizing the need for intelligent notification mechanisms and the future direction of research in this field. |
The constructive use of images in medical teaching: a literature review | 14 | ---
paper_title: Learning to Use a Home Medical Device: Mediating Age-Related Differences with Training
paper_content:
We examined the differential benefits of instructional materials for younger and older adults learning to use a home medical device. Participants received training on use of a blood glucose meter via either a user manual (a text guide with pictures) or an instructional video. Performance was measured immediately and then after a 2-week retention interval. Type of instruction was critical for determining older adults' performance. Older adults trained using the manual had poorer performance than did all other groups. After only 1 calibration, older adults who received video training performed as accurately as did the younger adults. Older adults' performance was more influenced by the retention interval; however, the benefit of the video training was maintained for the older adults across the retention interval. Confidence ratings paralleled subjective workload ratings. The data provide practical information to guide the development of training programs for systems that will be used by both younger and older adults.
---
paper_title: Thomas the Tank Engine and Friends improve the understanding of oxygen delivery and the pathophysiology of hypoxaemia.
paper_content:
Understanding basic pathophysiological principles underpins the practice of many healthcare workers, particularly in a critical care setting. Undergraduate curricula have the potential to separate physiology teaching from clinical contexts, making understanding difficult. We therefore assessed the use of analogous imagery as an aid to understanding. Two groups of first year physiotherapy students were randomly assigned to receive either a control lecture (oxygen delivery and hypoxaemia) or a study lecture (control lecture plus images of a train set delivering rocks: an analogy to oxygen delivery.) Qualitative assessment of the lectures showed a significant (p < 0.001) improvement in understanding by the study group, and increased the proportion of students that found the lecture 'interesting and stimulating' (p = 0.01). Quantitative assessment demonstrated a significant increase in the multiple choice questionnaire marks of the study group (p = 0.03). In conclusion, analogous imagery can significantly increase the understanding of this physiological concept.
---
paper_title: Imagery and Text: A Dual Coding Theory of Reading and Writing
paper_content:
Contents: Preface. Introduction: A Unified Theory of Literacy. Historical and Philosophical Background. Dual Coding in Literacy. Meaning and Comprehension. Memory and Remembering. The Reading Process. Written Composition. Educational Implications.
---
paper_title: Relaxed conditions can provide memory cues in both undergraduates and primary school children
paper_content:
Background: Memory can be impaired by changes between the contexts of learning and retrieval (context-dependent memory, CDM). However, the reminder properties of context have usually been investigated by experimental manipulation of cues in isolation, underestimating CDM that results from interactions between cues. Aims: To test whether CDM can be demonstrated using multiple contextual cues combined to create relaxing versus neutral contexts at separate learning and memory testing stages of the experiments. Sample: Forty university undergraduates (in Experiment 1), and forty 9-10 year-olds (in Experiment 2). Methods: All participants were given age-appropriate tasks under either relaxing or neutral conditions. The next day they were tested for retrieval or practice effects, under the same or different (relaxing versus neutral) conditions. Results: For both age groups, there was a (mostly asymmetric) CDM effect with performance generally best in the relaxing—relaxing condition. There was also some overall benefit of having learned under relaxed conditions. Conclusion: A relaxed learning environment can provide effective retrieval cues, as well as improve learning. Comment: For both primary school children and university students, the educational implication of these findings is that learning can be improved in a relaxed state. For this benefit to be fully manifest, the assessment of learning should also take place under relaxed conditions.
---
paper_title: Twelve tips for running a successful body painting teaching session
paper_content:
Body painting in the medical education context is the painting of internal structures on the surface of the body with high verisimilitude. Body painting has many educational benefits, from the obvious acquisition of anatomical knowledge, to the less obvious benefits of improved communication skills and greater body awareness. As with any activity, which involves physical examination and undressing, sensitive delivery is imperative. The 12 tips given in this article offer advice on the practicalities of running a successful body painting session in a supportive environment, thus promoting maximum student participation.
---
paper_title: The Theory Underlying Concept Maps and How to Construct Them
paper_content:
This text presents the origin of the concept map tool and some of the early history in the development of this tool. Some of the ideas from Ausubel’s (1963; 1968) assimilation theory of cognitive learning that served as a foundation for concept mapping are presented, including the important role that assimilating new concepts and propositions into a learner’s existing cognitive framework plays in meaning making. Epistemological foundations are also presented including the idea that creative production of new knowledge can be seen as a very high level of meaningful learning, and concept mapping can facilitate the process. The wide range of tools available in free CmapTools software and some applications are illustrated, including application for facilitating meaningful learning, better curriculum development, capturing and archiving tacit and explicit expert knowledge, and enhancing creative production. Using CmapTools, WWW resources, and other digital resources provide for a powerful New Model for Education leading to the creation of individual knowledge portfolios that can document significant learning and serve as a foundation for future related learning. CmapTools also provides extensive support for collaboration, publishing and sharing of knowledge models.
---
paper_title: Imagery and Text: A Dual Coding Theory of Reading and Writing
paper_content:
Contents: Preface. Introduction: A Unified Theory of Literacy. Historical and Philosophical Background. Dual Coding in Literacy. Meaning and Comprehension. Memory and Remembering. The Reading Process. Written Composition. Educational Implications.
---
paper_title: The contributions of color to recognition memory for natural scenes
paper_content:
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5%–10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.
---
paper_title: The Cambridge handbook of expertise and expert performance
paper_content:
Introduction and perspective -- An introduction to Cambridge handbook of expertise and expert performance : its development, organization, and content / K. Anders Ericsson -- Two approaches to the study of experts' characteristics / Michelene T.H. Chi -- Expertise, talent, and social encouragement / Earl Hunt -- Overview of approaches to the study of expertise : brief historical accounts of theories and methods -- Studies of expertise from psychological perspectives / Paul J. Feltovich, Michael J. Prietula & K. Anders Ericsson -- Educators and expertise : a brief history of theories and models / Ray J. Amirault & Robert K. Branson -- Expert systems : a perspective from computer science / Bruce G. Buchanan, Randall Davis, & Edward A. Feigenbaum -- Professionalization, scientific expertise, and elitism : a sociological perspective / Julia Evetts, Harald A. Mieg, & Ulrike Felt -- Methods for studying the structure of expertise --
---
| Title: The Constructive Use of Images in Medical Teaching: A Literature Review
Section 1: Summary
Description 1: Write a summary that illustrates the ways images are used in medical teaching, the evidence supporting this, and advice regarding permissions and use.
Section 2: Definition of images
Description 2: Define what is meant by images in the context of medical teaching.
Section 3: Methodology
Description 3: Describe the method used to conduct the literature search and selection of papers for the study.
Section 4: Images as 'icebreakers'
Description 4: Discuss how images are used to motivate an audience at the start of a presentation.
Section 5: Images may act as a focus of interest
Description 5: Explain the use of images as a focal point to illustrate lecture objectives.
Section 6: Images may help observation skills
Description 6: Detail how images can be used to enhance observation skills among students.
Section 7: Images as metaphors
Description 7: Describe how images are used as metaphors to illustrate complex concepts.
Section 8: Images in lectures can act as 'signposts'
Description 8: Explain the role of images in guiding students through the narrative of a lecture.
Section 9: Timing of use of images and image quality
Description 9: Discuss the importance of timing and quality of images used in teaching.
Section 10: Cartoon images can add humour
Description 10: Describe how cartoon images can be used to add humour and enhance learning.
Section 11: Images as concept maps
Description 11: Explain the use of images as concept maps to present information in a structured format.
Section 12: Images can 'brighten' a dull topic
Description 12: Discuss how images can brighten a dull topic and enhance memory retention.
Section 13: Images can be used to encourage student reflection on their learning
Description 13: Explain how images can be used to promote student reflection and creativity.
Section 14: Creating the images and permission for images
Description 14: Provide guidelines on creating, purchasing, and obtaining permission to use images in teaching.
Section 15: Conclusions
Description 15: Summarize the key findings and recommendations regarding the use of images in medical teaching. |
Partial volume effect modeling for segmentation and tissue classification of brain magnetic resonance images: A review | 13 | ---
paper_title: Automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks
paper_content:
Presents a fully automated process for segmentation and classification of multispectral magnetic resonance (MR) images. This hybrid neural network method uses a Kohonen self-organizing neural network for segmentation and a multilayer backpropagation neural network for classification. To separate different tissue types, this process uses the standard T1-, T2-, and PD-weighted MR images acquired in clinical examinations. Volumetric measurements of brain structures, relative to intracranial volume, were calculated for an index transverse section in 14 normal subjects (median age 25 years; 7 male, 7 female). This index slice was at the level of the basal ganglia, included both genu and splenium of the corpus callosum, and generally, showed the putamen and lateral ventricle. An intraclass correlation of this automated segmentation and classification of tissues with the accepted standard of radiologist identification for the index slice in the 14 volunteers demonstrated coefficients (r_i) of 0.91, 0.95, and 0.98 for white matter, gray matter, and ventricular cerebrospinal fluid (CSF), respectively. An analysis of variance for estimates of brain parenchyma volumes in 5 volunteers imaged 5 times each demonstrated high intrasubject reproducibility with a significance of at least p<0.05 for white matter, gray matter, and white/gray partial volumes. The population variation, across 14 volunteers, demonstrated little deviation from the averages for gray and white matter, while partial volume classes exhibited a slightly higher degree of variability. This fully automated technique produces reliable and reproducible MR image segmentation and classification while eliminating intra- and interobserver variability.
---
paper_title: Comparison and validation of tissue modelization and statistical classification methods in T1-weighted MR brain images
paper_content:
This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
---
paper_title: Computerized Brain Tissue Classification of Magnetic Resonance Images: A New Approach to the Problem of Partial Volume Artifact
paper_content:
Due to the finite spatial resolution of digital magnetic resonance images of the brain, and the complexity of anatomical interfaces between brain regions of different tissue type, it is inevitable that some voxels will represent a mixture of two or three different tissue types. Outright assignment of such "bipartial" or "tripartial" voxels to one class or another is more problematic and less reliable than assignment of "full-volume" voxels, wholly representative of a single tissue type. We have developed a computerized system for brain tissue classification of dual echo MR data, which uses a polychotomous logistic model for discriminant analysis, combined with a Bayes allocation rule incorporating differential prior probabilities, and spatial connectivity tests, to assign each voxel in the image to one of four possible classes: gray matter, white matter, cerebrospinal fluid, or unclassified. The system supports automated volumetric analysis of segmented images, has low operational overheads, and compares favorably with previous multivariate or "multispectral" approaches to brain MR image segmentation in terms of both validity (bootstrap misclassification rate = 3.3%) and interoperator reliability (intra-class correlation coefficients for all three tissue classes >0.9). We argue that these improvements in performance stem from better methodological management of the related problems of non-Normality of MR signal intensity values and partial volume artifact.
---
paper_title: The burden of brain diseases in Europe
paper_content:
death and years of life lived with disability (YLD). In the present report, data from the GBD 2000 study and from the World Health Report 2001 on brain diseases is extracted for the territory of Europe. This territory corresponds roughly to the membership countries of the European Federation of Neurological Societies. The WHO's Report has a category called neuropsychiatric diseases, which comprises the majority but not all the brain diseases. In order to gather all brain diseases, stroke, meningitis, half of the burden of injuries and half of the burden of congenital abnormalities are added. Throughout Europe, 23% of the years of healthy life is lost and 50% of YLD are caused by brain diseases. Regarding the key summary measure of lost health, DALY, 35% are because of brain diseases. The fact that approximately one-third of all burden of disease is caused by brain diseases should have an impact on resource allocation to teaching, research, health care and prevention. Although other factors are also of importance, it seems reasonable that one-third of the curriculum at medical school should deal with the brain and that one-third of life science funding should go to basic and clinical neuroscience. In addition, resource allocation to prevention, diagnosis and treatment of brain diseases should be increased to approach, at least, one-third of health care expenditure. With the present data on hand, neurologists, neurosurgeons, psychiatrists, patient organizations and basic neuroscientists have a better possibility to increase the focus on the brain.
---
paper_title: A review of MRI findings in schizophrenia
paper_content:
Abstract After more than 100 years of research, the neuropathology of schizophrenia remains unknown and this is despite the fact that both Kraepelin (1919/1971 : Kraepelin, E., 1919/1971. Dementia praecox. Churchill Livingston Inc., New York) and Bleuler (1911/1950 : Bleuler, E., 1911/1950. Dementia praecox or the group of schizophrenias. International Universities Press, New York), who first described ‘dementia praecox’ and the ‘schizophrenias’, were convinced that schizophrenia would ultimately be linked to an organic brain disorder. Alzheimer (1897 : Alzheimer, A., 1897. Beitrage zur pathologischen anatomie der hirnrinde und zur anatomischen grundlage einiger psychosen. Monatsschrift fur Psychiarie und Neurologie. 2, 82–120) was the first to investigate the neuropathology of schizophrenia, though he went on to study more tractable brain diseases. The results of subsequent neuropathological studies were disappointing because of conflicting findings. Research interest thus waned and did not flourish again until 1976, following the pivotal computer assisted tomography (CT) finding of lateral ventricular enlargement in schizophrenia by Johnstone and colleagues. Since that time significant progress has been made in brain imaging, particularly with the advent of magnetic resonance imaging (MRI), beginning with the first MRI study of schizophrenia by Smith and coworkers in 1984 (Smith, R.C., Calderon, M., Ravichandran, G.K., et al. (1984). Nuclear magnetic resonance in schizophrenia: A preliminary study. Psychiatry Res. 12, 137–147). MR in vivo imaging of the brain now confirms brain abnormalities in schizophrenia. The 193 peer reviewed MRI studies reported in the current review span the period from 1988 to August, 2000. This 12 year period has witnessed a burgeoning of MRI studies and has led to more definitive findings of brain abnormalities in schizophrenia than any other time period in the history of schizophrenia research. Such progress in defining the neuropathology of schizophrenia is largely due to advances in in vivo MRI techniques. These advances have now led to the identification of a number of brain abnormalities in schizophrenia. Some of these abnormalities confirm earlier post-mortem findings, and most are small and subtle, rather than large, thus necessitating more advanced and accurate measurement tools. These findings include ventricular enlargement (80% of studies reviewed) and third ventricle enlargement (73% of studies reviewed). There is also preferential involvement of medial temporal lobe structures (74% of studies reviewed), which include the amygdala, hippocampus, and parahippocampal gyrus, and neocortical temporal lobe regions (superior temporal gyrus) (100% of studies reviewed). When gray and white matter of superior temporal gyrus was combined, 67% of studies reported abnormalities. There was also moderate evidence for frontal lobe abnormalities (59% of studies reviewed), particularly prefrontal gray matter and orbitofrontal regions. Similarly, there was moderate evidence for parietal lobe abnormalities (60% of studies reviewed), particularly of the inferior parietal lobule which includes both supramarginal and angular gyri. Additionally, there was strong to moderate evidence for subcortical abnormalities (i.e. cavum septi pellucidi—92% of studies reviewed, basal ganglia—68% of studies reviewed, corpus callosum—63% of studies reviewed, and thalamus—42% of studies reviewed), but more equivocal evidence for cerebellar abnormalities (31% of studies reviewed). 
The timing of such abnormalities has not yet been determined, although many are evident when a patient first becomes symptomatic. There is, however, also evidence that a subset of brain abnormalities may change over the course of the illness. The most parsimonious explanation is that some brain abnormalities are neurodevelopmental in origin but unfold later in development, thus setting the stage for the development of the symptoms of schizophrenia. Or there may be additional factors, such as stress or neurotoxicity, that occur during adolescence or early adulthood and are necessary for the development of schizophrenia, and may be associated with neurodegenerative changes. Importantly, as several different brain regions are involved in the neuropathology of schizophrenia, new models need to be developed and tested that explain neural circuitry abnormalities affecting brain regions not necessarily structurally proximal to each other but nonetheless functionally interrelated. Future studies will likely benefit from: (1) studying more homogeneous patient groups so that the relationship between MRI findings and clinical symptoms becomes more meaningful; (2) studying at-risk populations such as family members of patients diagnosed with schizophrenia and subjects diagnosed with schizotypal personality disorder in order to define which abnormalities are specific to schizophrenia spectrum disorders, which are the result of epiphenomena such as medication effects and chronic institutionalization, and which are needed for the development of frank psychosis; (3) examining shape differences not detectable from measuring volume alone; (4) applying newer methods such as diffusion tensor imaging to investigate abnormalities in brain connectivity and white matter fiber tracts; and, (5) using methods that analyze brain function (fMRI) and structure simultaneously.
---
paper_title: Voxel-based morphometry—The methods
paper_content:
At its simplest, voxel-based morphometry (VBM) involves a voxel-wise comparison of the local concentration of gray matter between two groups of subjects. The procedure is relatively straightforward and involves spatially normalizing high-resolution images from all the subjects in the study into the same stereotactic space. This is followed by segmenting the gray matter from the spatially normalized images and smoothing the gray-matter segments. Voxel-wise parametric statistical tests which compare the smoothed gray-matter images from the two groups are performed. Corrections for multiple comparisons are made using the theory of Gaussian random fields. This paper describes the steps involved in VBM, with particular emphasis on segmenting gray matter from MR images with nonuniformity artifact. We provide evaluations of the assumptions that underpin the method, including the accuracy of the segmentation and the assumptions made about the statistical distribution of the data.
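For illustration only, and not the SPM machinery the cited method rests on: a hedged sketch of the VBM core loop, smoothing spatially normalized gray-matter maps and running a voxel-wise two-sample t-test across groups. Random arrays stand in for real gray-matter segments, and a crude Bonferroni threshold replaces the Gaussian-random-field correction described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_ind

# Illustrative only: two groups of spatially normalized gray-matter maps
# (subject x X x Y x Z); random data stands in for real segments.
rng = np.random.default_rng(1)
group_a = rng.random((10, 16, 16, 16))
group_b = rng.random((12, 16, 16, 16))

def smooth(group, sigma=2.0):
    # Smooth each subject's gray-matter map (FWHM-to-sigma conversion omitted).
    return np.stack([gaussian_filter(vol, sigma=sigma) for vol in group])

sa, sb = smooth(group_a), smooth(group_b)

# Voxel-wise two-sample t-test across subjects (axis 0).
t, p = ttest_ind(sa, sb, axis=0)

# Crude multiple-comparison control (Bonferroni) standing in for the
# Gaussian-random-field correction used by the actual VBM pipeline.
alpha = 0.05 / np.prod(t.shape)
significant = p < alpha
print("voxels surviving Bonferroni:", int(significant.sum()))
```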
---
paper_title: The Alzheimer’s Disease Neuroimaging Initiative: A review of papers published since its inception
paper_content:
The Alzheimer's Disease Neuroimaging Initiative (ADNI) is an ongoing, longitudinal, multicenter study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer's disease (AD). The study aimed to enroll 400 subjects with early mild cognitive impairment (MCI), 200 subjects with early AD, and 200 normal control subjects; $67 million funding was provided by both the public and private sectors, including the National Institute on Aging, 13 pharmaceutical companies, and 2 foundations that provided support through the Foundation for the National Institutes of Health. This article reviews all papers published since the inception of the initiative and summarizes the results as of February 2011. The major accomplishments of ADNI have been as follows: (1) the development of standardized methods for clinical tests, magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) biomarkers in a multicenter setting; (2) elucidation of the patterns and rates of change of imaging and CSF biomarker measurements in control subjects, MCI patients, and AD patients. CSF biomarkers are consistent with disease trajectories predicted by β-amyloid cascade (Hardy, J Alzheimers Dis 2006;9(Suppl 3):151-3) and tau-mediated neurodegeneration hypotheses for AD, whereas brain atrophy and hypometabolism levels show predicted patterns but exhibit differing rates of change depending on region and disease severity; (3) the assessment of alternative methods of diagnostic categorization. Currently, the best classifiers combine optimum features from multiple modalities, including MRI, [(18)F]-fluorodeoxyglucose-PET, CSF biomarkers, and clinical tests; (4) the development of methods for the early detection of AD. CSF biomarkers, β-amyloid 42 and tau, as well as amyloid PET may reflect the earliest steps in AD pathology in mildly symptomatic or even nonsymptomatic subjects, and are leading candidates for the detection of AD in its preclinical stages; (5) the improvement of clinical trial efficiency through the identification of subjects most likely to undergo imminent future clinical decline and the use of more sensitive outcome measures to reduce sample sizes. Baseline cognitive and/or MRI measures generally predicted future decline better than other modalities, whereas MRI measures of change were shown to be the most efficient outcome measures; (6) the confirmation of the AD risk loci CLU, CR1, and PICALM and the identification of novel candidate risk loci; (7) worldwide impact through the establishment of ADNI-like programs in Europe, Asia, and Australia; (8) understanding the biology and pathobiology of normal aging, MCI, and AD through integration of ADNI biomarker data with clinical data from ADNI to stimulate research that will resolve controversies about competing hypotheses on the etiopathogenesis of AD, thereby advancing efforts to find disease-modifying drugs for AD; and (9) the establishment of infrastructure to allow sharing of all raw and processed data without embargo to interested scientific investigators throughout the world. The ADNI study was extended by a 2-year Grand Opportunities grant in 2009 and a renewal of ADNI (ADNI-2) in October 2010 through to 2016, with enrollment of an additional 550 participants.
---
paper_title: Measuring the thickness of the human cerebral cortex from magnetic resonance images
paper_content:
Accurate and automated methods for measuring the thickness of human cerebral cortex could provide powerful tools for diagnosing and studying a variety of neurodegenerative and psychiatric disorders. Manual methods for estimating cortical thickness from neuroimaging data are labor intensive, requiring several days of effort by a trained anatomist. Furthermore, the highly folded nature of the cortex is problematic for manual techniques, frequently resulting in measurement errors in regions in which the cortical surface is not perpendicular to any of the cardinal axes. As a consequence, it has been impractical to obtain accurate thickness estimates for the entire cortex in individual subjects, or group statistics for patient or control populations. Here, we present an automated method for accurately measuring the thickness of the cerebral cortex across the entire brain and for generating cross-subject statistics in a coordinate system based on cortical anatomy. The intersubject standard deviation of the thickness measures is shown to be less than 0.5 mm, implying the ability to detect focal atrophy in small populations or even individual subjects. The reliability and accuracy of this new method are assessed by within-subject test–retest studies, as well as by comparison of cross-subject regional thickness measures with published values.
---
paper_title: Estimation of the partial volume effect in MRI
paper_content:
The partial volume effect (PVE) arises in volumetric images when more than one tissue type occurs in a voxel. In such cases, the voxel intensity depends not only on the imaging sequence and tissue properties, but also on the proportions of each tissue type present in the voxel. We have demonstrated in previous work that ignoring this effect by establishing binary voxel-based segmentations introduces significant errors in quantitative measurements, such as estimations of the volumes of brain structures. In this paper, we provide a statistical estimation framework to quantify PVE and to propagate voxel-based estimates in order to compute global magnitudes, such as volume, with associated estimates of uncertainty. Validation is performed on ground truth synthetic images and MRI phantoms, and a clinical study is reported. Results show that the method allows for robust morphometric studies and provides resolution unattainable to date.
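Not taken from the cited paper: a minimal sketch of the general idea of propagating voxel-wise partial-volume estimates to a global volume with an uncertainty bound, assuming independent per-voxel fraction estimates with known variances. The fractions, variances and voxel size below are made-up values.

```python
import numpy as np

def tissue_volume_with_bounds(fractions, variances, voxel_volume_mm3):
    """Propagate voxel-wise partial-volume estimates to a global volume.

    fractions : per-voxel estimated fraction of the tissue of interest (0..1)
    variances : per-voxel variance of those estimates
    Assuming independent voxel estimates, the volume is the sum of fractions
    times the voxel volume, and its variance is the sum of the variances."""
    vol = voxel_volume_mm3 * fractions.sum()
    std = voxel_volume_mm3 * np.sqrt(variances.sum())
    return vol, (vol - 1.96 * std, vol + 1.96 * std)   # ~95% interval

rng = np.random.default_rng(2)
frac = np.clip(rng.normal(0.4, 0.2, size=(64, 64, 64)), 0, 1)
var = np.full(frac.shape, 0.01)
volume, ci = tissue_volume_with_bounds(frac, var, voxel_volume_mm3=1.0)
print(f"volume = {volume/1000:.1f} cm^3, 95% CI = ({ci[0]/1000:.1f}, {ci[1]/1000:.1f}) cm^3")
```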
---
paper_title: The clinical use of structural MRI in Alzheimer disease
paper_content:
Structural imaging based on magnetic resonance is an integral part of the clinical assessment of patients with suspected Alzheimer dementia. Prospective data on the natural history of change in structural markers from preclinical to overt stages of Alzheimer disease are radically changing how the disease is conceptualized, and will influence its future diagnosis and treatment. Atrophy of medial temporal structures is now considered to be a valid diagnostic marker at the mild cognitive impairment stage. Structural imaging is also included in diagnostic criteria for the most prevalent non-Alzheimer dementias, reflecting its value in differential diagnosis. In addition, rates of whole-brain and hippocampal atrophy are sensitive markers of neurodegeneration, and are increasingly used as outcome measures in trials of potentially disease-modifying therapies. Large multicenter studies are currently investigating the value of other imaging and nonimaging markers as adjuncts to clinical assessment in diagnosis and monitoring of progression. The utility of structural imaging and other markers will be increased by standardization of acquisition and analysis methods, and by development of robust algorithms for automated assessment.
---
paper_title: Segmentation and measurement of brain structures in MRI including confidence bounds
paper_content:
Abstract The advent of new and improved imaging devices has allowed an impressive increase in the accuracy and precision of MRI acquisitions. However, the volumetric nature of the image formation process implies an inherent uncertainty, known as the partial volume effect, which can be further affected by artifacts such as magnetic inhomogeneities and noise. These degradations seriously challenge the application to MRI of any segmentation method, especially on data sets where the size of the object or effect to be studied is small relative to the voxel size, as is the case in multiple sclerosis and schizophrenia. We develop an approach to this problem by estimating a set of bounds on the spatial location of each organ to be segmented. First, we describe a method for 3D segmentation from voxel data which combines statistical classification and geometry-driven segmentation; then we discuss how the partial volume effect is estimated and object measurements are obtained. A comprehensive validation study and a set of results on clinical applications are also described.
---
paper_title: Brain tissue classification of magnetic resonance images using partial volume modeling
paper_content:
This paper presents a fully automatic three-dimensional classification of brain tissues for Magnetic Resonance (MR) images. An MR image volume may be composed of a mixture of several tissue types due to partial volume effects. Therefore, we consider that in a brain dataset there are not only the three main types of brain tissue: gray matter, white matter, and cerebrospinal fluid, called pure classes, but also mixtures, called mixclasses. A statistical model of the mixtures is proposed and studied by means of simulations. It is shown that it can be approximated by a Gaussian function under some conditions. The D'Agostino-Pearson normality test is used to assess the risk alpha of the approximation. In order to classify a brain into three types of brain tissue and deal with the problem of partial volume effects, the proposed algorithm uses two steps: 1) segmentation of the brain into pure and mixclasses using the mixture model; 2) reclassification of the mixclasses into the pure classes using knowledge about the obtained pure classes. Both steps use Markov random field (MRF) models. The multifractal dimension, describing the topology of the brain, is added to the MRFs to improve discrimination of the mixclasses. The algorithm is evaluated using both simulated images and real MR images with different T1-weighted acquisition sequences.
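A hedged illustration of the normality argument above, not the authors' simulation: intensities of a GM/WM "mixclass" are simulated with a uniformly distributed mixing fraction plus Gaussian noise, and tested with scipy.stats.normaltest, which implements the D'Agostino-Pearson K^2 test. The pure-tissue means and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(3)

# Simulate intensities of a GM/WM "mixclass": each voxel mixes the two pure
# means with a uniform fraction and adds Gaussian acquisition noise.
mu_gm, mu_wm, sigma = 120.0, 160.0, 8.0        # illustrative values
alpha = rng.uniform(0.0, 1.0, size=20000)      # mixing fraction per voxel
intensity = alpha * mu_gm + (1 - alpha) * mu_wm + rng.normal(0, sigma, alpha.size)

# D'Agostino-Pearson K^2 normality test.
stat, p = normaltest(intensity)
print(f"K^2 = {stat:.1f}, p = {p:.3g}")
# A small p rejects normality: whether the Gaussian approximation of the
# mixclass is acceptable depends on the class separation relative to sigma.
```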
---
paper_title: Brain MRI Tissue Classification Based on Local Markov Random Fields
paper_content:
A new method for tissue classification of brain magnetic resonance (MR) images is proposed. The method is based on local image models, each of which models the image content in a subset of the image domain. With this local modeling approach, the assumption that tissue types have the same characteristics over the brain need not be invoked. This is important because tissue type characteristics, such as T1 and T2 relaxation times and proton density, vary across the individual brain and the proposed method offers improved protection against intensity non-uniformity artifacts that can hamper automatic tissue classification methods in brain MRI. A framework in which local models for tissue intensities and Markov Random Field priors are combined into a global probabilistic image model is introduced. This global model will be an inhomogeneous Markov Random Field and it can be solved by standard algorithms such as iterated conditional modes. The division of the whole image domain into local brain regions possibly having different intensity statistics is realized via sub-volume probabilistic atlases. Finally, the parameters for the local intensity models are obtained without supervision by maximizing the weighted likelihood of a certain finite mixture model. For the maximization task, a novel genetic algorithm almost free of initialization dependency is applied. The algorithm is tested on both simulated and real brain MR images. The experiments confirm that the new method offers a useful improvement of the tissue classification accuracy when the basic tissue characteristics vary across the brain and the noise level of the images is reasonable. The method also offers better protection against intensity non-uniformity artifact than the corresponding method based on a global (whole image) modeling scheme.
---
paper_title: Online Resource for Validation of Brain Segmentation Methods
paper_content:
Abstract One key issue that must be addressed during the development of image segmentation algorithms is the accuracy of the results they produce. Algorithm developers require this so they can see where methods need to be improved and see how new developments compare with existing ones. Users of algorithms also need to understand the characteristics of algorithms when they select and apply them to their neuroimaging analysis applications. Many metrics have been proposed to characterize error and success rates in segmentation, and several datasets have also been made public for evaluation. Still, the methodologies used in analyzing and reporting these results vary from study to study, so even when studies use the same metrics their numerical results may not necessarily be directly comparable. To address this problem, we developed a web-based resource for evaluating the performance of skull-stripping in T1-weighted MRI. The resource provides both the data to be segmented and an online application that performs a validation study on the data. Users may download the test dataset, segment it using whichever method they wish to assess, and upload their segmentation results to the server. The server computes a series of metrics, displays a detailed report of the validation results, and archives these for future browsing and analysis. We applied this framework to the evaluation of 3 popular skull-stripping algorithms — the Brain Extraction Tool [Smith, S.M., 2002. Fast robust automated brain extraction. Hum. Brain Mapp. 17 (3), 143–155 (Nov)], the Hybrid Watershed Algorithm [Segonne, F., Dale, A.M., Busa, E., Glessner, M., Salat, D., Hahn, H.K., Fischl, B., 2004. A hybrid approach to the skull stripping problem in MRI. NeuroImage 22 (3), 1060–1075 (Jul)], and the Brain Surface Extractor [Shattuck, D.W., Sandor-Leahy, S.R., Schaper, K.A., Rottenberg, D.A., Leahy, R.M., 2001. Magnetic resonance image tissue classification using a partial volume model. NeuroImage 13 (5), 856–876 (May)] under several different program settings. Our results show that with proper parameter selection, all 3 algorithms can achieve satisfactory skull-stripping on the test data.
---
paper_title: A Review of Methods for Correction of Intensity Inhomogeneity in MRI
paper_content:
Medical image acquisition devices provide a vast amount of anatomical and functional information, which facilitate and improve diagnosis and patient treatment, especially when supported by modern quantitative image analysis methods. However, modality-specific image artifacts, such as the phenomena of intensity inhomogeneity in magnetic resonance images (MRI), are still prominent and can adversely affect quantitative image analysis. In this paper, numerous methods that have been developed to reduce or eliminate intensity inhomogeneities in MRI are reviewed. First, the methods are classified according to the inhomogeneity correction strategy. Next, different qualitative and quantitative evaluation approaches are reviewed. Third, 60 relevant publications are categorized according to several features and analyzed so as to reveal major trends, popularity, evaluation strategies and applications. Finally, key evaluation issues and future development of the inhomogeneity correction field, supported by the results of the analysis, are discussed.
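Purely illustrative and not one of the reviewed algorithms: a naive sketch of multiplicative bias-field correction, estimating a slowly varying field by heavy Gaussian smoothing inside a brain mask and dividing it out. Serious correction methods (e.g., N3/N4 or the model-based approaches surveyed above) are considerably more sophisticated; all numbers below are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_bias_correction(image, mask, sigma=20.0):
    """Very rough multiplicative bias-field correction.

    Estimates a slowly varying field by heavy Gaussian smoothing of the
    masked image, then divides it out."""
    img = image.astype(float)
    m = mask.astype(float)
    # Normalized smoothing so voxels outside the mask do not pull the estimate down.
    field = gaussian_filter(img * m, sigma) / np.maximum(gaussian_filter(m, sigma), 1e-6)
    field /= field[mask].mean()                # keep the overall intensity scale
    corrected = np.where(mask, img / np.maximum(field, 1e-6), img)
    return corrected, field

# Toy example: a synthetic slice with a smooth multiplicative gradient.
rng = np.random.default_rng(4)
true = rng.normal(100, 10, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
bias = 0.7 + 0.6 * xx / 127.0
observed = true * bias
mask = np.ones_like(observed, dtype=bool)
corrected, est = naive_bias_correction(observed, mask)
print("estimated field range:", float(est.min()), float(est.max()))
```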
---
paper_title: Quantification of MR brain images by mixture density and partial volume modeling.
paper_content:
The problem of automatic quantification of brain tissue by utilizing single-valued (single echo) magnetic resonance imaging (MRI) brain scans is addressed. It is shown that this problem can be solved without classification or segmentation, a method that may be particularly useful in quantifying white matter lesions where the range of values associated with the lesions and the white matter may heavily overlap. The general technique utilizes a statistical model of the noise and partial volume effect together with a finite mixture density description of the tissues. The quantification is then formulated as a minimization problem of high order with up to six separate densities as part of the mixture. This problem is solved by tree annealing with and without partial volume utilized, the results compared, and the sensitivity of the tree annealing algorithm to various parameters is exhibited. The actual quantification is performed by two methods: a classification-based method called Bayes quantification, and parameter estimation. Results from each method are presented for synthetic and actual data.
---
paper_title: Partial volume tissue classification of multichannel magnetic resonance images-a mixel model
paper_content:
A single volume element (voxel) in a medical image may be composed of a mixture of multiple tissue types. The authors call voxels which contain multiple tissue classes mixels. A statistical mixel image model based on Markov random field (MRF) theory and an algorithm for the classification of mixels are presented. The authors concentrate on the classification of multichannel magnetic resonance (MR) images of the brain although the algorithm has other applications. The authors also present a method for compensating for the gray-level variation of MR images between different slices, which is primarily caused by the inhomogeneity of the RF field produced by the imaging coil.
---
paper_title: Statistical models of partial volume effect
paper_content:
Statistical models of partial volume effect for systems with various types of noise or pixel value distributions are developed and probability density functions are derived. The models assume either Gaussian system sampling noise or intrinsic material variances with Gaussian or Poisson statistics. In particular, a material can be viewed as having a distinct value that has been corrupted by additive noise either before or after partial volume mixing, or the material could have nondistinct values with a Poisson distribution as might be the case in nuclear medicine images. General forms of the probability density functions are presented for the N material cases and particular forms for two- and three-material cases are derived. These models are incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values. Examples are presented using simulated histograms to demonstrate the efficacy of the models for quantification. Modeling of partial volume effect is shown to be useful when one of the materials is present in images mainly as a pixel component.
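A worked illustration of a two-material partial-volume density with Gaussian noise, not lifted from the cited paper: if a voxel intensity is x = a*mu1 + (1-a)*mu2 plus Gaussian noise, with the mixing fraction a uniform on [0, 1], then integrating over a gives a difference of normal CDFs. The sketch below checks that closed form against direct numerical integration; the means and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Two-tissue partial volume model: x = a*mu1 + (1-a)*mu2 + Gaussian noise,
# with the mixing fraction a uniform on [0, 1] (one common assumption).
mu1, mu2, sigma = 120.0, 160.0, 8.0            # illustrative values (mu2 > mu1)

def pv_density_closed_form(x):
    # Integrating the Gaussian over a gives a difference of normal CDFs.
    return (norm.cdf((x - mu1) / sigma) - norm.cdf((x - mu2) / sigma)) / (mu2 - mu1)

def pv_density_numeric(x):
    f = lambda a: norm.pdf(x, loc=a * mu1 + (1 - a) * mu2, scale=sigma)
    val, _ = quad(f, 0.0, 1.0)
    return val

for x in (110.0, 140.0, 165.0):
    print(x, pv_density_closed_form(x), pv_density_numeric(x))
```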
---
paper_title: Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori probability.
paper_content:
Noise, partial volume (PV) effect and image-intensity inhomogeneity render a challenging task for segmentation of brain magnetic resonance (MR) images. Most of the current MR image segmentation methods focus on only one or two of the effects listed above. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero mean Gaussian prior probability. The well-known fuzzy C-means model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-means (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level and PV effect are commonly encountered.
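As background for the extension described above, a minimal sketch of plain fuzzy C-means on a 1-D intensity vector, without the MRF neighborhood term or the bias field of the paper. The class means and sample sizes are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(x, n_classes=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on a 1-D intensity vector.

    Returns (memberships u of shape [N, C], class centroids v).
    The cited framework adds MRF spatial terms and a bias field on top of
    this basic alternation; those are omitted here."""
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)
    v = rng.choice(x[:, 0], size=n_classes).reshape(1, -1)   # initial centroids
    for _ in range(n_iter):
        d = np.abs(x - v) + 1e-12                            # distances to centroids
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                    # membership update
        v = (u ** m * x).sum(axis=0, keepdims=True) / (u ** m).sum(axis=0, keepdims=True)
    return u, v[0]

rng = np.random.default_rng(5)
intensities = np.concatenate([rng.normal(60, 6, 500),    # CSF-like
                              rng.normal(120, 8, 500),   # GM-like
                              rng.normal(160, 8, 500)])  # WM-like
u, centroids = fuzzy_c_means(intensities)
print("centroids:", np.sort(centroids))
```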
---
paper_title: On the statistical analysis of dirty pictures
paper_content:
may 7th, 1986, Professor A. F. M. Smith in the Chair] SUMMARY A continuous two-dimensional region is partitioned into a fine rectangular array of sites or "pixels", each pixel having a particular "colour" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a nondegenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable largescale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.
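A hedged sketch of the iterated conditional modes (ICM) idea this paper introduced, applied to a toy two-region restoration with a Gaussian data term and a Potts-style neighbour penalty. The parameter values and the image are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def icm(image, means, sigma=10.0, beta=1.5, n_sweeps=5):
    """Iterated conditional modes for a Potts-prior MRF labeling of a 2D image.

    Each sweep re-labels every pixel with the class minimizing a local
    energy: Gaussian data term plus beta times the number of disagreeing
    4-neighbours."""
    labels = np.argmin((image[..., None] - np.asarray(means)) ** 2, axis=-1)
    H, W = image.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                neigh = [labels[x, y] for x, y in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < H and 0 <= y < W]
                energies = []
                for k, mu in enumerate(means):
                    data = (image[i, j] - mu) ** 2 / (2 * sigma ** 2)
                    prior = beta * sum(1 for n in neigh if n != k)
                    energies.append(data + prior)
                labels[i, j] = int(np.argmin(energies))
    return labels

# Toy example: a noisy two-region image.
rng = np.random.default_rng(6)
clean = np.zeros((40, 40)) + 60.0
clean[:, 20:] = 140.0
noisy = clean + rng.normal(0, 25, clean.shape)
restored = icm(noisy, means=[60.0, 140.0], sigma=25.0)
print("label counts:", np.bincount(restored.ravel()))
```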
---
paper_title: Partial volume tissue classification of multichannel magnetic resonance images-a mixel model
paper_content:
A single volume element (voxel) in a medical image may be composed of a mixture of multiple tissue types. The authors call voxels which contain multiple tissue classes mixels. A statistical mixel image model based on Markov random field (MRF) theory and an algorithm for the classification of mixels are presented. The authors concentrate on the classification of multichannel magnetic resonance (MR) images of the brain although the algorithm has other applications. The authors also present a method for compensating for the gray-level variation of MR images between different slices, which is primarily caused by the inhomogeneity of the RF field produced by the imaging coil.
---
paper_title: Magnetic resonance image tissue classification using a partial volume model
paper_content:
Abstract We describe a sequence of low-level operations to isolate and classify brain tissue within T1-weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for image nonuniformities due to magnetic field inhomogeneities by fitting a tricubic B-spline gain field to local estimates of the image nonuniformity spaced throughout the MRI volume. The local estimates are computed by fitting a partial volume tissue measurement model to histograms of neighborhoods about each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue types using a maximum a posteriori classifier. This classifier combines the partial volume tissue measurement model with a Gibbs prior that models the spatial properties of the brain. We validate each stage of our algorithm on real and phantom data. Using data from the 20 normal MRI brain data sets of the Internet Brain Segmentation Repository, our method achieved average κ indices of κ = 0.746 ± 0.114 for gray matter (GM) and κ = 0.798 ± 0.089 for white matter (WM) compared to expert labeled data. Our method achieved average κ indices κ = 0.893 ± 0.041 for GM and κ = 0.928 ± 0.039 for WM compared to the ground truth labeling on 12 volumes from the Montreal Neurological Institute's BrainWeb phantom.
---
paper_title: Improved estimates of partial volume coefficients from noisy brain MRI using spatial context
paper_content:
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.
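To illustrate the filtering principle mentioned above (and only the principle): a naive non-local means sketch for a small 2D image. It is a brute-force version, assuming illustrative patch, search-window and strength parameters; the accelerated, block-wise variants actually used in practice are far more efficient.

```python
import numpy as np

def nlm_denoise(img, patch=1, search=5, h=15.0):
    """Naive non-local means on a 2D image (small images only; O(N * search^2)).

    patch: patch half-width, search: search-window half-width, h: filtering
    strength. Each pixel becomes a weighted average of pixels whose
    neighbourhoods look similar to its own."""
    img = img.astype(float)
    pad = patch + search
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                    d2 = ((ref - cand) ** 2).mean()
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = (w * np.asarray(values)).sum() / w.sum()
    return out

rng = np.random.default_rng(7)
clean = np.tile(np.linspace(50, 150, 32), (32, 1))
noisy = clean + rng.normal(0, 10, clean.shape)
den = nlm_denoise(noisy)
print("error std before/after:", float((noisy - clean).std()), float((den - clean).std()))
```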
---
paper_title: Segmentation of MRI brain scans using non-uniform partial volume densities
paper_content:
Abstract We present an algorithm that provides a partial volume segmentation of a T1-weighted image of the brain into gray matter, white matter and cerebrospinal fluid. The algorithm incorporates a non-uniform partial volume density that takes the curved nature of the cortex into account. The pure gray and white matter intensities are estimated from the image, using scanner noise and cortical partial volume effects. Expected tissue fractions are subsequently computed in each voxel. The algorithm has been tested for reliability, correct estimation of the pure tissue intensities on both real (repeated) MRI data and on simulated (brain) images. Intra-class correlation coefficients (ICCs) were above 0.93 for all volumes of the three tissue types for repeated scans from the same scanner, as well as for scans with different voxel sizes from different scanners with different field strengths. The implementation of our non-uniform partial volume density provided more reliable volumes and tissue fractions, compared to a uniform partial volume density. Applying the algorithm to simulated images showed that the pure tissue intensities were estimated accurately. Variations in cortical thickness did not influence the accuracy of the volume estimates, which is a valuable property when studying (possible) group differences. In conclusion, we have presented a new partial volume segmentation algorithm that allows for comparisons over scanners and voxel sizes.
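Not the authors' implementation: a minimal sketch of computing the expected tissue fraction in a boundary voxel from its intensity, given pure-tissue values, noise, and a prior over the mixing fraction. The cited method's contribution is a non-uniform prior reflecting the curved cortex; here the prior is simply a function argument, and all numeric values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def expected_fraction(x, mu_gm, mu_wm, sigma, prior=lambda a: 1.0):
    """Posterior mean of the gray-matter fraction a in a GM/WM boundary voxel.

    Model: x = a*mu_gm + (1-a)*mu_wm + N(0, sigma^2), with prior(a) on [0, 1]."""
    like = lambda a: norm.pdf(x, loc=a * mu_gm + (1 - a) * mu_wm, scale=sigma) * prior(a)
    num, _ = quad(lambda a: a * like(a), 0.0, 1.0)
    den, _ = quad(like, 0.0, 1.0)
    return num / den

mu_gm, mu_wm, sigma = 120.0, 160.0, 8.0        # illustrative pure-tissue values
for x in (125.0, 140.0, 155.0):
    print(x, round(expected_fraction(x, mu_gm, mu_wm, sigma), 3))
```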
---
paper_title: Quantification of MR brain images by mixture density and partial volume modeling.
paper_content:
The problem of automatic quantification of brain tissue by utilizing single-valued (single echo) magnetic resonance imaging (MRI) brain scans is addressed. It is shown that this problem can be solved without classification or segmentation, a method that may be particularly useful in quantifying white matter lesions where the range of values associated with the lesions and the white matter may heavily overlap. The general technique utilizes a statistical model of the noise and partial volume effect together with a finite mixture density description of the tissues. The quantification is then formulated as a minimization problem of high order with up to six separate densities as part of the mixture. This problem is solved by tree annealing with and without partial volume utilized, the results compared, and the sensitivity of the tree annealing algorithm to various parameters is exhibited. The actual quantification is performed by two methods: a classification-based method called Bayes quantification, and parameter estimation. Results from each method are presented for synthetic and actual data.
---
paper_title: Partial volume tissue classification of multichannel magnetic resonance images-a mixel model
paper_content:
A single volume element (voxel) in a medical image may be composed of a mixture of multiple tissue types. The authors call voxels which contain multiple tissue classes mixels. A statistical mixel image model based on Markov random field (MRF) theory and an algorithm for the classification of mixels are presented. The authors concentrate on the classification of multichannel magnetic resonance (MR) images of the brain although the algorithm has other applications. The authors also present a method for compensating for the gray-level variation of MR images between different slices, which is primarily caused by the inhomogeneity of the RF field produced by the imaging coil.
---
paper_title: Statistical models of partial volume effect
paper_content:
Statistical models of partial volume effect for systems with various types of noise or pixel value distributions are developed and probability density functions are derived. The models assume either Gaussian system sampling noise or intrinsic material variances with Gaussian or Poisson statistics. In particular, a material can be viewed as having a distinct value that has been corrupted by additive noise either before or after partial volume mixing, or the material could have nondistinct values with a Poisson distribution as might be the case in nuclear medicine images. General forms of the probability density functions are presented for the N material cases and particular forms for two- and three-material cases are derived. These models are incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values. Examples are presented using simulated histograms to demonstrate the efficacy of the models for quantification. Modeling of partial volume effect is shown to be useful when one of the materials is present in images mainly as a pixel component.
---
paper_title: Multivariate Tissue Classification of MRI Images for 3-D Volume Reconstruction - A Statistical Approach
paper_content:
One of the major problems in 3-D volume reconstruction from magnetic resonance imaging (MRI) is the difficulty in automating the classification of soft tissues. Because of the complicated soft tissue structures revealed by MRI, it is not easy to segment the images with simple algorithms. MRI can obtain multiple images from the same anatomical section with different pulse sequences, with each image having different response characteristics for each soft tissue. Using the gray level distributions of soft tissues, we have developed two statistical classifiers that utilize the image context information based on the Markov Random Field (MRF) image model. One of the classifiers classifies each voxel to a specific tissue type and the other estimates the partial volume of each tissue within each voxel. Since the voxel sizes of tomographic images are finite and the measurements from tissue boundaries represent the mixture of multiple tissue types, it is preferable that the classifier should not classify each voxel in all-or-none fashion; rather, it should be able to tell the percentage volume of each class in each voxel for the better visualization of the prepared 3-D dataset. The paper presents the theoretical basis of the algorithms and experimental evaluation results of the classifiers in terms of classification accuracy, as compared to the conventional maximum likelihood classifier.
---
paper_title: Fast and robust parameter estimation for statistical partial volume models in brain MRI
paper_content:
Due to the finite spatial resolution of imaging devices, a single voxel in a medical image may be composed of a mixture of tissue types, an effect known as partial volume effect (PVE). Partial volume estimation, that is, the estimation of the amount of each tissue type within each voxel, has received considerable interest in recent years. Much of this work has been focused on the mixel model, a statistical model of PVE. We propose a novel trimmed minimum covariance determinant (TMCD) method for the estimation of the parameters of the mixel PVE model. In this method, each voxel is first labeled according to the most dominant tissue type. Voxels that are prone to PVE are removed from this labeled set, following which robust location estimators with high breakdown points are used to estimate the mean and the covariance of each tissue class. Comparisons between different methods for parameter estimation based on classified images as well as an expectation-maximization-like (EM-like) procedure for simultaneous parameter and partial volume estimation are reported. The robust estimators based on a pruned classification as presented here are shown to perform well even if the initial classification is of poor quality. The results obtained are comparable to those obtained using the EM-like procedure, but require considerably less computation time. Segmentation results of real data based on partial volume estimation are also reported. In addition to considering the parameter estimation problem, we discuss differences between different approximations to the complete mixel model. In summary, the proposed TMCD method allows for the accurate, robust, and efficient estimation of partial volume model parameters, which is crucial to a variety of brain MRI data analysis procedures such as the accurate estimation of tissue volumes and the accurate delineation of the cortical surface.
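A hedged sketch of the robust-estimation step only, using scikit-learn's minimum covariance determinant estimator on synthetic two-channel intensities contaminated by partial-volume outliers. This is not the authors' exact TMCD pipeline (which also trims PV-prone voxels from the initial labeling, e.g. by eroding each tissue mask), and all numbers are illustrative.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(8)

# Synthetic two-channel intensities for one tissue class, contaminated by a
# fraction of partial-volume voxels drifting toward another tissue.
pure = rng.multivariate_normal([120.0, 90.0], [[64, 10], [10, 49]], size=900)
outliers = rng.multivariate_normal([150.0, 70.0], [[64, 0], [0, 64]], size=100)
samples = np.vstack([pure, outliers])

# Robust location/scatter: the minimum covariance determinant estimator has a
# high breakdown point, so the contaminated tail has little influence.
mcd = MinCovDet(random_state=0).fit(samples)
print("robust mean:", mcd.location_.round(1))
print("sample mean:", samples.mean(axis=0).round(1))

# In a TMCD-like pipeline one would first drop boundary voxels (most PV-prone),
# e.g. with scipy.ndimage.binary_erosion on each tissue's label mask, before
# fitting the robust estimator to the remaining voxels.
```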
---
paper_title: Estimation of the partial volume effect in MRI
paper_content:
The partial volume effect (PVE) arises in volumetric images when more than one tissue type occurs in a voxel. In such cases, the voxel intensity depends not only on the imaging sequence and tissue properties, but also on the proportions of each tissue type present in the voxel. We have demonstrated in previous work that ignoring this effect by establishing binary voxel-based segmentations introduces significant errors in quantitative measurements, such as estimations of the volumes of brain structures. In this paper, we provide a statistical estimation framework to quantify PVE and to propagate voxel-based estimates in order to compute global magnitudes, such as volume, with associated estimates of uncertainty. Validation is performed on ground truth synthetic images and MRI phantoms, and a clinical study is reported. Results show that the method allows for robust morphometric studies and provides resolution unattainable to date.
---
paper_title: Magnetic resonance image tissue classification using a partial volume model
paper_content:
Abstract We describe a sequence of low-level operations to isolate and classify brain tissue within T1-weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for image nonuniformities due to magnetic field inhomogeneities by fitting a tricubic B-spline gain field to local estimates of the image nonuniformity spaced throughout the MRI volume. The local estimates are computed by fitting a partial volume tissue measurement model to histograms of neighborhoods about each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue types using a maximum a posteriori classifier. This classifier combines the partial volume tissue measurement model with a Gibbs prior that models the spatial properties of the brain. We validate each stage of our algorithm on real and phantom data. Using data from the 20 normal MRI brain data sets of the Internet Brain Segmentation Repository, our method achieved average κ indices of κ = 0.746 ± 0.114 for gray matter (GM) and κ = 0.798 ± 0.089 for white matter (WM) compared to expert labeled data. Our method achieved average κ indices κ = 0.893 ± 0.041 for GM and κ = 0.928 ± 0.039 for WM compared to the ground truth labeling on 12 volumes from the Montreal Neurological Institute's BrainWeb phantom.
---
paper_title: Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains
paper_content:
Abstract In the context of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes neighborhood information into account using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.
---
paper_title: Fuzzy Markovian Segmentation in Application of Magnetic Resonance Images
paper_content:
In this paper, we present a fuzzy Markovian method for brain tissue segmentation from magnetic resonance images. Generally, there are three main brain tissues in a brain dataset: gray matter, white matter, and cerebrospinal fluid. However, due to the limited resolution of the acquisition system, many voxels may be composed of multiple tissue types (partial volume effects). The proposed method aims at calculating a fuzzy membership in each voxel to indicate the partial volume degree, which is statistically modeled. Since our method is unsupervised, it first estimates the parameters of the fuzzy Markovian random field model using a stochastic gradient algorithm. The fuzzy Markovian segmentation is then performed automatically. The accuracy of the proposed method is quantitatively assessed on a digital phantom using an absolute average error and qualitatively tested on real MRI brain data. A comparison with the widely used fuzzy C-means algorithm is carried out to show numerous advantages of our method.
---
paper_title: A unifying framework for partial volume segmentation of brain MR images
paper_content:
Accurate brain tissue segmentation by intensity-based voxel classification of magnetic resonance (MR) images is complicated by partial volume (PV) voxels that contain a mixture of two or more tissue types. In this paper, we present a statistical framework for PV segmentation that encompasses and extends existing techniques. We start from a commonly used parametric statistical image model in which each voxel belongs to one single tissue type, and introduce an additional downsampling step that causes partial voluming along the borders between tissues. An expectation-maximization approach is used to simultaneously estimate the parameters of the resulting model and perform a PV classification. We present results on well-chosen simulated images and on real MR images of the brain, and demonstrate that the use of appropriate spatial prior knowledge not only improves the classifications, but is often indispensable for robust parameter estimation as well. We conclude that general robust PV segmentation of MR brain images requires statistical models that describe the spatial distribution of brain tissues more accurately than currently available models.
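For orientation only: a minimal EM loop for a plain 1-D Gaussian mixture on intensities, i.e. the common starting point that the framework above extends with a downsampling step for partial voluming and with spatial priors. Class means, spreads and sample sizes below are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=3, n_iter=100, seed=0):
    """Plain EM for a 1-D Gaussian mixture. Returns (weights, means, stds, responsibilities)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    mu = np.sort(rng.choice(x, k, replace=False))
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd, resp

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(60, 6, 400), rng.normal(120, 8, 800), rng.normal(160, 8, 600)])
w, mu, sd, _ = em_gmm_1d(x)
print("means:", mu.round(1), "stds:", sd.round(1), "weights:", w.round(2))
```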
---
paper_title: Partial Volume Segmentation of Cerebral MRI Scans with Mixture Model Clustering
paper_content:
A mixture model clustering algorithm is presented for robust MRI brain image segmentation in the presence of partial volume averaging. The method uses additional classes to represent partial volume voxels of mixed tissue type in the data with their probability distributions modeled accordingly. The image model also allows for tissue-dependent variance values and voxel neighborhood information is taken into account in the clustering formulation. The final result is the estimated fractional amount of each tissue type present within a voxel in addition to the label assigned to the voxel. A multi-threaded implementation of the method is evaluated using both synthetic and real MRI data.
---
paper_title: Magnetic resonance image tissue classification using a partial volume model
paper_content:
Abstract We describe a sequence of low-level operations to isolate and classify brain tissue within T1-weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for image nonuniformities due to magnetic field inhomogeneities by fitting a tricubic B-spline gain field to local estimates of the image nonuniformity spaced throughout the MRI volume. The local estimates are computed by fitting a partial volume tissue measurement model to histograms of neighborhoods about each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue types using a maximum a posteriori classifier. This classifier combines the partial volume tissue measurement model with a Gibbs prior that models the spatial properties of the brain. We validate each stage of our algorithm on real and phantom data. Using data from the 20 normal MRI brain data sets of the Internet Brain Segmentation Repository, our method achieved average κ indices of κ = 0.746 ± 0.114 for gray matter (GM) and κ = 0.798 ± 0.089 for white matter (WM) compared to expert labeled data. Our method achieved average κ indices κ = 0.893 ± 0.041 for GM and κ = 0.928 ± 0.039 for WM compared to the ground truth labeling on 12 volumes from the Montreal Neurological Institute's BrainWeb phantom.
---
paper_title: Quantification of MR brain images by mixture density and partial volume modeling.
paper_content:
The problem of automatic quantification of brain tissue by utilizing single-valued (single echo) magnetic resonance imaging (MRI) brain scans is addressed. It is shown that this problem can be solved without classification or segmentation, a method that may be particularly useful in quantifying white matter lesions where the range of values associated with the lesions and the white matter may heavily overlap. The general technique utilizes a statistical model of the noise and partial volume effect together with a finite mixture density description of the tissues. The quantification is then formulated as a minimization problem of high order with up to six separate densities as part of the mixture. This problem is solved by tree annealing with and without partial volume utilized, the results compared, and the sensitivity of the tree annealing algorithm to various parameters is exhibited. The actual quantification is performed by two methods: a classification-based method called Bayes quantification, and parameter estimation. Results from each method are presented for synthetic and actual data.
---
paper_title: MR image-based measurement of rates of change in volumes of brain structures. Part I: method and validation.
paper_content:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools have been employed to address the problems encountered in measuring rates of change based on structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision and an algorithm for brain tissue segmentation. However, a unique feature in the procedure is the use of a fractional volume model that has been developed to provide a quantitative measure for the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at the tissue boundary that manifest partial volume effect, thus allowing tissue boundaries be defined at a sub-voxel level and in an automated fashion. Validation studies are presented on key algorithms including segmentation and registration. An overall assessment of the method is provided through the evaluation of the rates of brain atrophy in a group of normal elderly subjects for which the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease.
---
paper_title: Fast and robust parameter estimation for statistical partial volume models in brain MRI
paper_content:
Due to the finite spatial resolution of imaging devices, a single voxel in a medical image may be composed of a mixture of tissue types, an effect known as partial volume effect (PVE). Partial volume estimation, that is, the estimation of the amount of each tissue type within each voxel, has received considerable interest in recent years. Much of this work has been focused on the mixel model, a statistical model of PVE. We propose a novel trimmed minimum covariance determinant (TMCD) method for the estimation of the parameters of the mixel PVE model. In this method, each voxel is first labeled according to the most dominant tissue type. Voxels that are prone to PVE are removed from this labeled set, following which robust location estimators with high breakdown points are used to estimate the mean and the covariance of each tissue class. Comparisons between different methods for parameter estimation based on classified images as well as an expectation-maximization-like (EM-like) procedure for simultaneous parameter and partial volume estimation are reported. The robust estimators based on a pruned classification as presented here are shown to perform well even if the initial classification is of poor quality. The results obtained are comparable to those obtained using the EM-like procedure, but require considerably less computation time. Segmentation results of real data based on partial volume estimation are also reported. In addition to considering the parameter estimation problem, we discuss differences between different approximations to the complete mixel model. In summary, the proposed TMCD method allows for the accurate, robust, and efficient estimation of partial volume model parameters, which is crucial to a variety of brain MRI data analysis procedures such as the accurate estimation of tissue volumes and the accurate delineation of the cortical surface.
---
paper_title: A segmentation-based and partial volume compensated method for accurate measurement of lateral ventricular volumes on T1-weighted magnetic resonance images
paper_content:
Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered. This is because a group of voxels in the neighborhood of the lateral ventricles is often mis-classified as gray matter voxels due to partial volume effects. This group of voxels is actually a mixture of ventricular cerebrospinal fluid and white matter and therefore, a portion of it should be included as part of the lateral ventricular structure. In this note, we describe an automated method for the measurement of lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out using a combination of intensity correction and thresholding. The method features a procedure for addressing mis-classified voxels in the neighborhood of the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10 to 30% depending upon the size of the lateral ventricular structure, if mis-classified voxels were not included. Validation of the method was done through comparison with the averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement.
---
paper_title: Genetic Algorithms for Finite Mixture Model Based Voxel Classification in Neuroimaging
paper_content:
Finite mixture models (FMMs) are an indispensable tool for unsupervised classification in brain imaging. Fitting an FMM to the data leads to a complex optimization problem. This optimization problem is difficult to solve by standard local optimization methods, such as the expectation-maximization (EM) algorithm, if a principled initialization is not available. In this paper, we propose a new global optimization algorithm for the FMM parameter estimation problem, which is based on real-coded genetic algorithms. Our specific contributions are two-fold: 1) we propose to use blended crossover in order to reduce the premature convergence problem to its minimum and 2) we introduce a completely new permutation operator specifically meant for the FMM parameter estimation. In addition to improving the optimization results, the permutation operator allows for imposing biologically meaningful constraints to the FMM parameter values. We also introduce a hybrid of the genetic algorithm and the EM algorithm for efficient solution of multidimensional FMM fitting problems. We compare our algorithm with the self-annealing EM algorithm and a standard real-coded genetic algorithm on voxel classification tasks in brain imaging. The algorithms are tested on synthetic data as well as real three-dimensional image data from human magnetic resonance imaging, positron emission tomography, and mouse brain MRI. The tissue classification results by our method are shown to be consistently more reliable and accurate than with the competing parameter estimation methods.
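A hedged sketch of a blended crossover (BLX-alpha) operator of the kind used in real-coded genetic algorithms like the one above; the paper's additional permutation operator for mixture components is not reproduced. The chromosome layout and parameter values below are illustrative assumptions.

```python
import numpy as np

def blx_alpha_crossover(parent_a, parent_b, alpha=0.5, rng=None):
    """Blended crossover (BLX-alpha) for real-coded genetic algorithms.

    For each gene, the child value is drawn uniformly from the interval
    spanned by the two parent values, extended by a fraction alpha on both
    sides; this keeps exploring beyond the parents and helps against
    premature convergence."""
    rng = rng or np.random.default_rng()
    a, b = np.asarray(parent_a, float), np.asarray(parent_b, float)
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

# Example: chromosomes encoding (mean, std, weight) for three mixture classes.
p1 = np.array([60.0, 6.0, 0.2, 120.0, 8.0, 0.5, 160.0, 8.0, 0.3])
p2 = np.array([65.0, 7.0, 0.3, 115.0, 9.0, 0.4, 155.0, 7.0, 0.3])
child = blx_alpha_crossover(p1, p2, rng=np.random.default_rng(10))
print(child.round(2))
```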
---
paper_title: Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains
paper_content:
In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other often used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.
---
paper_title: A unifying framework for partial volume segmentation of brain MR images
paper_content:
Accurate brain tissue segmentation by intensity-based voxel classification of magnetic resonance (MR) images is complicated by partial volume (PV) voxels that contain a mixture of two or more tissue types. In this paper, we present a statistical framework for PV segmentation that encompasses and extends existing techniques. We start from a commonly used parametric statistical image model in which each voxel belongs to one single tissue type, and introduce an additional downsampling step that causes partial voluming along the borders between tissues. An expectation-maximization approach is used to simultaneously estimate the parameters of the resulting model and perform a PV classification. We present results on well-chosen simulated images and on real MR images of the brain, and demonstrate that the use of appropriate spatial prior knowledge not only improves the classifications, but is often indispensable for robust parameter estimation as well. We conclude that general robust PV segmentation of MR brain images requires statistical models that describe the spatial distribution of brain tissues more accurately than currently available models.
---
paper_title: A modified fuzzy clustering algorithm for operator independent brain tissue classification of dual echo MR images
paper_content:
Methods for brain tissue classification or segmentation of structural magnetic resonance imaging (MRI) data should ideally be independent of human operators for reasons of reliability and tractability. An algorithm is described for fully automated segmentation of dual echo, fast spin-echo MRI data. The method is used to assign fuzzy-membership values for each of four tissue classes (gray matter, white matter, cerebrospinal fluid and dura) to each voxel based on partition of a two dimensional feature space. Fuzzy clustering is modified for this application in two ways. First, a two component normal mixture model is initially fitted to the thresholded feature space to identify exemplary gray and white matter voxels. These exemplary data protect subsequently estimated cluster means against the tendency of unmodified fuzzy clustering to equalize the number of voxels in each class. Second, fuzzy clustering is implemented in a moving window scheme that accommodates reduced image contrast at the axial extremes of the transmitting/receiving coil. MRI data acquired from 5 normal volunteers were used to identify stable values for three arbitrary parameters of the algorithm: feature space threshold, relative weight of exemplary gray and white matter voxels, and moving window size. The modified algorithm incorporating these parameter values was then used to classify data from simulated images of the brain, validating the use of fuzzy-membership values as estimates of partial volume. Gray:white matter ratios were estimated from 20 normal volunteers (mean age 32.8 years). Processing time for each three-dimensional image was approximately 30 min on a 170 MHz workstation. Mean cerebral gray and white matter volumes estimated from these automatically segmented images were very similar to comparable results previously obtained by operator dependent methods, but without their inherent unreliability.
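For reference, the unmodified fuzzy C-means scheme that the above method adapts minimizes, over memberships u_{ik} and cluster centres c_k (standard textbook formulation; the exemplary-voxel weighting and moving-window estimation described above are layered on top of it):

J_m = \sum_i \sum_k u_{ik}^m \| x_i - c_k \|^2, \qquad \text{subject to } \sum_k u_{ik} = 1,

with alternating updates u_{ik} = \left( \sum_j ( \| x_i - c_k \| / \| x_i - c_j \| )^{2/(m-1)} \right)^{-1} and c_k = \sum_i u_{ik}^m x_i / \sum_i u_{ik}^m; the converged memberships are what are read as estimates of partial volume.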
---
paper_title: Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors
paper_content:
Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65535 intensity levels. Thus, image processing techniques which aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of parcelling the image to delimit different neuroanatomical tissues present on the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of partial volume effect (PVE) and has been assessed using real brain images from the Internet Brain Image Repository (IBSR).
---
paper_title: Adaptable fuzzy C-Means for improved classification as a preprocessing procedure of brain parcellation
paper_content:
Parcellation, one of several brain analysis methods, is a procedure popular for subdividing the regions identified by segmentation into smaller topographically defined units. The fuzzy clustering algorithm is mainly used to preprocess parcellation into several segmentation methods, because it is very appropriate for the characteristics of magnetic resonance imaging (MRI), such as partial volume effect and intensity inhomogeneity. However, some gray matter, such as basal ganglia and thalamus, may be misclassified into the white matter class using the conventional fuzzy C-Means (FCM) algorithm. Parcellation has been nearly achieved through manual drawing, but it is a tedious and time-consuming process. We propose improved classification using successive fuzzy clustering and implementing the parcellation module with the modified graphic user interface (GUI) for the convenience of users.
---
paper_title: Improved estimates of partial volume coefficients from noisy brain MRI using spatial context
paper_content:
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.
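As a reminder of the generic non-local means principle invoked above (the paper's specific variant and parameter choices are not reproduced here), a filtered partial volume coefficient can be written as a patch-similarity-weighted average:

\hat{\alpha}_i = \frac{\sum_{j \in \Omega_i} w_{ij} \alpha_j}{\sum_{j \in \Omega_i} w_{ij}}, \qquad w_{ij} = \exp\left( - \| P_i - P_j \|_2^2 / h^2 \right),

where P_i and P_j are small image patches centred on voxels i and j, \Omega_i is a search window around voxel i, and h controls the strength of the smoothing.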
---
paper_title: Brain MRI Tissue Classification Based on Local Markov Random Fields
paper_content:
A new method for tissue classification of brain magnetic resonance images (MRI) is proposed. The method is based on local image models where each models the image content in a subset of the image domain. With this local modeling approach, the assumption that tissue types have the same characteristics over the brain need not be invoked. This is important because tissue type characteristics, such as T1 and T2 relaxation times and proton density, vary across the individual brain and the proposed method offers improved protection against intensity non-uniformity artifacts that can hamper automatic tissue classification methods in brain MRI. A framework in which local models for tissue intensities and Markov Random Field priors are combined into a global probabilistic image model is introduced. This global model will be an inhomogeneous Markov Random Field and it can be solved by standard algorithms such as iterative conditional modes. The division of the whole image domain into local brain regions possibly having different intensity statistics is realized via sub-volume probabilistic atlases. Finally, the parameters for the local intensity models are obtained without supervision by maximizing the weighted likelihood of a certain finite mixture model. For the maximization task, a novel genetic algorithm almost free of initialization dependency is applied. The algorithm is tested on both simulated and real brain MR images. The experiments confirm that the new method offers a useful improvement of the tissue classification accuracy when the basic tissue characteristics vary across the brain and the noise level of the images is reasonable. The method also offers better protection against intensity non-uniformity artifact than the corresponding method based on a global (whole image) modeling scheme.
---
paper_title: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
paper_content:
The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation--no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, we show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.
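In generic terms, an HMRF segmentation of this kind combines a class-conditional intensity likelihood with a Potts-like spatial prior and seeks the maximum a posteriori labelling; schematically (our notation, not the paper's exact formulation):

\hat{x} = \arg\max_x \prod_j p(y_j \mid x_j, \theta) \cdot \exp\left( -\beta \sum_{(i,j) \in \mathcal{N}} [\, x_i \neq x_j \,] \right),

where x_j is the tissue label of voxel j, y_j its observed intensity, \theta the class means and variances re-estimated by EM, \mathcal{N} the set of neighbouring voxel pairs, and \beta the weight of the spatial smoothness term; iterated conditional modes or similar local schemes are typically used for the maximization step.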
---
paper_title: Automated model-based tissue classification of MR images of the brain
paper_content:
Describes a fully automated method for model-based tissue classification of magnetic resonance (MR) images of the brain. The method interleaves classification with estimation of the model parameters, improving the classification at each iteration. The algorithm is able to segment single- and multi-spectral MR images, corrects for MR signal inhomogeneities, and incorporates contextual information by means of Markov random Fields (MRF's). A digital brain atlas containing prior expectations about the spatial location of tissue classes is used to initialize the algorithm. This makes the method fully automated and therefore it provides objective and reproducible segmentations. The authors have validated the technique on simulated as well as on real MR images of the brain.
---
paper_title: Voxel-based morphometry—The methods
paper_content:
At its simplest, voxel-based morphometry (VBM) involves a voxel-wise comparison of the local concentration of gray matter between two groups of subjects. The procedure is relatively straightforward and involves spatially normalizing high-resolution images from all the subjects in the study into the same stereotactic space. This is followed by segmenting the gray matter from the spatially normalized images and smoothing the gray-matter segments. Voxel-wise parametric statistical tests which compare the smoothed gray-matter images from the two groups are performed. Corrections for multiple comparisons are made using the theory of Gaussian random fields. This paper describes the steps involved in VBM, with particular emphasis on segmenting gray matter from MR images with nonuniformity artifact. We provide evaluations of the assumptions that underpin the method, including the accuracy of the segmentation and the assumptions made about the statistical distribution of the data.
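At its core, the group comparison step described above is a mass-univariate test applied at every voxel of the smoothed grey-matter maps. A minimal sketch of that single step follows (illustrative only: the array and file names are hypothetical, a plain two-sample t-test is assumed, and a real VBM analysis would add covariates and random-field-based correction for multiple comparisons):

import numpy as np
from scipy import stats

# Smoothed grey-matter maps, one per subject: shape (n_subjects, x, y, z).
group_a = np.load("smoothed_gm_group_a.npy")   # hypothetical file
group_b = np.load("smoothed_gm_group_b.npy")   # hypothetical file

# Voxel-wise two-sample t-test across the subject axis.
t_map, p_map = stats.ttest_ind(group_a, group_b, axis=0)

# Naive uncorrected threshold; VBM instead corrects using Gaussian random field theory.
candidate_voxels = p_map < 0.001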
---
paper_title: Cerebral atrophy and its relation to cognitive impairment in Parkinson disease
paper_content:
Objective: Voxel-based morphometry was used to compare the amounts of gray matter in the brains of patients with Parkinson disease (PD) and normal control subjects (NCs) and to identify the specific regions responsible for cognitive dysfunction in PD. Methods: Patients were classified into nondemented (ND) and demented (D) groups according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders (4th ed.), and a group comparison was performed. In the ND patients, a correlation was also performed between local gray matter density and the score on Raven Colored Progressive Matrices (RCPM), a test of executive and visuospatial function. Results: In patients with advanced ND-PD vs NCs, atrophic changes were observed in the limbic/paralimbic areas and the prefrontal cortex. In D vs ND patients, atrophic change was observed widely in the limbic/paralimbic system, including the anterior cingulate gyrus and hippocampus as well as the temporal lobe, dorsolateral prefrontal cortex, thalamus, and caudate nucleus. The RCPM score was positively correlated with the gray matter density in the dorsolateral prefrontal cortex and the parahippocampal gyrus. Conclusions: In patients with Parkinson disease (PD), atrophic changes occur mainly in the limbic/paralimbic and prefrontal areas. These atrophic changes may be related to the development of dementia in PD.
---
paper_title: Cerebral asymmetry and the effects of sex and handedness on brain structure: A voxel-based morphometric analysis of 465 normal adult human brains
paper_content:
We used voxel-based morphometry (VBM) to examine human brain asymmetry and the effects of sex and handedness on brain structure in 465 normal adults. We observed significant asymmetry of cerebral grey and white matter in the occipital, frontal, and temporal lobes (petalia), including Heschl's gyrus, planum temporale (PT) and the hippocampal formation. Males demonstrated increased leftward asymmetry within Heschl's gyrus and PT compared to females. There was no significant interaction between asymmetry and handedness and no main effect of handedness. There was a significant main effect of sex on brain morphology, even after accounting for the larger global volumes of grey and white matter in males. Females had increased grey matter volume adjacent to the depths of both central sulci and the left superior temporal sulcus, in right Heschl's gyrus and PT, in right inferior frontal and frontomarginal gyri and in the cingulate gyrus. Females had significantly increased grey matter concentration extensively and relatively symmetrically in the cortical mantle, parahippocampal gyri, and in the banks of the cingulate and calcarine sulci. Males had increased grey matter volume bilaterally in the mesial temporal lobes, entorhinal and perirhinal cortex, and in the anterior lobes of the cerebellum, but no regions of increased grey matter concentration.
---
paper_title: Regional impact of field strength on voxel‐based morphometry results
paper_content:
The objective of this study was to characterize the sensitivity of voxel-based morphometry (VBM) results to the choice of field strength. We chose to investigate the two most widespread acquisition sequences for VBM, FLASH and MP-RAGE, at 1.5 and 3 T. We first evaluated image quality of the four acquisition protocols in terms of SNR and image uniformity. We then performed a VBM study on eight subjects scanned twice using the four protocols to evaluate differences in grey matter (GM) density and corresponding scan-rescan variability, and a power analysis for each protocol in the context of a longitudinal and cross-sectional VBM study. As expected, the SNR increased significantly at 3 T for both FLASH and MP-RAGE. Image non-uniformity increased as well, in particular for MP-RAGE. The differences in CNR and contrast non-uniformity cause regional biases between protocols in the VBM results, in particular between sequences at 3 T. The power analysis results show an overall decrease in the number of subjects required in a longitudinal study to detect a difference in GM density at 3 T for MP-RAGE, but an increase for FLASH. The number of subjects required in a cross-sectional VBM study is higher at 3 T for both sequences. Our results show that each protocol has a distinct regional sensitivity pattern to morphometric change, which goes against the classical view of VBM as an unbiased whole brain analysis technique, and complicates the combination of data within a VBM study and the direct comparison of VBM studies based on different protocols.
---
paper_title: Measurement of cortical thickness from MRI by minimum line integrals on soft‐classified tissue
paper_content:
Estimating the thickness of the cerebral cortex is a key step in many brain imaging studies, revealing valuable information on development or disease progression. In this work, we present a framework for measuring the cortical thickness, based on minimizing line integrals over the probability map of the gray matter in the MRI volume. We first prepare a probability map that contains the probability of each voxel belonging to the gray matter. Then, the thickness is basically defined for each voxel as the minimum line integral of the probability map on line segments centered at the point of interest. In contrast to our approach, previous methods often perform a binary-valued hard segmentation of the gray matter before measuring the cortical thickness. Due to image noise and partial volume effects, such a hard classification ignores the underlying tissue class probabilities assigned to each voxel, discarding potentially useful information. We describe our proposed method and demonstrate its performance on both artificial volumes and real 3D brain MRI data from subjects with Alzheimer’s disease and healthy individuals.
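Schematically, the thickness definition sketched above can be written as a minimum over line integrals through the grey-matter probability map (our paraphrase of the idea rather than the paper's exact expression):

T(x) = \min_{\theta} \int_{-L/2}^{L/2} p_{\mathrm{GM}}(x + t \theta) \, dt,

where p_{\mathrm{GM}} is the soft grey-matter classification, \theta ranges over the directions of line segments centred at the point x, and L bounds the segment length; because the integrand is a probability rather than a binary label, partial volume voxels contribute fractionally to the measured thickness.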
---
paper_title: Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification
paper_content:
Accurate reconstruction of the inner and outer cortical surfaces of the human cerebrum is a critical objective for a wide variety of neuroimaging analysis purposes, including visualization, morphometry, and brain mapping. The Anatomic Segmentation using Proximity (ASP) algorithm, previously developed by our group, provides a topology-preserving cortical surface deformation method that has been extensively used for the aforementioned purposes. However, constraints in the algorithm to ensure topology preservation occasionally produce incorrect thickness measurements due to a restriction in the range of allowable distances between the gray and white matter surfaces. This problem is particularly prominent in pediatric brain images with tightly folded gyri. This paper presents a novel method for improving the conventional ASP algorithm by making use of partial volume information through probabilistic classification in order to allow for topology preservation across a less restricted range of cortical thickness values. The new algorithm also corrects the classification of the insular cortex by masking out subcortical tissues. For 70 pediatric brains, validation experiments for the modified algorithm, Constrained Laplacian ASP (CLASP), were performed by three methods: (i) volume matching between surface-masked gray matter (GM) and conventional tissue-classified GM, (ii) surface matching between simulated and CLASP-extracted surfaces, and (iii) repeatability of the surface reconstruction among 16 MRI scans of the same subject. In the volume-based evaluation, the volume enclosed by the CLASP WM and GM surfaces matched the classified GM volume 13% more accurately than using conventional ASP. In the surface-based evaluation, using synthesized thick cortex, the average difference between simulated and extracted surfaces was 4.6 ± 1.4 mm for conventional ASP and 0.5 ± 0.4 mm for CLASP. In a repeatability study, CLASP produced a 30% lower RMS error for the GM surface and an 8% lower RMS error for the
---
paper_title: Voxel-based cortical thickness measurements in MRI
paper_content:
The thickness of the cerebral cortex can provide valuable information about normal and abnormal neuroanatomy. High resolution MRI together with powerful image processing techniques has made it possible to perform these measurements automatically over the whole brain. Here we present a method for automatically generating voxel-based cortical thickness (VBCT) maps. This technique results in maps where each voxel in the grey matter is assigned a thickness value. Sub-voxel measurements of thickness are possible using sub-sampling and interpolation of the image information. The method is applied to repeated MRI scans of a single subject from two MRI scanners to demonstrate its robustness and reproducibility. A simulated data set is used to show that small focal differences in thickness between two groups of subjects can be detected. We propose that the analysis of VBCT maps can provide results that are complementary to other anatomical analyses such as voxel-based morphometry.
---
paper_title: Cortical thickness analysis examined through power analysis and a population simulation
paper_content:
We have previously developed a procedure for measuring the thickness of cerebral cortex over the whole brain using 3-D MRI data and a fully automated surface-extraction (ASP) algorithm. This paper examines the precision of this algorithm, its optimal performance parameters, and the sensitivity of the method to subtle, focal changes in cortical thickness. The precision of cortical thickness measurements was studied using a simulated population study and single subject reproducibility metrics. Cortical thickness was shown to be a reliable method, reaching a sensitivity (probability of a true-positive) of 0.93. Six different cortical thickness metrics were compared. The simplest and most precise method measures the distance between corresponding vertices from the white matter to the gray matter surface. Given two groups of 25 subjects, a 0.6-mm (15%) change in thickness can be recovered after blurring with a 3-D Gaussian kernel (full-width half max = 30 mm). Smoothing across the 2-D surface manifold also improves precision; in this experiment, the optimal kernel size was 30 mm.
---
paper_title: Automated voxel-based 3D cortical thickness measurement in a combined Lagrangian–Eulerian PDE approach using partial volume maps
paper_content:
Accurate cortical thickness estimation is important for the study of many neurodegenerative diseases. Many approaches have been previously proposed, which can be broadly categorised as mesh-based and voxel-based. While the mesh-based approaches can potentially achieve subvoxel resolution, they usually lack the computational efficiency needed for clinical applications and large database studies. In contrast, voxel-based approaches, are computationally efficient, but lack accuracy. The aim of this paper is to propose a novel voxel-based method based upon the Laplacian definition of thickness that is both accurate and computationally efficient. A framework was developed to estimate and integrate the partial volume information within the thickness estimation process. Firstly, in a Lagrangian step, the boundaries are initialized using the partial volume information. Subsequently, in an Eulerian step, a pair of partial differential equations are solved on the remaining voxels to finally compute the thickness. Using partial volume information significantly improved the accuracy of the thickness estimation on synthetic phantoms, and improved reproducibility on real data. Significant differences in the hippocampus and temporal lobe between healthy controls (NC), mild cognitive impaired (MCI) and Alzheimer’s disease (AD) patients were found on clinical data from the ADNI database. We compared our method in terms of precision, computational speed and statistical power against the Eulerian approach. With a slight increase in computation time, accuracy and precision were greatly improved. Power analysis demonstrated the ability of our method to yield statistically significant results when comparing AD and NC. Overall, with our method the number of samples is reduced by 25% to find significant differences between the two groups.
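For readers unfamiliar with the Laplacian definition of thickness referred to above, the generic construction solves Laplace's equation across the cortical ribbon and measures thickness along the resulting streamlines (a general sketch; the paper's contribution lies in how partial volume maps initialize the boundaries and in the combined Lagrangian-Eulerian solution):

\nabla^2 \psi = 0 \text{ in the grey-matter ribbon}, \qquad \psi = 0 \text{ on the WM/GM boundary}, \quad \psi = 1 \text{ on the GM/CSF boundary},

with thickness at a point defined as the arc length of the streamline of \nabla \psi passing through it; Eulerian variants obtain this length by solving a pair of first-order transport equations instead of tracing streamlines explicitly.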
---
paper_title: Fast and robust parameter estimation for statistical partial volume models in brain MRI
paper_content:
Due to the finite spatial resolution of imaging devices, a single voxel in a medical image may be composed of mixture of tissue types, an effect known as partial volume effect (PVE). Partial volume estimation, that is, the estimation of the amount of each tissue type within each voxel, has received considerable interest in recent years. Much of this work has been focused on the mixel model, a statistical model of PVE. We propose a novel trimmed minimum covariance determinant (TMCD) method for the estimation of the parameters of the mixel PVE model. In this method, each voxel is first labeled according to the most dominant tissue type. Voxels that are prone to PVE are removed from this labeled set, following which robust location estimators with high breakdown points are used to estimate the mean and the covariance of each tissue class. Comparisons between different methods for parameter estimation based on classified images as well as expectation--maximization-like (EM-like) procedure for simultaneous parameter and partial volume estimation are reported. The robust estimators based on a pruned classification as presented here are shown to perform well even if the initial classification is of poor quality. The results obtained are comparable to those obtained using the EM-like procedure, but require considerably less computation time. Segmentation results of real data based on partial volume estimation are also reported. In addition to considering the parameter estimation problem, we discuss differences between different approximations to the complete mixel model. In summary, the proposed TMCD method allows for the accurate, robust, and efficient estimation of partial volume model parameters, which is crucial to a variety of brain MRI data analysis procedures such as the accurate estimation of tissue volumes and the accurate delineation of the cortical surface.
---
paper_title: Measuring the thickness of the human cerebral cortex from magnetic resonance images
paper_content:
Accurate and automated methods for measuring the thickness of human cerebral cortex could provide powerful tools for diagnosing and studying a variety of neurodegenerative and psychiatric disorders. Manual methods for estimating cortical thickness from neuroimaging data are labor intensive, requiring several days of effort by a trained anatomist. Furthermore, the highly folded nature of the cortex is problematic for manual techniques, frequently resulting in measurement errors in regions in which the cortical surface is not perpendicular to any of the cardinal axes. As a consequence, it has been impractical to obtain accurate thickness estimates for the entire cortex in individual subjects, or group statistics for patient or control populations. Here, we present an automated method for accurately measuring the thickness of the cerebral cortex across the entire brain and for generating cross-subject statistics in a coordinate system based on cortical anatomy. The intersubject standard deviation of the thickness measures is shown to be less than 0.5 mm, implying the ability to detect focal atrophy in small populations or even individual subjects. The reliability and accuracy of this new method are assessed by within-subject test–retest studies, as well as by comparison of cross-subject regional thickness measures with published values.
---
paper_title: Automated 3-D Extraction of Inner and Outer Surfaces of Cerebral Cortex from MRI
paper_content:
Automatic computer processing of large multidimensional images such as those produced by magnetic resonance imaging (MRI) is greatly aided by deformable models, which are used to extract, identify, and quantify specific neuroanatomic structures. A general method of deforming polyhedra is presented here, with two novel features. First, explicit prevention of self-intersecting surface geometries is provided, unlike conventional deformable models, which use regularization constraints to discourage but not necessarily prevent such behavior. Second, deformation of multiple surfaces with intersurface proximity constraints allows each surface to help guide other surfaces into place using model-based constraints such as expected thickness of an anatomic surface. These two features are used advantageously to identify automatically the total surface of the outer and inner boundaries of cerebral cortical gray matter from normal human MR images, accurately locating the depths of the sulci, even where noise and partial volume artifacts in the image obscure the visibility of sulci. The extracted surfaces are enforced to be simple two-dimensional manifolds (having the topology of a sphere), even though the data may have topological holes. This automatic 3-D cortex segmentation technique has been applied to 150 normal subjects, simultaneously extracting both the gray/white and gray/cerebrospinal fluid interface from each individual. The collection of surfaces has been used to create a spatial map of the mean and standard deviation for the location and the thickness of cortical gray matter. Three alternative criteria for defining cortical thickness at each cortical location were developed and compared. These results are shown to corroborate published postmortem and in vivo measurements of cortical thickness.
---
paper_title: Atlas-Free Surface Reconstruction of the Cortical Grey-White Interface in Infants
paper_content:
Background ::: The segmentation of the cortical interface between grey and white matter in magnetic resonance images (MRI) is highly challenging during the first post-natal year. First, the heterogeneous brain maturation creates important intensity fluctuations across regions. Second, the cortical ribbon is highly folded creating complex shapes. Finally, the low tissue contrast and partial volume effects hamper cortex edge detection in parts of the brain. ::: Methods and Findings ::: We present an atlas-free method for segmenting the grey-white matter interface of infant brains in T2-weighted (T2w) images. We used a broad characterization of tissue using features based not only on local contrast but also on geometric properties. Furthermore, inaccuracies in localization were reduced by the convergence of two evolving surfaces located on each side of the inner cortical surface. Our method has been applied to eleven brains of one- to four-month-old infants. Both quantitative validations against manual segmentations and sulcal landmarks demonstrated good performance for infants younger than two months old. Inaccuracies in surface reconstruction increased with age in specific brain regions where the tissue contrast decreased with maturation, such as in the central region. ::: Conclusions ::: We presented a new segmentation method which achieved good to very good performance at the grey-white matter interface depending on the infant age. This method should reduce manual intervention and could be applied to pathological brains since it does not require any brain atlas.
---
paper_title: Automated segmentation of multiple sclerosis lesion subtypes with multichannel MRI
paper_content:
PURPOSE ::: To automatically segment multiple sclerosis (MS) lesions into three subtypes (i.e., enhancing lesions, T1 "black holes", T2 hyperintense lesions). ::: ::: ::: MATERIALS AND METHODS ::: Proton density-, T2- and contrast-enhanced T1-weighted brain images of 12 MR scans were pre-processed through intracranial cavity (IC) extraction, inhomogeneity correction and intensity normalization. Intensity-based statistical k-nearest neighbor (k-NN) classification was combined with template-driven segmentation and partial volume artifact correction (TDS+) for segmentation of MS lesions subtypes and brain tissue compartments. Operator-supervised tissue sampling and parameter calibration were performed on 2 randomly selected scans and were applied automatically to the remaining 10 scans. Results from this three-channel TDS+ (3ch-TDS+) were compared to those from a previously validated two-channel TDS+ (2ch-TDS+) method. The results of both the 3ch-TDS+ and 2ch-TDS+ were also compared to manual segmentation performed by experts. ::: ::: ::: RESULTS ::: Intra-class correlation coefficients (ICC) of 3ch-TDS+ for all three subtypes of lesions were higher (ICC between 0.95 and 0.96) than that of 2ch-TDS+ for T2 lesions (ICC = 0.82). The 3ch-TDS+ also identified the three lesion subtypes with high specificity (98.7-99.9%) and accuracy (98.5-99.9%). Sensitivity of 3ch-TDS+ for T2 lesions was 16% higher than with 2ch-TDS+. Enhancing lesions were segmented with the best sensitivity (81.9%). "Black holes" were segmented with the least sensitivity (62.3%). ::: ::: ::: CONCLUSION ::: 3ch-TDS+ is a promising method for automated segmentation of MS lesion subtypes.
---
paper_title: Automatic segmentation and reconstruction of the cortex from neonatal MRI
paper_content:
Segmentation and reconstruction of cortical surfaces from magnetic resonance (MR) images are more challenging for developing neonates than adults. This is mainly due to the dynamic changes in the contrast between gray matter (GM) and white matter (WM) in both T1- and T2-weighted images (T1w and T2w) during brain maturation. In particular in neonatal T2w images WM typically has higher signal intensity than GM. This causes mislabeled voxels during cortical segmentation, especially in the cortical regions of the brain and in particular at the interface between GM and cerebrospinal fluid (CSF). We propose an automatic segmentation algorithm detecting these mislabeled voxels and correcting errors caused by partial volume effects. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic expectation maximization (EM) scheme. Quantitative validation against manual segmentation demonstrates good performance (the mean Dice value: 0.758 ± 0.037 for GM and 0.794 ± 0.078 for WM). The inner, central and outer cortical surfaces are then reconstructed using implicit surface evolution. A landmark study is performed to verify the accuracy of the reconstructed cortex (the mean surface reconstruction error: 0.73 mm for inner surface and 0.63 mm for the outer). Both segmentation and reconstruction have been tested on 25 neonates with the gestational ages ranging from ∼ 27 to 45 weeks. This preliminary analysis confirms previous findings that cortical surface area and curvature increase with age, and that surface area scales to cerebral volume according to a power law, while cortical thickness is not related to age or brain growth.
---
paper_title: Time-series analysis of MRI intensity patterns in multiple sclerosis
paper_content:
In progressive neurological disorders, such as multiple sclerosis (MS), magnetic resonance imaging (MRI) follow-up is used to monitor disease activity and progression and to understand the underlying pathogenic mechanisms. This article presents image postprocessing methods and validation for integrating multiple serial MRI scans into a spatiotemporal volume for direct quantitative evaluation of the temporal intensity profiles. This temporal intensity signal and its dynamics have thus far not been exploited in the study of MS pathogenesis and the search for MRI surrogates of disease activity and progression. The integration into a four-dimensional data set comprises stages of tissue classification, followed by spatial and intensity normalization and partial volume filtering. Spatial normalization corrects for variations in head positioning and distortion artifacts via fully automated intensity-based registration algorithms, both rigid and nonrigid. Intensity normalization includes separate stages of correcting intra- and interscan variations based on the prior tissue class segmentation. Different approaches to image registration, partial volume correction, and intensity normalization were validated and compared. Validation included a scan–rescan experiment as well as a natural-history study on MS patients, imaged in weekly to monthly intervals over a 1-year follow-up. Significant error reduction was observed by applying tissue-specific intensity normalization and partial volume filtering. Example temporal profiles within evolving multiple sclerosis lesions are presented. An overall residual signal variance of 1.4% ± 0.5% was observed across multiple subjects and time points, indicating an overall sensitivity of 3% (for axial dual echo images with 3-mm slice thickness) for longitudinal study of signal dynamics from serial brain MRI.
---
paper_title: Automatic statistical shape analysis of cerebral asymmetry in 3D T1-weighted magnetic resonance images at vertex-level: application to neuroleptic-naïve schizophrenia.
paper_content:
The study of the structural asymmetries in the human brain can assist the early diagnosis and progression of various neuropsychiatric disorders, and give insights into the biological bases of several cognitive deficits. The high inter-subject variability in cortical morphology complicates the detection of abnormal asymmetries especially if only small samples are available. This work introduces a novel automatic method for the local (vertex-level) statistical shape analysis of gross cerebral hemispheric surface asymmetries which is robust to the individual cortical variations. After segmentation of the cerebral hemispheric volumes from three-dimensional (3D) T1-weighted magnetic resonance images (MRI) and their spatial normalization to a common space, the right hemispheric masks were reflected to match with the left ones. Cerebral hemispheric surfaces were extracted using a deformable model-based algorithm which extracted the salient morphological features while establishing the point correspondence between the surfaces. The interhemispheric asymmetry, quantified by customized measures of asymmetry, was evaluated in a few thousands of corresponding surface vertices and tested for statistical significance. The developed method was tested on scans obtained from a small sample of healthy volunteers and first-episode neuroleptic-naïve schizophrenics. A significant main effect of the disease on the local interhemispheric asymmetry was observed, both in females and males, at the frontal and temporal lobes, the latter being often linked to the cognitive, auditory, and memory deficits in schizophrenia. The findings of this study, although need further testing in larger samples, partially replicate previous studies supporting the hypothesis of schizophrenia as a neurodevelopmental disorder.
---
paper_title: Quantitative analysis of MRI signal abnormalities of brain white matter with high reproducibility and accuracy.
paper_content:
PURPOSE ::: To assess the reproducibility and accuracy compared to radiologists of three automated segmentation pipelines for quantitative magnetic resonance imaging (MRI) measurement of brain white matter signal abnormalities (WMSA). ::: ::: ::: MATERIALS AND METHODS ::: WMSA segmentation was performed on pairs of whole brain scans from 20 patients with multiple sclerosis (MS) and 10 older subjects who were positioned and imaged twice within 30 minutes. Radiologist outlines of WMSA on 20 sections from 16 patients were compared with the corresponding results of each segmentation method. ::: ::: ::: RESULTS ::: The segmentation method combining expectation-maximization (EM) tissue segmentation, template-driven segmentation (TDS), and partial volume effect correction (PVEC) demonstrated the highest accuracy (the absolute value of the Z-score was 0.99 for both groups of subjects), as well as high interscan reproducibility (repeatability coefficient was 0.68 mL in MS patients and 1.49 mL in aging subjects). ::: ::: ::: CONCLUSION ::: The addition of TDS to the EM segmentation and PVEC algorithms significantly improved the accuracy of WMSA volume measurements, while also improving measurement reproducibility.
---
paper_title: A Modified Probabilistic Neural Network for Partial Volume Segmentation in Brain MR Image
paper_content:
A modified probabilistic neural network (PNN) for brain tissue segmentation with magnetic resonance imaging (MRI) is proposed. In this approach, covariance matrices are used to replace the singular smoothing factor in the PNN's kernel function, and weighting factors are added in the pattern of summation layer. This weighted probabilistic neural network (WPNN) classifier can account for partial volume effects, which exist commonly in MRI, not only in the final result stage, but also in the modeling process. It adopts the self-organizing map (SOM) neural network to over-segment the input MR image, and yield reference vectors necessary for probability density function (pdf) estimation. A supervised "soft" labeling mechanism based on Bayesian rule is developed, so that weighting factors can be generated along with corresponding SOM reference vectors. Tissue classification results from various algorithms are compared, and the effectiveness and robustness of the proposed approach are demonstrated.
---
paper_title: Automatic cerebral and cerebellar hemisphere segmentation in 3D MRI: Adaptive disconnection algorithm
paper_content:
This paper describes the automatic Adaptive Disconnection method to segment cerebral and cerebellar hemispheres of human brain in three-dimensional magnetic resonance imaging (MRI). Using the partial differential equations based shape bottlenecks algorithm cooperating with an information potential value clustering process, it detects and cuts, first, the compartmental connections between the cerebrum, the cerebellum and the brainstem in the white matter domain, and then, the interhemispheric connections of the extracted cerebrum and cerebellum volumes. As long as the subject orientation in the scanner is given, the variations in subject location and normal brain morphology in different images are accommodated automatically, thus no stereotaxic image registration is required. The modeling of partial volume effect is used to locate cerebrum, cerebellum and brainstem boundaries, and make the interhemispheric connections detectable. The Adaptive Disconnection method was tested with 10 simulated images from the BrainWeb database and 39 clinical images from the LONI Probabilistic Brain Atlas database. It obtained lower error rates than a traditional shape bottlenecks algorithm based segmentation technique (BrainVisa) and linear and nonlinear registration based brain hemisphere segmentation methods. Segmentation accuracies were evaluated against manual segmentations. The Adaptive Disconnection method was also confirmed not to be sensitive to the noise and intensity non-uniformity in the images. We also applied the Adaptive Disconnection method to clinical images of 22 healthy controls and 18 patients with schizophrenia. A preliminary cerebral volumetric asymmetry analysis based on these images demonstrated that the Adaptive Disconnection method is applicable to study abnormal brain asymmetry in schizophrenia.
---
paper_title: Robust unsupervised segmentation of infarct lesion from diffusion tensor MR images using multiscale statistical classification and partial volume voxel reclassification
paper_content:
The manual region tracing method for segmentation of infarction lesions in images from diffusion tensor magnetic resonance imaging (DT-MRI) is usually used in clinical work, but it is time consuming. A new unsupervised method has been developed, which is a multistage procedure, involving image preprocessing, calculation of tensor field and measurement of diffusion anisotropy, segmentation of infarction volume based on adaptive multiscale statistical classification (MSSC), and partial volume voxel reclassification (PVVR). The method accounts for random noise, intensity overlapping, partial volume effect (PVE), and intensity shading artifacts, which always appear in DT-MR images. The proposed method was applied to 20 patients with clinically diagnosed brain infarction by DT-MRI scans. The accuracy and reproducibility in terms of identifying the infarction lesion have been confirmed by clinical experts. This automatic segmentation method is promising not only in detecting the location and the size of infarction lesion in stroke patients but also in quantitatively analyzing diffusion
---
paper_title: Anisotropic partial volume CSF modeling for EEG source localization
paper_content:
Electromagnetic source localization (ESL) provides non-invasive evaluation of brain electrical activity for neurology research and clinical evaluation of neurological disorders such as epilepsy. Accurate ESL results are dependent upon the use of patient specific models of bioelectric conductivity. While the effects of anisotropic conductivities in the skull and white matter have been previously studied, little attention has been paid to the accurate modeling of the highly conductive cerebrospinal fluid (CSF) region. This study examines the effect that partial volume errors in CSF segmentations have upon the ESL bioelectric model. These errors arise when segmenting sulcal channels whose widths are similar to the resolution of the magnetic resonance (MR) images used for segmentation, as some voxels containing both CSF and gray matter cannot be definitively assigned a single label. These problems, particularly prevalent in pediatric populations, make voxelwise segmentation of CSF compartments a difficult problem. Given the high conductivity of CSF, errors in modeling this region may result in large errors in the bioelectric model. We introduce here a new approach for using estimates of partial volume fractions in the construction of patient specific bioelectric models. In regions where partial volume errors are expected, we use a layered gray matter-CSF model to construct equivalent anisotropic conductivity tensors. This allows us to account for the inhomogeneity of the tissue within each voxel. Using this approach, we are able to reduce the error in the resulting bioelectric models, as evaluated against a known high resolution model. Additionally, this model permits us to evaluate the effects of sulci modeling errors and quantify the mean error as a function of the change in sulci width. Our results suggest that both under and over-estimation of the CSF region leads to significant errors in the bioelectric model. While a model with fixed partial volume fraction is able to reduce this error, we see the largest improvement when using voxel specific partial volume estimates. Our cross-model analyses suggest that an approximately linear relationship exists between sulci error and the error in the resulting bioelectric model. Given the difficulty of accurately segmenting narrow sulcal channels, this suggests that our approach may be capable of improving the accuracy of patient specific bioelectric models by several percent, while introducing only minimal additional computational requirements.
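A standard way to build an equivalent anisotropic conductivity tensor for a voxel modelled as thin alternating layers of two tissues, as in the layered grey matter-CSF model mentioned above, is the parallel/series mixing rule (whether the paper uses exactly this closed form is an assumption on our part):

\sigma_{\parallel} = f \sigma_{\mathrm{CSF}} + (1 - f) \sigma_{\mathrm{GM}}, \qquad \sigma_{\perp} = \left( \frac{f}{\sigma_{\mathrm{CSF}}} + \frac{1 - f}{\sigma_{\mathrm{GM}}} \right)^{-1},

where f is the CSF partial volume fraction of the voxel, \sigma_{\parallel} applies tangentially to the layering (for example, along a sulcal wall) and \sigma_{\perp} normal to it; the diagonal tensor \mathrm{diag}(\sigma_{\parallel}, \sigma_{\parallel}, \sigma_{\perp}) is then rotated into the local orientation of the cortical surface.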
---
paper_title: Robust White Matter Lesion Segmentation in FLAIR MRI
paper_content:
This paper discusses a white matter lesion (WML) segmentation scheme for fluid attenuation inversion recovery (FLAIR) MRI. The method computes the volume of lesions with subvoxel precision by accounting for the partial volume averaging (PVA) artifact. As WMLs are related to stroke and carotid disease, accurate volume measurements are most important. Manual volume computation is laborious, subjective, time consuming, and error prone. Automated methods are a nice alternative since they quantify WML volumes in an objective, efficient, and reliable manner. PVA is initially modeled with a localized edge strength measure since PVA resides in the boundaries between tissues. This map is computed in 3-D and is transformed to a global representation to increase robustness to noise. Significant edges correspond to PVA voxels, which are used to find the PVA fraction α (amount of each tissue present in mixture voxels). Results on simulated and real FLAIR images show high WML segmentation performance compared to ground truth (98.9% and 83% overlap, respectively), which outperforms other methods. Lesion load studies are included that automatically analyze WML volumes for each brain hemisphere separately. This technique does not require any distributional assumptions/parameters or training samples and is applied on a single MR modality, which is a major advantage compared to the traditional methods.
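The subvoxel volume computation described above ultimately reduces to estimating, for each mixture voxel, the fraction alpha of lesion tissue and summing those fractions. A minimal two-class sketch of that final accumulation step follows (illustrative only: the paper derives alpha from an edge-strength map rather than the simple intensity inversion used here, and the pure-class intensities are assumed to be known):

import numpy as np

def lesion_volume_mm3(image, mixture_mask, i_lesion, i_background, voxel_mm3):
    """Sum per-voxel lesion fractions over partial-volume voxels.

    image        : 3-D array of FLAIR intensities
    mixture_mask : boolean mask of voxels flagged as lesion/tissue mixtures
    i_lesion     : representative pure-lesion intensity (assumed known)
    i_background : representative pure background-tissue intensity (assumed known)
    voxel_mm3    : volume of a single voxel in cubic millimetres
    """
    alpha = (image - i_background) / float(i_lesion - i_background)
    alpha = np.clip(alpha, 0.0, 1.0)   # mixture fractions must stay within [0, 1]
    return float(alpha[mixture_mask].sum() * voxel_mm3)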
---
paper_title: Adaptive neonate brain segmentation.
paper_content:
Babies born prematurely are at increased risk of adverse neurodevelopmental outcomes. Recent advances suggest that measurement of brain volumes can help in defining biomarkers for neurodevelopmental outcome. These techniques rely on an accurate segmentation of the MRI data. However, due to lack of contrast, partial volume (PV) effect, the existence of both hypo- and hyper-intensities and significant natural and pathological anatomical variability, the segmentation of neonatal brain MRI is challenging. We propose a pipeline for image segmentation that uses a novel multi-model Maximum a posteriori Expectation Maximisation (MAP-EM) segmentation algorithm with a prior over both intensities and the tissue proportions, a B0 inhomogeneity correction, and a spatial homogeneity term through the use of a Markov Random Field. This robust and adaptive technique enables the segmentation of images with high anatomical disparity from a normal population. Furthermore, the proposed method implicitly models Partial Volume, mitigating the problem of neonatal white/grey matter intensity inversion. Experiments performed on a clinical cohort show expected statistically significant correlations with gestational age at birth and birthweight. Furthermore, the proposed method obtains statistically significant improvements in Dice scores when compared to a Maximum Likelihood EM algorithm.
---
paper_title: Novel whole brain segmentation and volume estimation using quantitative MRI
paper_content:
OBJECTIVES ::: Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R(1), the transverse relaxation rate R(2) and the proton density, PD. ::: ::: ::: METHODS ::: Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R(1), R(2) and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R(1)-R(2)-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. ::: ::: ::: RESULTS ::: Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. ::: ::: ::: CONCLUSION ::: We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. ::: ::: ::: KEY POINTS ::: • A method for segmenting the brain and estimating tissue volume is presented • This method measures white matter, grey matter, cerebrospinal fluid and remaining tissue • The method calculates tissue fractions in voxel, thus accounting for partial volume • Repeatability was 2.2% for total brain volume with imaging resolution <2.0 mm.
---
paper_title: Automatic brain segmentation using fractional signal modeling of a multiple flip angle, spoiled gradient-recalled echo acquisition
paper_content:
Object ::: The aim of this study was to demonstrate a new automatic brain segmentation method in magnetic resonance imaging (MRI).
---
paper_title: Automated Segmentation of Hippocampal Subfields From Ultra-High Resolution In Vivo MRI
paper_content:
Recent developments in MRI data acquisition technology are starting to yield images that show anatomical features of the hippocampal formation at an unprecedented level of detail, providing the basis for hippocampal subfield measurement. However, a fundamental bottleneck in MRI studies of the hippocampus at the subfield level is that they currently depend on manual segmentation, a laborious process that severely limits the amount of data that can be analyzed. In this article, we present a computational method for segmenting the hippocampal subfields in ultra-high resolution MRI data in a fully automated fashion. Using Bayesian inference, we use a statistical model of image formation around the hippocampal area to obtain automated segmentations. We validate the proposed technique by comparing its segmentations to corresponding manual delineations in ultra-high resolution MRI scans of 10 individuals, and show that automated volume measurements of the larger subfields correlate well with manual volume estimates. Unlike manual segmentations, our automated technique is fully reproducible, and fast enough to enable routine analysis of the hippocampal subfields in large imaging studies.
---
paper_title: The future of ultra-high field MRI and fMRI for study of the human brain
paper_content:
MRI and fMRI have been used for about three and two decades respectively and much has changed over this time period, both in the quality of the data and in the range of applications for studying the brain. Apart from resolution improvements from around 4 mm in the early days to below 0.5 mm with modern technology, novel uses of contrast have led to the ability to sensitize images to some of the brain's structural properties at the cellular scale as well as study the localization and organization of brain function at the level of cortical columns. These developments have in part been facilitated by a continuing drive to increase the magnetic field strength. Will the next few decades see similar improvements? Here we will discuss the current state of high field MRI, expected further increases in field strength, and improvements expected with these increases.
---
| Title: Partial Volume Effect Modeling for Segmentation and Tissue Classification of Brain Magnetic Resonance Images: A Review
Section 1: Introduction
Description 1: Provide an overview of the importance and challenges of analyzing brain MR images, including the phenomena of partial volume effects (PVE) and its impact on tissue classification and brain structure quantification.
Section 2: Image Pre-processing
Description 2: Discuss the necessary image pre-processing steps required prior to partial volume estimation, including intensity non-uniformity correction, brain extraction, and registration to stereotactic space.
Section 3: Definition and Approximations
Description 3: Define the partial volume effect and the mixel model used to statistically represent it, including necessary notations and assumptions about the tissue types and image intensities.
Section 4: Direct Solution via Penalized Least Squares
Description 4: Explain the direct solution approach for PVCs using penalized least squares, including constraints and regularization techniques to handle noise and variability.
Section 5: Two-step Algorithms
Description 5: Describe the two-step algorithms for partial volume estimation, detailing the initial classification of tissue configurations followed by the restricted PVC estimation.
Section 6: Discretization Approaches
Description 6: Introduce discretization methods for solving the PVC problem, explaining how these methods restrict PVC values to discrete sets and use MRF approaches for spatial modeling.
Section 7: Parameter Estimation
Description 7: Discuss the methods for estimating model parameters, such as mean intensities and covariances, which are crucial for accurate partial volume estimation.
Section 8: Fuzzy C-means
Description 8: Explore the fuzzy C-means algorithm and its modifications as applied to tissue classification, comparing its outcomes with those from the mixel model.
Section 9: Bayesian Tissue Classifiers
Description 9: Review Bayesian decision theory-based classifiers for tissue classification, emphasizing their use of posterior probability maps and prior information.
Section 10: Voxel-based Morphometry
Description 10: Discuss the application of partial volume estimation in voxel-based morphometry (VBM) for comparing gray matter densities between subject groups.
Section 11: Cortical Thickness
Description 11: Detail the measurement of cortical thickness using both mesh-based and voxel-based techniques, highlighting how PVE impacts these measurements.
Section 12: Other Applications
Description 12: Present additional applications of PVE modeling in brain MRI, such as neonatal brain segmentation, hemisphere segmentation, EEG source localization, and lesion load computations.
Section 13: Future Perspectives
Description 13: Outline potential future directions in PVE modeling and its applications, including the use of quantitative tissue mapping and high-field MRI for improved segmentation accuracy. |
Recent Advances on Singlemodal and Multimodal Face Recognition: A Survey | 8 | ---
paper_title: 3D Face Recognition by Local Shape Difference Boosting
paper_content:
A new approach, called Collective Shape Difference Classifier (CSDC), is proposed to improve the accuracy and computational efficiency of 3D face recognition. The CSDC learns the most discriminative local areas from the Pure Shape Difference Map (PSDM) and trains them as weak classifiers for assembling a collective strong classifier using the real-boosting approach. The PSDM is established between two 3D face models aligned by a posture normalization procedure based on facial features. The model alignment is self-dependent, which avoids registering the probe face against every different gallery face during the recognition, so that a high computational speed is obtained. The experiments, carried out on the FRGC v2 and BU-3DFE databases, yield rank-1 recognition rates better than 98%. Each recognition against a gallery with 1000 faces only needs about 3.05 seconds. These two experimental results together with the high performance recognition on partial faces demonstrate that our algorithm is not only effective but also efficient.
---
paper_title: 2D and 3D face recognition: A survey
paper_content:
Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today's safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what's socially acceptable and what's reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ''ex cursus'' of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions.
---
paper_title: Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach
paper_content:
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality
---
paper_title: A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition
paper_content:
This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology.
---
paper_title: Facial Recognition Technology: A Survey of Policy and Implementation Issues
paper_content:
Facial recognition technology (FRT) has emerged as an attractive solution to address many contemporary needs for identification and the verification of identity claims. It brings together the promise of other biometric systems, which attempt to tie identity to individually distinctive features of the body, and the more familiar functionality of visual surveillance systems. This report develops a socio-political analysis that bridges the technical and social-scientific literatures on FRT and addresses the unique challenges and concerns that attend its development, evaluation, and specific operational uses, contexts, and goals. It highlights the potential and limitations of the technology, noting those tasks for which it seems ready for deployment, those areas where performance obstacles may be overcome by future technological developments or sound operating procedures, and still other issues which appear intractable. Its concern with efficacy extends to ethical considerations. For the purposes of this summary, the main findings and recommendations of the report are broken down into five broad categories: performance, evaluation, operation, policy concerns, and moral and political considerations. These findings and recommendations employ certain technical concepts and language that are explained and explored in the body of the report and glossary, to which you should turn for further elaboration.
---
paper_title: Thermal face recognition in an operational scenario
paper_content:
We present results on the latest advances in thermal infrared face recognition, and its use in combination with visible imagery. Previous research by the authors has shown high performance under very controlled conditions, or questionable performance under a wider range of conditions. This paper shows results on the use of thermal infrared and visible imagery for face recognition in operational scenarios. In particular, we show performance statistics for outdoor face recognition and recognition across multiple sessions. Our results support the conclusion that face recognition performance with thermal infrared imagery is stable over multiple sessions, and that fusion of modalities increases performance. As measured by the number of images and number of subjects, this is the largest ever reported study on thermal face recognition.
---
paper_title: 2D and 3D face recognition: A survey
paper_content:
Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today's safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what's socially acceptable and what's reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ''ex cursus'' of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions.
---
paper_title: Face recognition with visible and thermal infrared imagery
paper_content:
We present a comprehensive performance study of multiple appearance-based face recognition methodologies, on visible and thermal infrared imagery. We compare algorithms within the same imaging modality as well as between them. Both identification and verification scenarios are considered, and appropriate performance statistics reported for each case. Our experimental design is aimed at gaining full understanding of algorithm performance under varying conditions, and is based on Monte Carlo analysis of performance measures. This analysis reveals that under many circumstances, using thermal infrared imagery yields higher performance, while in other cases performance in both modalities is equivalent. Performance increases further when algorithms on visible and thermal infrared imagery are fused. Our study also provides a partial explanation for the multiple contradictory claims in the literature regarding performance of various algorithms on visible data sets.
---
paper_title: A Survey of Face Recognition Techniques
paper_content:
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
---
paper_title: 2D and 3D face recognition: A survey
paper_content:
Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today's safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what's socially acceptable and what's reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ''ex cursus'' of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions.
---
paper_title: Discriminative common vectors for face recognition
paper_content:
In face recognition tasks, the dimension of the sample space is typically larger than the number of the samples in the training set. As a consequence, the within-class scatter matrix is singular and the linear discriminant analysis (LDA) method cannot be applied directly. This problem is known as the "small sample size" problem. In this paper, we propose a new face recognition method called the discriminative common vector method based on a variation of Fisher's linear discriminant analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors representing each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the samples in the training set while the other uses the subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. Then, the discriminative common vectors are used for classification of new faces. The proposed method yields an optimal solution for maximizing the modified Fisher's linear discriminant criterion given in the paper. Our test results show that the discriminative common vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.
---
paper_title: Facial Recognition Technology: A Survey of Policy and Implementation Issues
paper_content:
Facial recognition technology (FRT) has emerged as an attractive solution to address many contemporary needs for identification and the verification of identity claims. It brings together the promise of other biometric systems, which attempt to tie identity to individually distinctive features of the body, and the more familiar functionality of visual surveillance systems. This report develops a socio-political analysis that bridges the technical and social-scientific literatures on FRT and addresses the unique challenges and concerns that attend its development, evaluation, and specific operational uses, contexts, and goals. It highlights the potential and limitations of the technology, noting those tasks for which it seems ready for deployment, those areas where performance obstacles may be overcome by future technological developments or sound operating procedures, and still other issues which appear intractable. Its concern with efficacy extends to ethical considerations. For the purposes of this summary, the main findings and recommendations of the report are broken down into five broad categories: performance, evaluation, operation, policy concerns, and moral and political considerations. These findings and recommendations employ certain technical concepts and language that are explained and explored in the body of the report and glossary, to which you should turn for further elaboration.
---
paper_title: A method for registration of 3-D shapes
paper_content:
The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
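To make the ICP loop described above concrete, the following is a minimal NumPy/SciPy sketch of one possible implementation, using closest-point correspondences from a k-d tree and an SVD-based rigid-transform update; the function names, tolerance and data layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, max_iters=50, tol=1e-6):
    """Align 'source' (N x 3) to 'target' (M x 3); returns the moved points and the RMS error."""
    tree = cKDTree(target)
    current = source.copy()
    prev_err = err = np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(current)           # closest-point correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t                # apply the incremental rigid transform
        err = np.sqrt(np.mean(dists ** 2))         # mean-square distance metric (decreases monotonically)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return current, err
```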
---
paper_title: 3D Face Recognition by Local Shape Difference Boosting
paper_content:
A new approach, called Collective Shape Difference Classifier (CSDC), is proposed to improve the accuracy and computational efficiency of 3D face recognition. The CSDC learns the most discriminative local areas from the Pure Shape Difference Map (PSDM) and trains them as weak classifiers for assembling a collective strong classifier using the real-boosting approach. The PSDM is established between two 3D face models aligned by a posture normalization procedure based on facial features. The model alignment is self-dependent, which avoids registering the probe face against every different gallery face during the recognition, so that a high computational speed is obtained. The experiments, carried out on the FRGC v2 and BU-3DFE databases, yield rank-1 recognition rates better than 98%. Each recognition against a gallery with 1000 faces only needs about 3.05 seconds. These two experimental results together with the high performance recognition on partial faces demonstrate that our algorithm is not only effective but also efficient.
---
paper_title: Automatic 3D reconstruction for face recognition
paper_content:
An analysis-by-synthesis framework for face recognition with variant pose, illumination and expression (PIE) is proposed in this paper. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) the proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with variant PIE.
---
paper_title: Iterative Closest Normal Point for 3D Face Recognition
paper_content:
The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to seven and four times lower error rates, respectively, than the best existing methods on this database.
---
paper_title: Expression-Invariant Representations of Faces
paper_content:
Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows to describe facial expressions as isometric deformations of the facial surface. The main step in the construction of expression-invariant representation of a face involves embedding of the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding space geometry and dimensionality choice on the representation accuracy and argue that compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We experimentally support our claim showing that a smaller embedding error leads to better recognition.
---
paper_title: Face recognition based on frontal views generated from non-frontal images
paper_content:
This paper presents a method for face recognition across large changes in viewpoint. Our method is based on a morphable model of 3D faces that represents face-specific information extracted from a dataset of 3D scans. For non-frontal face recognition in 2D still images, the morphable model can be incorporated in two different approaches: in the first, it serves as a preprocessing step by estimating the 3D shape of novel faces from the non-frontal input images, and generating frontal views of the reconstructed faces at a standard illumination using 3D computer graphics. The transformed images are then fed into state-of-the-art face recognition systems that are optimized for frontal views. This method was shown to be extremely effective in the Face Recognition Vendor Test FRVT 2002. In the process of estimating the 3D shape of a face from an image, a set of model coefficients are estimated. In the second method, face recognition is performed directly from these coefficients. In this paper we explain the algorithm used to preprocess the images in FRVT 2002, present additional FRVT 2002 results, and compare these results to recognition from the model coefficients.
---
paper_title: Overview of the face recognition grand challenge
paper_content:
Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.
---
paper_title: Point Signatures: A New Representation for 3D Object Recognition
paper_content:
Few systems capable of recognizing complex objects with free-form (sculptured) surfaces have been developed. The apparent lack of success is mainly due to the lack of a competent modelling scheme for representing such complex objects. In this paper, a new form of point representation for describing 3D free-form surfaces is proposed. This representation, which we call the point signature, serves to describe the structural neighbourhood of a point in a more complete manner than just using the 3D coordinates of the point. Being invariant to rotation and translation, the point signature can be used directly to hypothesize the correspondence to model points with similar signatures. Recognition is achieved by matching the signatures of data points representing the sensed surface to the signatures of data points representing the model surface. ::: ::: The use of point signatures is not restricted to the recognition of a single-object scene to a small library of models. Instead, it can be extended naturally to the recognition of scenes containing multiple partially-overlapping objects (which may also be juxtaposed with each other) against a large model library. No preliminary phase of segmenting the scene into the component objects is required. In searching for the appropriate candidate model, recognition need not proceed in a linear order which can become prohibitive for a large model library. For a given scene, signatures are extracted at arbitrarily spaced seed points. Each of these signatures is used to vote for models that contain points having similar signatures. Inappropriate models with low votes can be rejected while the remaining candidate models are ordered according to the votes they received. In this way, efficient verification of the hypothesized candidates can proceed by testing the most likely model first. Experiments using real data obtained from a range finder have shown fast recognition from a library of fifteen models whose complexities vary from that of simple piecewise quadric shapes to complicated face masks. Results from the recognition of both single-object and multiple-object scenes are presented.
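As a rough illustration of the point-signature idea (not the exact construction in the paper), the sketch below approximates the sphere/surface intersection around a point by a thin spherical shell of a point cloud, fits a plane to it, and records signed plane distances as a function of angle; the radius, shell width and angular binning are arbitrary assumptions.

```python
import numpy as np

def point_signature(cloud, p, radius=20.0, shell=1.0, n_angles=36):
    """Simplified point-signature sketch for a point p of a 3D point cloud (N x 3):
    take points near the sphere of the given radius around p, fit a plane to them,
    and sample their signed plane distances as a function of angle."""
    d = np.linalg.norm(cloud - p, axis=1)
    ring = cloud[np.abs(d - radius) < shell]          # approximate sphere/surface intersection
    centroid = ring.mean(axis=0)
    # plane normal = direction of least variance of the ring points
    _, _, Vt = np.linalg.svd(ring - centroid)
    u, v, normal = Vt[0], Vt[1], Vt[2]
    rel = ring - centroid
    angles = np.arctan2(rel @ v, rel @ u)             # angular position of each ring point in the plane
    heights = rel @ normal                            # signed distance from the fitted plane
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    sig = np.array([heights[(angles >= lo) & (angles < hi)].mean()
                    if np.any((angles >= lo) & (angles < hi)) else 0.0
                    for lo, hi in zip(bins[:-1], bins[1:])])
    return sig                                        # 1D signature, comparable across points
```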
---
paper_title: Multiple Nose Region Matching for 3D Face Recognition under Varying Facial Expression
paper_content:
An algorithm is proposed for 3D face recognition in the presence of varied facial expressions. It is based on combining the match scores from matching multiple overlapping regions around the nose. Experimental results are presented using the largest database employed to date in 3D face recognition studies, over 4,000 scans of 449 subjects. Results show substantial improvement over matching the shape of a single larger frontal face region. This is the first approach to use multiple overlapping regions around the nose to handle the problem of expression variation
---
paper_title: 3D Face Recognition Using Isogeodesic Stripes
paper_content:
In this paper, we present a novel approach to 3D face matching that shows high effectiveness in distinguishing facial differences between distinct individuals from differences induced by nonneutral expressions within the same individual. The approach takes into account geometrical information of the 3D face and encodes the relevant information into a compact representation in the form of a graph. Nodes of the graph represent equal width isogeodesic facial stripes. Arcs between pairs of nodes are labeled with descriptors, referred to as 3D Weighted Walkthroughs (3DWWs), that capture the mutual relative spatial displacement between all the pairs of points of the corresponding stripes. Face partitioning into isogeodesic stripes and 3DWWs together provide an approximate representation of local morphology of faces that exhibits smooth variations for changes induced by facial expressions. The graph-based representation permits very efficient matching for face recognition and is also suited to being employed for face identification in very large data sets with the support of appropriate index structures. The method obtained the best ranking at the SHREC 2008 contest for 3D face recognition. We present an extensive comparative evaluation of the performance with the FRGC v2.0 data set and the SHREC08 data set.
---
paper_title: Face Recognition with 3D Model-Based Synthesis
paper_content:
Current appearance-based face recognition systems have difficulty recognizing faces with appearance variations when only a small number of training images are available. We present a scheme based on the analysis-by-synthesis framework. A 3D generic face model is aligned onto a given frontal face image. A number of synthetic face images are generated with appearance variations from the aligned 3D face model. These synthesized images are used to construct an affine subspace for each subject. Training and test images for each subject are represented in the same way in such a subspace. Face recognition is achieved by minimizing the distance between the subspace of a test subject and that of each subject in the database. Only a single face image of each subject is available for training in our experiments. Preliminary experimental results are promising.
---
paper_title: 3D Face Recognition Using 3D Alignment for PCA
paper_content:
This paper presents a 3D approach for recognizing faces based on Principal Component Analysis (PCA). The approach addresses the issue of proper 3D face alignment required by PCA for maximum data compression and good generalization performance for new untrained faces. This issue has traditionally been addressed by 2D data normalization, a step that eliminates 3D object size information important for the recognition process. We achieve correspondence of facial points by registering a 3D face to a scaled generic 3D reference face and subsequently perform a surface normal search algorithm. 3D scaling of the generic reference face is performed to enable better alignment of facial points while preserving important 3D size information in the input face. The benefits of this approach for 3D face recognition and dimensionality reduction have been demonstrated on components of the Face Recognition Grand Challenge (FRGC) database versions 1 and 2.
---
paper_title: Fusion of Summation Invariants in 3D Human Face Recognition
paper_content:
A novel family of 2D and 3D geometrically invariant features, called summation invariants, is proposed for the recognition of the 3D surface of human faces. Focusing on a rectangular region surrounding the nose of a 3D facial depth map, a subset of the so-called semi-local summation invariant features is extracted. Then the similarity between a pair of 3D facial depth maps is computed to determine whether they belong to the same person. Out of many possible combinations of this set of features, we select, through careful experimentation, a subset that yields the best combined performance. Tested with the 3D facial data from the ongoing Face Recognition Grand Challenge v1.0 dataset, the proposed new features exhibit significant performance improvement over the baseline algorithm distributed with the dataset.
---
paper_title: Three-Dimensional Face Recognition
paper_content:
An expression-invariant 3D face recognition approach is presented. Our basic assumption is that facial expressions can be modelled as isometries of the facial surface. This allows to construct expression-invariant representations of faces using the bending-invariant canonical forms approach. The result is an efficient and accurate face recognition algorithm, robust to facial expressions, that can distinguish between identical twins (the first two authors). We demonstrate a prototype system based on the proposed algorithm and compare its performance to classical face recognition methods.The numerical methods employed by our approach do not require the facial surface explicitly. The surface gradients field, or the surface metric, are sufficient for constructing the expression-invariant representation of any given face. It allows us to perform the 3D face recognition task while avoiding the surface reconstruction stage.
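The bending-invariant canonical form idea can be illustrated with classical multidimensional scaling applied to a matrix of pairwise geodesic distances on the facial surface (which a fast-marching routine would supply); the following NumPy sketch is an illustrative stand-in rather than the authors' implementation.

```python
import numpy as np

def canonical_form(geodesic_dists, dim=3):
    """Classical MDS embedding of a facial surface from its pairwise geodesic
    distance matrix (n x n). Isometric deformations (expressions, under the paper's
    assumption) leave the geodesic distances, and hence this embedding, nearly unchanged."""
    n = geodesic_dists.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (geodesic_dists ** 2) @ J       # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]        # keep the largest eigenvalues
    L = np.sqrt(np.maximum(eigvals[order], 0.0))
    return eigvecs[:, order] * L                   # n x dim canonical coordinates
```

Two canonical forms can then be compared with a rigid alignment step (for example the ICP sketch given earlier), since the expression-dependent bending has largely been factored out.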
---
paper_title: An Expression Deformation Approach to Non-rigid 3D Face Recognition
paper_content:
The accuracy of non-rigid 3D face recognition approaches is highly influenced by their capacity to differentiate the deformations caused by facial expressions from the distinctive geometric attributes that uniquely characterize a 3D face, the interpersonal disparities. We present an automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression. The patterns of expression deformations are first learnt from training data in PCA eigenvectors. These patterns are then used to morph out the expression deformations. Similarity measures are extracted by matching the morphed 3D faces. PCA is performed in such a way that it models only the facial expressions, leaving out the interpersonal disparities. The approach was applied on the FRGC v2.0 dataset and superior recognition performance was achieved. The verification rates at 0.001 FAR were 98.35% and 97.73% for scans under neutral and non-neutral expressions, respectively.
---
paper_title: 3D FACE RECOGNITION BY POINT SIGNATURES AND ISO-CONTOURS
paper_content:
The paper addresses the problem of face recognition from range images. A novel technique based on matching of level contours is proposed and compared with a variant of the point signatures algorithm. Their efficiency is investigated under conditions of changes in expression and pose, and the presence of glasses. Using a large database of range images, comparative experimental results are presented, showing that iso-contours outperform point signatures both in computational efficiency and in recognition rates.
---
paper_title: Three-Dimensional Face Recognition Using Shapes of Facial Curves
paper_content:
We study shapes of facial surfaces for the purpose of face recognition. The main idea is to 1) represent surfaces by unions of level curves, called facial curves, of the depth function and 2) compare shapes of surfaces implicitly using shapes of facial curves. The latter is performed using a differential geometric approach that computes geodesic lengths between closed curves on a shape manifold. These ideas are demonstrated using a nearest-neighbor classifier on two 3D face databases: Florida State University and Notre Dame, highlighting a good recognition performance
---
paper_title: Deformation Modeling for Robust 3D Face Matching
paper_content:
Face recognition based on 3D surface matching is promising for overcoming some of the limitations of current 2D image-based face recognition systems. The 3D shape is generally invariant to the pose and lighting changes, but not invariant to the nonrigid facial movement such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and matching scheme to match 2.5D facial scans in the presence of both nonrigid deformations and pose changes (multiview) to a stored 3D face model with neutral expression. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built for each subject in the gallery with respect to the control group by combining the templates with synthesized deformations. By fitting this generative deformable model to a test scan, the proposed approach is able to handle expressions and pose changes simultaneously. A fully automatic and prototypic deformable model based 3D face matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases the 3D face matching accuracy in comparison to matching with 3D neutral models by 7 and 10 percentage points, respectively, on a subset of the FRGC v2.0 3D benchmark and the MSU multiview 3D face database with expression variations.
---
paper_title: Matching 2.5D face scans to 3D models
paper_content:
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x,y,z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
---
paper_title: 2D and 3D face recognition: A survey
paper_content:
Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today's safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what's socially acceptable and what's reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ''ex cursus'' of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions.
---
paper_title: 3D Face Recognition using Mapped Depth Images
paper_content:
This paper addresses 3D face recognition from facial shape. Firstly, we present an effective method to automatically extract the ROI of the facial surface, which mainly depends on automatic detection of the facial bilateral symmetry plane and localization of the nose tip. Then we build a reference plane through the nose tip for calculating the relative depth values. Considering the non-rigid property of the facial surface, the ROI is triangulated and parameterized into an isomorphic 2D planar circle, attempting to preserve the intrinsic geometric properties. At the same time the relative depth values are also mapped. Finally we perform eigenface analysis on the mapped relative depth image. The entire scheme is insensitive to pose variation. The experiment using the FRGC v1.0 database obtains a rank-1 identification score of 95%, which outperforms the result of the PCA baseline method by 4% and demonstrates the effectiveness of our algorithm.
---
paper_title: Three-dimensional face recognition: an eigensurface approach
paper_content:
We evaluate a new approach to face recognition using a variety of surface representations of three-dimensional facial structure. Applying principal component analysis (PCA), we show that high levels of recognition accuracy can be achieved on a large database of 3D face models, captured under conditions that present typical difficulties to more conventional two-dimensional approaches. Applying a range of image processing techniques we identify the most effective surface representation for use in such application areas as security, surveillance, data compression and archive searching.
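A minimal sketch of the eigensurface pipeline is given below, assuming pose-normalised depth maps of equal size and using scikit-learn's PCA with a nearest-neighbour match in the subspace; the component count and matching rule are illustrative choices rather than those evaluated in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_eigensurfaces(range_images, n_components=50):
    """range_images: (n_samples, H, W) aligned depth maps.
    n_components must not exceed the number of training scans.
    Returns the fitted PCA model and the gallery projections."""
    X = range_images.reshape(len(range_images), -1)   # vectorise each depth map
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)

def identify(probe, pca, gallery_proj, gallery_ids):
    """Nearest-neighbour match of one probe depth map in the eigensurface space."""
    p = pca.transform(probe.reshape(1, -1))
    d = np.linalg.norm(gallery_proj - p, axis=1)
    return gallery_ids[np.argmin(d)]
```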
---
paper_title: 3D Face recognition by ICP-based shape matching
paper_content:
In this paper, we propose a novel face recognition approach based on 2.5D/3D shape matching. While most existing methods use facial intensity images, we aim to develop a method using three-dimensional information of the human face. This is the main innovation of our technology. In our approach, the 3D information is introduced in order to overcome classical face recognition problems, namely pose, illumination and facial expression variations. The paradigm is to build a 3D face gallery using a laser-based scanner in the off-line stage. In the on-line recognition stage, we capture one 2.5D face model at any viewpoint and with any facial expression. Our processing allows the identification of the presented person by matching the captured model against all faces from the gallery. Here, the Iterative Closest Point-based matching algorithm provides the pose of the probe, whereas the region-based metric provides a spatial deviation between the probe and each face from the gallery. In this metric, we calculate the global recognition score as a weighted sum of region-based distances, with regions labelled as mimic or static. For automatic 3D face segmentation, we use an immersion version of the watershed segmentation algorithm. This paper also presents experiments that demonstrate illumination, pose and facial expression compensation.
---
paper_title: Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach
paper_content:
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality
---
paper_title: Robust 3D Face Recognition by Local Shape Difference Boosting
paper_content:
This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids the registration between an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as a mediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent with the FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery with 1,000 faces only takes about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient.
---
paper_title: An efficient 3D face recognition approach based on the fusion of novel local low-level features
paper_content:
We present a novel 3D face recognition approach based on low-level geometric features that are collected from the eyes-forehead and the nose regions. These regions are relatively less influenced by the deformations that are caused by facial expressions. The extracted features proved to be efficient and robust in the presence of facial expressions. A region-based histogram descriptor computed from these features is used to uniquely represent a 3D face. A Support Vector Machine (SVM) is then trained as a classifier based on the proposed histogram descriptors to recognize any test face. In order to combine the contributions of the two facial regions (eyes-forehead and nose), both feature-level and score-level fusion schemes have been tested and compared. The proposed approach has been tested on FRGC v2.0 and BU-3DFE datasets through a number of experiments and a high recognition performance was achieved. Based on the results of the "neutral vs. non-neutral" experiment of FRGC v2.0 and the "low-intensity vs. high-intensity" experiment of BU-3DFE, the feature-level fusion scheme achieved verification rates of 97.6% and 98.2% at 0.1% False Acceptance Rate (FAR) and identification rates of 95.6% and 97.7% on the two datasets respectively. The experimental results also have shown that the feature-level fusion scheme outperformed the score-level fusion one.
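To illustrate the region-based histogram descriptor with feature-level fusion, here is a hedged sketch assuming a per-pixel geometric feature map (for example a curvature-like quantity) and binary masks for the eyes-forehead and nose regions; the bin count, value range and SVM kernel are placeholder choices, not those tuned in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def region_histogram_descriptor(feature_map, region_masks, bins=32, rng=(0.0, 1.0)):
    """Concatenate normalised histograms of a per-pixel geometric feature over facial regions
    (feature-level fusion of the regions)."""
    hists = []
    for mask in region_masks:                       # e.g. eyes-forehead mask, nose mask
        vals = feature_map[mask]
        h, _ = np.histogram(vals, bins=bins, range=rng, density=True)
        hists.append(h)
    return np.concatenate(hists)

# Illustrative usage: build descriptors for the gallery, then train an SVM over identities.
# X_train = np.stack([region_histogram_descriptor(f, masks) for f in train_feature_maps])
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# predicted_id = clf.predict(region_histogram_descriptor(probe_feature_map, masks)[None, :])
```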
---
paper_title: Face Recognition Based on Depth and Curvature Features
paper_content:
Face recognition from a representation based on features extracted from range images is explored. Depth and curvature features have several advantages over more traditional intensity-based features. Specifically, curvature descriptors have the potential for higher accuracy in describing surface-based events, are better suited to describe properties of the face in areas such as the cheeks, forehead, and chin, and are viewpoint invariant. Faces are represented in terms of a vector of feature descriptors. Comparisons between two faces are made based on their relationship in the feature space. The author provides a detailed analysis of the accuracy and discrimination of the particular features extracted, and the effectiveness of the recognition system for a test database of 24 faces. Recognition rates are in the range of 80% to 100%. In many cases, feature accuracy is limited more by surface resolution than by the extraction process.
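Depth and curvature features of the kind described above can be computed from a range image with the standard Monge-patch formulas; the finite-difference sketch below is a generic illustration, not the descriptor set used in the paper, and noisy scans would normally be smoothed first.

```python
import numpy as np

def depth_map_curvatures(z):
    """Gaussian (K) and mean (H) curvature of a range image z(x, y), treated as the
    surface (x, y, z(x, y)), using finite-difference derivatives."""
    zy, zx = np.gradient(z)          # first derivatives (axis 0 = y, axis 1 = x)
    zxy, zxx = np.gradient(zx)       # second derivatives of zx
    zyy, _ = np.gradient(zy)         # second derivative of zy along y
    denom = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / denom ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy + (1 + zy ** 2) * zxx) / (2 * denom ** 1.5)
    return K, H
```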
---
paper_title: 3D Face Recognition under Expressions, Occlusions, and Pose Variations
paper_content:
We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
---
paper_title: Three-dimensional model based face recognition
paper_content:
The performance of face recognition systems that use two-dimensional (2D) images is dependent on consistent conditions such as lighting, pose and facial expression. We are developing a multi-view face recognition system that utilizes three-dimensional (3D) information about the face to make the system more robust to these variations. This work describes a procedure for constructing a database of 3D face models and matching this database to 2.5D face scans which are captured from different views, using coordinate system invariant properties of the facial surface. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. A robust similarity metric is defined for matching, based on an iterative closest point (ICP) registration process. Results are given for matching a database of 18 3D face models with 113 2.5D face scans.
---
paper_title: Three-Dimensional Face Reconstruction From a Single Image by a Coupled RBF Network
paper_content:
Reconstruction of a 3-D face model from a single 2-D face image is fundamentally important for face recognition and animation because the 3-D face model is invariant to changes of viewpoint, illumination, background clutter, and occlusions. Given a coupled training set that contains pairs of 2-D faces and the corresponding 3-D faces, we train a novel coupled radial basis function network (C-RBF) to recover the 3-D face model from a single 2-D face image. The C-RBF network explores: 1) the intrinsic representations of 3-D face models and those of 2-D face images; 2) mappings between a 3-D face model and its intrinsic representation; and 3) mappings between a 2-D face image and its intrinsic representation. Since a particular face can be reconstructed by its nearest neighbors, we can assume that the linear combination coefficients for a particular 2-D face image reconstruction are identical to those for the corresponding 3-D face model reconstruction. Therefore, we can reconstruct a 3-D face model by using a single 2-D face image based on the C-RBF network. Extensive experimental results on the BU3D database indicate the effectiveness of the proposed C-RBF network for recovering the 3-D face model from a single 2-D face image.
---
paper_title: Two-dimensional PCA: a new approach to appearance-based face representation and recognition
paper_content:
In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.
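The image covariance matrix and projection described in the abstract can be written down directly; the following NumPy sketch follows that construction, with the number of components left as an arbitrary parameter.

```python
import numpy as np

def two_d_pca(images, n_components=10):
    """images: (n, H, W) array of face images.
    Returns the W x d projection matrix X of 2DPCA, built from the eigenvectors
    of the image covariance (scatter) matrix G, without vectorising the images."""
    A_bar = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - A_bar
        G += D.T @ D                               # accumulate the image covariance matrix
    G /= len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X

def project(image, X):
    """Feature matrix Y = A X (H x d), compared directly (e.g. by Frobenius distance)
    for nearest-neighbour matching."""
    return image @ X
```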
---
paper_title: Summation invariant and its applications to shape recognition
paper_content:
A novel summation invariant of curves under transformation group action is proposed. This new invariant is less sensitive to noise than the differential invariant and does not require an analytical expression for the curve as the integral invariant does. We exploit this summation invariant to define a shape descriptor called a semi-local summation invariant and use it as a new feature for shape recognition. Tested on a database of noisy shapes of fish, it was observed that the summation invariant feature exhibited superior discriminating power compared to that of wavelet-based invariant features.
---
paper_title: 3D human face recognition using point signature
paper_content:
We present a novel face recognition algorithm based on the point signature, a representation for free-form surfaces. We treat the face recognition problem as a non-rigid object recognition problem. The rigid parts of the face of one person are extracted after registering the range data sets of faces having different facial expressions. These rigid parts are used to create a model library for efficient indexing. For a test face, models are indexed from the library and the most appropriate models are ranked according to their similarity with the test face. Each candidate model face can then be quickly and efficiently verified. Experimental results with range data involving six human subjects, each with four different facial expressions, have demonstrated the validity and effectiveness of our algorithm.
---
paper_title: Face Recognition Using Optimal Linear Components Of Range Images
paper_content:
This paper investigates the use of range images of faces for recognizing people. 3D scans of faces lead to range images that are linearly projected to low-dimensional subspaces for use in a classifier, say a nearest neighbor classifier or a support vector machine, to label people. Learning of subspaces is performed using an optimal component analysis, i.e. a stochastic optimization algorithm (on a Grassmann manifold) to find a subspace that maximizes classifier performance on the training image set. Results are presented for face recognition using the FSU face database, and are compared with standard component analyses such as PCA and ICA. This provides an efficient tool for analyzing certain aspects of facial shapes while avoiding the difficult task of geometric surface modeling.
---
paper_title: Curvature based human face recognition using depth weighted Hausdorff distance
paper_content:
In this paper, we propose a novel implementation of a person verification system based on the depth-weighted Hausdorff distance (DWHD) using the surface curvatures of the human face. This new method incorporates the depth information and curvatures of local facial features. The weighting function is based on depth values, which differ from person to person, so that the distances of the extracted curvature edge maps are emphasized accordingly. Experimental results based on thresholded maximum, minimum and Gaussian curvature maps show that DWHD achieves recognition rates of 92.8%, 97.6% and 92.8%, respectively, for 5 ranked candidates, and that combining the recognition results for the individual curvatures performs best.
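A hedged sketch of a depth-weighted, modified Hausdorff-style distance between two edge maps is given below; the averaging form and the way depth enters the weights are illustrative assumptions, since the paper's exact weighting function may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_directed_distance(edge_a, edge_b, weights_a):
    """edge_a (N,2) and edge_b (M,2): edge-pixel coordinates of two curvature edge maps;
    weights_a: per-point weights derived from depth values.
    Returns a weighted average of closest-point distances from A to B."""
    d, _ = cKDTree(edge_b).query(edge_a)
    return np.sum(weights_a * d) / np.sum(weights_a)

def dwhd(edge_a, edge_b, w_a, w_b):
    """Symmetric depth-weighted distance: the larger of the two directed measures."""
    return max(weighted_directed_distance(edge_a, edge_b, w_a),
               weighted_directed_distance(edge_b, edge_a, w_b))
```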
---
paper_title: Fast 3D face recognition based on normal map
paper_content:
This paper presents a 3D face recognition method aimed at biometric applications. The proposed method compares any two faces, represented as 3D polygonal surfaces, through their corresponding normal maps: a two-dimensional array which stores local curvature (mesh normals) as the RGB components of the pixels of a color image. The recognition approach, based on the computation of a difference map resulting from the comparison of normal maps, is simple yet fast and accurate. A weighting mask, automatically generated for each subject using a set of expression variations, improves robustness to a broad range of facial expressions. First results show the effectiveness of the method on a database of 3D faces featuring different genders, ages and expressions.
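A minimal sketch of the difference-map comparison, assuming normals are encoded as unit vectors mapped into 8-bit RGB (the decoding convention and the weighting step are assumptions, not the paper's exact formulation):

```python
import numpy as np

def normal_map_score(nm_a, nm_b, weight_mask=None):
    """nm_a, nm_b: (H, W, 3) uint8 normal maps of two faces; lower score = more similar."""
    def decode(nm):                                   # RGB in [0, 255] -> unit normal in [-1, 1]
        n = nm.astype(float) / 255.0 * 2.0 - 1.0
        return n / np.clip(np.linalg.norm(n, axis=2, keepdims=True), 1e-8, None)

    cos_sim = np.clip((decode(nm_a) * decode(nm_b)).sum(axis=2), -1.0, 1.0)
    diff_map = np.degrees(np.arccos(cos_sim))         # per-pixel angular difference
    if weight_mask is not None:                       # down-weight expression-prone regions
        diff_map = diff_map * weight_mask
    return float(diff_map.mean())
```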
---
paper_title: Face recognition in hyperspectral images
paper_content:
Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods. We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter. Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure, which is significantly different from person to person but relatively stable over time. The local spectral properties of human tissue are nearly invariant to face orientation and expression, which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types. We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose and expression.
---
paper_title: Face Recognition in the Dark
paper_content:
Previous research has established thermal infrared imagery of faces as a valid biometric and has shown high recognition performance in a wide range of scenarios. However, all these results have been obtained using eye locations that were either manually marked, or automatically detected in a coregistered visible image, making the realistic use of thermal infrared imagery alone impossible. In this paper we present the results of an eye detector on thermal infrared imagery and we analyze its impact on recognition performance. Our experiments show that although eyes cannot be detected as reliably in thermal images as in visible ones, some face recognition algorithms can still achieve adequate performance.
---
paper_title: Illumination Invariant Face Recognition Using Near-Infrared Images
paper_content:
Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in such constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination-invariant face recognition for indoor, cooperative-user applications. First, we present an active near-infrared (NIR) imaging system that is able to produce face images of good quality regardless of the visible lighting in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination-invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract the most discriminative features from a large pool of invariant LBP features and construct a highly accurate face matching engine. Finally, we present a system that is able to achieve accurate and fast face recognition in practice, in which a method is provided to deal with specular reflections of active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic groups.
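The learning-based feature selection described above is not reproduced here; the sketch below only shows the basic LBP-code-plus-regional-histogram representation that such a pipeline typically starts from (the 8-neighbour definition and the grid size are assumed, common choices):

```python
import numpy as np

def lbp_histograms(image, grid=(7, 7)):
    """Basic 8-neighbour LBP codes over a grayscale (e.g. NIR) face image,
    followed by concatenated histograms over a grid of regions."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(int) * (1 << bit)
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):                      # regional histograms keep spatial layout
        for j in range(grid[1]):
            block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)
```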
---
paper_title: Short Wavelength Infrared Face Recognition for Personalization
paper_content:
The paper describes an application of practical technologies to implement a low-cost, consumer-grade, single-chip biometric system based on face recognition using infrared imaging. The paper presents a system that consists of three stages that contribute to the face detection and recognition process. Each stage is explained with its individual contribution, alongside results of the tests performed for that stage. The system shows a high recognition rate when full frontal face images are fed to the system. The paper further discusses the application-based approach in the automotive world, with plans for further study. Recognition rates of the overall system are also presented.
---
paper_title: Discriminant Analysis for Recognition of Human Face Images
paper_content:
In this paper the discriminatory power of various human facial features is studied and a new scheme for Automatic Face Recognition (AFR) is proposed. Using Linear Discriminant Analysis (LDA) of different aspects of human faces in spatial domain, we first evaluate the significance of visual information in different parts/features of the face for identifying the human subject. The LDA of faces also provides us with a small set of features that carry the most relevant information for classification purposes. The features are obtained through eigenvector analysis of scatter matrices with the objective of maximizing between-class and minimizing within-class variations. The result is an efficient projection-based feature extraction and classification scheme for AFR. Soft decisions made based on each of the projections are combined, using probabilistic or evidential approaches to multisource data analysis. For medium-sized databases of human faces, good classification accuracy is achieved using very low-dimensional feature vectors.
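For reference, the scatter-matrix construction mentioned above can be sketched as follows (the ridge term and the assumption that a PCA step has already reduced the feature dimension are ours, not the paper's):

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, num_components):
    """Fisher LDA on (N, D) feature vectors X with labels y: maximize between-class
    scatter while minimizing within-class scatter."""
    X = np.asarray(X, dtype=float)
    D = X.shape[1]
    overall_mean = X.mean(axis=0)
    Sw, Sb = np.zeros((D, D)), np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
        d = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (d @ d.T)                        # between-class scatter
    # Generalized eigenproblem Sb v = lambda Sw v; a small ridge keeps Sw invertible.
    eigvals, eigvecs = eigh(Sb, Sw + 1e-6 * np.eye(D))
    order = np.argsort(eigvals)[::-1][:num_components]
    return eigvecs[:, order]                             # (D, num_components) projection
```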
---
paper_title: Infrared Face Recognition Using Distance Transforms
paper_content:
In this work we present an efficient approach for face recognition in the infrared spectrum. In the proposed approach, physiological features are extracted from thermal images in order to build a unique thermal faceprint. A distance transform is then used to obtain an invariant representation for face recognition. The extracted physiological features correspond to the distribution of blood vessels under the facial skin. This vascular network is unique to each individual and can be used for infrared face recognition. The obtained results are promising and show the effectiveness of the proposed scheme.
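A minimal sketch of the distance-transform step, assuming the vessel network has already been segmented into a binary mask (the normalization and the matching metric shown here are assumptions):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def thermal_faceprint(vessel_mask):
    """vessel_mask: boolean (H, W) array, True where a blood vessel was segmented.
    Each pixel receives its distance to the nearest vessel, which tolerates small
    segmentation shifts and yields a smoother, more invariant representation."""
    dist = distance_transform_edt(~np.asarray(vessel_mask, dtype=bool))
    return dist / (dist.max() + 1e-8)

def faceprint_similarity(print_a, print_b):
    return -float(np.mean(np.abs(print_a - print_b)))   # assumed simple matching score
```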
---
paper_title: Illumination invariant face recognition using thermal infrared imagery
paper_content:
A key problem for face recognition has been accurate identification under variable illumination conditions. Conventional video cameras sense reflected light, so that image gray values are a product of both intrinsic skin reflectivity and external incident illumination, thus obfuscating the intrinsic reflectivity of skin. Thermal emission from skin, on the other hand, is an intrinsic measurement that can be isolated from external illumination. We examine the invariance of Long-Wave InfraRed (LWIR) imagery with respect to different illumination conditions from the viewpoint of performance comparisons of two well-known face recognition algorithms applied to LWIR and visible imagery. We develop rigorous data collection protocols that formalize face recognition analysis for computer vision in the thermal IR.
---
paper_title: Multimodal face recognition: combination of geometry with physiological information
paper_content:
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
---
paper_title: Face recognition with visible and thermal infrared imagery
paper_content:
We present a comprehensive performance study of multiple appearance-based face recognition methodologies, on visible and thermal infrared imagery. We compare algorithms within the same imaging modality as well as between them. Both identification and verification scenarios are considered, and appropriate performance statistics reported for each case. Our experimental design is aimed at gaining full understanding of algorithm performance under varying conditions, and is based on Monte Carlo analysis of performance measures. This analysis reveals that under many circumstances, using thermal infrared imagery yields higher performance, while in other cases performance in both modalities is equivalent. Performance increases further when algorithms on visible and thermal infrared imagery are fused. Our study also provides a partial explanation for the multiple contradictory claims in the literature regarding performance of various algorithms on visible data sets.
---
paper_title: Infrared face recognition by using blood perfusion data
paper_content:
This paper presents a blood perfusion model of human faces based on thermodynamics and thermal physiology. The goal is to convert facial temperature data, which are sensitive to ambient temperature, into consistent blood perfusion data in order to improve the performance of infrared (IR) face recognition. Our extensive experiments demonstrate that blood perfusion data are less sensitive to ambient temperature when the body is in a steady state, and testing on real data shows that recognition using blood perfusion data is significantly more accurate than recognition using raw temperature data.
---
paper_title: Physiology-Based Face Recognition in the Thermal Infrared Spectrum
paper_content:
The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect for each subject five different pose images (center, mid-left profile, left profile, mid-right profile, and right profile) to be stored in the database. During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
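Only the thermal-minutia-point extraction step is sketched here (segmentation, pose estimation and matching are omitted); the 3x3 neighbour-count rule used to declare a branch point is a common but assumed definition:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def thermal_minutia_points(vessel_mask):
    """vessel_mask: boolean (H, W) vascular-network mask from a thermal face image.
    Returns (x, y) coordinates of skeleton branch points (candidate TMPs)."""
    skeleton = skeletonize(np.asarray(vessel_mask, dtype=bool))
    neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int),
                          mode='constant', cval=0) - skeleton.astype(int)
    branch = skeleton & (neighbours >= 3)     # skeleton pixels with 3+ skeleton neighbours
    ys, xs = np.nonzero(branch)
    return np.column_stack([xs, ys])
```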
---
paper_title: Highly Accurate and Fast Face Recognition Using Near Infrared Images
paper_content:
In this paper, we present a highly accurate, real-time face recognition system for cooperative-user applications. The novelties are: (1) a novel design of the camera hardware, and (2) a learning-based procedure for effective face and eye detection and recognition with the resulting imagery. The hardware minimizes environmental lighting and delivers face images with frontal lighting. This avoids many problems in subsequent face processing to a great extent. The face detection and recognition algorithms are based on a local feature representation. Statistical learning is applied to learn the most effective features and classifiers for building the face detection and recognition engines. The novel imaging system and the detection and recognition engines are integrated into a powerful face recognition system. Evaluated in a real-world user scenario, a condition that is harder than a technology evaluation such as the Face Recognition Vendor Tests (FRVT), the system has demonstrated excellent accuracy, speed and usability.
---
paper_title: Face Recognition in the Thermal Infrared Spectrum
paper_content:
We present a two-stage face recognition method based on infrared imaging and statistical modeling. In the first stage we reduce the search space by finding highly likely candidates before arriving at a singular conclusion during the second stage. Previous work has shown that Bessel forms model accurately the marginal densities of filtered components and can be used to find likely matches but not a unique solution. We present an enhancement to this approach by applying Bessel modeling on the facial region only rather than the entire image and by pipelining a classification algorithm to produce a unique solution. The detailed steps of our method are as follows: First, the faces are separated from the background using adaptive fuzzy connectedness segmentation. Second, Gabor filtering is used as a spectral analysis tool. Third, the derivative filtered images are modeled using two-parameter Bessel forms. Fourth, high probability subjects are short-listed by applying the L^2 -norm on the Bessel models. Finally, the resulting set of highly likely matches is fed to a Bayesian classifier to find the exact match. We show experimentally that segmentation of the facial regions results in better hypothesis pruning and classification performance. We also present comparative experimental results with an eigenface approach to highlight the potential of our method.
---
paper_title: Automatic Feature Localization in Thermal Images for Facial Expression Recognition
paper_content:
We propose an unsupervised local and global feature extraction paradigm to approach the problem of facial expression recognition in thermal images. Starting from local, low-level features computed at interest point locations, our approach combines the localization of facial features with the holistic approach. The detailed steps are as follows: First, face localization using bi-modal thresholding is accomplished in order to localize facial features by way of a novel interest point detection and clustering approach. Second, we compute representative eigenfeatures for feature extraction. Third, facial expression classification is performed with a Support Vector Machine committee. Finally, experiments on the IRIS dataset show that automation was achieved with good feature localization and classification performance.
---
paper_title: An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition
paper_content:
We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes-forehead and the nose regions, which are relatively less sensitive to expressions, and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.
---
paper_title: An evaluation of multimodal 2D+3D face biometrics
paper_content:
We report on the largest experimental study to date in multimodal 2D+3D face recognition, involving 198 persons in the gallery and either 198 or 670 time-lapse probe images. PCA-based methods are used separately for each modality and match scores in the separate face spaces are combined for multimodal recognition. Major conclusions are: 1) 2D and 3D have similar recognition performance when considered individually, 2) combining 2D and 3D results using a simple weighting scheme outperforms either 2D or 3D alone, 3) combining results from two or more 2D images using a similar weighting scheme also outperforms a single 2D image, and 4) combined 2D+3D outperforms the multi-image 2D result. This is the first (so far, only) work to present such an experimental control to substantiate multimodal performance improvement.
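The simple weighting scheme referred to above can be illustrated as follows (min-max normalization and the equal weight are our assumptions; the study's exact weights may differ):

```python
import numpy as np

def fuse_2d_3d_scores(scores_2d, scores_3d, w3d=0.5):
    """scores_2d, scores_3d: distances from one probe to every gallery subject,
    computed independently in the 2D and 3D PCA face spaces."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-8)

    fused = (1.0 - w3d) * minmax(scores_2d) + w3d * minmax(scores_3d)
    return int(np.argmin(fused))              # rank-one decision: best gallery index
```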
---
paper_title: Evaluation of automatic 4D face recognition using surface and texture registration
paper_content:
We introduce a novel technique for face recognition by using 4D face data that has been reconstructed from a stereo camera system. The 4D face data consists of a dense 3D mesh of vertices describing the facial geometry as well as a 2D texture map describing the facial appearance of each subject. The combination of geometry and texture information produces a complete photo-realistic model of each face. We propose a recognition algorithm based on two steps: The first step involves a 3D or 4D rigid registration of the faces. In the second step we introduce and evaluate different similarity metrics that measure the distance between pairs of closest points on two faces. A key advantage of the proposed technique is the fact that it can capture facial variations irrespective of the posture of the subject. We use this technique on 3D surface and texture data comprising 62 subjects at various postures and emotional expressions. Our results demonstrate that for subjects that look straight into the camera the recognition rate significantly increases when texture and geometry are combined in a 4D similarity metric.
---
paper_title: Facial feature detection and face recognition from 2D and 3D images
paper_content:
This paper presents a feature-based face recognition system based on both 3D range data and 2D gray-level facial images. Feature points are described by Gabor filter responses in the 2D domain and by the Point Signature in the 3D domain. Shape features extracted from 3D feature points and texture features extracted from 2D feature points are first projected into their own subspaces using PCA. In each subspace, the corresponding normalized shape and texture weight vectors are then integrated to form an augmented vector which is used to represent each facial image. For a given test facial image, the best match in the model library is identified according to a similarity function or a Support Vector Machine (SVM). Experimental results involving 50 persons with different facial expressions and viewpoints have demonstrated the efficiency of our algorithm.
---
paper_title: Spatially Optimized Data-Level Fusion of Texture and Shape for Face Recognition
paper_content:
Data-level fusion is believed to have the potential for enhancing human face recognition. However, due to a number of challenges, current techniques have failed to achieve its full potential. We propose spatially optimized data/pixel-level fusion of 3-D shape and texture for face recognition. Fusion functions are objectively optimized to model expression and illumination variations in linear subspaces for invariant face recognition. Parameters of adjacent functions are constrained to smoothly vary for effective numerical regularization. In addition to spatial optimization, multiple nonlinear fusion models are combined to enhance their learning capabilities. Experiments on the FRGC v2 data set show that spatial optimization, higher order fusion functions, and the combination of multiple such functions systematically improve performance, which is, for the first time, higher than score-level fusion in a similar experimental setup.
---
paper_title: Automatic 3D face recognition from depth and intensity Gabor features
paper_content:
As is well known, traditional 2D face recognition based on optical (intensity or color) images faces many challenges, such as illumination, expression, and pose variation. In fact, the human face generates not only 2D texture information but also 3D shape information. In this paper, we investigate what contributions depth and intensity information makes to face recognition when expression and pose variations are taken into account, and we propose a novel system for combining depth and intensity information to improve face recognition systems. In our system, local features described by Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selecting scheme embedded in linear discriminant analysis (LDA) and AdaBoost learning is proposed to select the most effective and most robust features and to construct a strong classifier. Experiments are performed on the CASIA 3D face database and the FRGC V2.0 database, two data sets with complex variations, including expressions, poses and long time lapses between two scans. Experimental results demonstrate the promising performance of the proposed method. In our system, all processes are performed automatically, thus providing a prototype of automatic face recognition combining depth and intensity information.
---
paper_title: Matching 2.5D face scans to 3D models
paper_content:
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x,y,z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
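The modified ICP used in the paper includes control-point handling and a dynamically generated candidate list; the sketch below shows only the basic point-to-point ICP core that such a surface-matching component builds on (iteration count and convergence handling are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(probe, gallery, iterations=30):
    """Align an (N, 3) probe scan to an (M, 3) gallery face; returns rotation R,
    translation t, and the final mean closest-point distance (matching score)."""
    src = np.asarray(probe, dtype=float).copy()
    dst = np.asarray(gallery, dtype=float)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                        # closest gallery point per probe point
        matched = dst[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)           # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                        # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    dists, _ = tree.query(src)
    return R_total, t_total, float(dists.mean())
```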
---
paper_title: Use of depth and colour eigenfaces for face recognition
paper_content:
In the present paper a face recognition technique is developed based on depth and colour information. The main objective of the paper is to evaluate three different approaches (colour, depth, combination of colour and depth) for face recognition and quantify the contribution of depth. The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces. Experimental results show significant gains attained with the addition of depth information.
---
paper_title: Face localization and authentication using color and depth images
paper_content:
This paper presents a complete face authentication system integrating both two-dimensional (color or intensity) and three-dimensional (3-D) range data, based on a low-cost 3-D sensor, capable of real-time acquisition of 3-D and color images. Novel algorithms are proposed that exploit depth information to achieve robust face detection and localization under conditions of background clutter, occlusion, face pose alteration, and harsh illumination. The well-known embedded hidden Markov model technique for face authentication is applied to depth maps and color images. To cope with pose and illumination variations, the enrichment of face databases with synthetically generated views is proposed. The performance of the proposed authentication scheme is tested thoroughly on two distinct face databases of significant size. Experimental results demonstrate significant gains resulting from the combined use of depth and color or intensity information.
---
paper_title: Face Recognition Using 2D and 3D Facial Data
paper_content:
Results are presented for the largest experimental study to date that investigates the comparison and combination of 2D and 3D face recognition. To our knowledge, this is also the only such study to incorporate significant time lapse between gallery and probe image acquisition, and to look at the effect of depth resolution. Recognition results are obtained in (1) a single-gallery, single-probe study, and (2) a single-gallery, multiple-probe study. A total of 275 subjects participated in one or more data acquisition sessions. Results are presented for gallery and probe datasets of 200 subjects imaged in both 2D and 3D, with one to thirteen weeks of time lapse between gallery and probe images of a given subject, yielding 951 pairs of 2D and 3D images. Using a PCA-based approach tuned separately for 2D and for 3D, we find that 3D outperforms 2D. However, we also find a multimodal rank-one recognition rate of 98.5% in the single-probe study and 98.8% in the multi-probe study, which is statistically significantly greater than either 2D or 3D alone.
---
paper_title: Passive Multimodal 2-D+3-D Face Recognition Using Gabor Features and Landmark Distances
paper_content:
We introduce a novel multimodal framework for face recognition based on local attributes calculated from range and portrait image pairs. Gabor coefficients are computed at automatically detected landmark locations and combined with powerful anthropometric features defined in the form of geodesic and Euclidean distances between pairs of fiducial points. We make the pragmatic assumption that the 2-D and 3-D data is acquired passively (e.g., via stereo ranging) with perfect registration between the portrait data and the range data. Statistical learning approaches are evaluated independently to reduce the dimensionality of the 2-D and 3-D Gabor coefficients and the anthropometric distances. Three parallel face recognizers that result from applying the best performing statistical learning schemes are fused at the match score-level to construct a unified multimodal (2-D+3-D) face recognition system with boosted performance. Performance of the proposed algorithm is evaluated on a large public database of range and portrait image pairs and found to perform quite well.
---
paper_title: 2D and 3D multimodal hybrid face recognition
paper_content:
We present a 2D and 3D multimodal hybrid face recognition algorithm and demonstrate its performance on the FRGC v1.0 data. We use hybrid (feature-based and holistic) matching for the 3D faces and a holistic matching approach on the 2D faces. Feature-based matching is performed by offline segmenting each 3D face in the gallery into three regions, namely the eyes-forehead, the nose and the cheeks. The cheeks are discarded to avoid facial expressions and hair. During recognition, each feature in the gallery is automatically matched, using a modified ICP algorithm, with a complete probe face. The holistic 3D and 2D face matching is performed using PCA. Individual matching scores are fused after normalization and the results are compared to the BEE baseline performances in order to provide some answers to the first three conjectures of the FRGC. Our multimodal hybrid algorithm substantially outperformed others by achieving 100% verification rate at 0.0006 FAR.
---
paper_title: Fuzzy fusion for face recognition
paper_content:
Face recognition based only on the visual spectrum is not accurate or robust enough to be used in uncontrolled environments. This paper describes a fusion of visible and infrared (IR) imagery for face recognition. In this paper, a scheme based on membership function and fuzzy integral is proposed to fuse information from the two modalities. Recognition rate is used to evaluate the fusion scheme. Experimental results show the scheme improves recognition performance substantially.
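The abstract does not specify which fuzzy integral is used; purely as a hedged illustration, a Sugeno fuzzy integral over two matcher outputs can be computed as below (the fuzzy densities are assumed values expressing how much each modality is trusted):

```python
import numpy as np

def sugeno_fuse(score_vis, score_ir, g_vis=0.6, g_ir=0.5):
    """Fuse per-class confidences in [0, 1] from visible and IR matchers.
    Applied to every class, the class with the largest fused value is selected."""
    # Sugeno lambda-measure for two sources: (1 + lam*g_vis)(1 + lam*g_ir) = 1 + lam
    lam = (1.0 - g_vis - g_ir) / (g_vis * g_ir)          # densities must lie in (0, 1)
    h, g = np.array([score_vis, score_ir]), np.array([g_vis, g_ir])
    order = np.argsort(h)[::-1]                          # evidence in decreasing order
    g_cum = [g[order[0]],
             g[order[0]] + g[order[1]] + lam * g[order[0]] * g[order[1]]]  # equals 1.0
    return float(max(min(h[order[0]], g_cum[0]), min(h[order[1]], g_cum[1])))
```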
---
paper_title: Optimized Visual and Thermal Image Fusion for Efficient Face Recognition
paper_content:
Data fusion of thermal and visual images is a solution to overcome the drawbacks present in individual thermal and visual images. Data fusion using different approaches is discussed and results are presented in this paper. Traditional fusion approaches do not produce useful results for face recognition. An optimized approach for face data fusion is developed which works equally well for face and non-face images. This paper presents the implementation of a human face recognition system using the proposed optimized data fusion of visual and thermal images. A Gabor filtering technique, which extracts facial features, is used as the face recognition technique to test the effectiveness of the fusion methods. It has been found that by using the proposed fusion technique the Gabor filter can recognize faces even with variable expressions and light intensities.
---
paper_title: PCA-based face recognition in infrared imagery: baseline and comparative studies
paper_content:
Techniques for face recognition generally fall into global and local approaches, with the principal component analysis (PCA) being the most prominent global approach. We use the PCA algorithm to study the comparison and combination of infrared and typical visible-light images for face recognition. We examine the effects of lighting change, facial expression change and passage of time between the gallery image and probe image. Experimental results indicate that when there is substantial passage of time (greater than one week) between the gallery and probe images, recognition from typical visible-light images may outperform that from infrared images. Experimental results also indicate that the combination of the two generally outperforms either one alone. This is the only study that we know of to focus on the issue of how passage of time affects infrared face recognition.
---
paper_title: Thermal face recognition in an operational scenario
paper_content:
We present results on the latest advances in thermal infrared face recognition, and its use in combination with visible imagery. Previous research by the authors has shown high performance under very controlled conditions, or questionable performance under a wider range of conditions. This paper shows results on the use of thermal infrared and visible imagery for face recognition in operational scenarios. In particular, we show performance statistics for outdoor face recognition and recognition across multiple sessions. Our results support the conclusion that face recognition performance with thermal infrared imagery is stable over multiple sessions, and that fusion of modalities increases performance. As measured by the number of images and number of subjects, this is the largest ever reported study on thermal face recognition.
---
paper_title: Fusion based approach for thermal and visible face recognition under pose and expresivity variation
paper_content:
Many existing works in face recognition are based solely on visible images. The use of bimodal systems based on visible and thermal images is seldom reported in face recognition, despite its advantage of combining the discriminative power of both modalities, under expressions or pose variations. In this paper, we investigate the combined advantages of thermal and visible face recognition on a Principal Component Analysis (PCA) induced feature space, with PCA applied on each spectrum, on a relatively new thermal/visible face database - OTCBVS, for large pose and expression variations. The recognition is done through k-nearest neighbors classification. Our findings confirm that the recognition results are improved by the aid of thermal images over the classical approaches on visible images alone, when a suitably chosen classifier score fusion is employed. We also propose a validation scheme for deriving the optimal fusion score between the two recognition modalities.
---
paper_title: Infrared and visible image fusion for face recognition
paper_content:
Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has other limitations, including that it is opaque to glass. As a result, IR imagery is very sensitive to facial occlusion caused by eyeglasses. In this paper, we propose fusing IR with visible images, exploiting the relatively lower sensitivity of visible imagery to occlusions caused by eyeglasses. Two different fusion schemes have been investigated in this study: (1) image-based fusion performed in the wavelet domain and (2) feature-based fusion performed in the eigenspace domain. In both cases, we employ Genetic Algorithms (GAs) to find an optimum strategy to perform the fusion. To evaluate and compare the proposed fusion schemes, we have performed extensive recognition experiments using the Equinox face dataset and the popular method of eigenfaces. Our results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.
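For the image-based scheme, a minimal wavelet-domain fusion of registered, equally sized visible and IR face images might look as follows (the GA-optimized fusion strategy of the paper is replaced here by simple average/maximum-magnitude rules, which is purely an illustrative assumption):

```python
import numpy as np
import pywt

def wavelet_fuse(visible, infrared, wavelet='db4'):
    """Single-level 2D DWT fusion: average the approximation bands and keep the
    larger-magnitude coefficient in each detail band, then invert the transform."""
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(np.asarray(visible, dtype=float), wavelet)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(np.asarray(infrared, dtype=float), wavelet)

    def max_abs(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused = ((cA_v + cA_i) / 2.0,
             (max_abs(cH_v, cH_i), max_abs(cV_v, cV_i), max_abs(cD_v, cD_i)))
    return pywt.idwt2(fused, wavelet)
```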
---
paper_title: Multimodal face recognition: combination of geometry with physiological information
paper_content:
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
---
paper_title: IR and visible light face recognition
paper_content:
This paper presents the results of several large-scale studies of face recognition employing visible-light and infrared (IR) imagery in the context of principal component analysis. We find that in a scenario involving time lapse between gallery and probe, and relatively controlled lighting, (1) PCA-based recognition using visible-light images outperforms PCA-based recognition using infrared images, (2) the combination of PCA-based recognition using visible-light and infrared imagery substantially outperforms either one individually. In a same-session scenario (i.e., near-simultaneous acquisition of gallery and probe images) neither modality is significantly better than the other. These experimental results reinforce prior research that employed a smaller data set, presenting a convincing argument that, even across a broad experimental spectrum, the behaviors enumerated above are valid and consistent.
---
paper_title: Optimum fusion of visual and thermal face images for recognition
paper_content:
In this paper, we investigate the optimum level at which to fuse visual and thermal face images into a single fused image. Because face recognition systems are used in critical applications such as authenticating authorized personnel in highly secured areas, criminal investigation, and online monitoring, they must be very robust and accurate. This work attempts to fuse visual and thermal face images at the optimum level in order to combine the advantages of both modalities. The Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database has been used for the visual and thermal images. Across all experiments, the maximum recognition rate obtained is 93%.
---
paper_title: Physiology-Based Face Recognition in the Thermal Infrared Spectrum
paper_content:
The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect for each subject five different pose images (center, mid-left profile, left profile, mid-right profile, and right profile) to be stored in the database. During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
---
paper_title: Fusion of Visual and Thermal Signatures with Eyeglass Removal for Robust Face Recognition
paper_content:
This paper describes a fusion of visual and thermal infrared (IR) images for robust face recognition. Two types of fusion methods are discussed: data fusion and decision fusion. Data fusion produces an illumination-invariant face image by adaptively integrating registered visual and thermal face images. Decision fusion combines matching scores of individual face recognition modules. In the data fusion process, eyeglasses, which block thermal energy, are detected from thermal images and replaced with an eye template. Three fusion-based face recognition techniques are implemented and tested: Data fusion of visual and thermal images (Df), Decision fusion with highest matching score (Fh), and Decision fusion with average matching score (Fa). A commercial face recognition software FaceIt® is used as an individual recognition module. Comparison results show that fusion-based face recognition techniques outperformed individual visual and thermal face recognizers under illumination variations and facial expressions.
---
paper_title: Fusion of Thermal and Visual Images for efficient Face Recognition using Gabor Filter
paper_content:
Face recognition from visual images is a difficult task due to illumination problems, while in thermal imaging the main problem is eyeglasses. A solution to both problems is data fusion of thermal and visual images. This paper presents the implementation of a human face recognition system using data fusion of visual and thermal images. A Gabor filtering technique, which extracts facial features, is used in the proposed face recognition. To our knowledge, this is the first visual and thermal data fusion recognition system that utilizes Gabor filters. The paper also discusses the performance improvement in face recognition along with memory requirement issues.
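The paper's exact filter parameters are not given in the abstract; the sketch below builds a small Gabor bank with typical (assumed) wavelengths and orientations and pools the filter responses of a fused face image into a feature vector:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma, theta, lam, gamma=0.5, psi=0.0, size=31):
    """Real Gabor kernel; parameter values are typical assumed choices."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_t / lam + psi)

def gabor_features(image, wavelengths=(4, 8, 16), orientations=8, block=8):
    """Convolve the (fused) face image with the bank and average-pool each response."""
    image = np.asarray(image, dtype=float)
    feats = []
    for lam in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(sigma=0.56 * lam, theta=np.pi * k / orientations, lam=lam)
            resp = np.abs(fftconvolve(image, kern, mode='same'))
            h, w = resp.shape
            pooled = (resp[:h - h % block, :w - w % block]
                      .reshape(h // block, block, w // block, block).mean(axis=(1, 3)))
            feats.append(pooled.ravel())
    return np.concatenate(feats)
```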
---
paper_title: Facial Recognition Using Multisensor Images Based on Localized Kernel Eigen Spaces
paper_content:
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
---
paper_title: Overview of the face recognition grand challenge
paper_content:
Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.
---
paper_title: Multimodal face recognition: combination of geometry with physiological information
paper_content:
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
---
paper_title: Fuzzy fusion for face recognition
paper_content:
Face recognition based only on the visual spectrum is not accurate or robust enough to be used in uncontrolled environments. This paper describes a fusion of visible and infrared (IR) imagery for face recognition. In this paper, a scheme based on membership function and fuzzy integral is proposed to fuse information from the two modalities. Recognition rate is used to evaluate the fusion scheme. Experimental results show the scheme improves recognition performance substantially.
---
paper_title: Evaluation of automatic 4D face recognition using surface and texture registration
paper_content:
We introduce a novel technique for face recognition by using 4D face data that has been reconstructed from a stereo camera system. The 4D face data consists of a dense 3D mesh of vertices describing the facial geometry as well as a 2D texture map describing the facial appearance of each subject. The combination of geometry and texture information produces a complete photo-realistic model of each face. We propose a recognition algorithm based on two steps: The first step involves a 3D or 4D rigid registration of the faces. In the second step we introduce and evaluate different similarity metrics that measure the distance between pairs of closest points on two faces. A key advantage of the proposed technique is the fact that it can capture facial variations irrespective of the posture of the subject. We use this technique on 3D surface and texture data comprising 62 subjects at various postures and emotional expressions. Our results demonstrate that for subjects that look straight into the camera the recognition rate significantly increases when texture and geometry are combined in a 4D similarity metric.
---
paper_title: Fusion based approach for thermal and visible face recognition under pose and expresivity variation
paper_content:
Many existing works in face recognition are based solely on visible images. The use of bimodal systems based on visible and thermal images is seldom reported in face recognition, despite its advantage of combining the discriminative power of both modalities, under expressions or pose variations. In this paper, we investigate the combined advantages of thermal and visible face recognition on a Principal Component Analysis (PCA) induced feature space, with PCA applied on each spectrum, on a relatively new thermal/visible face database - OTCBVS, for large pose and expression variations. The recognition is done through k-nearest neighbors classification. Our findings confirm that the recognition results are improved by the aid of thermal images over the classical approaches on visible images alone, when a suitably chosen classifier score fusion is employed. We also propose a validation scheme for deriving the optimal fusion score between the two recognition modalities.
---
paper_title: Optimum fusion of visual and thermal face images for recognition
paper_content:
In this paper, we investigate the optimum level at which to fuse visual and thermal face images into a single fused image. Because face recognition systems are used in critical applications such as authenticating authorized personnel in highly secured areas, criminal investigation, and online monitoring, they must be very robust and accurate. This work attempts to fuse visual and thermal face images at the optimum level in order to combine the advantages of both modalities. The Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database has been used for the visual and thermal images. Across all experiments, the maximum recognition rate obtained is 93%.
---
paper_title: A 3D Face Model for Pose and Illumination Invariant Face Recognition
paper_content:
Generative 3D face models are a powerful tool in computer vision. They provide pose and illumination invariance by modeling the space of 3D faces and the imaging process. The power of these models comes at the cost of an expensive and tedious construction process, which has led the community to focus on more easily constructed but less powerful models. With this paper we publish a generative 3D shape and texture model, the Basel Face Model (BFM), and demonstrate its application to several face recognition tasks. We improve on previous models by offering higher shape and texture accuracy due to a better scanning device and fewer correspondence artifacts due to an improved registration algorithm. The same 3D face model can be fit to 2D or 3D images acquired under different situations and with different sensors using an analysis-by-synthesis method. The resulting model parameters separate pose, lighting, imaging and identity parameters, which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters. We hope that the availability of this registered face model will spur research in generative models. Together with the model we publish a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.
---
paper_title: Multimodal 2D, 2.5D & 3D Face Verification
paper_content:
A multimodal face verification process is presented for standard 2D color images, 2.5D range images and 3D meshes. A normalization in orientation and position is essential for 2.5D and 3D images to obtain a corrected frontal image. This is achieved using the spin images of the nose tip and both eyes, which feed an SVM classifier. First, a traditional principal component analysis followed by an SVM classifier are applied to both 2D and 2.5D images. Second, an iterative closest point algorithm is used to match 3D meshes. In all cases, the equal error rate is computed for different kinds of images in the training and test phases. In general, 2.5D range images show the best results (0.1% EER for frontal images). A special improvement in success rate for turned faces has been obtained for normalized 2.5D and 3D images compared to standard 2D images.
---
paper_title: SCface – surveillance cameras face database
paper_content:
In this paper we describe a database of static images of human faces. Images were taken in uncontrolled indoor environment using five video surveillance cameras of various qualities. Database contains 4,160 static images (in visible and infrared spectrum) of 130 subjects. Images from different quality cameras should mimic real-world conditions and enable robust face recognition algorithms testing, emphasizing different law enforcement and surveillance use case scenarios. In addition to database description, this paper also elaborates on possible uses of the database and proposes a testing protocol. A baseline Principal Component Analysis (PCA) face recognition algorithm was tested following the proposed protocol. Other researchers can use these test results as a control algorithm performance score when testing their own algorithms on this dataset. Database is available to research community through the procedure described at www.scface.org .
---
paper_title: A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference
paper_content:
To date, most facial expression analysis has been based on visible and posed expression databases. Visible images, however, are easily affected by illumination variations, while posed expressions differ in appearance and timing from natural ones. In this paper, we propose and establish a natural visible and infrared facial expression database, which contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database includes the apex expressional images with and without glasses. As an elementary assessment of the usability of our spontaneous database for expression recognition and emotion inference, we conduct visible facial expression recognition using four typical methods, including the eigenface approach [principle component analysis (PCA)], the fisherface approach [PCA + linear discriminant analysis (LDA)], the Active Appearance Model (AAM), and the AAM-based + LDA. We also use PCA and PCA+LDA to recognize expressions from infrared thermal images. In addition, we analyze the relationship between facial temperature and emotion through statistical analysis. Our database is available for research purposes.
---
paper_title: Representation Plurality and Fusion for 3-D Face Recognition
paper_content:
In this paper, we present an extensive study of 3D face recognition algorithms and examine the benefits of various score-, rank-, and decision-level fusion rules. We investigate face recognizers from two perspectives: the data representation techniques used and the feature extraction algorithms that match best each representation type. We also consider novel applications of various feature extraction techniques such as discrete Fourier transform, discrete cosine transform, nonnegative matrix factorization, and principal curvature directions to the shape modality. We discuss and compare various classifier combination methods such as fixed rules and voting- and rank-based fusion schemes. We also present a dynamic confidence estimation algorithm to boost fusion performance. In identification experiments performed on FRGC v1.0 and FRGC v2.0 face databases, we have tried to find the answers to the following questions: 1) the relative importance of the face representation techniques vis-a-vis the types of features extracted; 2) the impact of the gallery size; 3) the conditions, under which subspace methods are preferable, and the compression factor; 4) the most advantageous fusion level and fusion methods; 5) the role of confidence votes in improving fusion and the style of selecting experts in the fusion; and 6) the consistency of the conclusions across different databases.
---
paper_title: An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition
paper_content:
We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes- forehead and the nose regions, which are relatively less sensitive to expressions and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.
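The region-based matching above builds on a modified iterative closest point (ICP) algorithm whose modifications are not detailed here; below is a hedged sketch of plain rigid ICP (nearest-neighbour correspondences plus the SVD-based Kabsch transform), with cloud sizes and the iteration count chosen purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=20):
    """Minimal rigid ICP: iteratively align 'source' (N,3) to 'target' (M,3)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iter):
        # 1) Correspondences: closest target point for every source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2) Best rigid transform (Kabsch / SVD) between the matched sets.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3) Apply the transform and iterate.
        src = src @ R.T + t
    dists, _ = tree.query(src)
    return src, np.sqrt(np.mean(dists ** 2))

# Toy usage: the target is a rotated and translated copy of the source cloud.
rng = np.random.default_rng(1)
source = rng.random((200, 3))
angle = 0.2
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
target = source @ Rz.T + np.array([0.1, -0.05, 0.02])
aligned, rmse = icp(source, target)
print(f"post-alignment RMSE: {rmse:.5f}")
```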
---
paper_title: Optimized Visual and Thermal Image Fusion for Efficient Face Recognition
paper_content:
Data fusion of thermal and visual images is a way to overcome the drawbacks present in the individual thermal and visual modalities. Data fusion using different approaches is discussed and results are presented in this paper. Traditional fusion approaches do not produce useful results for face recognition. An optimized approach to face data fusion is developed which works equally well for face and non-face images. This paper presents the implementation of a human face recognition system using the proposed optimized data fusion of visual and thermal images. A Gabor filtering technique, which extracts facial features, is used as the face recognition method to test the effectiveness of the fusion techniques. It has been found that with the proposed fusion technique the Gabor filter can recognize faces even under variable expressions and light intensities.
---
paper_title: Iterative Closest Normal Point for 3D Face Recognition
paper_content:
The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, have seven and four times less error rates, respectively, compared to the best existing methods on this database.
---
paper_title: Illumination Invariant Face Recognition Using Near-Infrared Images
paper_content:
Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in thus-constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination invariant face recognition for indoor, cooperative-user applications. First, we present an active near infrared (NIR) imaging system that is able to produce face images of good condition regardless of visible lights in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract most discriminative features from a large pool of invariant LBP features and construct a highly accurate face matching engine. Finally, we present a system that is able to achieve accurate and fast face recognition in practice, in which a method is provided to deal with specular reflections of active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive, comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems, with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic groups
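As a rough picture of the LBP representation used above, the sketch below computes basic 3x3 LBP codes and a block-wise histogram descriptor; practical NIR pipelines typically use circular, multi-scale and uniform-pattern variants with learned feature selection, so the window size and grid layout here are simplifying assumptions.

```python
import numpy as np

def lbp_histogram(image, grid=(4, 4)):
    """Basic 3x3 LBP codes plus a block-wise histogram descriptor.

    image: 2-D grayscale array. Each interior pixel is compared with its 8
    neighbours; a neighbour >= the centre contributes a '1' bit, giving a
    code in [0, 255]. The code map is split into grid blocks and one
    256-bin histogram per block is concatenated into the descriptor.
    """
    img = np.asarray(image, dtype=np.float64)
    centre = img[1:-1, 1:-1]
    codes = np.zeros(centre.shape, dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= centre).astype(np.int64) * (1 << bit)
    # Block-wise histograms keep some spatial layout in the descriptor.
    hists = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist / max(block.size, 1))
    return np.concatenate(hists)

# Toy usage on a random image standing in for an NIR face crop.
rng = np.random.default_rng(2)
descriptor = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
print(descriptor.shape)   # (4 * 4 * 256,) = (4096,)
```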
---
paper_title: 3D Face Recognition Using 3D Alignment for PCA
paper_content:
This paper presents a 3D approach for recognizing faces based on Principal Component Analysis (PCA). The approach addresses the issue of proper 3D face alignment required by PCA for maximum data compression and good generalization performance for new untrained faces. This issue has traditionally been addressed by 2D data normalization, a step that eliminates 3D object size information important for the recognition process. We achieve correspondence of facial points by registering a 3D face to a scaled generic 3D reference face and subsequently perform a surface normal search algorithm. 3D scaling of the generic reference face is performed to enable better alignment of facial points while preserving important 3D size information in the input face. The benefits of this approach for 3D face recognition and dimensionality reduction have been demonstrated on components of the Face Recognition Grand Challenge (FRGC) database versions 1 and 2.
---
paper_title: An Expression Deformation Approach to Non-rigid 3D Face Recognition
paper_content:
The accuracy of non-rigid 3D face recognition approaches is highly influenced by their capacity to differentiate between the deformations caused by facial expressions from the distinctive geometric attributes that uniquely characterize a 3D face, interpersonal disparities. We present an automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression. The patterns of expression deformations are first learnt from training data in PCA eigenvectors. These patterns are then used to morph out the expression deformations. Similarity measures are extracted by matching the morphed 3D faces. PCA is performed in such a way it models only the facial expressions leaving out the interpersonal disparities. The approach was applied on the FRGC v2.0 dataset and superior recognition performance was achieved. The verification rates at 0.001 FAR were 98.35% and 97.73% for scans under neutral and non-neutral expressions, respectively.
---
paper_title: Spatially Optimized Data-Level Fusion of Texture and Shape for Face Recognition
paper_content:
Data-level fusion is believed to have the potential for enhancing human face recognition. However, due to a number of challenges, current techniques have failed to achieve its full potential. We propose spatially optimized data/pixel-level fusion of 3-D shape and texture for face recognition. Fusion functions are objectively optimized to model expression and illumination variations in linear subspaces for invariant face recognition. Parameters of adjacent functions are constrained to smoothly vary for effective numerical regularization. In addition to spatial optimization, multiple nonlinear fusion models are combined to enhance their learning capabilities. Experiments on the FRGC v2 data set show that spatial optimization, higher order fusion functions, and the combination of multiple such functions systematically improve performance, which is, for the first time, higher than score-level fusion in a similar experimental setup.
---
paper_title: Automatic 3D face recognition from depth and intensity Gabor features
paper_content:
As is well known, traditional 2D face recognition based on optical (intensity or color) images faces many challenges, such as illumination, expression, and pose variation. In fact, the human face generates not only 2D texture information but also 3D shape information. In this paper, we investigate what contributions depth and intensity information makes to face recognition when expression and pose variations are taken into account, and we propose a novel system for combining depth and intensity information to improve face recognition systems. In our system, local features described by Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selecting scheme embedded in linear discriminant analysis (LDA) and AdaBoost learning is proposed to select the most effective and most robust features and to construct a strong classifier. Experiments are performed on the CASIA 3D face database and the FRGC V2.0 database, two data sets with complex variations, including expressions, poses and long time lapses between two scans. Experimental results demonstrate the promising performance of the proposed method. In our system, all processes are performed automatically, thus providing a prototype of automatic face recognition combining depth and intensity information.
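The Gabor-wavelet features referred to above can be illustrated with the generic sketch below; the kernel parameters, scales, and orientations are assumed values, and the FFT-based convolution is circular, which differs in boundary handling from whatever implementation the paper used.

```python
import numpy as np

def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, wavelength=8.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel: a Gaussian-windowed cosine grating."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_t / wavelength)

def gabor_features(image, orientations=8, wavelengths=(4, 8, 16)):
    """Stack of Gabor responses obtained by (circular) FFT convolution."""
    responses = []
    F = np.fft.fft2(image)
    for wl in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(theta=np.pi * k / orientations, wavelength=wl)
            K = np.fft.fft2(kern, s=image.shape)      # zero-padded kernel FFT
            responses.append(np.real(np.fft.ifft2(F * K)))
    return np.stack(responses)                        # (scales*orients, H, W)

# Toy usage.
rng = np.random.default_rng(3)
feats = gabor_features(rng.random((64, 64)))
print(feats.shape)   # (24, 64, 64)
```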
---
paper_title: 3D Face Recognition using Mapped Depth Images
paper_content:
This paper addresses 3D face recognition from facial shape. Firstly, we present an effective method to automatically extract the ROI of the facial surface, which mainly depends on automatic detection of the facial bilateral symmetry plane and localization of the nose tip. Then we build a reference plane through the nose tip for calculating the relative depth values. Considering the non-rigid property of the facial surface, the ROI is triangulated and parameterized into an isomorphic 2D planar circle, attempting to preserve the intrinsic geometric properties. At the same time the relative depth values are also mapped. Finally we perform eigenface on the mapped relative depth image. The entire scheme is insensitive to pose variance. The experiment using the FRGC v1.0 database obtains a rank-1 identification score of 95%, which outperforms the PCA baseline method by 4% and demonstrates the effectiveness of our algorithm.
---
paper_title: Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach
paper_content:
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality
---
paper_title: Face recognition with visible and thermal infrared imagery
paper_content:
We present a comprehensive performance study of multiple appearance-based face recognition methodologies, on visible and thermal infrared imagery. We compare algorithms within the same imaging modality as well as between them. Both identification and verification scenarios are considered, and appropriate performance statistics reported for each case. Our experimental design is aimed at gaining full understanding of algorithm performance under varying conditions, and is based on Monte Carlo analysis of performance measures. This analysis reveals that under many circumstances, using thermal infrared imagery yields higher performance, while in other cases performance in both modalities is equivalent. Performance increases further when algorithms on visible and thermal infrared imagery are fused. Our study also provides a partial explanation for the multiple contradictory claims in the literature regarding performance of various algorithms on visible data sets.
---
paper_title: An efficient 3D face recognition approach based on the fusion of novel local low-level features
paper_content:
We present a novel 3D face recognition approach based on low-level geometric features that are collected from the eyes-forehead and the nose regions. These regions are relatively less influenced by the deformations that are caused by facial expressions. The extracted features proved to be efficient and robust in the presence of facial expressions. A region-based histogram descriptor computed from these features is used to uniquely represent a 3D face. A Support Vector Machine (SVM) is then trained as a classifier based on the proposed histogram descriptors to recognize any test face. In order to combine the contributions of the two facial regions (eyes-forehead and nose), both feature-level and score-level fusion schemes have been tested and compared. The proposed approach has been tested on the FRGC v2.0 and BU-3DFE datasets through a number of experiments and a high recognition performance was achieved. Based on the results of the "neutral vs. non-neutral" experiment of FRGC v2.0 and the "low-intensity vs. high-intensity" experiment of BU-3DFE, the feature-level fusion scheme achieved verification rates of 97.6% and 98.2% at 0.1% False Acceptance Rate (FAR) and identification rates of 95.6% and 97.7% on the two datasets, respectively. The experimental results have also shown that the feature-level fusion scheme outperformed the score-level fusion one.
---
paper_title: A Comparative Study of 3-D Face Recognition Under Expression Variations
paper_content:
Research in face recognition has continuously been challenged by extrinsic (head pose, lighting conditions) and intrinsic (facial expression, aging) sources of variability. While many survey papers on face recognition exist, in this paper, we focus on a comparative study of 3-D face recognition under expression variations. As a first contribution, 3-D face databases with expressions are listed, and the most important ones are briefly presented and their complexity is quantified using the iterative closest point (ICP) baseline recognition algorithm. This allows ranking the databases according to their inherent difficulty for face-recognition tasks. This analysis reveals that the FRGC v2 database can be considered as the most challenging because of its size, the presence of expressions and outliers, and the time lapse between the recordings. Therefore, we recommend using this database as a reference database to evaluate (expression-invariant) 3-D face-recognition algorithms. We also determine and quantify the most important factors that influence the performance. It appears that performance decreases 1) with the degree of nonfrontal pose, 2) for certain expression types, 3) with the magnitude of the expressions, 4) with an increasing number of expressions, and 5) for a higher number of gallery subjects. Future 3-D face-recognition algorithms should be evaluated on the basis of all these factors. As the second contribution, a survey of published 3-D face-recognition methods that deal with expression variations is given. These methods are subdivided into three classes depending on the way the expressions are handled. Region-based methods use expression-stable regions only, while other methods model the expressions either using an isometric or a statistical model. Isometric models assume the deformation because of expression variation to be (locally) isometric, meaning that the deformation preserves lengths along the surface. Statistical models learn how the facial soft tissue deforms during expressions based on a training database with expression labels. Algorithmic performances are evaluated by the comparison of recognition rates for identification and verification. No statistically significant differences in class performance are found between any pair of classes.
---
paper_title: Face localization and authentication using color and depth images
paper_content:
This paper presents a complete face authentication system integrating both two-dimensional (color or intensity) and three-dimensional (3-D) range data, based on a low-cost 3-D sensor, capable of real-time acquisition of 3-D and color images. Novel algorithms are proposed that exploit depth information to achieve robust face detection and localization under conditions of background clutter, occlusion, face pose alteration, and harsh illumination. The well-known embedded hidden Markov model technique for face authentication is applied to depth maps and color images. To cope with pose and illumination variations, the enrichment of face databases with synthetically generated views is proposed. The performance of the proposed authentication scheme is tested thoroughly on two distinct face databases of significant size. Experimental results demonstrate significant gains resulting from the combined use of depth and color or intensity information.
---
paper_title: Fusion of Thermal and Visual Images for efficient Face Recognition using Gabor Filter
paper_content:
Face recognition from visual images is a difficult task due to illumination problems, while in thermal imaging the main problem is eyeglasses. The solution to both of these problems is data fusion of thermal and visual images. This paper presents the implementation of a human face recognition system using data fusion of visual and thermal images. A Gabor filtering technique, which extracts facial features, is used in the proposed face recognition. To our knowledge, this is the first visual and thermal data fusion recognition system that utilizes Gabor filters. The paper also discusses the performance improvement of face recognition along with memory requirement issues.
---
paper_title: Facial Recognition Using Multisensor Images Based on Localized Kernel Eigen Spaces
paper_content:
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
---
paper_title: An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition
paper_content:
We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes- forehead and the nose regions, which are relatively less sensitive to expressions and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.
---
paper_title: Thermal face recognition in an operational scenario
paper_content:
We present results on the latest advances in thermal infrared face recognition, and its use in combination with visible imagery. Previous research by the authors has shown high performance under very controlled conditions, or questionable performance under a wider range of conditions. This paper shows results on the use of thermal infrared and visible imagery for face recognition in operational scenarios. In particular, we show performance statistics for outdoor face recognition and recognition across multiple sessions. Our results support the conclusion that face recognition performance with thermal infrared imagery is stable over multiple sessions, and that fusion of modalities increases performance. As measured by the number of images and number of subjects, this is the largest ever reported study on thermal face recognition.
---
paper_title: An evaluation of multimodal 2D+3D face biometrics
paper_content:
We report on the largest experimental study to date in multimodal 2D+3D face recognition, involving 198 persons in the gallery and either 198 or 670 time-lapse probe images. PCA-based methods are used separately for each modality and match scores in the separate face spaces are combined for multimodal recognition. Major conclusions are: 1) 2D and 3D have similar recognition performance when considered individually, 2) combining 2D and 3D results using a simple weighting scheme outperforms either 2D or 3D alone, 3) combining results from two or more 2D images using a similar weighting scheme also outperforms a single 2D image, and 4) combined 2D+3D outperforms the multi-image 2D result. This is the first (so far, only) work to present such an experimental control to substantiate multimodal performance improvement.
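The "simple weighting scheme" above is described only at a high level; the following hedged example shows one common realisation, min-max score normalisation followed by a weighted sum of 2D and 3D similarity matrices, with the weight chosen arbitrarily for illustration.

```python
import numpy as np

def minmax_normalise(scores):
    """Map raw match scores to [0, 1] so different modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(scores_2d, scores_3d, weight_2d=0.4):
    """Weighted-sum (score-level) fusion of two modalities.

    scores_*: (n_probes, n_gallery) similarity matrices, larger = better match.
    Returns the fused matrix and the rank-1 decision per probe.
    """
    s2 = minmax_normalise(scores_2d)
    s3 = minmax_normalise(scores_3d)
    fused = weight_2d * s2 + (1.0 - weight_2d) * s3
    return fused, fused.argmax(axis=1)

# Toy usage with random similarity matrices (5 probes, 198 gallery subjects).
rng = np.random.default_rng(4)
fused, rank1 = fuse_scores(rng.random((5, 198)), rng.random((5, 198)))
print(rank1)
```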
---
paper_title: Illumination Invariant Face Recognition Using Near-Infrared Images
paper_content:
Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in thus-constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination invariant face recognition for indoor, cooperative-user applications. First, we present an active near infrared (NIR) imaging system that is able to produce face images of good condition regardless of visible lights in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract most discriminative features from a large pool of invariant LBP features and construct a highly accurate face matching engine. Finally, we present a system that is able to achieve accurate and fast face recognition in practice, in which a method is provided to deal with specular reflections of active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive, comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems, with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic groups
---
paper_title: An Expression Deformation Approach to Non-rigid 3D Face Recognition
paper_content:
The accuracy of non-rigid 3D face recognition approaches is highly influenced by their capacity to differentiate between the deformations caused by facial expressions from the distinctive geometric attributes that uniquely characterize a 3D face, interpersonal disparities. We present an automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression. The patterns of expression deformations are first learnt from training data in PCA eigenvectors. These patterns are then used to morph out the expression deformations. Similarity measures are extracted by matching the morphed 3D faces. PCA is performed in such a way it models only the facial expressions leaving out the interpersonal disparities. The approach was applied on the FRGC v2.0 dataset and superior recognition performance was achieved. The verification rates at 0.001 FAR were 98.35% and 97.73% for scans under neutral and non-neutral expressions, respectively.
---
paper_title: Infrared and visible image fusion for face recognition
paper_content:
Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has other limitations, including that it is opaque to glass. As a result, IR imagery is very sensitive to facial occlusion caused by eyeglasses. In this paper, we propose fusing IR with visible images, exploiting the relatively lower sensitivity of visible imagery to occlusions caused by eyeglasses. Two different fusion schemes have been investigated in this study: (1) image-based fusion performed in the wavelet domain and (2) feature-based fusion performed in the eigenspace domain. In both cases, we employ Genetic Algorithms (GAs) to find an optimum strategy to perform the fusion. To evaluate and compare the proposed fusion schemes, we have performed extensive recognition experiments using the Equinox face dataset and the popular method of eigenfaces. Our results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.
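The GA-optimised wavelet-domain fusion itself is not reproduced here; as a hedged illustration of what fusion in the wavelet domain means, the sketch below applies a fixed, non-optimised rule (average the approximation band, keep the larger-magnitude detail coefficients) and assumes the PyWavelets package is available.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def wavelet_fuse(visible, infrared, wavelet="haar"):
    """Single-level wavelet-domain fusion of two registered grayscale images.

    Rule of thumb used here (not the GA-optimised strategy of the paper):
    average the approximation bands, keep the larger-magnitude detail
    coefficients, then invert the transform.
    """
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(infrared, wavelet)
    cA = 0.5 * (cA_v + cA_i)                          # low-frequency content
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = pywt.idwt2((cA, (pick(cH_v, cH_i),        # high-frequency detail
                             pick(cV_v, cV_i),
                             pick(cD_v, cD_i))), wavelet)
    return fused

# Toy usage with random images standing in for registered visible/IR faces.
rng = np.random.default_rng(5)
out = wavelet_fuse(rng.random((64, 64)), rng.random((64, 64)))
print(out.shape)   # (64, 64)
```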
---
paper_title: Face recognition with temporal invariance: A 3D aging model
paper_content:
The variation caused by aging has not received adequate attention compared with pose, lighting, and expression variations. Aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). While the facial age modeling has been widely studied in computer graphics community, only a few studies have been reported in computer vision literature on age-invariant face recognition. We propose an automatic aging simulation technique that can assist any existing face recognition engine for aging-invariant face recognition. We learn the aging patterns of shape and the corresponding texture in 3D domain by adapting a 3D morphable model to the 2D aging database (public domain FG-NET). At recognition time, each probe and all gallery images are modified to compensate for the age-induced variation using an intermediate 3D model deformation and a texture modification, prior to matching. The proposed approach is evaluated on a set of age-separated probe and gallery data using a state-of-the-art commercial face recognition engine, FaceVACS. Use of 3D aging model improves the rank-1 matching accuracy on FG-NET database from 28.0% to 37.8%, on average.
---
paper_title: Multimodal face recognition: combination of geometry with physiological information
paper_content:
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
---
paper_title: IR and visible light face recognition
paper_content:
This paper presents the results of several large-scale studies of face recognition employing visible-light and infrared (IR) imagery in the context of principal component analysis. We find that in a scenario involving time lapse between gallery and probe, and relatively controlled lighting, (1) PCA-based recognition using visible-light images outperforms PCA-based recognition using infrared images, (2) the combination of PCA-based recognition using visible-light and infrared imagery substantially outperforms either one individually. In a same-session scenario (i.e., near-simultaneous acquisition of gallery and probe images) neither modality is significantly better than the other. These experimental results reinforce prior research that employed a smaller data set, presenting a convincing argument that, even across a broad experimental spectrum, the behaviors enumerated above are valid and consistent.
---
paper_title: 3-D Face Recognition Using eLBP-Based Facial Description and Local Feature Hybrid Matching
paper_content:
This paper presents an effective method for 3-D face recognition using a novel geometric facial representation along with a local feature hybrid matching scheme. The proposed facial surface description is based on a set of facial depth maps extracted by multiscale extended Local Binary Patterns (eLBP) and enables an efficient and accurate description of local shape changes; it thus enhances the distinctiveness of smooth and similar facial range images generated by preprocessing steps. The following matching strategy is SIFT-based and performs in a hybrid way that combines local and holistic analysis, robustly associating the keypoints between two facial representations of the same subject. As a result, the proposed approach proves robust to facial expression variations, partial occlusions, and moderate pose changes, and the last property makes our system registration-free for nearly frontal face models. The proposed method was experimented on three public datasets, i.e. FRGC v2.0, Gavab, and Bosphorus. It displays a rank-one recognition rate of 97.6% and a verification rate of 98.4% at a 0.001 FAR on the FRGC v2.0 database without any face alignment. Additional experiments on the Bosphorus dataset further highlight the advantages of the proposed method with regard to expression changes and external partial occlusions. The last experiment carried out on the Gavab database demonstrates that the entire system can also deal with faces under large pose variations and even partially occluded ones, when only aided by a coarse alignment process.
---
paper_title: Subject-Specific and Pose-Oriented Facial Features for Face Recognition Across Poses
paper_content:
Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment to the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is proven to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
---
paper_title: Physiology-Based Face Recognition in the Thermal Infrared Spectrum
paper_content:
The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect five different pose images for each subject to be stored in the database (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
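To make the notion of thermal minutia points more concrete, the sketch below finds branching points of an already-segmented binary vascular map by skeletonising it and counting neighbours; the vessel segmentation and Bayesian face delineation steps of the paper are not shown, and the scikit-image dependency is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize   # assumed available

def branch_points(vessel_mask):
    """Branching points of a binary vascular map (TMP-like features).

    vessel_mask: 2-D boolean array marking vessel pixels (the segmentation of
    the thermal image is assumed to have been done already). A skeleton pixel
    with three or more skeleton neighbours is treated as a branching point.
    """
    skeleton = skeletonize(vessel_mask)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    return np.argwhere(skeleton & (neighbours >= 3))   # (row, col) coordinates

# Toy usage: two one-pixel-wide "vessels" crossing at (20, 20).
mask = np.zeros((41, 41), dtype=bool)
mask[20, 5:36] = True     # horizontal vessel
mask[5:36, 20] = True     # vertical vessel
print(branch_points(mask))
```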
---
paper_title: A Comparative Study of 3-D Face Recognition Under Expression Variations
paper_content:
Research in face recognition has continuously been challenged by extrinsic (head pose, lighting conditions) and intrinsic (facial expression, aging) sources of variability. While many survey papers on face recognition exist, in this paper, we focus on a comparative study of 3-D face recognition under expression variations. As a first contribution, 3-D face databases with expressions are listed, and the most important ones are briefly presented and their complexity is quantified using the iterative closest point (ICP) baseline recognition algorithm. This allows ranking the databases according to their inherent difficulty for face-recognition tasks. This analysis reveals that the FRGC v2 database can be considered as the most challenging because of its size, the presence of expressions and outliers, and the time lapse between the recordings. Therefore, we recommend using this database as a reference database to evaluate (expression-invariant) 3-D face-recognition algorithms. We also determine and quantify the most important factors that influence the performance. It appears that performance decreases 1) with the degree of nonfrontal pose, 2) for certain expression types, 3) with the magnitude of the expressions, 4) with an increasing number of expressions, and 5) for a higher number of gallery subjects. Future 3-D face-recognition algorithms should be evaluated on the basis of all these factors. As the second contribution, a survey of published 3-D face-recognition methods that deal with expression variations is given. These methods are subdivided into three classes depending on the way the expressions are handled. Region-based methods use expression-stable regions only, while other methods model the expressions either using an isometric or a statistical model. Isometric models assume the deformation because of expression variation to be (locally) isometric, meaning that the deformation preserves lengths along the surface. Statistical models learn how the facial soft tissue deforms during expressions based on a training database with expression labels. Algorithmic performances are evaluated by the comparison of recognition rates for identification and verification. No statistically significant differences in class performance are found between any pair of classes.
---
paper_title: 3D Face Recognition under Expressions, Occlusions, and Pose Variations
paper_content:
We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
---
paper_title: A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition
paper_content:
This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology.
---
| Title: Recent Advances on Singlemodal and Multimodal Face Recognition: A Survey
Section 1: INTRODUCTION
Description 1: Introduce the goals and challenges of face recognition, and provide an overview of visual, 3D, and infrared face recognition methodologies.
Section 2: Visual Face Recognition
Description 2: Discuss the techniques and challenges associated with visual face recognition, highlighting the high performance achieved in controlled environments and the issues posed by varying factors such as illumination, pose, and expression.
Section 3: Three-Dimensional Face Recognition
Description 3: Explore the methods and benefits of 3D face recognition, including its robustness to pose and illumination variations, various approaches (feature-based and holistic), and the challenges posed by expression variations and data capture devices.
Section 4: Infrared Face Recognition
Description 4: Review the recent techniques of IR-based face recognition, categorized into statistical approaches and feature-based methods, and discuss the invariance of IR images to makeup and illumination changes.
Section 5: MULTIMODAL FACE RECOGNITION
Description 5: Examine the various multimodal face recognition techniques, including the fusion of visual, 3D, and infrared modalities, and discuss the benefits and challenges of combining multiple sensors and data types for improved recognition accuracy.
Section 6: FACE DATABASES AND PERFORMANCE EVALUATION
Description 6: Characterize the datasets used for training and testing face recognition methods, and summarize the performance of different methodologies across various databases, highlighting the importance of gallery and probe dataset sizes and variations.
Section 7: Discussion
Description 7: Provide a comprehensive discussion on the remaining challenges in face recognition, such as varying expressions, poses, and illumination, and the benefits and limitations of 3D and IR face recognition in addressing these issues.
Section 8: CONCLUSION
Description 8: Summarize the advances in face recognition, emphasizing the potential of multimodal techniques to overcome the limitations of individual modalities and offering insights on the future directions of face recognition research. |
Interfacial Structures and Properties of Organic Materials for Biosensors: An Overview | 12 | ---
paper_title: Oleophobic monolayers: I. Films adsorbed from solution in non-polar liquids☆
paper_content:
It was found that certain types of polar organic molecules are adsorbed from solutions in non-polar solvents to form well-oriented monolayers on polished solid surfaces. Such monolayers imparted both hydrophobic and oleophobic properties to the polished surfaces of a variety of metallic and non-metallic solids and could be formed from a large variety of solvents, those used ranging from hydrocarbons like hexadecane, mineral oils, benzene, methylnaphthalene and dicyclohexyl to other solvents such as carbon disulfide, carbon tetrachloride, bromobenzene and diphenyloxide. ::: ::: It is shown that the mechanism of the formation of these films on platinum was reversible adsorption from solution and that it definitely was not the accumulation of insoluble films floating at the liquid-air interface as in the Langmuir-Blodgett method. It is concluded that the adsorbed films were made up of almost vertically oriented molecules which were nearly close-packed and attached to the surface through a surface active or polar group. Attention is called to the possibility that such films do not conform to the shape of the solid surface but might bridge over those surface depressions whose areas are not too great compared to the cross-sectional area of the molecules. ::: ::: In order that a compound be able to adsorb as an oleophobic monolayer it appears necessary that its molecular structure be in keeping with the following requirements. First, the molecules of the compound must be capable of approximating close-packed orientation in a monolayer. Second, the surface active or polar group must be located at one extremity of the molecule and one or more methyl groups must be located at the opposite extremity. Finally, the molecules must adsorb to a fiat solid surface with sufficient close-packing so that the outermost portion of the film is essentially a plane surface, densely populated with methyl groups. ::: ::: It was found that compounds whose molecular configurations resemble a long rod or a flat plate, with a polar group attached to one end of the rod or rim of the plate and one or more methyl groups at the opposite end of the rod or rim of the plate, readily satisfy these requirements. Exceptions were found in the case of aliphatic, unbranched polar molecules containing one or more unsaturated bonds, for these did not form oleophobic films. It is suggested that such molecules adsorb on solids at both the polar end group and the unsaturated bonds so that, instead of orienting with their axes vertical to the surface, they are arranged more nearly horizontally. ::: ::: Observations were made to determine the smallest concentrations of each of the various types of oleophobic compounds which would permit the formation of oleophobic monolayers on platinum and pyrex. While weight concentrations of only 10−7 were required for primary aliphatic amines and monocarboxylic acids, roughly 1000 times more was needed for the aliphatic alcohols, esters and ketones and for cholesterol. ::: ::: Some of the relations of these findings to past theoretical and applied research are discussed.
---
paper_title: Electrochemical biosensor based on supported planar lipid bilayers for fast detection of pathogenic bacteria
paper_content:
This paper presents a new ion-channel biosensor based on supported bilayer lipid membrane for direct and fast detection of Campylobacter species. The sensing element of a biosensor is composed of a stainless-steel working electrode, which is covered by artificial bilayer lipid membrane (BLM). Antibodies to bacteria embedded into the BLM are used as channel forming proteins. The biosensor has a strong signal amplification effect, which is defined as the total number of ions transported across the BLM. The total number of (univalent) ions flowing through the channels is 10^10 ions s^-1. The biosensor showed a very good sensitivity and selectivity to Campylobacter species.
---
paper_title: Electrochemical impedance spectroscopy as a platform for reagentless bioaffinity sensing
paper_content:
Abstract Simple reagentless immunosensor formats have been difficult to achieve, particularly for electrochemical devices, since antigen/hapten recognition by an antibody does not directly lead to a reaction cascade. A direct reading electrochemical immunosensor would have major advantages with respect to speed, de-skilled analysis and the development of multi-analyte sensors. Electrically conducting polymers, such as poly(pyrrole), allow for the intimate association between a biological recognition element and a potential reporter polymeric chain. We have polymerised poly(pyrrole), loaded with avidin or antibody to luteinising hormone (LH), on gold interdigitated electrodes (IDE) and employed two-electrode electrochemical impedance spectroscopy (EIS) to “visualise” charge transfer through the polymer as the basis for a reagentless protocol. We investigated bulk redox processes in the poly(pyrrole) films by cyclic voltammetry in order to ascertain the redox state of the film prior to EIS. The redox process of the immobilised protein molecule was identified and allowed the focusing of the EIS studies on polymer associated with the immobilised bioaffinity molecule. The polymer displayed both polaronic and electronic charge transfer during EIS studies. A possible binding-dependent response, observed as a decrease in peak polaronic phase angle, occurred when a redox cycle was performed on the film following exposure to the appropriate analyte. The response of poly(pyrrole) films loaded with avidin to d -biotin and two derivatives was assessed, which was shown to be sensitive to electrode pre-treatment. Poly(pyrrole) films loaded with antibody to LH allowed a calibration for LH to be constructed between 1 and 800 IU/l. Importantly, the films were responsive to LH within the clinically relevant range of 1–10 IU/l.
---
paper_title: An optical biosensor using a fluorescent, swelling sensing element
paper_content:
Abstract An optical sensor based on coupling the swelling of a polymer gel to a change in fluorescence intensity is discussed. A fluorophore, an amine functional group and the enzyme glucose oxidase were each incorporated into a crosslinked polymer gel, which was formed on the end of a fibre optic rod. While the amount of fluorophore remains constant, the gel volume changes in response to a change in the ionization state of the amine moiety. This change is related to glucose concentration. These effects were examined along with subsequent changes in the fluorescence of the gel.
---
paper_title: Transducer aspects of biosensors
paper_content:
Abstract A biosensor is a device which converts biological activity into a quantifiable signal. The basic principles, operation and applications of some typical electrochemical, piezo-electric and optical devices are reviewed. These sensors promise to provide an analytically powerful and inexpensive alternative to conventional technologies by enabling the identification of target substances in the presence of a number of interfering species. In the extreme cases where this is not possible, an integrated micro-separation and detection system on a silicon wafer is proposed. Finally, a summary of future markets and prospects for the developing technology of biosensing is given, with particular reference to growth opportunities in the healthcare sector.
---
paper_title: Nanowire labeled direct-charge transfer biosensor for detecting Bacillus species.
paper_content:
A direct-charge transfer (DCT) biosensor was developed for the detection of the foodborne pathogen, Bacillus cereus. The biosensor was fabricated using antibodies as the sensing element and polyaniline nanowire as the molecular electrical transducer. The sensor design consisted of four membrane pads, namely, sample application, conjugate, capture and absorption pads. Two sets of polyclonal antibodies, secondary antibodies conjugated with polyaniline nanowires and capture antibodies were applied to the conjugate and the capture pads of the biosensor, respectively. The detection technique was based on capillary flow action which allowed the liquid sample to move from one membrane to another. The working principle involved antigen-antibody interaction and direct electron charge flow to generate a resistance signal that was being recorded. Detection from sample application to final results was completed in 6 min in a reagentless process. Experiments were conducted to find the best performance of the biosensors by varying polyaniline types and concentrations. Polyaniline protonated with hydrochloric acid, emeraldine salt and polyaniline protonated with perchloric acid were the three kinds of polyaniline used in this study. The biosensor sensitivity in pure cultures of B. cereus was found to be 10^1 to 10^2 CFU/ml. Results indicated that using emeraldine salt at a concentration of 0.25 g/ml gave the best biosensor performance in terms of sensitivity. The biosensor was also found to be specific in detecting the presence of B. cereus in a mixed culture of different Bacillus species and other foodborne pathogens. The speed, sensitivity and ease-of-use of this biosensor make it a promising device for rapid field-based diagnosis towards the protection of our food supply chain. The phenotypic and genotypic similarities between B. cereus and Bacillus anthracis will also allow this biosensor to serve as an excellent model for the detection of B. anthracis.
---
paper_title: Fibroblast cells: a sensing bioelement for glucose detection by impedance spectroscopy.
paper_content:
The modification of the electrical properties of fibroblasts in response to various glucose concentrations can serve as the basis for a new, original sensing device. The aim of the present study is to test a new biosensor based on impedance measurements using eukaryotic cells. Fibroblast cells were grown on a small optically transparent indium tin oxide semiconductor electrode. Electrochemical impedance spectroscopy (EIS) was used to measure the effect of D-glucose on the electrical properties of fibroblast cells. Further analyses of the EIS results were performed using equivalent circuits in order to model the electrical flow through the interface. The linear calibration curve was established in the range 0-14 mM. The specificity of the biosensors was verified using cytochalasin B as an inhibitor agent of the glucose transporters. The nonreactivity to sugars other than glucose was demonstrated. Such a biosensor could be applied to a more fundamental study of cell metabolism.
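The equivalent-circuit modelling mentioned above can be pictured with a simple Randles-type circuit; the component values below are invented for illustration and are not the fitted parameters of this study.

```python
import numpy as np

def randles_impedance(freq_hz, r_s=100.0, r_ct=5e3, c_dl=1e-6):
    """Complex impedance of a simple Randles-type equivalent circuit.

    R_s (solution resistance) in series with the parallel combination of
    R_ct (charge-transfer resistance) and C_dl (double-layer capacitance).
    All parameter values here are illustrative, not fitted to measured data.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_c = 1.0 / (1j * omega * c_dl)                  # capacitor impedance
    z_parallel = (r_ct * z_c) / (r_ct + z_c)         # R_ct || C_dl
    return r_s + z_parallel

# Sweep 0.1 Hz - 100 kHz and print a few |Z| / phase values (Bode-style).
freqs = np.logspace(-1, 5, 7)
for f, z in zip(freqs, randles_impedance(freqs)):
    print(f"{f:10.1f} Hz   |Z| = {abs(z):10.1f} ohm"
          f"   phase = {np.degrees(np.angle(z)):6.1f} deg")
```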
---
paper_title: Impedance Spectroscopy: A Powerful Tool for Rapid Biomolecular Screening and Cell Culture Monitoring
paper_content:
Dielectric spectroscopy or electrochemical impedance spectroscopy (EIS) is traditionally used in corrosion monitoring, coatings evaluation, batteries, and electrodeposition and semiconductor characterization. However, in recent years, it has been gaining widespread application in biotechnology, tissue engineering, characterization of biological cells, disease diagnosis and cell culture monitoring. This article discusses the principles and implementation of dielectric spectroscopy in these bioanalytical applications. It provides examples of EIS as label-free, mediator-free strategies for rapid screening of biocompatible surfaces, monitoring pathogenic bacteria, as well as the analysis of heterogeneous systems, especially biological cells and tissues. Descriptions are given of the application of nanoparticles to improve the analytical sensitivities in EIS. Specific examples are given of the detection of base pair mismatches in the DNA sequences of hepatitis B disease, Tay-Sachs disease and Microcystis spp. Others include the EIS detection of viable pathogenic bacteria and the influence of nanomaterials in enhancing biosensor performance. Expanding applications in tissue engineering, such as adsorption of proteins onto thiolated hexa(ethylene glycol)-terminated (EG6) self-assembled monolayers (SAMs), are discussed.
---
paper_title: Glucose biosensors based on organic light-emitting devices structurally integrated with a luminescent sensing element
paper_content:
A platform for photoluminescence (PL) based biosensing is demonstrated for glucose. The sensor is structurally integrated, i.e., individually addressable organic light-emitting device (OLED) pixels (serving as the light source) and the sensing element are fabricated on glass or plastic substrates attached back-to-back. This results in a very compact, potentially miniaturizable sensor, which should strongly impact PL-based biosensor technology. The sensing element is an oxygen-sensitive dye coembedded with glucose oxidase in a thin film or dissolved in solution. The glucose biosensor is demonstrated for two OLED∕dye pairs: [blue OLED]∕[Ru dye] and [green OLED]∕[Pt dye]. Both PL-intensity and PL-lifetime modes are demonstrated for each pair; the lifetime mode eliminates the need for frequent sensor calibration. The sensor performance is evaluated in terms of design, dynamic range, limit of detection, and stability. The use of the glucose biosensor in conjunction with an oxygen sensor is also discussed.
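Both the intensity and the lifetime modes mentioned above rely on collisional quenching of the oxygen-sensitive dye; as general background (not a formula quoted from the paper), the Stern-Volmer relation links the two readouts to the oxygen level:

I_0/I = \tau_0/\tau = 1 + K_{SV}[O_2]

where I_0 and \tau_0 are the unquenched photoluminescence intensity and lifetime and K_{SV} is the Stern-Volmer constant. Because glucose oxidase consumes oxygen in proportion to the glucose present, a rise in glucose lowers [O_2] and increases both I and \tau; the lifetime readout is what removes the need for frequent intensity calibration.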
---
paper_title: Aptamer biosensor for label-free impedance spectroscopy detection of proteins based on recognition-induced switching of the surface charge.
paper_content:
The recognition of proteins by aptamer-modified electrode transducers reverses the surface charge and leads to a novel label-free impedance spectroscopy bioelectronic detection protocol based on a decrease in the electron transfer resistance.
---
paper_title: Fibre-optic biosensor based on luminescence and immobilized enzymes: Microdetermination of sorbitol, ethanol and oxaloacetate
paper_content:
We have investigated highly selective and ultrasensitive biosensors based on luminescent enzyme systems linked to optical transducers. A fibre-optic sensor with immobilized enzymes was designed; the solid-phase bioreagent was maintained in close contact with the tip of a glass fibre bundle connected to the photomultiplier tube of a luminometer. A bacterial luminescence fibre-optic sensor was used for the microdetermination of NADH. Various NAD(P)-dependent enzymes (sorbitol dehydrogenase, alcohol dehydrogenase and malate dehydrogenase) were co-immobilized on preactivated polyamide membranes with the bacterial system and used for the microdetermination of sorbitol, ethanol and oxaloacetate at the nanomolar level with good precision.
---
paper_title: A high sensitivity amperometric biosensor using laccase as biorecognition element.
paper_content:
An amperometric flow biosensor using laccase from Rigidoporus lignosus as the biorecognition element was developed. The laccase was kinetically characterized towards various phenolics, both in solution and immobilized on a hydrophilic matrix by carbodiimide chemistry. A bioreactor connected to an amperometric flow cell by a FIA system was filled with the immobilized enzyme, and the operational conditions of this biosensor were optimized with respect to pH. Under the adopted experimental conditions, the immobilized enzyme oxidizes all the substrate molecules, avoiding the need for cumbersome calibration procedures. The biosensor sensitivity, which was found to be 100 nA/μM for some of the tested substrates, remained constant for more than 100 working days. This biosensor permits the detection of phenolics in aqueous solutions at concentrations in the nanomolar range and was successfully used to detect phenolics in wastewaters from an olive oil mill without sample preparation.
---
paper_title: Acoustic wave biosensors
paper_content:
Abstract Acoustic waves excited in a piezoelectric medium provide an attractive technology for realizing a family of biosensors that are sensitive, portable, cheap and small. In this paper a wide range of bulk and surface-generated acoustic waves are described and prototype sensing-element geometries are presented. Results obtained using several candidate acoustic wave biosensors are also discussed.
---
paper_title: Microbial biosensor array with transport mutants of Escherichia coli K12 for the simultaneous determination of mono-and disaccharides.
paper_content:
An automated flow-injection system with an integrated biosensor array using bacterial cells for the selective and simultaneous determination of various mono- and disaccharides is described. The selectivity of the individually addressable sensors of the array was achieved by combining the metabolic responses, measured as the O2 consumption, of bacterial mutants of Escherichia coli K12 lacking different transport systems for individual carbohydrates. Kappa-carrageenan was used as the immobilization matrix for entrapment of the bacterial cells in front of six individually addressable working electrodes of a screen-printed sensor array. The local consumption of molecular oxygen caused by the metabolic activity of the immobilized cells was amperometrically determined at the underlying screen-printed gold electrodes at a working potential of -600 mV vs. Ag/AgCl. Addition of mono- or disaccharides for which functional transport systems exist in the transport mutant strains of E. coli K12 used leads to an enhanced metabolic activity of the immobilized bacterial cells and to a concomitant depletion of oxygen at the electrode. Parallel determination of fructose, glucose, and sucrose was performed, demonstrating the high selectivity of the proposed analytical system.
---
paper_title: Detection of a Biomarker for Alzheimer's Disease from Synthetic and Clinical Samples Using a Nanoscale Optical Biosensor
paper_content:
A nanoscale optical biosensor based on localized surface plasmon resonance (LSPR) spectroscopy has been developed to monitor the interaction between the antigen, amyloid-β derived diffusible ligands (ADDLs), and specific anti-ADDL antibodies. Using the sandwich assay format, this nanosensor provides quantitative binding information for both antigen and second antibody detection that permits the determination of ADDL concentration and offers the unique analysis of the aggregation mechanisms of this putative Alzheimer's disease pathogen at physiologically relevant monomer concentrations. Monitoring the LSPR-induced shifts from both ADDLs and a second polyclonal anti-ADDL antibody as a function of ADDL concentration reveals two ADDL epitopes that have binding constants to the specific anti-ADDL antibodies of 7.3 × 10^12 M^-1 and 9.5 × 10^8 M^-1. The analysis of human brain extract and cerebrospinal fluid samples from control and Alzheimer's disease patients reveals that the LSPR nanosensor provides new information...
---
paper_title: Interdigitated array microelectrodes based impedance biosensors for detection of bacterial cells.
paper_content:
Impedance spectroscopy is a sensitive technique to characterize the chemical and physical properties of solid, liquid, and gas phase materials. In recent years this technique has gained widespread use in developing biosensors for monitoring the catalyzed reactions of enzymes; the bio-molecular recognition events of specific proteins, nucleic acids, whole cells, antibodies or antibody-related substances; the growth of bacterial cells; or the presence of bacterial cells in an aqueous medium. Interdigitated array microelectrodes (IDAM) have been integrated with impedance detection in order to miniaturize the conventional electrodes, enhance the sensitivity, and use the flexibility of electrode fabrication to suit the conventional electrochemical cell format or microfluidic devices for a variety of applications in chemistry and the life sciences. This article limits its discussion to IDAM-based impedance biosensors and their applications in the detection of bacterial cells. It elaborates on different IDAM geometries, their fabrication materials and design parameters, and the types of detection techniques. Additionally, the shortcomings of the current techniques and some upcoming trends in this area are also mentioned.
---
paper_title: Adapting selected nucleic acid ligands (aptamers) to biosensors.
paper_content:
A flexible biosensor has been developed that utilizes immobilized nucleic acid aptamers to specifically detect free nonlabeled non-nucleic acid targets such as proteins. In a model system, an anti-thrombin DNA aptamer was fluorescently labeled and covalently attached to a glass support. Thrombin in solution was selectively detected by following changes in the evanescent-wave-induced fluorescence anisotropy of the immobilized aptamer. The new biosensor can detect as little as 0.7 amol of thrombin in a 140-pL interrogated volume, has a dynamic range of 3 orders of magnitude, has an inter-sensing-element measurement precision of better than 4% RSD over the range 0-200 nM, and requires less than 10 min for sample analysis. The aptamer-sensor format is generalizable and should allow sensitive, selective, and fast determination of a wide range of analytes.
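For orientation, the quoted limit of detection corresponds to a molar concentration of roughly

\frac{0.7\times10^{-18}\ \mathrm{mol}}{140\times10^{-12}\ \mathrm{L}} = 5\times10^{-9}\ \mathrm{M} = 5\ \mathrm{nM},

which sits comfortably inside the stated 0-200 nM working range.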
---
paper_title: Theory of elasticity
paper_content:
A walking beam pumping unit is provided for pumping liquid from wells having gas pressure therein. The unit is driven by gas pressure from the well reciprocating the piston of a pneumatic cylinder up and down to swing the walking beam correspondingly and pump the liquid from the well. Gas under pressure is directed from the wellhead through a two-way valve to the opposite ends of the hydraulic cylinder in alternating fashion, so the piston has power strokes in opposite directions. Each power stroke, besides moving the walking beam, serves to recompress the gas used for the preceding power stroke sufficiently to inject it into the sales line. The setting of the two-way valve is controlled by a pneumatic actuator supplied with gas under pressure from the wellhead and having a thimble valve responsive to the up and down movement of the walking beam by means of adjustable stops carried thereon. The horsehead is counter-balanced by weights or by a pneumatic cylinder attached to the horsehead and actuated by movement thereof to provide counter-balancing gas pressure. In one form of the invention, the counter-balancing gas pressure is stored in a hollow skid assembly.
---
paper_title: A magnetoelastic bioaffinity-based sensor for avidin.
paper_content:
Abstract A magnetoelastic bioaffinity sensor coupled with biocatalytic precipitation is described for avidin detection. The non-specific adsorption characteristics of streptavidin on different functionalized sensor surfaces are examined. It is found that a biotinylated poly(ethylene glycol) (PEG) interface can effectively block non-specific adsorption of proteins. Coupled with the PEG immobilized sensor surface, alkaline phosphatase (AP) labeled streptavidin is used to track specific binding on the sensor. This mass-change-based signal is amplified by the accumulation on the sensor of insoluble products of 5-bromo-4-chloro-3-indolyl phosphate catalyzed by AP. The resulting mass loading on the sensor surface in turn shifts the resonance frequency of the magnetoelastic sensors, with an avidin detection limit of approximately 200 ng/ml.
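As general background on why mass amplification helps (a standard first-order relation for magnetoelastic ribbons, not a result of this paper), a small mass \Delta m deposited uniformly on a freely vibrating ribbon of mass M and unloaded resonance frequency f_0 shifts the resonance by approximately

\Delta f \approx -\frac{f_0}{2}\,\frac{\Delta m}{M},

so the insoluble product accumulated by the biocatalytic precipitation step multiplies the effective \Delta m, and hence the frequency shift, produced by each binding event.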
---
paper_title: Fibroblast cells: a sensing bioelement for glucose detection by impedance spectroscopy.
paper_content:
The modification of the electrical properties of fibroblasts by varying glucose concentrations can serve as the basis for a new, original sensing device. The aim of the present study is to test a new biosensor based on impedance measurements using eukaryotic cells. Fibroblast cells were grown on a small, optically transparent indium tin oxide semiconductor electrode. Electrochemical impedance spectroscopy (EIS) was used to measure the effect of D-glucose on the electrical properties of the fibroblast cells. Further analyses of the EIS results were performed using equivalent circuits in order to model the electrical flow through the interface. A linear calibration curve was established in the range 0-14 mM. The specificity of the biosensor was verified using cytochalasin B, an inhibitor of the glucose transporters, and its non-reactivity to sugars other than glucose was demonstrated. Such a biosensor could be applied to a more fundamental study of cell metabolism.
---
paper_title: Acoustic sensors based on surface-localized HPSWs for measurements in liquids
paper_content:
Abstract Acoustic wave probes based on horizontally polarized shear waves (HPSWs) in quartz plates are presented. These probes can be applied as physical or chemical sensors for measurements in liquids and to characterize thin solid films mechanically. An appropriate and comprehensive simulation tool has been developed and is used to work out the specific advantages of the proposed probing system, to interpret the experimental data and to optimize the probing devices for special measurement purposes. The use of additional coatings of known consistency is introduced, which leads to an enhanced sensitivity and a reduction of cross-sensitivities, in particular for surface-localized HPSWs in quartz, which have not yet been the subject of sensor research. Theoretical and experimental results confirm the huge potential of HPSW devices for sensing applications in liquids.
---
paper_title: A staphylococcal enterotoxin B magnetoelastic immunosensor.
paper_content:
A magnetoelastic immunosensor for the detection of staphylococcal enterotoxin B (SEB) is described. The magnetoelastic sensor is a newly developed mass/elasticity-based transducer of high sensitivity having a material cost of approximately $0.001/sensor. Affinity-purified rabbit anti-SEB antibody was covalently immobilized on magnetoelastic sensors of dimensions 6 mm × 2 mm × 28 μm. The affinity reaction of biotin-avidin and biocatalytic precipitation are used to amplify antigen-antibody binding events on the sensor surface. Horseradish peroxidase (HRP) and alkaline phosphatase were examined as the labeled enzymes to induce biocatalytic precipitation. The alkaline phosphatase substrate, 5-bromo-4-chloro-3-indolyl phosphate (BCIP), produces a dimer which binds tightly to the sensor surface, inducing a change in sensor resonance frequency. The biosensor demonstrates a linear shift in resonance frequency with staphylococcal enterotoxin B concentration between 0.5 and 5 ng/ml, with a detection limit of 0.5 ng/ml.
---
paper_title: Theoretical comparison of sensitivities of acoustic shear wave modes for (bio)chemical sensing in liquids
paper_content:
A theoretical comparison of the sensitivities of various acoustic shear wave modes applied in (bio)chemical sensing in a liquid environment is presented. The sensitivity, defined as the relative change of the oscillation frequency due to mass adsorption at a (bio)chemical interface, is obtained from perturbation theory. It is shown that the application of a Love wave mode for chemical compound sensing in liquids is very promising because of its high sensitivity.
---
paper_title: Mass sensitivity of two-layer shear horizontal plate wave sensors
paper_content:
Abstract Velocity and mass sensitivity formulae in explicit form for shear-horizontal (SH) plate wave sensors are presented. The sensor geometry consists of an isotropic plate of thickness $b$ and an isotropic thin layer of thickness $h$. The mass loading layer is assumed to be acoustically thin ($h \ll \lambda$), where $\lambda$ is the acoustic wavelength. The mass loading sensitivity, $S_m^v$, of the sensors used in velocity measurements decreases by a factor of $(1-C^2)$ for all the SH modes due to the effects of the elasticity of the mass loading layer, where $C^2 = V_{s2}^2/V_{s1}^2$, and $V_{s1}$ and $V_{s2}$ are the shear wave velocities of the two materials, respectively. Because of the inertial effect of the mass loading layer, $S_m^v$ decreases by a factor of $(1 + \rho_2 h/\rho_1 b)^{-1}$ for the lowest order mode and by a factor of $(1 + 2\rho_2 h/\rho_1 b)^{-1}$ for the other modes, where $\rho_1$ and $\rho_2$ are the densities of the plate and the loaded mass layer, respectively. We also show that the relation $S_m^v = (V/V_g)S_m^f$ is valid for the composite case, where $S_m^f$ is the sensitivity formula for sensors configured for resonant frequency measurements, and $V$ and $V_g$ are the phase and group velocities of the SH plate modes, respectively.
---
paper_title: Acoustic wave biosensors
paper_content:
Abstract Acoustic waves excited in a piezoelectric medium provide an attractive technology for realizing a family of biosensors that are sensitive, portable, cheap and small. In this paper a wide range of bulk and surface-generated acoustic waves are described and prototype sensing-element geometries are presented. Results obtained using several candidate acoustic wave biosensors are also discussed.
---
paper_title: Surface acoustic wave biosensors: a review
paper_content:
This review presents an overview of 20 years of worldwide development in the field of biosensors based on special types of surface acoustic wave (SAW) devices that permit the highly sensitive detection of biorelevant molecules in liquid media (such as water or aqueous buffer solutions). 1987 saw the first approaches, which used either horizontally polarized shear waves (HPSW) in a delay line configuration on lithium tantalate (LiTaO(3)) substrates or SAW resonator structures on quartz or LiTaO(3) with periodic mass gratings. The latter are termed "surface transverse waves" (STW), and they have comparatively low attenuation values when operated in liquids. Later Love wave devices were developed, which used a film resonance effect to significantly reduce attenuation. All of these sensor approaches were accompanied by the development of appropriate sensing films. First attempts used simple layers of adsorbed antibodies. Later approaches used various types of covalently bound layers, for example those utilizing intermediate hydrogel layers. Recent approaches involve SAW biosensor devices inserted into compact systems with integrated fluidics for sample handling. To achieve this, the SAW biosensors can be embedded into micromachined polymer housings. Combining these two features will extend the system to create versatile biosensor arrays for generic lab use or for diagnostic purposes.
---
paper_title: Theory, Instrumentation and Applications of Magnetoelastic Resonance Sensors: A Review
paper_content:
Thick-film magnetoelastic sensors vibrate mechanically in response to a time varying magnetic excitation field. The mechanical vibrations of the magnetostrictive magnetoelastic material launch, in turn, a magnetic field by which the sensor can be monitored. Magnetic field telemetry enables contact-less, remote-query operation that has enabled many practical uses of the sensor platform. This paper builds upon a review paper we published in Sensors in 2002 (Grimes, C.A.; et al. Sensors 2002, 2, 294–313), presenting a comprehensive review on the theory, operating principles, instrumentation and key applications of magnetoelastic sensing technology.
---
paper_title: Microbial biosensor array with transport mutants of Escherichia coli K12 for the simultaneous determination of mono-and disaccharides.
paper_content:
An automated flow-injection system with an integrated biosensor array using bacterial cells for the selective and simultaneous determination of various mono- and disaccharides is described. The selectivity of the individually addressable sensors of the array was achieved by combining the metabolic responses, measured as the O2 consumption, of bacterial mutants of Escherichia coli K12 lacking different transport systems for individual carbohydrates. Kappa-carrageenan was used as the immobilization matrix for entrapment of the bacterial cells in front of six individually addressable working electrodes of a screen-printed sensor array. The local consumption of molecular oxygen caused by the metabolic activity of the immobilized cells was amperometrically determined at the underlying screen-printed gold electrodes at a working potential of -600 mV vs. Ag/AgCl. Addition of mono- or disaccharides for which functional transport systems exist in the transport mutant strains of E. coli K12 used leads to an enhanced metabolic activity of the immobilized bacterial cells and to a concomitant depletion of oxygen at the electrode. Parallel determination of fructose, glucose, and sucrose was performed, demonstrating the high selectivity of the proposed analytical system.
---
paper_title: A remote query magnetostrictive viscosity sensor.
paper_content:
Magnetically soft, magnetostrictive metallic glass ribbons are used as in-situ remote query viscosity sensors. When immersed in a liquid, changes in the resonant frequency of the ribbon-like sensors are shown to correlate with the square root of the liquid viscosity and density product. An elastic wave model is presented that describes the sensor response as a function of the frictional forces acting upon the sensor surface.
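The correlation reported above is usually quoted in the form of a frequency shift proportional to \sqrt{\eta\rho_l}; one commonly cited closed form for a ribbon of density \rho_s and thickness d immersed in a liquid of viscosity \eta and density \rho_l is

\Delta f \approx -\frac{\sqrt{\pi f\,\eta\,\rho_l}}{2\pi\,\rho_s d},

stated here as the standard first-order result for magnetoelastic ribbons rather than as the exact expression used in the paper.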
---
paper_title: A novel Love-plate acoustic sensor utilizing polymer overlayers
paper_content:
A Love-plate sensor, consisting of a surface skimming bulk wave (SSBW) device coated with a polymer layer, was found to increase the acoustic signal through coupling of the SSBW wave to a Love wave. Insertion loss, phase and frequency measurements were used to assess the optimum thickness of the polymer layer and the sensitivity of the device to mass-loading and viscous coupling.
---
paper_title: Surface plasmon resonance for gas detection and biosensing
paper_content:
Abstract Surface plasmon resonance is a new optical technique in the field of chemical sensing. Under proper conditions the reflectivity of a thin metal film is extremely sensitive to optical variations in the medium on one side of it. This is due to the fact that surface plasmons are sensitive probes of the boundary conditions. The effect can be utilized in many ways. A description of how it can be used for gas detection is given, together with results from exploratory experiments with relevance to biosensing.
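As standard background for the SPR entries in this list (textbook relations, not specific to the paper above), the surface plasmon wavevector at a metal/dielectric interface and the prism-coupling resonance condition are

k_{sp} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\varepsilon_d}{\varepsilon_m+\varepsilon_d}}, \qquad \frac{\omega}{c}\,n_p\sin\theta = \mathrm{Re}\,(k_{sp}),

so a small change in the refractive index of the medium adjacent to the metal (\varepsilon_d = n_d^2) shifts the angle or wavelength at which the reflectivity dip occurs; this shift is the measured sensor signal.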
---
paper_title: Quantification of bacterial cells based on autofluorescence on a microfluidic platform.
paper_content:
Bacterial counts provide important information in processes such as pathogen detection and hygiene inspection, which are critical for public health and for food and pharmaceutical production. In this study, we demonstrate the quantification of the number of bacterial cells based on the autofluorescence from the cell lysate on a microfluidic chip. We tested three model pathogenic bacteria (Listeria monocytogenes F4244, Salmonella Enteritidis PT1 and Escherichia coli O157:H7 EDL 933). In the experiment, a plug of approximately 150 pL containing lysate from 240 to 4100 cells was injected into a microfluidic channel with downstream laser-induced fluorescence detection under electrophoresis conditions. We found that the autofluorescence intensity increased almost linearly with the number of cells for all three bacteria. The autofluorescence remained a single peak when the cell lysate contained a mixture of different bacterial species. We also demonstrate a simple microfluidic device that integrates entrapment and electrical lysis of bacterial cells with fluorescence detection. Such a device can carry out the quantification of bacterial cells based on lysate autofluorescence without off-chip procedures. This study offers a simple and fast solution for on-chip quantification of bacterial cells without labeling. We believe that the method can be extended to other bacterial species.
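A minimal sketch of how a near-linear intensity-versus-count calibration like the one reported above can be inverted to estimate cell numbers and a detection limit (the numbers below are hypothetical and illustrate only the standard slope and 3-sigma treatment, not the paper's data):

import numpy as np

# Hypothetical calibration: cell count in the plug vs. autofluorescence intensity (a.u.)
cells = np.array([240.0, 500.0, 1000.0, 2000.0, 4100.0])
intensity = np.array([1.1, 2.0, 3.9, 7.6, 15.2])

slope, intercept = np.polyfit(cells, intensity, 1)   # least-squares line

def estimate_cells(measured_intensity):
    # Invert the calibration to estimate the number of cells in a plug.
    return (measured_intensity - intercept) / slope

sigma_blank = 0.05                       # hypothetical standard deviation of blank runs
lod_cells = 3.0 * sigma_blank / slope    # 3-sigma limit of detection, in cells

print(f"slope = {slope:.4e} a.u. per cell, intercept = {intercept:.3f} a.u.")
print(f"cells at 5.0 a.u.: {estimate_cells(5.0):.0f}")
print(f"approximate detection limit: {lod_cells:.0f} cells")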
---
paper_title: Evanescent sensing of biomolecules and cells
paper_content:
A technique using the evanescent field of tapered fibers is developed for rapid, convenient, and accurate sensing of biomolecules and cells using small volumes of analytes, in the range of 150 μl. A tapered optical fiber was fabricated by heat pulling with a flame. A simple fiber-mounting device was developed to accommodate the optical fiber and provide a reaction chamber for analytes to interact with the tapered region. Using an analytical-grade spectrofluorometer, nicotinamide adenine dinucleotide (NADH), nicotinamide adenine dinucleotide phosphate (NADPH), and Chinese Hamster Ovary (CHO) cells at various concentrations were measured. A parameter, namely the product of the extinction coefficient and the light path, is used to characterize detection sensitivity. Results from biomolecules and cells show that the sensitivity of the tapered fiber is at least an order of magnitude higher than that obtained in a cuvette arrangement.
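For context on the evanescent-wave entries in this group (a textbook expression, not taken from the paper), the penetration depth of the evanescent field at the fiber/sample interface is

d_p = \frac{\lambda}{2\pi\sqrt{n_1^2\sin^2\theta - n_2^2}},

where n_1 and n_2 are the refractive indices of the fiber core and the surrounding medium and \theta is the internal angle of incidence; only analyte within roughly d_p of the tapered surface contributes to the measured absorbance or fluorescence, which is why the stronger, more exposed field of a tapered region raises sensitivity.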
---
paper_title: An Optical Biosensor for Rapid and Label-Free Detection of Cells
paper_content:
We report a broadly applicable optical method for rapid and label-free detection of as few as 45 cells. In this method, bacterial cells are detected by measuring the amount of laser light transmitted through a small glass well functionalized with antibodies which specifically recognize and capture the cells. The described approach is simple, rapid, economical, and promising for portable and high-throughput detection of a wide variety of pathogenic and infectious cells.
---
paper_title: Surface plasmon resonance on gratings as a novel means for gas sensing
paper_content:
Abstract A novel technique for sensing small changes in dielectric constant using the phenomenon of surface plasmon resonance has been developed. The principle of this technique is based on the measurement of a resonance maximum on a background of weak signal, giving improved signal-to-noise ratios over previously reported surface plasmon techniques. The sensing of the condensation of several organic vapours onto a silver surface is used to demonstrate the effectiveness of the technique. Its potential as a hand-held in-field gas sensor is discussed.
---
paper_title: Label-free fiber optic biosensor based on evanescent wave absorbance at 280 nm
paper_content:
Abstract Several analytes of interest, such as bacteria, viruses and some clinically important proteins and marker molecules, absorb light in the ultraviolet (UV) region. In this study, we have investigated the possibility of developing a label-free fiber-optic biosensor based on evanescent wave absorbance (EWA) at 280 nm to detect the presence of such analytes. A UV light-emitting diode (LED) with peak emission at 280 nm and a span of ±10 nm was chosen as the light source to limit the solarization of the fiber probes. Numerical simulations were performed to investigate the effect of fiber parameters and the wavelength of operation on EWA and its sensitivity. Experimental verifications proved the validity of the simulations. The absorbance behavior of fiber sensor probes in the visible region was studied using FITC as the absorbing molecule. Goat anti-human IgG (GaHIgG) was chosen as a model analyte. Human IgG-immobilized fiber probes were subjected to goat anti-human IgG to test the absorbance response of the probes at 280 nm. These studies demonstrate that the intrinsic absorbance properties of biomolecules may be utilized for the development of absorbance-based label-free biosensors. The sensitivity, which is a limiting factor, can be improved with better optics.
---
paper_title: Optical chemical sensing employing surface plasmon resonance
paper_content:
An optical sensor based on the phenomenon of light-excited surface plasmon resonance has been investigated to measure liquid chemical concentrations. A white light source is used to excite surface plasmon waves at a metal/analyte interface. The wavelength of maximum absorption in the reflected light depends uniquely on the refractive index of the analyte.
---
paper_title: Growth of Ultrasmooth Octadecyltrichlorosilane Self-Assembled Monolayers on SiO2
paper_content:
Ultrasmooth octadecyltrichlorosilane (OTS) monolayers (2.6 ± 0.2 nm thick, RMS roughness ∼1.0 Å) can be obtained reproducibly by exposing clean native SiO2 surfaces to a dry solution of OTS in Isopar-G. A clean room is not required. Atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), contact angle data, and ellipsometry show that film formation occurs through a “patch expansion” process and terminates once a single monolayer is formed, after about 2 days. These monolayers are suitable as substrates for high-resolution electron beam and AFM or STM lithography. Further observations highlight the importance of controlling water content during deposition of siloxane self-assembled monolayers. OTS covers the surface much faster when there is a little water in the OTS solution; contact angle and ellipsometry data indicate formation of a hydrophobic, 2.6 nm thick film after about 2 h. However, these OTS films have a totally different growth mechanism than films grown from dry solutions and are...
---
paper_title: Easy and Efficient Bonding of Biomolecules to an Oxide Surface of Silicon
paper_content:
A new method is described to attach biological molecules to the surface of silicon. Semiconductors such as Si modified with surface-bound capture molecules have enormous potential for use in biosensors for which an ideal detection platform should be inexpensive, recognize targets rapidly with high sensitivity and specificity, and possess superior stability. In this process, a self-assembled film of an organophosphonic acid is bonded to the native or synthesized oxide-coated Si surface as a film of the corresponding phosphonate. The phosphonate film is functionalized to enable covalently coupling biological molecules, ranging in size from small peptides to large multi-subunit proteins, to the Si surface. Surface modification and biomolecule coupling procedures are easily accomplished: all reactions can proceed in air, and most take place under ambient conditions. The biomolecule-modified surfaces are stable under physiological conditions, are selective for adhesion of specific cells types, and are reusable.
---
paper_title: Potassium ion-sensitive field effect transistor
paper_content:
The construction and theory of operation of a potassium-sensitive field effect transistor is described, and its performance is characterized both as a solid-state field-effect device and as an electrochemical sensor. The performance of this device is comparable with that of the corresponding PVC-type ion-selective electrodes. The transistor operates satisfactorily in the presence of proteins and it has been used for the determination of potassium ion concentration in blood serum. A new type of electrochemical sensor, an ion-sensitive field-effect transistor (ISFET), was introduced when Bergveld removed the metal gate from a metal oxide semiconductor field-effect transistor (MOSFET) and exposed the silicon oxide gate insulator to a measured solution (1). A similar approach was followed later by Matsuo and Wise (2), and this new subject area has been recently reviewed by Zemel (3). In the broader sense of chemically sensitive field-effect transistors, one sensitive to molecular hydrogen has also been reported (4). The ISFET is a result of the integration of two technologies: ion-selective electrodes and solid-state microelectronics. This development opens several new possibilities, such as miniaturization, development of multiprobes, all-solid-state design and in situ signal processing. Because of its small size, it presents a difficult encapsulation and packaging problem which is, however, amply offset by the elimination of electrical pick-up noise through in situ impedance conversion and on-site signal amplification. Bergveld did not modify the ion-sensitive layer in any way, although he considered introducing impurities in order to render the device ion selective. In this paper, we introduce a class of devices having a chemically sensitive layer placed over the gate region, and we report our results with a valinomycin/plasticizer/poly(vinyl chloride) membrane.
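As background for the ISFET entries (the ideal Nernstian response of an ion-selective membrane, stated generally rather than quoted from the paper), the membrane potential seen by the gate varies with the potassium activity a_{K^+} as

E = E^0 + \frac{2.303\,RT}{zF}\log_{10} a_{K^+} \approx E^0 + 59.2\ \mathrm{mV}\times\log_{10} a_{K^+} \quad (z = 1,\ 25\,^{\circ}\mathrm{C}),

so each tenfold change in potassium activity shifts the threshold of the underlying field-effect device by roughly 59 mV in the ideal case.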
---
paper_title: Organic functionalization of group IV semiconductor surfaces: principles, examples, applications, and prospects
paper_content:
Organic functionalization is emerging as an important area in the development of new semiconductor-based materials and devices. Direct, covalent attachment of organic layers to a semiconductor interface provides for the incorporation of many new properties, including lubrication, optical response, chemical sensing, or biocompatibility. Methods by which to incorporate organic functionality to the surfaces of semiconductors have seen immense progress in recent years, and in this article several of these approaches are reviewed. Examples are included from both dry and wet processing environments. The focus of the article is on attachment strategies that demonstrate the molecular nature of the semiconductor surface. In many cases, the surfaces mimic the reactivity of their molecular carbon or organosilane counterparts, and examples of functionalization reactions are described in which direct analogies to textbook organic and inorganic chemistry can be applied. This article addresses the expected impact of these functionalization strategies on emerging technologies in nanotechnology, sensing, and bioengineering.
---
paper_title: Enzyme monolayer-functionalized field-effect transistors for biosensor applications
paper_content:
Abstract A gate surface of an ion-selective field-effect transistor was modified with a monolayer enzyme array that stimulates biocatalytic reactions that control the gate potential. Stepwise assemblage of the biocatalytic layer included primary silanization of the Al 2 O 3 -gate with 3-aminopropyltriethoxysilane, subsequent activation of the amino groups with glutaric dialdehyde and the covalent attachment of the enzyme to the functionalized gate surface. Urease, glucose oxidase, acetylcholine esterase and α-chymotrypsin were used to organize the biocatalytic matrices onto the chip gate. The resulting enzyme-based field-effect transistors, ENFETs, demonstrated capability to sense urea, glucose, acetylcholine and N -acetyl- l -tyrosine ethyl ester, respectively. The mechanism of the biosensing involves the alteration of the pH in the sensing layer by the biocatalytic reactions and the detection of the pH change by the ENFET. The major advantage of the enzyme-thin-layered FET devices as biosensors is the fast response-time (several tens of seconds) of these bioelectronic devices. This advantage over traditional thick-polymer-based ENFETs results from the low diffusion barrier for the substrate penetration to the biocatalytic active sites and minute isolation of the pH-sensitive gate surface from the bulk solution.
---
paper_title: Surface Characterization of a Silicon-Chip-Based DNA Microarray
paper_content:
The immobilization of DNA (deoxyribonucleic acid) on solid supports is a crucial step for any application in the field of DNA microarrays. It determines the efficacy of the hybridization and influences the signal strength for the detection. We used solid supports made from silicon wafers as an alternative substrate to the commonly used microscope glass slides. The covalent immobilization of thiol-terminated DNA oligonucleotides on self-assembled layers of (3-mercaptopropyl)trimethoxysilane (MPTS) by disulfide bond formation was investigated. Contact angle measurement, variable angle spectral ellipsometry (VASE), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM) were used to characterize the changing properties of the surface during the DNA array fabrication. During wafer processing the contact angle changed from 3° for the hydroxylated surface to 48.5° after deposition of MPTS. XPS data demonstrated that all sulfur in the MPTS layer was present in the form of reduced SH or S−S groups...
---
paper_title: The ion sensitive field effect transistor (ISFET) pH electrode: a new sensor for long term ambulatory pH monitoring.
paper_content:
Intraluminal pH monitoring in man should be performed with disposable multichannel assemblies that allow recordings at multiple sites and prevent transmission of infection. Currently available glass electrodes are unsuitable for this purpose because of their size and price. We have thus constructed and tested a small, combined ion sensitive field effect transistor (ISFET) pH electrode incorporating an integral reference electrode. In vitro studies showed that both ISFET and glass electrodes (440-M4, Ingold, Switzerland) have a linear response over the pH range 1.3-8.0 and that they are comparable with regard to response time and 24 hour drift. Twenty one hour intragastric pH recordings were performed simultaneously in eight healthy volunteers using a glass electrode and an ISFET electrode, placed no more than 2 mm apart in a combined assembly. This was located in the gastric corpus under fluoroscopic control. The 21 hour pH curves recorded by each electrode type showed identical patterns: an early morning rise in pH with three meal-associated pH peaks lasting for about two to three hours. The means of the 21 hour pH medians were 2.09 and 2.07 as measured by the glass and the ISFET electrodes respectively. Thus, ISFETs are suitable for the construction of inexpensive and hence disposable multichannel pH monitoring assemblies of small diameter. Provided that they can be produced in large numbers with appropriate technical support, ISFETs have the potential to replace glass electrodes for long term monitoring of gastrointestinal luminal acidity.
---
paper_title: Integrated Micro Multi Ion Sensor Using Field Effect of Semiconductor
paper_content:
The fabrication of a microprobe for simultaneous, independent and in-vivo measurements of H+ and Na+ ion activities is described.
---
paper_title: ELECTROCHEMICAL BIOSENSORS: RECOMMENDED DEFINITIONS AND CLASSIFICATION*
paper_content:
*A special report on the International Union of Pure and Applied Chemistry, Physical Chemistry Division, Commission I.7 (Biophysical Chemistry), Analytical Chemistry Division, Commission V.5 (Electroanalytical Chemistry).
---
paper_title: Electrochemical biosensors - principles and applications
paper_content:
Summary The first scientifically proposed as well as successfully commercialized biosensors were those based on electrochemical sensors for multiple analytes. Electrochemical biosensors have been studied for a long time. Currently, transducers based on semiconductors and screen printed electrodes represent a typical platform for the construction of biosensors. Enzymes or enzyme labeled antibodies are the most common biorecognition components of biosensors. The principles of, and the most typical applications for electrochemical biosensors are described in this review. The relevant systems are divided into three types according to the operating principle governing their method of measurement: potentiometric, amperometric and impedimetric transducers, and the representative devices are described for each group. Some of the most typical assays are also mentioned in the text.
---
paper_title: ELECTROCHEMICAL BIOSENSORS: RECOMMENDED DEFINITIONS AND CLASSIFICATION*
paper_content:
*A special report on the International Union of Pure and Applied Chemistry, Physical Chemistry Division, Commission I.7 (Biophysical Chemistry), Analytical Chemistry Division, Commission V.5 (Electroanalytical Chemistry).
---
paper_title: Polymeric Sensors to Monitor Cockroach Locomotion
paper_content:
We have developed a method using a polyvinylidene fluoride (PVDF) polymeric sensor to monitor the leg movements of cockroaches. The PVDF sensor was coated with gold as electrodes. It was attached to the leg of a roach. The voltage signals generated through bending directly correlate to the movement of the legs. It was found that the output voltage was a function of the degree of sensor bending caused by the movement of leg sclerites. An ex situ motorized linear stage generated similar results.
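For orientation (standard piezoelectric relations with an assumed simple geometry, not the calibration used in the paper), a PVDF film of thickness t, electrode area A and permittivity \varepsilon_{33} subjected to an in-plane stress T_1 generates, to first order,

Q \approx d_{31}T_1A, \qquad V_{oc} = \frac{Q}{C} \approx \frac{d_{31}T_1\,t}{\varepsilon_{33}} = g_{31}T_1 t,

so the recorded voltage tracks the bending-induced stress in the film, which is how the signal can be mapped onto the movement of the leg sclerites.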
---
paper_title: Stress-resolved and cockroach-friendly piezoelectric sensors
paper_content:
We investigate the effects of bending stress on the piezoelectric properties of polyvinylidene fluoride (PVDF) as a polymer sensor. The sensor was designed and fabricated into a special size and shape so that it can be attached to small insects, such as the American cockroach (Periplaneta americana), to measure the insects' locomotion. The performance of the sensor is studied using a controlled linear stage to buckle the sensor, mimicking the bending of the sensor due to the leg movements of cockroaches. For comparison, a roach robot was used for a multi-leg study. Results indicate that the buckling motion of the sensor produces an output that is different from the regular stretching effect. The sensor-generated charge depends on the localized stress distribution and dipole alignment. This paper discusses the methods of characterization of piezoelectricity useful for insect applications.
---
paper_title: Studying insect motion with piezoelectric sensors
paper_content:
Piezoelectric materials have been widely used in applications such as transducers and acoustic components, as well as motion, pressure and airborne sensors. Because of the material's biocompatibility and flexibility, we have been able to apply small piezoelectric sensors, made of PVDF, to cockroaches. We built a laboratory test system to study the piezoelectric properties of a bending sensor. The tested motion was compared with that of the sensor attached to a cockroach. Surface characterization and finite element analysis revealed the effects of microstructure on the piezoelectric response. The sensor attachment enables us to monitor the insects' locomotion and study their behaviors. The application of engineering materials to insects opens the door to innovative approaches to integrating biological, mechanical and electrical systems.
---
paper_title: The molecular level modification of surfaces: from self-assembled monolayers to complex molecular assemblies
paper_content:
The modification of surfaces with self-assembled monolayers (SAMs) containing multiple different molecules, or containing molecules with multiple different functional components, or both, has become increasingly popular over the last two decades. This explosion of interest is primarily related to the ability to control the modification of interfaces with something approaching molecular level control and to the ability to characterise the molecular constructs by which the surface is modified. Over this time the level of sophistication of molecular constructs, and the level of knowledge related to how to fabricate molecular constructs on surfaces have advanced enormously. This critical review aims to guide researchers interested in modifying surfaces with a high degree of control to the use of organic layers. Highlighted are some of the issues to consider when working with SAMs, as well as some of the lessons learnt (169 references).
---
paper_title: Formation of monolayer films by the spontaneous assembly of organic thiols from solution onto gold
paper_content:
Abstract : Long-chain alkanethiols, HS(CH2)nX, adsorb from solution onto gold surfaces and form ordered, oriented monolayer films. The properties of the interfaces between the films and liquids are largely independent of chain length when n > 10; in particular, wetting is not directly influenced by the proximity of the underlying gold substrate. The specific interaction of gold with sulfur and other soft nucleophiles and its low reactivity toward most hard acids and bases make it possible to vary the structure of the terminal group, X, widely and thus permit the introduction of a great range of functional groups into a surface. Studies of wettability of these monolayers, and of their composition using X-ray photoelectron spectroscopy (XPS), indicate that the monolayers are oriented with the tail group, X, exposed at the monolayer-air or monolayer- liquid interface. The adsorption of simple n-alkanethiols generates hydrophobic surfaces whose free energy (19 mJ/sq. m) is the lowest of any hydrocarbon surface studied to date. Measurement of contact angles is a useful tool for studying the structure and chemistry of the outermost few angstroms of a surface. This work used contact angles and optical ellipsometry to study the kinetics of adsorption of monolayer films and to examine the experimental conditions necessary for the formation of high-quality films.
---
paper_title: Mechanisms and kinetics of self-assembled monolayer formation.
paper_content:
Recent applications of various in situ techniques have dramatically improved our understanding of the self-organization process of adsorbed molecular monolayers on solid surfaces. The process involves several steps, starting with bulk solution transport and surface adsorption and continuing with the two-dimensional organization on the substrate of interest. This latter process can involve passage through one or more intermediate surface phases and can often be described using two-dimensional nucleation and growth models. A rich picture has emerged that combines elements of surfactant adsorption at interfaces and epitaxial growth with the additional complication of long-chain molecules with many degrees of freedom.
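Two simple rate laws that are commonly fitted to monolayer-formation data of this kind (stated as standard models, not as the specific expressions used in the review) are first-order Langmuir adsorption and two-dimensional nucleation-and-growth (Avrami-type) kinetics:

\theta(t) = 1 - e^{-k c t} \quad \text{(Langmuir)}, \qquad \theta(t) = 1 - e^{-(k t)^n} \quad \text{(Avrami, typically } n \approx 2\text{ to } 3),

where \theta is the fractional surface coverage, c the solution concentration of the adsorbate and k a rate constant; passage through intermediate surface phases shows up as deviations from either simple form.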
---
paper_title: Molecular-Level Approach To Inhibit Degradations of Alkanethiol Self-Assembled Monolayers in Aqueous Media
paper_content:
A molecular-level approach is developed to prevent or inhibit the degradation processes of alkanethiol self-assembled monolayers (SAMs). Previous studies revealed two degradation pathways: direct desorption and oxidation-desorption. By use of scanning tunneling microscopy (STM) and atomic force microscopy (AFM), in situ and time-dependent imaging reveals and confirms that degradation of alkanethiol SAMs on gold mainly initiates at defect sites, such as domain boundaries and vacancy islands, and then propagates into the ordered domains. Our approach targets attaching small molecules with preferred adhesion to the defects. The best candidates are aqueous media containing a small amount of amphiphilic surfactant molecules, such as N,N-dimethylformamide (DMF) or dimethyl sulfoxide (DMSO). High-resolution studies demonstrate that DMSO and DMF molecules attach to SAM surfaces, and more favorably at defect sites, forming relatively stable adsorbates. This attachment increases the activation energy sufficiently...
---
paper_title: Electrochemical Modification of Glassy Carbon Electrode Using Aromatic Diazonium Salts. 1. Blocking Effect of 4-Nitrophenyl and 4-Carboxyphenyl Groups
paper_content:
The effect of a 4-carboxyphenyl or a 4-nitrophenyl thin film at the surface of a glassy carbon electrode on its electrochemical response in the presence of various electroactive probes has been investigated. The grafting of a substituted phenyl group to a glassy carbon electrode was achieved by electrochemical reduction of the corresponding substituted phenyldiazonium derivative in acetonitrile. The blocking properties of the film depend primarily on electrostatic and electrolyte/solvent effects. Permselectivity for the 4-carboxyphenyl film can be achieved by controlling the dissociation of the carboxy group. The substituted phenyl layer is much more compact and less permeable in contact with a nonaqueous solvent than with an aqueous solvent, presumably because the layer is poorly solvated. Electrochemical impedance measurements indicate that the kinetics of electron transfer are slowed down when the time used to modify the glassy carbon electrode is increased. Cyclic voltammetry and X-ray photoelectron spectroscopy...
---
paper_title: Organometallic chemistry on silicon surfaces: formation of functional monolayers bound through Si–C bonds
paper_content:
Silicon chips form the backbone of modern computing and yet until recently, the surface chemistry of this technologically essential material has remained relatively unexplored. As the size of devices on silicon wafers shrink (towards gigascale integration), the surface characteristics play increasingly crucial roles in the proper functioning of the device since the ratio of surface atoms/bulk escalates. While surface oxide has served thus far as the main passivation route, there is strong interest in precisely tailoring the interface properties, not only for microelectronics, but other applications including sensors, MEMS and biologically active surfaces. As a result, organometallic and organic chemistry has become essential for the synthesis of functional, modifiable monolayers, bound to non-oxidized silicon surfaces through silicon–carbon bonds. The latest approaches towards preparation of monolayers through Si–C bonds on both flat and photoluminescent porous silicon are described. Wet chemical techniques, accessible to most organometallic/organic chemists are highlighted, but recent developments using UHV conditions also receive attention.
---
paper_title: Thiol self-assembled monolayers on mercury surfaces: the adsorption and electrochemistry of ω-mercaptoalkanoic acids
paper_content:
Abstract The formation of self-assembled monolayers (SAMs) of ω-mercaptoalkanoic acids on a mercury surface under potential control has been studied. Cyclic voltammetry and AC voltammetry were used for investigating the different phases and transitions that the homologous series of these thiols form and undergo. We find that the thiols are either physisorbed or chemisorbed depending on the applied potential and the transition between these two states occurs through a Faradaic process. Moreover, the chemisorption of the thiols results in multilayer deposition when the thiol monolayer does not block electron transfer. Finally, the implications of this study to the formation and manipulation of SAMs of ω-functionalized alkanethiols on Hg are discussed.
---
paper_title: Influence of Adsorbate Ordering on Rates of UV Photooxidation of Self-Assembled Monolayers
paper_content:
The photooxidation of a range of methyl terminated straight chain alkylthiol (CH3(CH2)n-1SH) self-assembled monolayers (SAMs) on Au has been investigated using X-ray photoelectron spectroscopy (XPS). The photooxidation process has been found to involve the oxidation of the thiolate group to a sulfonate by an active O2 species (possibly singlet oxygen). The rate of photooxidation was found to vary strongly with alkyl chain length and has been correlated with the amount of disorder within the monolayers. Short-chain SAMs (n ≤ 8) have the highest disorder and oxidize the fastest, with the rate of oxidation increasing sharply with decreasing n, while close packed long-chain SAMs (n ≥ 12) oxidize much more slowly, with the rate of oxidation decreasing slowly with increasing n.
---
paper_title: Comparison of the structures and wetting properties of self-assembled monolayers of n-alkanethiols on the coinage metal surfaces, copper, silver, and gold
paper_content:
Long-chain alkanethiols, HS(CH2)nCH3, adsorb from solution onto the surfaces of gold, silver, and copper and form monolayers. Reflection infrared spectroscopy indicates that monolayers on silver and on copper (when carefully prepared) have the chains in well-defined molecular orientations and in crystalline-like phase states, as has been observed on gold. Monolayers on silver are structurally related to those formed by adsorption on gold, but different in details of orientation. The monolayers formed on copper are structurally more complex and show a pronounced sensitivity to the details of the sample preparation. Quantitative analysis of the IR data using numerical simulations based on an average single chain model suggests that the alkyl chains in monolayers on silver are all-trans zig-zag and canted by ~12° from the normal to the surface. The analysis also suggests a twist of the plane containing the carbon backbone of ~45° from the plane defined by the tilt and surface normal vectors. For comparison, the monolayers that form on adsorption of alkanethiols on gold surfaces, as judged by their vibrational spectra, are also trans zig-zag extended but, when interpreted in the context of the same single chain model, have a cant angle of ~27° and a twist of the plane of the carbon backbone of ~53°. The monolayers formed on copper (when they are obtained in high quality) exhibit infrared spectra effectively indistinguishable from those on silver and thus appear to have the same structure. Films on copper are also commonly obtained that are structurally ill-defined and appear to contain significant densities of gauche conformations. These spectroscopically based interpretations are compatible with inferences from wetting and XPS measurements. The structure of the substrate-sulfur interface appears to control the molecular orientations of the alkyl groups in these films. An improved structural model, incorporating a two-chain unit cell and allowing for the temperature-dependent population of gauche conformations, is presented and applied to the specific case of the structures formed on gold.
---
paper_title: Synthesis, Structure, and Properties of Model Organic Surfaces
paper_content:
Interest in the properties of thin-film organic materials, especially regarding organized mono- and multilayer assemblies, has grown enormously in recent years. The impetus for this renaissance (the relevance of such structures and materials to biological interfaces and membranes, corrosion protection, electrochemistry, wetting, adhesion, and microelectronic circuit fabrication, for example) has been discussed extensively (1,2). Such materials are clearly contributing significantly to our more general understanding of the physics and chemistry of complex surfaces and interfaces. In this article, we provide a general overview of areas in which this has been persuasively demonstrated and, in so doing, suggest the intellectual issues that drive current research in this field. Until recently, there were no generally applicable methods to construct well-ordered, organic surface phases by using any rational synthetic scheme. Metal, semiconductor, and oxide surfaces can be easily prepared by orienting, cutting (or cleaving), and polishing single-crystal substrates followed by cleaning in ultrahigh vacuum (UHV) (i.e. by ion bombardment...
---
paper_title: Self-Assembled Monolayers of Alkanethiolates on Palladium Are Good Etch Resists
paper_content:
This paper describes microcontact printing (μCP) of long-chain alkanethiolates on palladium, followed by solution-phase etching with an iron(III)-based etchant, to make patterned structures. The commonly used soft-lithographic procedure for fabricating microstructures (μCP of SAMs on gold) has three shortcomings: a significant surface density of pinhole defects, substantial edge roughness, and incompatibility with processes used in CMOS fabrication. Microcontact printing on palladium gives fewer defects and smaller edge roughness than on gold, and is compatible with CMOS. The mechanism by which etch-resistant patterns are formed is different for palladium and gold. The Pd/S interfacial layer formed by the reaction of palladium films with sulfur-containing compounds provides good resistance to etches independently of the barrier to access the surface provided by the film of (CH2)n groups in the long-chain SAMs. This barrier is the basis of the etch resistance of SAMs on gold, but only supplements the etch resistance...
---
paper_title: Control of carrier density by self-assembled monolayers in organic field-effect transistors
paper_content:
Control of carrier density by self-assembled monolayers in organic field-effect transistors
---
paper_title: Infrared characterization of interfacial Si-O bond formation on silanized flat SiO2/Si surfaces.
paper_content:
Chemical functionalization of silicon oxide (SiO(2)) surfaces with silane molecules is an important technique for a variety of device and sensor applications. Quality control of self-assembled monolayers (SAMs) is difficult to achieve because of the lack of a direct measure for newly formed interfacial Si-O bonds. Herein we report a sensitive measure of the bonding interface between the SAM and SiO(2), whereby the longitudinal optical (LO) phonon mode of SiO(2) provides a high level of selectivity for the characterization of newly formed interfacial bonds. The intensity and spectral position of the LO peak, observed upon silanization of a variety of silane molecules, are shown to be reliable fingerprints of formation of interfacial bonds that effectively extend the Si-O network after SAM formation. While the IR absorption intensities of functional groups (e.g., >C=O, CH(2)/CH(3)) depend on the nature of the films, the blue-shift and intensity increase of the LO phonon mode are common to all silane molecules investigated and their magnitude is associated with the creation of interfacial bonds only. Moreover, results from this study demonstrate the ability of the LO phonon mode to analyze the silanization kinetics of SiO(2) surfaces, which provides mechanistic insights on the self-assembly process to help achieve a stable and high quality SAM.
---
paper_title: The molecular level modification of surfaces: from self-assembled monolayers to complex molecular assemblies
paper_content:
The modification of surfaces with self-assembled monolayers (SAMs) containing multiple different molecules, or containing molecules with multiple different functional components, or both, has become increasingly popular over the last two decades. This explosion of interest is primarily related to the ability to control the modification of interfaces with something approaching molecular level control and to the ability to characterise the molecular constructs by which the surface is modified. Over this time the level of sophistication of molecular constructs, and the level of knowledge related to how to fabricate molecular constructs on surfaces have advanced enormously. This critical review aims to guide researchers interested in modifying surfaces with a high degree of control to the use of organic layers. Highlighted are some of the issues to consider when working with SAMs, as well as some of the lessons learnt (169 references).
---
paper_title: Organometallic chemistry on silicon surfaces: formation of functional monolayers bound through Si–C bonds
paper_content:
Silicon chips form the backbone of modern computing and yet until recently, the surface chemistry of this technologically essential material has remained relatively unexplored. As the size of devices on silicon wafers shrink (towards gigascale integration), the surface characteristics play increasingly crucial roles in the proper functioning of the device since the ratio of surface atoms/bulk escalates. While surface oxide has served thus far as the main passivation route, there is strong interest in precisely tailoring the interface properties, not only for microelectronics, but other applications including sensors, MEMS and biologically active surfaces. As a result, organometallic and organic chemistry has become essential for the synthesis of functional, modifiable monolayers, bound to non-oxidized silicon surfaces through silicon–carbon bonds. The latest approaches towards preparation of monolayers through Si–C bonds on both flat and photoluminescent porous silicon are described. Wet chemical techniques, accessible to most organometallic/organic chemists are highlighted, but recent developments using UHV conditions also receive attention.
---
paper_title: Electrochemical Modification of Glassy Carbon Electrode Using Aromatic Diazonium Salts. 1. Blocking Effect of 4-Nitrophenyl and 4-Carboxyphenyl Groups
paper_content:
The effect of a 4-carboxyphenyl or a 4-nitrophenyl thin film at the surface of a glassy carbon electrode on their electrochemical responses in the presence of various electroactive probes has been investigated. The grafting of a substituted phenyl group to a glassy carbon electrode was achieved by electrochemical reduction of the corresponding substituted phenyldiazonium derivative in acetonitrile. The blocking properties of the film depend primarily on electrostatic and electrolyte/solvent effects. Permselectivity for the 4-carboxyphenyl film can be achieved by controlling the dissociation of the carboxy group. The substituted phenyl layer is much more compact and less permeable in contact with a nonaqueous solvent than with an aqueous solvent presumably because the layer is poorly solvated. Electrochemical impedance measurements indicate that the kinetics of electron transfer are slowed down when the time used to modify the glassy carbon electrode is increased. Cyclic voltammetry and X-ray photoelectron...
---
paper_title: Advances in Interfacial Design for Electrochemical Biosensors and Sensors: Aryl Diazonium Salts for Modifying Carbon and Metal Electrodes
paper_content:
A high degree of control over the modification of electrode surfaces is required for many sensing applications. This critical review briefly outlines some of the considerations for interfacial design in amperometric sensors and discusses some of advantages and disadvantages of alkanethiol modified metal electrodes for such applications before concentrating on the modification of electrodes surfaces using aryl diazonium salts. The pros and cons of this chemistry, and the application of aryl diazonium salts for sensing on carbon electrodes is reviewed before recent advances in using this chemistry for modifying metal electrodes are presented.
---
paper_title: Layer-by-layer assembly of cationic lipid and plasmid DNA onto gold surface for stent-assisted gene transfer
paper_content:
Intravascular stent-assisted gene transfer is an advanced approach for the therapy of vascular diseases such as atherosclerosis and stenosis. This approach requires a stent that allows local and efficient administration of therapeutic genes to the target cells at the vascular wall. To create such a stent, a method was developed for loading plasmid DNA onto the metal surface. The method involves the formation of self-assembled monolayer on the noble metal surface followed by electrostatic layer-by-layer (LBL) assembly of a cationic lipid/plasmid DNA complex and free plasmid DNA. In this in vitro feasibility study, the thin plainer film and the wire of gold were used as a substrate. The LBL assembly process was characterized by surface plasmon resonance spectroscopy and static contact angle measurement. Plasmid DNA loaded in the multilayer exhibited improved resistance against nuclease digestion. When cultured directly on the DNA-loaded surface, cells were transfected to express exogenous gene in the DNA loading-dependent manner. Plasmid DNA could also be transferred to endothelial cells from its apical side by placing the DNA-loaded gold wire onto the cell layer.
---
paper_title: Electrogeneration of a poly(pyrrole)-NTA chelator film for a reversible oriented immobilization of histidine-tagged proteins
paper_content:
This contribution reports, for the first time, the synthesis and electropolymerization of a pyrrole N-substituted by a nitrilotriacetic acid acting as a chelating center of Cu2+. A step-by-step approach for protein immobilization was developed via the successive coordination of Cu2+ and histidine-tagged proteins. The self-assembly of histidine-tagged glucose oxidase led to the formation of a close-packed enzyme monolayer at the poly(pyrrole) surface, and the reversibility and reproducibility of this affinity process were demonstrated.
---
paper_title: Electron transfer and ligand binding to cytochrome c' immobilized on self-assembled monolayers
paper_content:
We have successfully immobilized Allochromatium vinosum cytochrome c' on carboxylic acid-terminated thiol monolayers on gold and have investigated its electron-transfer and ligand binding properties. Immobilization could only be achieved for pH's ranging from 3.5 to 5.5, reflecting the fact that the protein is only sufficiently positively charged below pH 5.5 (pI = 4.9). Upon immobilization, the protein retains a near-native conformation, as is suggested by the observed potential of 85 mV vs SHE for the heme FeIII/FeII transition, which is close to the value of 60 mV reported in solution. The electron-transfer rate to the immobilized protein depends on the length of the thiol spacer, displaying distance-dependent electron tunneling for long thiols and distance-independent protein reorganization for short thiols. The unique CO-induced dimer-to-monomer transition observed for cytochrome c' in solution also seems to occur for immobilized cytochrome c'. Upon saturation with CO, a new anodic peak corresponding...
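The spacer-length dependence noted here is the familiar exponential tunneling behaviour; as a rough, hedged sketch (generic symbols, not values reported in this study), the electron-transfer rate across a spacer of thickness d is expected to follow

k_{ET} = k_0 \exp(-\beta d)

with decay constants of roughly \beta \approx 1 Å^{-1} typically quoted for saturated alkanethiol chains, while for short spacers the measured rate levels off because protein reorganization, rather than tunneling, becomes rate limiting.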
---
paper_title: Spectroelectrochemistry of type II cytochrome c3 on a glycosylated self-assembled monolayer.
paper_content:
A modified silver electrode was prepared by the self-assembly of a thiol-derivatized neoglycoconjugate, forming a 2D surface with maltose functionality. This self-assembled-monolayer-modified electrode was utilized for adsorption and spectroelectrochemical studies of tetraheme-containing type II cytochrome c3. The glycosylated surface allowed for the determination of the hemes' redox potentials and demonstrated enhanced spectroelectrochemical performance, in comparison to the widely used self-assembled monolayer of 11-mercapto-undecanoic acid.
---
paper_title: Reversible oriented surface immobilization of functional proteins on oxide surfaces.
paper_content:
Reversible and oriented immobilization of proteins in a functionally active form on solid surfaces is a prerequisite for the investigation of molecular interactions by surface-sensitive techniques. We demonstrate a method generally applicable for the attachment of proteins to oxide surfaces. A nitrilotriacetic acid group serving as a chelator for transition metal ions was covalently bound to the surface via silane chemistry. Reversible binding of the green fluorescent protein, modified with a hexahistidine extension, was monitored in situ using total internal reflection fluorescence. The association constant and kinetic parameters of the binding process were determined. The reversible, directed immobilization of proteins on surfaces as described here opens new ways for structural investigation of proteins and receptor-ligand interactions.
---
paper_title: Self-assembled monolayers for MALDI-TOF mass spectrometry for immunoassays of human protein antigens.
paper_content:
This paper reports a method that combines self-assembled monolayers with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry to perform immunoassays on clinical samples. The immunosensors are prepared by immobilizing His-tagged protein G (or A) to a monolayer presenting the Ni2+ chelates, followed by immobilization of IgG antibodies with specificity for the intended analyte. The SAMDI mass spectrometry technique confirms the presence of the two proteins on the immunosensor and additionally provides a label-free analysis of antigens that bind to the sensor. This paper reports examples of detecting several proteins from human serum, including multianalyte assays that resolve each analyte according to their mass-to-charge ratio in the SAMDI spectra. An example is described wherein SAMDI is used to identify a proteolytic fragment of cystatin C in cerebral spinal fluids from patients diagnosed with multiple sclerosis. The SAMDI-TOF immunoassay, which combines well-defined surface chemistries for the selective and reproducible localization of analytes with mass spectrometry for label-free detection of analytes, may offer an alternative methodology to address many of the issues associated with standardized clinical diagnostics.
---
paper_title: Electrochemical interrogation of conformational changes as a reagentless method for the sequence-specific detection of DNA
paper_content:
We report a strategy for the reagentless transduction of DNA hybridization into a readily detectable electrochemical signal by means of a conformational change analogous to the optical molecular beacon approach. The strategy involves an electroactive, ferrocene-tagged DNA stem-loop structure that self-assembles onto a gold electrode by means of facile gold-thiol chemistry. Hybridization induces a large conformational change in this surface-confined DNA structure, which in turn significantly alters the electron-transfer tunneling distance between the electrode and the redoxable label. The resulting change in electron transfer efficiency is readily measured by cyclic voltammetry at target DNA concentrations as low as 10 pM. In contrast to existing optical approaches, an electrochemical DNA (E-DNA) sensor built on this strategy can detect femtomoles of target DNA without employing cumbersome and expensive optics, light sources, or photodetectors. In contrast to previously reported electrochemical approaches, the E-DNA sensor achieves this impressive sensitivity without the use of exogenous reagents and without sacrificing selectivity or reusability. The E-DNA sensor thus offers the promise of convenient, reusable detection of picomolar DNA.
---
paper_title: The self-assembled monolayer of saccharide via click chemistry: Formation and protein recognition
paper_content:
Abstract We prepared the saccharide-immobilized substrate via click chemistry. The azide-terminated saccharides were reacted by a facile metathesis reaction. The density of saccharides was controlled by the incubation time of SAM preparation. The saccharide–protein interaction was analyzed using the saccharide substrate. The interaction of the saccharide substrate with protein was strong and specific due to the glyco-cluster effects. The interaction with amyloid β was analyzed by the monosaccharide-immobilized substrate, and the sulfonated saccharides showed strong interaction.
---
paper_title: Carbohydrate arrays for the evaluation of protein binding and enzymatic modification.
paper_content:
This paper reports a chemical strategy for preparing carbohydrate arrays and utilizes these arrays for the characterization of carbohydrate-protein interactions. Carbohydrate chips were prepared by the Diels-Alder-mediated immobilization of carbohydrate-cyclopentadiene conjugates to self-assembled monolayers that present benzoquinone and penta(ethylene glycol) groups. Surface plasmon resonance spectroscopy showed that lectins bound specifically to immobilized carbohydrates and that the glycol groups prevented nonspecific protein adsorption. Carbohydrate arrays presenting ten monosaccharides were then evaluated by profiling the binding specificities of several lectins. These arrays were also used to determine the inhibitory concentrations of soluble carbohydrates for lectins and to characterize the substrate specificity of beta-1,4-galactosyltransferase. Finally, a strategy for preparing arrays with carbohydrates generated on solid phase is shown. This surface engineering strategy will permit the preparation and evaluation of carbohydrate arrays that present diverse and complex structures.
---
paper_title: Carbohydrate-protein interactions by "clicked" carbohydrate self-assembled monolayers.
paper_content:
A Huisgen 1,3-dipolar cycloaddition "click chemistry" was employed to immobilize azido sugars (mannose, lactose, alpha-Gal) to fabricate carbohydrate self-assembled monolayers (SAMs) on gold. This fabrication was based on preformed SAM templates incorporated with alkyne terminal groups, which could further anchor the azido sugars to form well-packed, stable, and rigid sugar SAMs. The clicked mannose, lactose, and alpha-Gal trisaccharide SAMs were used in the analysis of specific carbohydrate-protein interactions (i.e., mannose-Con A; ECL-lactose, alpha-Gal-anti-Gal). The apparent affinity constant of Con A binding to mannose was (8.7 ± 2.8) × 10^5 and (3.9 ± 0.2) × 10^6 M^-1 measured by QCM and SPR, respectively. The apparent affinity constants of lactose binding with ECL and alpha-Gal binding with polyclonal anti-Gal antibody were determined to be (4.6 ± 2.4) × 10^6 and (6.7 ± 3.3) × 10^6 M^-1, respectively, by QCM. SPR, QCM, AFM, and electrochemistry studies confirmed that the carbohydrate SAM sensors maintained the specificity to their corresponding lectins and nonspecific adsorption on the clicked carbohydrate surface was negligible. This study showed that the clicked carbohydrate SAMs in concert with nonlabel QCM or SPR offered a potent platform for high-throughput characterization of carbohydrate-protein interactions. Such a combination should complement other methods such as ITC and ELISA in a favorable manner and provide insightful knowledge for the corresponding complex glycobiological processes.
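Apparent affinity constants of this kind are commonly obtained by fitting the equilibrium QCM or SPR response to a simple 1:1 Langmuir binding model; the sketch below uses generic symbols (Δf for the QCM response at lectin concentration C) and is not necessarily the authors' exact fitting procedure:

\theta(C) = \frac{K_a C}{1 + K_a C}, \qquad \Delta f_{eq}(C) = \Delta f_{max}\,\theta(C)

so that K_a follows either from a nonlinear fit of \Delta f_{eq} versus C or from the slope and intercept of the linearized C/\Delta f_{eq} versus C plot.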
---
paper_title: Building conjugated organic structures on Si(111) surfaces via microwave-assisted Sonogashira coupling.
paper_content:
A novel step-by-step method employing microwave-assisted Sonogashira coupling is developed to grow fully conjugated organosilicon structures. As the first case study, p-(4-bromophenyl)acetylene is covalently conjugated to a p-(4-iodophenyl)acetylene-derived monolayer on a Si(111) surface. By bridging the two aromatic rings with C≡C, the pregrown monolayer is structurally extended outward from the Si surface, forming a fully conjugated (p-(4-bromophenylethynyl)phenyl)vinylene film. The film growth process, which reaches 90% yield after 2 h, is characterized thoroughly at each step by using X-ray reflectivity (XRR), X-ray standing waves (XSW), and X-ray fluorescence (XRF). The high yield and short reaction time offered by microwave-assisted surface Sonogashira coupling chemistry make it a promising strategy for functionalizing Si surfaces.
---
paper_title: The ion gating effect: using a change in flexibility to allow label free electrochemical detection of DNA hybridisation
paper_content:
A label free electrochemical method of detecting DNA hybridisation is presented based on the change in flexibility between a single strand of DNA and a duplex causing an ion-gating effect where hybridisation opens up the electrode to access of ions.
---
paper_title: Optical determination of surface density in oriented metalloprotein nanostructures
paper_content:
In the current work we circumvent a difficulty in estimating surface coverage by noting that iron-porphyrin proteins in solution have been assayed spectrophotometrically after conversion to pyridine hemochromes. By comparing the total absorbance obtained from direct absorption measurements of oriented metalloprotein layers on SiO2 at the Soret resonance (410 nm in cytochrome b5) to the total number density of surface protein, obtained from subsequent pyridine hemochrome assay (PHCA) analysis, the apparent surface molar absorptivity is obtained directly. In this correspondence we report the use of the PHCA to determine the surface molar absorptivity for oriented arrays of cytochrome b5 mutants. The heme is completely dissociated from the surface cytochrome b5 and converted to the reduced pyridine hemochrome in solution. Subsequently, the absorbance of reduced species in solution is determined colorimetrically. From the correlation of the absorbance of reduced hemochrome to the standard curve obtained from pyridine hemochrome assay of solution cytochrome b5, the surface concentration is estimated.
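In terms of a surface form of the Beer–Lambert law, the apparent surface molar absorptivity follows directly from the two measured quantities; a hedged illustration with generic symbols (not the authors' notation):

\varepsilon_{app} = \frac{A_{Soret}}{\Gamma}

where A_{Soret} is the absorbance of the oriented layer near 410 nm and \Gamma (mol cm^{-2}) is the surface density from the pyridine hemochrome assay; with \Gamma in mol cm^{-2} this yields \varepsilon_{app} in cm^2 mol^{-1}, which divided by 1000 can be compared with solution values in M^{-1} cm^{-1}.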
---
paper_title: Carbohydrate and Protein Immobilization onto Solid Surfaces by Sequential Diels−Alder and Azide−Alkyne Cycloadditions
paper_content:
We demonstrate the applicability of sequential Diels−Alder and azide−alkyne [3 + 2] cycloaddition reactions (click chemistry) for the immobilization of carbohydrates and proteins onto a solid surface. An α,ω-poly(ethylene glycol) (PEG) linker carrying alkyne and cyclodiene terminal groups was synthesized and immobilized onto an N-(ε-maleimidocaproyl) (EMC)-functionalized glass slide via an aqueous Diels−Alder reaction. In the process, an alkyne-terminated PEGylated surface was provided for the conjugation of azide-containing biomolecules via click chemistry, which proceeded to completion at low temperature and in aqueous solvent. As anticipated, alkyne, azide, cyclodiene, and EMC are independently stable and do not react with common organic reagents nor functional groups in biomolecules. Given an appropriate PEG linker, sequential Diels−Alder and azide−alkyne [3 + 2] cycloaddition reactions provide an effective strategy for the immobilization of a wide range of functionally complex substances onto solid s...
---
paper_title: Chemoselective Attachment of Biologically Active Proteins to Surfaces by Expressed Protein Ligation and Its Application for “Protein Chip” Fabrication
paper_content:
The present work describes a general method for the selective attachment of proteins to solid surfaces through their C-termini that can be used for the efficient creation of protein chips. Our method is based in the chemoselective reaction between a protein C-terminal α-thioester and a modified surface containing N-terminal Cys residues. α-Thioester proteins can be obtained using standard recombinant techniques by using expression vectors containing engineered inteins. This new method was used to immobilize two fluorescent proteins and a functional SH3 domain using a protein microarrayer.
---
paper_title: Development of DNA electrochemical biosensor based on covalent immobilization of probe DNA by direct coupling of sol-gel and self-assembly technologies.
paper_content:
A new procedure for fabricating deoxyribonucleic acid (DNA) electrochemical biosensor was developed based on covalent immobilization of target single-stranded DNA (ssDNA) on Au electrode that had been functionalized by direct coupling of sol-gel and self-assembled technologies. Two siloxanes, 3-mercaptopropyltrimethoxysiloxane (MPTMS) and 3-glycidoxypropyltrimethoxysiloxane (GPTMS) were used as precursors to prepare functionally self-assembly sol-gel film on Au electrode. The thiol group of MPTMS allowed assembly of MPTMS sol-gel on gold electrode surface. Through co-condensation between silanols, GPTMS sol-gel with epoxide groups interconnected into MPTMS sol-gel and enabled covalent immobilization of target NH2-ssDNA through epoxide/amine coupling reaction. The concentration of MPTMS and GPTMS influenced the performance of the resulting biosensor due to competitive sol-gel process. The linear range of the developed biosensor for determination of complementary ssDNA was from 2.51 × 10^-9 to 5.02 × 10^-7 M with a detection limit of 8.57 × 10^-10 M. The fabricated biosensor possessed good selectivity and could be regenerated. The covalent immobilization of target ssDNA on self-assembled sol-gel matrix could serve as a versatile platform for DNA immobilization and fabrication of biosensors.
---
paper_title: Covalent coupling of antibodies to self-assembled monolayers of carboxy-functionalized poly(ethylene glycol): protein resistance and specific binding of biomolecules
paper_content:
We report the synthesis, film formation, protein resistance, and specific antigen binding capability of a carboxy-functionalized poly(ethylene glycol) alkanethiol [HOOC−CH2−(OCH2−CH2)n−O−(CH2)11−SH, n = 22−45]. Despite its polymeric character, the molecule is found to form a densely packed self-assembled monolayer on polycrystalline gold, if adsorbed from dimethylformamide solution. Due to its chain length distribution, the carboxy tailgroups are expected to be partially buried within the film and, thus, do not affect the protein repulsive characteristics of the ethylene glycol moieties when exposed to fibrinogen and immunoglobulin G (IgG). However, if activated by N-hydroxysuccinimide and N-(3-dimethylaminopropyl)-N-ethylcarbodiimide hydrochloride, antibodies can be covalently coupled to the monolayer. While resistance to nonspecific fibrinogen and IgG adsorption is maintained for this biologically active layer, it exhibits high specific antigen binding capacity. The performance of this highly selective ...
---
paper_title: Maleimide-functionalized self-assembled monolayers for the preparation of peptide and carbohydrate biochips
paper_content:
This paper reports a convenient method for immobilizing biologically active ligands to self-assembled monolayers of alkanethiolates on gold (SAMs). This methodology is based on monolayers that present maleimide and penta(ethylene glycol) groups. The maleimide groups react efficiently with thiol-terminated ligands, whereas the penta(ethylene glycol) groups prevent the nonspecific adsorption of protein to the substrate. The rate and selectivity of the immobilization of a ferrocene-thiol conjugate were characterized using cyclic voltammetry. This paper presents three examples of biochips prepared using this methodology. In the first example, four carbohydrate-thiol conjugates were immobilized to monolayers and the lectin-binding properties of the substrates were examined using fluorescence and surface plasmon resonance spectroscopy. The second biochip was used to study the enzymatic phosphorylation of the immobilized peptide IYGEFKKKC by the tyrosine kinase c-src. Monolayers presenting this peptide were then...
---
paper_title: Specific capture of mammalian cells by cell surface receptor binding to ligand immobilized on gold thin films.
paper_content:
Aldehyde-terminated self-assembled monolayers (SAMs) on gold surfaces were modified with proteins and employed to capture intact living cells through specific ligand-cell surface receptor interactions. In our model system, the basic fibroblast growth factor (bFGF) binding receptor was targeted on baby hamster kidney (BHK-21) cells. Negative control and target proteins were immobilized on a gold surface by coupling protein primary amines to surface aldehyde groups. Cell-binding was monitored by phase contrast microscopy or surface plasmon resonance (SPR) imaging. The specificity of the receptor-ligand interaction was confirmed by the lack of cell binding to the negative control proteins, cytochrome c and insulin, and by the disruption of cell binding by treatment with heparitinase to destroy heparan sulfate which plays an essential role in the binding of bFGF to FGF receptors. This approach can simultaneously probe a large number of receptor-ligand interactions in cell populations and has potential for targeting and isolating cells from mixtures according to the receptors expressed on their surface.
---
paper_title: Molecular self-assembly of two-terminal, voltammetric microsensors with internal references.
paper_content:
Self-assembly of a ferrocenyl thiol and a quinone thiol onto Au microelectrodes forms the basis for a new microsensor concept: a two-terminal, voltammetric microsensor with reference and sensor functions on the same electrode. The detection is based on measurement of the potential difference of current peaks for oxidation and reduction of the reference (ferrocene) and indicator (quinone) in aqueous electrolyte in a two-terminal, linear sweep voltammogram in which a counterelectrode of relatively large surface area is used. The quinone has a half-wave potential, E1/2, that is pH-sensitive and can be used as a pH indicator; the ferrocene center has an E1/2 that is a pH-insensitive reference. The key advantages are that such sensors require no separate reference electrode and function as long as current peaks can be located for reference and indicator molecules.
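The pH read-out rests on the Nernstian shift of the quinone half-wave potential; a brief sketch, assuming a proton-coupled couple involving m protons and n electrons at 25 °C:

E_{1/2} = E^{0'} - \frac{0.0592\,m}{n}\,\mathrm{pH} \quad (\mathrm{V})

so for the common 2H+/2e- quinone/hydroquinone couple (m = n = 2) the indicator peak shifts by about -59 mV per pH unit while the ferrocene reference stays fixed, and the peak-to-peak separation in the two-terminal voltammogram maps directly onto pH.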
---
paper_title: Printing proteins as microarrays for high-throughput function determination.
paper_content:
Systematic efforts are currently under way to construct defined sets of cloned genes for high-throughput expression and purification of recombinant proteins. To facilitate subsequent studies of protein function, we have developed miniaturized assays that accommodate extremely low sample volumes and enable the rapid, simultaneous processing of thousands of proteins. A high-precision robot designed to manufacture complementary DNA microarrays was used to spot proteins onto chemically derivatized glass slides at extremely high spatial densities. The proteins attached covalently to the slide surface yet retained their ability to interact specifically with other proteins, or with small molecules, in solution. Three applications for protein microarrays were demonstrated: screening for protein-protein interactions, identifying the substrates of protein kinases, and identifying the protein targets of small molecules.
---
paper_title: Glucose Oxidase-graphene-chitosan modified electrode for direct electrochemistry and glucose sensing
paper_content:
Direct electrochemistry of a glucose oxidase (GOD)–graphene–chitosan nanocomposite was studied. The immobilized enzyme retains its bioactivity, exhibits a surface confined, reversible two-proton and two-electron transfer reaction, and has good stability, activity and a fast heterogeneous electron transfer rate with the rate constant (k_s) of 2.83 s^-1. A much higher enzyme loading (1.12 × 10^-9 mol/cm^2) is obtained as compared to the bare glassy carbon surface. This GOD–graphene–chitosan nanocomposite film can be used for sensitive detection of glucose. The biosensor exhibits a wider linearity range from 0.08 mM to 12 mM glucose with a detection limit of 0.02 mM and much higher sensitivity (37.93 μA mM^-1 cm^-2) as compared with other nanostructured supports. The excellent performance of the biosensor is attributed to large surface-to-volume ratio and high conductivity of graphene, and good biocompatibility of chitosan, which enhances the enzyme absorption and promotes direct electron transfer between redox enzymes and the surface of electrodes.
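Quantities such as the enzyme surface coverage and k_s quoted above are conventionally estimated from cyclic voltammetry; the relations below are the standard ones (a hedged sketch, not necessarily the authors' exact treatment):

\Gamma = \frac{Q}{nFA}

where Q is the charge from integrating the GOD redox peak, n = 2 for the two-electron FAD/FADH2 couple, F is the Faraday constant and A the electrode area; k_s is then typically extracted from the scan-rate dependence of the anodic-to-cathodic peak separation using Laviron's analysis.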
---
paper_title: Self-Assembled Graphene-Enzyme Hierarchical Nanostructures for Electrochemical Biosensing
paper_content:
The self-assembly of sodium dodecyl benzene sulphonate (SDBS) functionalized graphene sheets (GSs) and horseradish peroxidase (HRP) by electrostatic attraction into novel hierarchical nanostructures in aqueous solution is reported. Data from scanning electron microscopy, high-resolution transmission electron microscopy, and X-ray diffraction demonstrate that the HRP–GSs bionanocomposites feature ordered hierarchical nanostructures with well-dispersed HRP intercalated between the GSs. UV-vis and infrared spectra indicate the native structure of HRP is maintained after the assembly, implying good biocompatibility of SDBS-functionalized GSs. Furthermore, the HRP–GSs composites are utilized for the fabrication of enzyme electrodes (HRP–GSs electrodes). Electrochemical measurements reveal that the resulting HRP–GSs electrodes display high electrocatalytic activity to H2O2 with high sensitivity, wide linear range, low detection limit, and fast amperometric response. These desirable electrochemical performances are attributed to excellent biocompatibility and superb electron transport efficiency of GSs as well as high HRP loading and synergistic catalytic effect of the HRP–GSs bionanocomposites toward H2O2. As graphene can be readily non-covalently functionalized by "designer" aromatic molecules with different electrostatic properties, the proposed self-assembly strategy affords a facile and effective platform for the assembly of various biomolecules into hierarchically ordered bionanocomposites in biosensing and biocatalytic applications.
---
paper_title: A novel hydrogen peroxide biosensor based on Au-graphene-HRP-chitosan biocomposites
paper_content:
Graphene was prepared successfully by introducing –SO3^- to separate the individual sheets. TEM, EDS and Raman spectroscopy were utilized to characterize the morphology and composition of graphene oxide and graphene. To construct the H2O2 biosensor, graphene and horseradish peroxidase (HRP) were co-immobilized into biocompatible polymer chitosan (CS), then a glassy carbon electrode (GCE) was modified by the biocomposite, followed by electrodeposition of Au nanoparticles on the surface to fabricate Au/graphene/HRP/CS/GCE. Cyclic voltammetry demonstrated that the direct electron transfer of HRP was realized, and the biosensor had an excellent performance in terms of electrocatalytic reduction towards H2O2. The biosensor showed high sensitivity and fast response upon the addition of H2O2, under the conditions of pH 6.5, potential −0.3 V. The time to reach the stable-state current was less than 3 s, and the linear range to H2O2 was from 5 × 10^-6 M to 5.13 × 10^-3 M with a detection limit of 1.7 × 10^-6 M (S/N = 3). Moreover, the biosensor exhibited good reproducibility and long-term stability.
---
paper_title: A Graphene Platform for Sensing Biomolecules
paper_content:
Sensitive platform: The use of graphene oxide (GO) as a platform for the sensitive and selective detection of DNA and proteins is presented. The interaction of GO and dye-labeled single-stranded DNA leads to quenching of the dye fluorescence. Conversely, the presence of a target DNA or protein leads to the binding of the dye-labeled DNA and target, releasing the DNA from GO, thereby restoring the dye fluorescence (see picture).
---
paper_title: Direct Electrochemistry of Glucose Oxidase and Biosensing for Glucose Based on Graphene
paper_content:
We first reported that polyvinylpyrrolidone-protected graphene was dispersed well in water and had good electrochemical reduction toward O2 and H2O2. With glucose oxidase (GOD) as an enzyme model, we constructed a novel polyvinylpyrrolidone-protected graphene/polyethylenimine-functionalized ionic liquid/GOD electrochemical biosensor, which achieved the direct electron transfer of GOD, maintained its bioactivity and showed potential application for the fabrication of novel glucose biosensors with linear glucose response up to 14 mM.
---
paper_title: Disposable biosensor based on graphene oxide conjugated with tyrosinase assembled gold nanoparticles.
paper_content:
A highly efficient enzyme-based screen printed electrode (SPE) was obtained by using covalent attachment between 1-pyrenebutanoic acid, succinimidyl ester (PASE) adsorbing on the graphene oxide (GO) sheets and amines of tyrosinase-protected gold nanoparticles (Tyr-Au). Herein, the bi-functional molecule PASE was assembled onto GO sheets. Subsequently, the Tyr-Au was immobilized on the PASE-GO sheets forming a biocompatible nanocomposite, which was further coated onto the working electrode surface of the SPE. The characterization of obtained nanocomposite and modified SPE surface was investigated by atomic force microscopy (AFM), transmission electron microscopy (TEM) and scanning electron microscopy (SEM). Attributing to the synergistic effect of GO-Au integration and the good biocompatibility of the hybrid-material, the fabricated disposable biosensor (Tyr-Au/PASE-GO/SPE) exhibited a rapid amperometric response (less than 6 s) with a high sensitivity and good storage stability for monitoring catechol. This method shows a good linearity in the range from 8.3 × 10^-8 to 2.3 × 10^-5 M for catechol with a squared correlation coefficient of 0.9980, a quantitation limit of 8.2 × 10^-8 M (S/N=10) and a detection limit of 2.4 × 10^-8 M (S/N=3). The Michaelis-Menten constant was measured to be 0.027 mM. This disposable tyrosinase biosensor could offer a great potential for rapid, cost-effective and on-field analysis of phenolic compounds.
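The figures of merit quoted here follow the usual definitions; a brief hedged sketch (σ is the standard deviation of the blank signal, S the calibration slope, I the steady-state current at catechol concentration C):

\mathrm{LOD} = \frac{3\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\sigma}{S}, \qquad I = \frac{I_{max}\,C}{K_M^{app} + C}

with the apparent Michaelis–Menten constant K_M^{app} commonly obtained from a Lineweaver–Burk (1/I versus 1/C) plot of the amperometric response.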
---
paper_title: Catalyst-Free Efficient Growth, Orientation and Biosensing Properties of Multilayer Graphene Nanoflake Films with Sharp Edge Planes
paper_content:
We report a novel microwave plasma enhanced chemical vapor deposition strategy for the efficient synthesis of multilayer graphene nanoflake films (MGNFs) on Si substrates. The constituent graphene nanoflakes have a highly graphitized knife-edge structure with a 2-3 nm thick sharp edge and show a preferred vertical orientation with respect to the Si substrate as established by near-edge X-ray absorption fine structure spectroscopy. The growth rate is approximately 1.6 μm min^-1, which is 10 times faster than the previously reported best value. The MGNFs are shown to demonstrate fast electron-transfer (ET) kinetics for the Fe(CN)6^(3-/4-) redox system and excellent electrocatalytic activity for simultaneously determining dopamine (DA), ascorbic acid (AA) and uric acid (UA). Their biosensing DA performance in the presence of common interfering agents AA and UA is superior to other bare solid-state electrodes and is comparable only to that of edge plane pyrolytic graphite. Our work here establishes that the abundance of graphitic edge planes/defects is essentially responsible for the fast ET kinetics, active electrocatalytic and biosensing properties. This novel edge-plane-based electrochemical platform with the high surface area and electrocatalytic activity offers great promise for creating a revolutionary new class of nanostructured electrodes for biosensing, biofuel cells and energy-conversion applications.
---
paper_title: Electric Field Effect in Atomically Thin Carbon Films
paper_content:
Electric Field Effect in Atomically Thin Carbon Films
---
paper_title: Improved Detection Limit and Stability of Amperometric Carbon Nanotube-Based Immunosensors by Crosslinking Antibodies with Polylysine.
paper_content:
Amperometric immunosensor configurations featuring covalently bound anti-biotin antibodies (Ab) embedded into a polylysine (PLL)-single walled carbon nanotube (SWCNT) composite layer were evaluated. Assemblies were made by first oxidizing pyrolytic graphite (PG) electrodes to form surface carboxylic acid groups, to which PLL, SWCNTs and anti-biotin were covalently linked. Incorporating SWCNT into PLL-antibody assemblies improved the amperometric detection limit for biotin (Ag) labeled with horseradish peroxidase to 10 fmol mL^-1. Anti-biotin embedded into the PLL matrix had improved thermal stability and retained its binding ability for biotin after exposure to temperatures of 42 °C for up to 3 hours, while the noncrosslinked antibody was inactivated at this temperature in several minutes.
---
paper_title: Amperometric biosensing of glutamate using carbon nanotube based electrode
paper_content:
Amperometric biosensing of glutamate using nanobiocomposite derived from multiwall carbon nanotube (CNT), biopolymer chitosan (CHIT), redox mediator Meldola's Blue (MDB) and glutamate dehydrogenase (GlDH) is described. The CNT composite electrode shows a reversible voltammetric response for the redox reaction of MDB at −0.15 V; the composite electrode efficiently mediates the oxidation of NADH at −0.07 V, which is 630 mV less positive than that on an unmodified glassy carbon (GC) electrode. The CNTs in the composite electrode facilitate the mediated electron transfer for the oxidation of NADH. The CNT composite electrode is highly sensitive (5.9 ± 1.52 nA/μM) towards NADH and it could detect as low as 0.5 μM of NADH in neutral pH. The CNT composite electrode is highly stable and does not undergo deactivation by the oxidation products. The electrode does not suffer from the interference due to other anionic electroactive compounds such as ascorbate (AA) and urate (UA). Separate voltammetric peaks have been observed for NADH, AA and UA, allowing the individual or simultaneous determination of these bioanalytes. The glutamate biosensor was developed by combining the electrocatalytic activity of the composite film and GlDH. The enzymatically generated NADH was electrocatalytically detected using the biocomposite electrode. Glutamate has been successfully detected at −0.1 V without any interference. The biosensor is highly sensitive, stable and shows linear response. The sensitivity and the limit of detection of the biosensor were 0.71 ± 0.08 nA/μM and 2 μM, respectively.
---
paper_title: Hydrogen peroxide sensor based on modified vitreous carbon with multiwall carbon nanotubes and composites of Pt nanoparticles–dopamine
paper_content:
Sensors using nanostructured materials have been under development in the last decade due to their selectivity for the detection and quantification of different compounds. The physical and chemical characteristics of carbon nanotubes provide significant advantages when used as electrodes for electronic devices, fuel cells and electrochemical sensors. This paper presents preliminary results on the modification of vitreous carbon electrodes with Multiwall Carbon Nanotubes (MWCNTs) and composites of Pt nanoparticles–dopamine (DA) as electro-catalytic materials for the hydrogen peroxide (H2O2) reaction. Chemical pre-treatment and consequent functionalization of MWCNTs with carboxylic groups was necessary to increase the distribution of the composites. In addition, the presence of DA was important to protect the active sites and eliminate the passivation of the surface after the electro-oxidation of H2O2 takes place. The proposed H2O2 sensor exhibited a linear response in the 0–5 mM range, with detection and quantification limits of 0.3441 mM and 1.1472 mM, respectively.
---
paper_title: Electrochemical behaviors of amino acids at multiwall carbon nanotubes and Cu2O modified carbon paste electrode.
paper_content:
A carbon paste electrode modified with multiwall carbon nanotubes and copper(I) oxide (MWCNT-Cu2O CPME) was fabricated, and the electrochemical behaviors of 19 kinds of natural amino acids at this modified electrode were studied. The experimental results showed that the various kinds of amino acids without any derivatization displayed obvious oxidation current responses at the modified electrode. It was also found that the current response values of amino acids were dependent mainly on pH values of buffer solutions. The phenomenon could be explained by the fact that the amino acids suffered complexation or electrocatalytic oxidation processes under different pH values. Six kinds of amino acids (arginine, tryptophan, histidine, threonine, serine, and tyrosine), which performed high-oxidation current responses in alkaline buffers, were selected to be detected simultaneously by capillary zone electrophoresis coupled with amperometric detection (CZE-AD). These amino acids could be perfectly separated within 20 min, and their detection limits were as low as 10^-7 or 10^-8 mol L^-1 magnitude (signal/noise ratio=3). The above results demonstrated that MWCNT-Cu2O CPME could be successfully employed as an electrochemical sensor for amino acids with some advantages of convenient preparation, high sensitivity, and good repeatability.
---
paper_title: Carbon-nanotubes doped polypyrrole glucose biosensor
paper_content:
We report on the one-step preparation route of amperometric enzyme electrodes based on incorporating a carbon-nanotube (CNT) dopant and the biocatalyst within an electropolymerized polypyrrole film. Cyclic voltammetric growth profiles indicate that the anionic CNT is incorporated within the growing film for maintaining its electrical neutrality. The entrapment of the CNT has little effect upon the electropolymerization rate and redox properties of the resulting film. The CNT dopant retains its electrocatalytic activity to impart high sensitivity and selectivity. Linearity prevails up to ca. 50 mM glucose, with a slight curvature thereafter. Relevant parameters of the film preparation were examined and optimized. Such an electropolymerization avenue represents a simple, one-step route for preparing enzyme electrodes and should further facilitate the widespread production of CNT-based electrochemical biosensors.
---
paper_title: Aligned carbon nanotube–DNA electrochemical sensors
paper_content:
Single-strand DNA chains were chemically grafted onto aligned carbon nanotube electrodes, leading to novel aligned carbon nanotube–DNA sensors of a high sensitivity and selectivity for probing complementary DNA and target DNA chains of specific sequences.
---
paper_title: Synthesis of individual single-walled carbon nanotubes on patterned silicon wafers
paper_content:
Recent progress1,2,3 in the synthesis of high-quality single-walled carbon nanotubes4 (SWNTs) has enabled the measurement of their physical and materials properties5,6,7,8. The idea that nanotubes might be integrated with conventional microstructures to obtain new types of nanoscale devices, however, requires an ability to synthesize, isolate, manipulate and connect individual nanotubes. Here we describe a strategy for making high-quality individual SWNTs on silicon wafers patterned with micrometre-scale islands of catalytic material. We synthesize SWNTs by chemical vapour deposition of methane on the patterned substrates. Many of the synthesized nanotubes are perfect, individual SWNTs with diameters of 1–3 nm and lengths of up to tens of micrometres. The nanotubes are rooted in the islands, and are easily located, characterized and manipulated with the scanning electron microscope and atomic force microscope. Some of the SWNTs bridge two metallic islands, offering the prospect of using this approach to develop ultrafine electrical interconnects and other devices.
---
paper_title: Low-Potential Stable NADH Detection at Carbon-Nanotube-Modified Glassy Carbon Electrodes
paper_content:
Carbon-nanotube (CNT) modified glassy-carbon electrodes exhibiting strong and stable electrocatalytic response toward NADH are described. A substantial (490 mV) decrease in the overvoltage of the NADH oxidation reaction (compared to ordinary carbon electrodes) is observed using single-wall and multi-wall carbon-nanotube coatings, with oxidation starting at ca. −0.05 V (vs. Ag/AgCl; pH 7.4). Furthermore, the NADH amperometric response of the coated electrodes is extremely stable, with 96 and 90% of the initial activity remaining after 60 min stirring of 2 × 10^-4 M and 5 × 10^-3 M NADH solutions, respectively (compared to 20 and 14% at the bare surface). The CNT-coated electrodes thus allow highly-sensitive, low-potential, stable amperometric sensing. Such ability of carbon-nanotubes to promote the NADH electron-transfer reaction suggests great promise for dehydrogenase-based amperometric biosensors.
---
paper_title: Enzyme-Doped Graphene Nanosheets for Enhanced Glucose Biosensing
paper_content:
In this work, we report the enhanced performance of polypyrrole−graphene−glucose oxidase based enzymatic biosensors employed for in vitro electrochemical glucose detection. Initially, graphene nanosheets were chemically synthesized and surface morphologies were characterized by several physical methods. Following this, graphene nanosheets were covalently conjugated to an enzyme model glucose oxidase (GOD). The presence of various reactive functionalities such as ketonic, quinonic, and carboxylic functional groups on the edge plane of graphene easily binds with the free amine terminals of the glucose oxidase to result in a strong covalent amide linkage. Further, this covalent conjugation of the enzyme to graphene was confirmed by FT-IR measurements. Following this, the surface of a glassy carbon electrode was modified by polypyrrole. Later, the conjugated graphene−GOD were then immobilized onto the glassy carbon electrode surface already modified with polypyrrole (Ppy) and the entire assembly was employed ...
---
paper_title: SU-8 based microprobes with integrated planar electrodes for enhanced neural depth recording
paper_content:
Abstract Here, we describe new fabrication methods aimed to integrate planar tetrode-like electrodes into a polymer SU-8 based microprobe for neuronal recording applications. New concepts on the fabrication sequences are introduced in order to eliminate the typical electrode–tissue gap associated to the passivation layer. Optimization of the photolithography technique and high step coverage of the sputtering process have been critical steps in this new fabrication process. Impedance characterization confirmed the viability of the electrodes for reliable neuronal recordings with values comparable to commercial probes. Furthermore, a homogeneous sensing behavior was obtained in all the electrodes of each probe. Finally, in vivo action potential and local field potential recordings were successfully obtained from the rat dorsal hippocampus. Peak-to-peak amplitude of action potentials ranged from noise level to up to 400–500 μV. Moreover, action potentials of different amplitudes and shapes were recorded from all the four recording sites, suggesting improved capability of the tetrode to distinguish from different neuronal sources.
---
paper_title: Conductive polymer-based sensors for biomedical applications.
paper_content:
A class of organic polymers, known as conducting polymers (CPs), has become increasingly popular due to its unique electrical and optical properties. Material characteristics of CPs are similar to those of some metals and inorganic semiconductors, while retaining polymer properties such as flexibility, and ease of processing and synthesis, generally associated with conventional polymers. Owing to these characteristics, research efforts in CPs have gained significant traction to produce several types of CPs since its discovery four decades ago. CPs are often categorised into different types based on the type of electric charges (e.g., delocalized pi electrons, ions, or conductive nanomaterials) responsible for conduction. Several CPs are known to interact with biological samples while maintaining good biocompatibility and hence, they qualify as interesting candidates for use in a numerous biological and medical applications. In this paper, we focus on CP-based sensor elements and the state-of-art of CP-based sensing devices that have potential applications as tools in clinical diagnosis and surgical interventions. Representative applications of CP-based sensors (electrochemical biosensor, tactile sensing 'skins', and thermal sensors) are briefly discussed. Finally, some of the key issues related to CP-based sensors are highlighted.
---
paper_title: Conducting electroactive polymer-based biosensors
paper_content:
Conductive electroactive polymers are materials discovered just on two decades ago. Originally heralded for their high conductivity/weight ratio, it is the unique chemical properties they possess that now arouse much attention. The ability to synthesise these materials under mild conditions enables a range of biological moieties (enzymes, antibodies and even whole living cells) to be incorporated into the polymer structure. The unique electronic properties then allow direct and interactive communication with the biochemistries incorporated to produce a range of analytical signals. This work reviews the options available for immobilisation of biocomponents and signal generation using conducting polymer-based biosensors.
---
paper_title: Synthesis of electrically conducting organic polymers: halogen derivatives of polyacetylene, (CH)x
paper_content:
When silvery films of the semiconducting polymer, trans-'polyacetylene', (CH)x, are exposed to chlorine, bromine, or iodine vapour, uptake of halogen occurs, and the conductivity increases markedly (over seven orders of magnitude in the case of iodine) to give, depending on the extent of halogenation, silvery or silvery-black films, some of which have a remarkably high conductivity at room temperature.
---
paper_title: Immobilization of biotinylated biomolecules onto electropolymerized poly(pyrrole-nitrilotriacetic acid)―Cu2+ film
paper_content:
A novel and simple immobilization strategy for biotinylated biological macromolecules onto electropolymerized poly(pyrrole-nitrilotriacetic acid) (NTA)–Cu2+ films without avidin as connecting bridge is reported. After complexation of Cu2+ by the polymerized NTA chelator, biotinylated biomolecules were immobilized by coordination of the biotin groups on the NTA–Cu2+ complex. The anchoring of biotinylated glucose oxidase was demonstrated by fluorescent characterization via FITC-labeled avidin and amperometric measurement of glucose. The resulting calibration curve led to a sensitivity and maximum current density values of 0.6 mA mol^-1 L cm^-2 and 13.2 μA cm^-2, respectively. Thus, biotinylated polyphenol oxidase was fixed leading to a catechol sensor with a sensitivity of 656 mA mol^-1 L cm^-2 and maximum current density of 25.4 μA cm^-2. This system was also applied to the efficient immobilization of biotinylated DNA, illustrated by impedimetric detection of the formation of the DNA duplex.
---
paper_title: Galvanostatic Entrapment of Sulfite Oxidase into Ultrathin Polypyrrole Films for Improved Amperometric Biosensing of Sulfite
paper_content:
The entrapment of sulfite oxidase (SOx) into ultrathin polypyrrole (PPy) films of 27–135 nm thickness has been successfully used for amperometric biosensing of sulfite with considerably improved performance. Optimum galvanostatic entrapment was accomplished in an electrolyte-free solution which contained 0.1 M pyrrole and 5 U/mL of SOx with a polymerization period of 120 seconds and an applied current density of 0.2 mA cm−2. Evidence of the incorporation and retention of SOx in the ultrathin PPy film was obtained by scanning electron microscopy, cyclic voltammetry and amperometric measurements. Entrapment of the enzyme in a 54 nm thick PPy-SOx film gave optimum amperometric response for sulfite and enabled the detection of as little as 0.9 μM of sulfite with a linear concentration range of 0.9 to 400 μM. The successful application of the biosensor to the determination of sulfite in beer and wine samples is reported. Comparison with a spectrophotometric method indicates that the biosensor was superior for the determination of sulfite in red wine.
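Because the deposition is galvanostatic, the charge density passed, which governs the film thickness, follows directly from the quoted conditions:

Q = j\,t = 0.2\ \mathrm{mA\,cm^{-2}} \times 120\ \mathrm{s} = 24\ \mathrm{mC\,cm^{-2}}

and, to a first approximation, the polypyrrole thickness scales with this charge density, which is presumably how films spanning the 27–135 nm range were obtained by varying the deposition conditions.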
---
paper_title: Screen-printed microsystems for the ultrasensitive electrochemical detection of alkaline phosphatase
paper_content:
Screen printing technique has been used to manufacture a microsystem where the graphite-based electrodes hold both a functional and an architectural task. The thick film manufacturing technique has proved valid to develop a very low volume (ca. 20 μL) device where different electrochemical operations can be very efficiently performed. Biomolecule immobilisation within the microsystem for biosensors applications has been explored by inducing and optimizing the in situ generation of a potential pulse polypyrrole electropolymerised film entrapping either glucose oxidase or glucose dehydrogenase. This biomodified microsystem was applied to the ultrasensitive electrochemical detection of alkaline phosphatase yielding limits of detection below 10^-12 M for glucose oxidase and of 10^-15 M for glucose dehydrogenase modified systems, within 15 min of incubation time. The results obtained showed the advantages of using low volume microsystems in combination with an optimised polypyrrole-enzyme film, which displayed a good immobilisation efficiency in conjunction with a good diffusion of species through. Ultrasensitive detection of AP in combination with a stable and reproducible surface modification for entrapping of biomolecules opens the window for new electrochemical detection platform with great potential for integrated biosensor applications.
---
| Title: Interfacial Structures and Properties of Organic Materials for Biosensors: An Overview
Section 1: Introduction
Description 1: This section will provide an overview of biosensors, their components, and applications.
Section 2: Types of Biosensors
Description 2: This section will discuss various types of biosensors and their sensing mechanisms.
Section 3: Resonant Biosensors
Description 3: This section will describe resonant biosensors, including surface acoustic wave sensors, magnetoelastic sensors, and quartz crystal biosensors.
Section 4: Thermal Detection Biosensors
Description 4: This section will explore the principles and applications of thermal detection biosensors.
Section 5: Photometric Biosensors
Description 5: This section will focus on photometric biosensors, particularly those using surface plasmon resonance and other optical properties.
Section 6: Ion-Sensitive FETs Biosensors
Description 6: This section will cover the development and functioning of ion-sensitive field-effect transistors (ISFETs) for biosensing.
Section 7: Electrochemical Biosensors
Description 7: This section will delve into the principles and types of electrochemical biosensors, including potentiometric, amperometric, and impedimetric sensors.
Section 8: Self-Assembled Monolayer
Description 8: This section will focus on the use of self-assembled monolayers (SAMs) in biosensors, including their chemistry, assembly, and biomolecule attachment strategies.
Section 9: Chemistry of SAM
Description 9: This section will describe different SAM systems based on their chemical characteristics, such as alkanethiols, organosilanes, hydrosilylation, and aryl diazonium.
Section 10: Immobilization of Biomolecules to SAM Biosensor Systems
Description 10: This section will discuss methods for immobilizing biomolecules onto SAMs through non-covalent and covalent bonds.
Section 11: Other Materials
Description 11: This section will introduce other organic materials used in biosensors, such as graphene, carbon nanotubes (CNTs), and conductive polymers.
Section 12: Conclusions
Description 12: This section will summarize the review and discuss the future prospects of biosensors and their applications. |
Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing | 8 | ---
paper_title: Social Cognition in Humans
paper_content:
We review a diversity of studies of human social interaction and highlight the importance of social signals. We also discuss recent findings from social cognitive neuroscience that explore the brain basis of the capacity for processing social signals. These signals enable us to learn about the world from others, to learn about other people, and to create a shared social world. Social signals can be processed automatically by the receiver and may be unconsciously emitted by the sender. These signals are non-verbal and are responsible for social learning in the first year of life. Social signals can also be processed consciously and this allows automatic processing to be modulated and overruled. Evidence for this higher-level social processing is abundant from about 18 months of age in humans, while evidence is sparse for non-human animals. We suggest that deliberate social signalling requires reflective awareness of ourselves and awareness of the effect of the signals on others. Similarly, the appropriate reception of such signals depends on the ability to take another person's point of view. This ability is critical to reputation management, as this depends on monitoring how our own actions are perceived by others. We speculate that the development of these high level social signalling systems goes hand in hand with the development of consciousness.
---
paper_title: An Introduction to the Physiology of Hearing
paper_content:
This book deals with the way that the auditory system processes acoustic signals. The current edition has been thoroughly revised to reflect the progress that has been made since the previous edition. Particularly major updates have been made in the following areas: cochlear function, including cochlear mechanics, hair cell function and mechanisms of transduction; the auditory central nervous system, a major area of advance in recent years; physiological correlates of auditory perception, including speech perception; and, cochlear pathophysiology and sensorineural hearing loss, including the restoration of hearing by electrical stimulation of the ear, and molecular and cellular approaches to hair cell repair, replacement, and regeneration.A reading scheme has been provided to guide readers to the section most appropriate for their interests. The book is written so that those entering auditory research from very little background in auditory neuroscience are able to understand the current research issues and research literature. It is also intended to be a source book and reference work for advanced undergraduates studying the special senses, and for clinicians in the speciality of Otorhinolaryngology.It offers a contemporary look at the physiology of hearing: each chapter has been thoroughly revised. It is an excellent reading companion to practitioners and scholars. It is also suitable for those undertaking auditory research. It includes a reading scheme to guide readers through the book.
---
paper_title: Cognitive modelling of human social signals
paper_content:
The paper defines as "social signal" a communicative or informative signal that, either directly or indirectly, conveys information about social actions, social interactions, social emotions, social attitudes and social relationships. It proposes a conceptual definition of these social facts and some examples of relevant social signals.
---
paper_title: Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship
paper_content:
Interfaces that talk and listen are populating computers, cars, call centers, and even home appliances and toys, but voice interfaces invariably frustrate rather than help. In Wired for Speech, Clifford Nass and Scott Brave reveal how interactive voice technologies can readily and effectively tap into the automatic responses all speech -- whether from human or machine -- evokes. Wired for Speech demonstrates that people are "voice-activated": we respond to voice technologies as we respond to actual people and behave as we would in any social situation. By leveraging this powerful finding, voice interfaces can truly emerge as the next frontier for efficient, user-friendly technology.Wired for Speech presents new theories and experiments and applies them to critical issues concerning how people interact with technology-based voices. It considers how people respond to a female voice in e-commerce (does stereotyping matter?), how a car's voice can promote safer driving (are "happy" cars better cars?), whether synthetic voices have personality and emotion (is sounding like a person always good?), whether an automated call center should apologize when it cannot understand a spoken request ("To Err is Interface; To Blame, Complex"), and much more. Nass and Brave's deep understanding of both social science and design, drawn from ten years of research at Nass's Stanford laboratory, produces results that often challenge conventional wisdom and common design practices. These insights will help designers and marketers build better interfaces, scientists construct better theories, and everyone gain better understandings of the future of the machines that speak with us.
---
paper_title: Socially aware computation and communication
paper_content:
By building machines that understand social signaling and social context, we can dramatically improve collective decision making and help keep remote users 'in the loop.' I will describe three systems that have a substantial understanding of social context, and use this understanding to improve human group performance. The first system is able to interpret social displays of interest and attraction, and uses this information to improve conferences and meetings. The second is able to infer friendship, acquaintance, and workgroup relationships, and uses this to help people build social capital. The third is able to examine human interactions and categorize participants' attitudes (attentive, agreeable, determined, interested, etc.), and uses this information to proactively promote group cohesion and to match participants on the basis of their compatibility.
---
paper_title: The mirror-neuron system
paper_content:
A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others’ actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism—the mirror-neuron mechanism—that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language.
---
paper_title: Social Intelligence: The New Science of Success
paper_content:
Foreword. Acknowledgment. Preface. 1. A Different Kind of "Smart". Old Wine in New Bottles? Going Beyond IQ. EI, SI, or Both? From Toxic to Nourishing. Blind Spots, Lenses, and Filters. Social Halitosis, Flatulence, and Dandruff. The "Dilbert" Factor. Can We Become a Socially Smarter Species? S.P.A.C.E.:The Skills of Interaction. 2. "S" Stands for Situational Awareness. Situational Dumbness and Numbness. Ballistic Podiatry: Making the Worst of a Situation. Reading the Social Context. What to Look For. The Proxemic Context. The Behavioral Context. The Semantic Context. Navigating Cultures and Subcultures. Codes of Conduct: Violate the Rules at Your Peril. Building the Skills of Situational Awareness. 3. "P" Stands for Presence. Being There. Is Charisma Over-Rated? Do Looks Matter? Reading (and Shaping) the "Rules of Engagement". The Ugly American Syndrome. More of You, Less of Me. A Case of Attitude. Building the Skills of Presence. 4. "A" Stands for Authenticity. Take a Tip from Popeye. It's a Beautiful Day in the SI Neighborhood. The Snap-On Smile: Can You Fake Sincerity? Left-Handed Compliments. The Puppy Dog Syndrome. Narcissism: It's Really All About Me. Head Games, Power Struggles, and Manipulation. Building the Skills of Authenticity. 5. "C" Stands for Clarity. A Way with Words. Hoof-in-Mouth Disease: Sometimes Silence Works Best. Role-Speak and Real-Speak. Helicopter Language and Elevator Speeches. "Clean" Language and "Dirty" Language. Verbal Bludgeons. Taking a Brain for a Walk. The Power of Metaphor. E-Prime: the Language of Sanity. Building the Skills of Clarity. 6. "E" Stands for Empathy. What Destroys Empathy? What Builds Empathy? The Platinum Rule. The Irony of Empathic Professions. L.E.A.P.S.: Empathy by Design. Empathy in Four Minutes. Building the Skills of Empathy. 7. Assessing and Developing SI. Assessing Your Interaction Skills. Self-Awareness: Seeing Yourself as Others See You. Assessing Your Interaction Style: Drivers, Energizers, Diplomats, and Loners. The Strength-Weakness Irony. Priorities for Improvement. 8. SI in the World of Work: Some Reflections. The Real and Legal Consequences of Social Incompetence. Cultures of Conflict and Craziness. Hierarchies, Testosterone, and Gender Politics. Getting it Right at Work and Wrong at Home. The Diversity Puzzle. Ritual, Ceremony, and Celebration. Positive Politics: Getting Ahead with Your Value System Intact. 9. SI in Charge: Thoughts on Developing Socially Intelligent Leaders. The S.O.B. Factor. Executive Hubris: Its Costs and Consequences. Best Boss, Worst Boss. P.O.W.E.R.: Where It Comes From, How to Get It. How the Worst Bastards on the Planet Get and Keep Power. The Algebra of Influence. S.P.I.C.E.: Leading When You're Not In Charge. 10. SI and Conflict: Thoughts About Getting Along. The Double Spiral of Conflict. Why Argue? Crucial Conversations. Added Value Negotiating. Epilogue. SI and the Next Generation: Who's Teaching Our Kids? Our Children Are Not Our Children. The (Only) Ten Basic News Stories. Anxiety Drives Attention. Breaking the Addiction to Television. The Buying of Our Babies. Video Games:The New Sandlot. Teachers, Parents, or Neither? Belonging or Be Longing? The S.P.A.C.E. Solution for Schools. A Prescription for SI at Every Age. Index. About the Author.
---
paper_title: Social Computing: From Social Informatics to Social Intelligence
paper_content:
Social computing represents a new computing paradigm and an interdisciplinary research and application field. Undoubtedly, it will strongly influence system and software developments in the years to come. We expect that social computing's scope will continue to expand and its applications to multiply. From both theoretical and technological perspectives, social computing technologies move beyond social information processing towards emphasizing social intelligence. As we've discussed, the move from social informatics to social intelligence is achieved by modeling and analyzing social behavior, by capturing human social dynamics, and by creating artificial social agents and generating and managing actionable social knowledge.
---
paper_title: Social Signal Processing: Survey of an Emerging Domain
paper_content:
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
---
paper_title: Selection for universal facial emotion
paper_content:
Facial expression is heralded as a communication system common to all human populations, and thus is generally accepted as a biologically based, universal behavior. Happiness, sadness, fear, anger, surprise, and disgust are universally recognized and produced emotions, and communication of these states is deemed essential in order to navigate the social environment. It is puzzling, however, how individuals are capable of producing similar facial expressions when facial musculature is known to vary greatly among individuals. Here, the authors show that although some facial muscles are not present in all individuals, and often exhibit great asymmetry (larger or absent on one side), the facial muscles that are essential in order to produce the universal facial expressions exhibited 100% occurrence and showed minimal gross asymmetry in 18 cadavers. This explains how universal facial expression production is achieved, implies that facial muscles have been selected for essential nonverbal communicative function, and yet also accommodate individual variation.
---
paper_title: Social signal processing: What are the relevant variables? And in what ways do they relate?
paper_content:
Studies of the processing of social signals and behaviour tend to focus intuitively on a few variables, without a framework to guide selection. Here, we attempt to provide a broad overview of the relevant variables, describing both signs and what they signify. Those are matched by systematic consideration of how the variables relate. Variables interact not only on an intrapersonal level but also on an interpersonal level. It is also recognised explicitly that a comprehensive framework needs to embrace the role of context and individual differences in personality and culture.
---
paper_title: Crossmodal binding of fear in voice and face
paper_content:
In social environments, multiple sensory channels are simultaneously engaged in the service of communication. In this experiment, we were concerned with defining the neuronal mechanisms for a perceptual bias in processing simultaneously presented emotional voices and faces. Specifically, we were interested in how bimodal presentation of a fearful voice facilitates recognition of fearful facial expression. By using event-related functional MRI, that crossed sensory modality (visual or auditory) with emotional expression (fearful or happy), we show that perceptual facilitation during face fear processing is expressed through modulation of neuronal responses in the amygdala and the fusiform cortex. These data suggest that the amygdala is important for emotional crossmodal sensory convergence with the associated perceptual bias during fear processing, being mediated by task-related modulation of face-processing regions of fusiform cortex.
---
paper_title: Integrating Face and Voice in Person Perception
paper_content:
Integration of information from face and voice plays a central role in our social interactions. It has been mostly studied in the context of audiovisual speech perception: integration of affective or identity information has received comparatively little scientific attention. Here, we review behavioural and neuroimaging studies of face-voice integration in the context of person perception. Clear evidence for interference between facial and vocal information has been observed during affect recognition or identity processing. Integration effects on cerebral activity are apparent both at the level of heteromodal cortical regions of convergence, particularly bilateral posterior superior temporal sulcus (pSTS), and at 'unimodal' levels of sensory processing. Whether the latter reflects feedback mechanisms or direct crosstalk between auditory and visual cortices is as yet unclear.
---
paper_title: Nonverbal Behavior and Nonverbal Communication: What do Conversational Hand Gestures Tell Us?
paper_content:
This chapter explores how gestures contribute to comprehension, how gesturing affects speech, and what can be learned from studying conversational gestures. The primary function of conversational hand gestures is to aid in the formulation of speech. Gestures can also convey nonsemantic information. The study of speech and gestures overlaps with the study of person perception and attribution processes. The significance of gestures can be ambiguous, and this ambiguity affects the meanings and consequences attributed to the observed gestures. A typology of gestures distinguishes adapters, symbolic gestures, and conversational gestures. Within conversational gestures, different types can be distinguished, namely motor movements and lexical movements. Conversational hand gestures have been assumed to convey semantic information. Several studies that attempt to assess the kinds of information conversational gestures convey to naive observers, and the extent to which gestures enhance the communicativeness of spoken messages, are described in the chapter.
---
paper_title: The Effects of the Gesture Viewpoint on the Students' Memory of Words and Stories
paper_content:
The goal of this work is to estimate the effects of a teacher's iconic gestures on students' memory of words and short stories. Some evidence seems to confirm that iconics help the listener, but it is unclear which elements make gestures more or less useful. Following McNeill's observation that children produce many more Character Viewpoint gestures than Observer Viewpoint ones, we hypothesize that they also understand and remember better the words accompanied by these gestures. The results of two experimental studies showed that iconic gestures helped students remember words and tales better, and that younger students performed better in memory tasks when their teacher used Character Viewpoint gestures.
---
paper_title: Implementing Expressive Gesture Synthesis for Embodied Conversational Agents
paper_content:
We aim at creating an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we have described the gesture selection process. In this paper, we present a computational model of gesture quality. Once a certain gesture has been chosen for execution, how can we modify it to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of psychology literature. We provide a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies that evaluate the appropriateness of our implementation for each dimension of expressivity as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
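A minimal sketch of the general idea (scaling a gesture's keyframes by a few expressivity parameters) is given below; the dimension names (spatial extent, temporal extent, power) and the linear scaling rules are simplifying assumptions for illustration and do not reproduce the authors' animation system or gesture modeling language.

# Illustrative sketch: modulating a gesture keyframe trajectory with a few
# expressivity parameters (spatial extent, temporal extent, power).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Expressivity:
    spatial_extent: float = 1.0   # amplitude of the movement
    temporal_extent: float = 1.0  # overall duration scaling
    power: float = 1.0            # stroke emphasis / speed

def apply_expressivity(keyframes: List[Tuple[float, float, float]],
                       expr: Expressivity,
                       rest_pose: Tuple[float, float] = (0.0, 0.0)):
    """Each keyframe is (time, x, y); return modulated keyframes."""
    out = []
    for t, x, y in keyframes:
        # Amplify displacement from the rest pose for larger spatial extent.
        dx = (x - rest_pose[0]) * expr.spatial_extent
        dy = (y - rest_pose[1]) * expr.spatial_extent
        # Stretch or compress timing; higher power shortens the stroke.
        new_t = t * expr.temporal_extent / max(expr.power, 1e-6)
        out.append((new_t, rest_pose[0] + dx, rest_pose[1] + dy))
    return out

# Example: make a beat gesture wider and faster while keeping its shape.
beat = [(0.0, 0.0, 0.0), (0.3, 0.2, 0.4), (0.6, 0.0, 0.1)]
print(apply_expressivity(beat, Expressivity(spatial_extent=1.5, power=2.0)))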
---
paper_title: Communicative Feedback Phenomena across Languages and Modalities
paper_content:
This research deals with human communicative behaviour related to feedback, analysed across languages (Italian and Swedish), modalities (auditory versus visual) and different communicative situations (human-human versus human-machine dialogues). The aim is to give more insight into how humans express feedback and at the same time suggest a method to collect valuable data that can be useful to control facial and head movements related to visual feedback in synthetic conversational agents. The analysed data range from spontaneous conversations video-recorded in real communicative situations, through semi-spontaneous dialogues obtained with different eliciting techniques, to a specific corpus of controlled interactive speech collected by means of a motion capture system. A specific coding scheme has been developed, tested and used to annotate feedback with the support of different available software packages for audiovisual analysis. This study should be especially useful to professionals in Speech Communication and Speech Technology, but psychologists studying human behavior in human-human and human-machine interactions might also find it interesting.
---
paper_title: Speaking: From Intention to Articulation
paper_content:
In Speaking, Willem "Pim" Levelt, Director of the Max-Planck-Institut fur Psycholinguistik, accomplishes the formidable task of covering the entire process of speech production, from constraints on conversational appropriateness to articulation and self-monitoring of speech. Speaking is unique in its balanced coverage of all major aspects of the production of speech, in the completeness of its treatment of the entire speech process, and in its strategy of exemplifying rather than formalizing theoretical issues.
---
paper_title: Honest Signals: How They Shape Our World
paper_content:
How can you know when someone is bluffing? Paying attention? Genuinely interested? The answer, writes Sandy Pentland in Honest Signals, is that subtle patterns in how we interact with other people reveal our attitudes toward them. These unconscious social signals are not just a back channel or a complement to our conscious language; they form a separate communication network. Biologically based "honest signaling," evolved from ancient primate signaling mechanisms, offers an unmatched window into our intentions, goals, and values. If we understand this ancient channel of communication, Pentland claims, we can accurately predict the outcomes of situations ranging from job interviews to first dates. Pentland, an MIT professor, has used a specially designed digital sensor worn like an ID badge (a "sociometer") to monitor and analyze the back-and-forth patterns of signaling among groups of people. He and his researchers found that this second channel of communication, revolving not around words but around social relations, profoundly influences major decisions in our lives, even though we are largely unaware of it. Pentland presents the scientific background necessary for understanding this form of communication, applies it to examples of group behavior in real organizations, and shows how by "reading" our social networks we can become more successful at pitching an idea, getting a job, or closing a deal. Using this "network intelligence" theory of social signaling, Pentland describes how we can harness the intelligence of our social network to become better managers, workers, and communicators.
---
paper_title: Laws of organization in perceptual forms.
paper_content:
Theoretically I might say there were 327 brightnesses and nuances of colour. Do I have "327"? No. I have sky, house, and trees. It is impossible to achieve "327" as such. And yet even though such droll calculation were possible and implied, say, for the house 120, the trees 90, the sky 117, I should at least have this arrangement and division of the total, and not, say, 127 and 100 and 100; or 150 and 177.
---
paper_title: Challenges ahead: head movements and other social acts in conversation
paper_content:
When involved in face-to-face conversations, people move their heads in typical ways. The pattern of head gestures and their function in conversation has been studied in various disciplines. Many factors are involved in determining the exact patterns that occur in conversation. These can be explained by considering some of the basic properties of face-to-face interactions. The fact that conversations are a type of joint activity involving social actions together with a few other properties, such as the need for grounding, can explain the variety in functions that are served by the multitude of movements that people display during conversations.
---
paper_title: Multimodality in Own Communication Management
paper_content:
This study examines how gestures (here defined in a wide sense, including all body movements that have a communicative function) are used for Own Communication Management (OCM), an interesting and not completely well described part of the language system. OCM concerns how a speaker continuously manages the planning and execution of his/her own communication and is a basic function in face-to-face interaction. It has two main functions, i.e. "choice" and "change". The study investigates how much of OCM involves gestures and whether there is a difference between choice and change OCM in this respect. It also concerns what kinds of gestures are used in OCM and what the relation is between vocal and gestural OCM. Some of the main findings are that roughly 50% of all speech-based OCM co-occurs with gestures and that most of the OCM involving gestures (about 90%) is choice directed. Gestures occurring with OCM can illustrate the content of a sought-after word, but can also more generally induce word activation. They can also signal to an interlocutor that the speaker needs time. Gestures are often multifunctional and, thus, both choice and change are often integrated with more interactive functions. A final observation is that gestural OCM either precedes or occurs simultaneously with verbal OCM.
---
paper_title: Perception of non-verbal emotional listener feedback
paper_content:
This paper reports on a listening test assessing the perception of short non-verbal emotional vocalisations emitted by a listener as feedback to the speaker. We clarify the concepts of backchannel and feedback, and investigate the use of affect bursts as a means of giving emotional feedback via the backchannel. Experiments with German and Dutch subjects confirm that the recognition of emotion from affect bursts in a dialogical context is similar to their perception in isolation. We also investigate the acceptability of affect bursts when used as listener feedback. Acceptability appears to be linked to display rules for emotion expression. While many ratings were similar between Dutch and German listeners, a number of clear differences were found, suggesting language-specific affect bursts.
---
paper_title: Social signals and the action-cognition loop. The case of overhelp and evaluation
paper_content:
The paper explores the action-cognition loop by investigating the relation between overhelp and evaluation. It presents a study on the helping and overhelping behaviors of teachers with students of their own culture vs. students of a stigmatized culture, and analyses them in terms of a taxonomy of helping behavior, adopting an annotation scheme to assess the multimodal behavior of teachers and pupils. Results show that overhelping teachers induce more negative evaluations, more often concerning general capacities, and frequently expressed indirectly. This seems to show that the overhelp offered blocks a child's striving for autonomy, since it generates a negative evaluation, in particular a belief in the receiver's inability.
---
paper_title: GraphLaugh: A tool for the interactive generation of humorous puns
paper_content:
While automatic generation of funny texts delivers incrementally better results, for the time being semiautomatic generation can already provide something useful. In particular, we present an interactive system for producing humorous puns obtained through variation (i.e., word substitution) performed on familiar expressions. The replacement word is selected according to phonetic similarity and semantic constraints expressing semantic opposition or evoking ridiculous traits of people. Examples of these puns are Chaste makes waste (variation on a proverb) and Genital Hospital (variation on a soap opera title). Lexical substitution is the humorous core on which the funniness of the pun is based. We implemented an interactive tool (called GraphLaugh) that can automatically generate different types of lexical associations and visualize them through a dynamic graph. Through interaction with the network nodes and arcs, the user can control the selection of words, semantic associations and familiar expressions. In this way, a restricted set of familiar expressions is filtered, the best word substitutions to apply to them are easily identified, and finally a list of funny puns is created.
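A toy sketch of the lexical-substitution core is shown below: words in a familiar expression are replaced by candidates that score high on a crude string-similarity stand-in for phonetic similarity. The word lists, the threshold and the use of difflib are illustrative assumptions and are far simpler than the phonetic and semantic constraints used in GraphLaugh.

# Toy sketch of pun generation by lexical substitution on familiar expressions.
# Phonetic similarity is approximated with a normalized string similarity.
from difflib import SequenceMatcher

FAMILIAR = ["haste makes waste", "general hospital"]
CANDIDATES = ["chaste", "paste", "genital", "gentle"]  # illustrative lexicon

def similarity(a: str, b: str) -> float:
    """Crude stand-in for phonetic similarity (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

def puns(expression: str, candidates, threshold: float = 0.6):
    words = expression.split()
    for i, w in enumerate(words):
        for cand in candidates:
            if cand != w and similarity(w, cand) >= threshold:
                variant = words[:i] + [cand] + words[i + 1:]
                yield " ".join(variant), w, cand

for expr in FAMILIAR:
    for pun, original, replacement in puns(expr, CANDIDATES):
        print(f"{expr!r} -> {pun!r}  ({original} -> {replacement})")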
---
paper_title: Spotting agreement and disagreement: A survey of nonverbal audiovisual cues and tools
paper_content:
While detecting and interpreting temporal patterns of non-verbal behavioral cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. Nevertheless, it is an important one to achieve if the goal is to realise a naturalistic communication between humans and machines. Machines that are able to sense social attitudes like agreement and disagreement and respond to them in a meaningful way are likely to be welcomed by users due to the more natural, efficient and human-centered interaction they are bound to experience. This paper surveys the nonverbal cues that could be present during agreement and disagreement behavioural displays and lists a number of tools that could be useful in detecting them, as well as a few publicly available databases that could be used to train these tools for analysis of spontaneous, audiovisual instances of agreement and disagreement.
---
paper_title: Nonverbal Behaviors, Persuasion, and Credibility
paper_content:
This study examined the relationships among nonverbal behaviors, dimensions of source credibility, and speaker persuasiveness in a public speaking context. Relevant nonverbal literature was organized according to a Brunswikian lens model. Nonverbal behavioral composites, grouped according to their likely proximal percepts, were hypothesized to significantly affect both credibility and persuasiveness. A sample of 60 speakers gave videotaped speeches that were judged on credibility and persuasiveness by classmates. Pairs of trained raters coded 22 vocalic, kinesic, and proxemic nonverbal behaviors evidenced in the tapes. Results confirmed numerous associations between nonverbal behaviors and attributions of credibility and persuasiveness. Greater perceived competence and composure were associated with greater vocal and facial pleasantness, with greater facial expressiveness contributing to competence perceptions. Greater sociability was associated with more kinesic/proxemic immediacy, dominance, and relaxation and with vocal pleasantness. Most of these same cues also enhanced character judgments. No cues were related to dynamism judgments. Greater perceived persuasiveness correlated with greater vocal pleasantness (especially fluency and pitch variety), kinesic/proxemic immediacy, facial expressiveness, and kinesic relaxation (especially high random movement but little tension). All five dimensions of credibility related to persuasiveness. Advantages of analyzing nonverbal cues according to proximal percepts are discussed.
---
paper_title: Behavioral markers and recognizability of the smile of enjoyment.
paper_content:
Ekman and Friesen (1982) predicted that smiles that express enjoyment would be marked by smoother zygomatic major actions of more consistent duration than the zygomatic major actions of nonenjoyment smiles. Study 1 measured the duration and smoothness of smiles shown by female subjects in response to positive emotion films while alone and in a social interaction. Enjoyment smiles in both situations were of more consistent duration and smoother than nonenjoyment smiles. In Study 2 observers who were shown videotapes of enjoyment and nonenjoyment smiles were able to accurately identify enjoyment smiles at rates greater than chance; moreover, accuracy was positively related to increased salience of orbicularis oculi action. In Study 3, another group of observers were asked to record their impressions of the smiling women shown in Study 2. These women were seen as more positive when they showed enjoyment compared with nonenjoyment smiles. These results provide further evidence that enjoyment smiles are entities distinct from smiles in general.
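For illustration, the two behavioural markers discussed (duration and smoothness of zygomatic major action) can be approximated from a per-frame smile-intensity signal as in the sketch below; the synthetic signals, the onset threshold and the variance-based smoothness measure are assumptions, not the authors' measurement procedure.

# Sketch: simple descriptors of a smile episode from a per-frame intensity
# signal (e.g., AU12 activation), following the idea that enjoyment smiles
# are smoother and of more consistent duration than non-enjoyment smiles.
import numpy as np

def smile_descriptors(intensity: np.ndarray, fps: float, onset_thr: float = 0.2):
    """Return (duration_s, smoothness) for one smile episode."""
    active = intensity > onset_thr
    duration = active.sum() / fps          # time spent above the onset threshold
    velocity = np.diff(intensity)          # frame-to-frame change in intensity
    # Lower variance of the velocity = smoother zygomatic action.
    smoothness = 1.0 / (1e-6 + np.var(velocity))
    return duration, smoothness

fps = 25.0
t = np.linspace(0, 2, int(2 * fps))
rng = np.random.default_rng(0)
smooth_smile = np.clip(np.sin(np.pi * t / 2), 0, 1)              # gradual on/off
jerky_smile = smooth_smile + rng.normal(0.0, 0.08, t.size)       # noisy action

for name, sig in [("smooth", smooth_smile), ("jerky", jerky_smile)]:
    d, s = smile_descriptors(sig, fps)
    print(f"{name}: duration={d:.2f}s smoothness={s:.1f}")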
---
paper_title: The signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame
paper_content:
According to appeasement hypotheses, embarrassment should have a distinct nonverbal display that is more readily perceived when displayed by individuals from lower status groups. The evidence from 5 studies supported these two claims. The nonverbal behavior of embarrassment was distinct from a related emotion (amusement), resembled the temporal pattern of facial expressions of emotion, was uniquely related to self-reports of embarrassment, and was accurately identified by observers who judged the spontaneous displays of various emotions. Across the judgment studies, observers were more accurate and attributed more emotion to the embarrassment displays of female and AfricanAmerican targets than those of male and Caucasian targets. Discussion focused on the universality and appeasement function of the embarrassment display. Since universal facial expressions of a limited set of emotions were first documented (Ekman & Friesen, 1971; Ekman, Sorenson, & Friesen, 1969; Izard, 1971), sparse attention has been given to facial expressions of other emotions. The resulting lacuna in the field—that the emotions with identified displays are fewer (7 to 10) than the states that lay people (Fehr & Russell, 1984) and emotion theorists (Ekman, 1992; Izard, 1977; Tomkins, 1963, 1984) label as emotions—presents intriguing possibilities. Displays of other emotions may be blends of other emotional displays, unidentifiable, or may await discovery.
---
paper_title: An overview of automatic speaker diarization systems
paper_content:
Audio diarization is the process of annotating an input audio channel with information that attributes (possibly overlapping) temporal regions of signal energy to their specific sources. These sources can include particular speakers, music, background noise sources, and other signal source/channel characteristics. Diarization can be used for helping speech recognition, facilitating the searching and indexing of audio archives, and increasing the richness of automatic transcriptions, making them more readable. In this paper, we provide an overview of the approaches currently used in a key area of audio diarization, namely speaker diarization, and discuss their relative merits and limitations. Performances using the different techniques are compared within the framework of the speaker diarization task in the DARPA EARS Rich Transcription evaluations. We also look at how the techniques are being introduced into real broadcast news systems and their portability to other domains and tasks such as meetings and speaker verification
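A minimal two-stage diarization sketch in this spirit (uniform segmentation followed by agglomerative clustering of per-segment MFCC statistics) might look as follows; it assumes the librosa and scikit-learn libraries and a known number of speakers, and it omits the voice activity detection, resegmentation and BIC-based merging used in real systems.

# Minimal speaker diarization sketch: split the signal into fixed windows,
# describe each window by its mean MFCC vector, and cluster windows into
# speakers with agglomerative clustering.
import numpy as np
import librosa
from sklearn.cluster import AgglomerativeClustering

def naive_diarization(wav_path: str, n_speakers: int = 2, win_s: float = 1.5):
    y, sr = librosa.load(wav_path, sr=16000)
    hop = int(win_s * sr)
    segments, embeddings = [], []
    for start in range(0, len(y) - hop, hop):
        chunk = y[start:start + hop]
        mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13)
        embeddings.append(mfcc.mean(axis=1))          # crude segment embedding
        segments.append((start / sr, (start + hop) / sr))
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(
        np.vstack(embeddings))
    return [(s, e, int(l)) for (s, e), l in zip(segments, labels)]

# Example (assuming a file 'meeting.wav' exists):
# for start, end, spk in naive_diarization("meeting.wav", n_speakers=3):
#     print(f"{start:6.1f}-{end:6.1f}s  speaker {spk}")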
---
paper_title: Gesture Recognition: A Survey
paper_content:
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted
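As one concrete instance of the skin-colour-based approaches mentioned above, the sketch below segments candidate hand or face regions with a fixed HSV threshold using OpenCV; the threshold values and the minimum blob area are rough assumptions, and practical systems usually adapt or learn the skin model.

# Sketch: skin-colour segmentation as a first step of vision-based gesture
# recognition (threshold in HSV space, then keep large connected blobs).
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough skin range in HSV; illustrative values only.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle noise before blob analysis.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def hand_candidates(bgr_image: np.ndarray, min_area: int = 2000):
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(skin_mask(bgr_image),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

# Example (assuming a file 'frame.png' exists):
# for x, y, w, h in hand_candidates(cv2.imread("frame.png")):
#     print("candidate region:", x, y, w, h)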
---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: Detecting Faces in Images : A Survey
paper_content:
Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its 3D position, orientation and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
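For illustration, one of the classical appearance-based detectors covered by such surveys can be run in a few lines with OpenCV's pre-trained Haar cascade; the cascade file and the scaleFactor/minNeighbors settings below are common defaults chosen here as assumptions, not recommendations from the survey.

# Sketch: frontal face detection with OpenCV's pre-trained Haar cascade.
import cv2

def detect_faces(image_path: str):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade off recall against false positives.
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(30, 30))

# Example (assuming a file 'group_photo.jpg' exists):
# for (x, y, w, h) in detect_faces("group_photo.jpg"):
#     print(f"face at x={x}, y={y}, size={w}x{h}")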
---
paper_title: Computational Studies of Human Motion: Part 1, Tracking and Motion Synthesis
paper_content:
We review methods for kinematic tracking of the human body in video. The review is part of a projected book that is intended to cross-fertilize ideas about motion representation between the animation and computer vision communities. The review confines itself to the earlier stages of motion, focusing on tracking and motion synthesis; future material will cover activity representation and motion generation.
---
paper_title: SMaRT: the Smart Meeting Room Task at ISL
paper_content:
As computational and communications systems become increasingly smaller, faster, more powerful, and more integrated, the goal of interactive, integrated meeting support rooms is slowly becoming reality. It is already possible, for instance, to rapidly locate task-related information during a meeting, filter it, and share it with remote users. Unfortunately, the technologies that provide such capabilities are as obstructive as they are useful - they force humans to focus on the tool rather than the task. Thus the veneer of utility often hides the true costs of use, which are longer, less focused human interactions. To address this issue, we present our current research efforts towards SMaRT: the Smart Meeting Room Task. The goal of SMaRT is to provide meeting support services that do not require explicit human-computer interaction. Instead, by monitoring the activities in the meeting room using both video and audio analysis, the room is able to react appropriately to users' needs and allow the users to focus on their own goals.
---
paper_title: Modeling human interaction in meetings
paper_content:
The paper investigates the recognition of group actions in meetings by modeling the joint behaviour of participants. Many meeting actions, such as presentations, discussions and consensus, are characterised by similar or complementary behaviour across participants. Recognising these meaningful actions is an important step towards the goal of providing effective browsing and summarisation of processed meetings. A corpus of meetings was collected in a room equipped with a number of microphones and cameras. The corpus was labeled in terms of a predefined set of meeting actions characterised by global behaviour. In experiments, audio and visual features for each participant are extracted from the raw data and the interaction of participants is modeled using HMM-based approaches. Initial results on the corpus demonstrate the ability of the system to recognise the set of meeting actions.
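A stripped-down version of the HMM-based idea, scoring a sequence of joint per-frame group features against one Gaussian HMM per meeting action, might look as follows; it assumes the hmmlearn library and synthetic features, and it ignores the multi-stream and per-participant structure of the original models.

# Sketch: recognize a meeting "group action" by likelihood under per-class
# Gaussian HMMs trained on sequences of joint audio-visual features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_action_models(train_data, n_states: int = 3):
    """train_data: dict action_name -> list of (T_i, D) feature sequences."""
    models = {}
    for action, seqs in train_data.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=20, random_state=0)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify(models, seq):
    # Pick the action whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda a: models[a].score(seq))

# Synthetic 4-dimensional features (e.g., speech activity + motion per group).
rng = np.random.default_rng(0)
train = {
    "discussion":   [rng.normal(0.0, 1.0, (50, 4)) for _ in range(5)],
    "presentation": [rng.normal(2.0, 1.0, (50, 4)) for _ in range(5)],
}
models = train_action_models(train)
print(classify(models, rng.normal(2.0, 1.0, (60, 4))))  # expected: "presentation"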
---
paper_title: Automatic nonverbal analysis of social interaction in small groups : A review
paper_content:
An increasing awareness of the scientific and technological value of the automatic understanding of face-to-face social interaction has motivated in the past few years a surge of interest in the devising of computational techniques for conversational analysis. As an alternative to existing linguistic approaches for the automatic analysis of conversations, a relatively recent domain is using findings in social cognition, social psychology, and communication that have established the key role that nonverbal communication plays in the formation, maintenance, and evolution of a number of fundamental social constructs, which emerge from face-to-face interactions in time scales that range from short glimpses all the way to long-term encounters. Small group conversations are a specific case on which much of this work has been conducted. This paper reviews the existing literature on automatic analysis of small group conversations using nonverbal communication, and aims at bridging the current fragmentation of the work in this domain, currently split among half a dozen technical communities. The review is organized around the main themes studied in the literature and discusses, in a comparative fashion, about 100 works addressing problems related to the computational modeling of interaction management, internal states, personality traits, and social relationships in small group conversations, along with pointers to the relevant literature in social science. Some of the many open challenges and opportunities in this domain are also discussed.
---
paper_title: C4.5: Programs for Machine Learning
paper_content:
Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses.
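C4.5 itself is distributed as C source code, but the flavour of entropy-based decision-tree induction it implements can be sketched with scikit-learn; note that the CART implementation with the entropy criterion used below is only a stand-in (C4.5 additionally uses gain ratio, its own pruning and rule generation), and the iris data set is just a placeholder.

# Sketch: entropy-based decision tree induction in the spirit of ID3/C4.5,
# using scikit-learn's CART implementation as a stand-in.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# criterion="entropy" mirrors the information-gain splitting of ID3/C4.5.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_tr, y_tr)

print("accuracy:", tree.score(X_te, y_te))
print(export_text(tree))  # human-readable if-then structure of the tree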
---
paper_title: The Rules Behind Roles: Identifying Speaker Role in Radio Broadcasts
paper_content:
Previous work has shown that providing information about story structure is critical for browsing audio broadcasts. We investigate the hypothesis that Speaker Role is an important cue to story structure. We implement an algorithm that classifies story segments into three Speaker Roles based on several content and duration features. The algorithm correctly classifies about 80% of segments (compared with a baseline frequency of 35.4%) when applied to ASR-derived transcriptions of broadcast data.
---
paper_title: Speakers Role Recognition in Multiparty Audio Recordings Using Social Network Analysis and Duration Distribution Modeling
paper_content:
This paper presents two approaches for speaker role recognition in multiparty audio recordings. The experiments are performed over a corpus of 96 radio bulletins corresponding to roughly 19 h of material. Each recording involves, on average, 11 speakers playing one among six roles belonging to a predefined set. Both proposed approaches start by automatically segmenting the recordings into single-speaker segments, but perform role recognition using different techniques. The first approach is based on Social Network Analysis, while the second relies on the distribution of intervention durations across different speakers. The two approaches are used both separately and in combination, and the results show that around 85% of the recording time can be labeled correctly in terms of role.
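The Social Network Analysis side of such an approach can be sketched by building a speaker interaction graph from adjacent turns and deriving centrality and speaking-time features for a role classifier; the turn list, the adjacency heuristic and the particular centrality measures below are illustrative assumptions.

# Sketch: turn-adjacency speaker network and simple features that could feed
# a role classifier (e.g., anchorman vs. guest in broadcast data).
import networkx as nx

# Illustrative diarization output: ordered list of (speaker, duration_s).
turns = [("A", 20.0), ("B", 5.0), ("A", 15.0), ("C", 8.0),
         ("A", 12.0), ("B", 4.0), ("A", 18.0)]

G = nx.Graph()
for (s1, _), (s2, _) in zip(turns, turns[1:]):
    if s1 != s2:
        # Speakers who talk in adjacent turns are assumed to interact.
        w = G.get_edge_data(s1, s2, {"weight": 0})["weight"] + 1
        G.add_edge(s1, s2, weight=w)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
total = sum(d for _, d in turns)
talk_share = {s: sum(d for sp, d in turns if sp == s) / total for s in G}

for spk in sorted(G):
    print(f"{spk}: degree={degree[spk]:.2f} betw={betweenness[spk]:.2f} "
          f"talk_share={talk_share[spk]:.2f}")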
---
paper_title: Initial Study On Automatic Identification Of Speaker Role In Broadcast News Speech
paper_content:
Identifying a speaker's role (anchor, reporter, or guest speaker) is important for finding the structural information in broadcast news speech. We present an HMM-based approach and a maximum entropy model for speaker role labeling using Mandarin broadcast news speech. The algorithms achieve classification accuracy of about 80% (compared to the baseline of around 50%) using the human transcriptions and manually labeled speaker turns. We found that the maximum entropy model performs slightly better than the HMM, and that their combination outperforms either model alone. The impact of contextual role information is also examined in this study.
---
paper_title: Automatic role recognition in multiparty recordings using social networks and probabilistic sequential models
paper_content:
The automatic analysis of social interactions is attracting significant interest in the multimedia community. This work addresses one of the most important aspects of the problem, namely the recognition of roles in social exchanges. The proposed approach is based on Social Network Analysis, for the representation of individuals in terms of their interactions with others, and probabilistic sequential models, for the recognition of role sequences underlying the sequence of speakers in conversations. The experiments are performed over different kinds of data (around 90 hours of broadcast data and meetings), and show that the performance depends on how formal the roles are, i.e. on how much they constrain people's behavior.
---
paper_title: Using Simple Speech–Based Features to Detect the State of a Meeting and the Roles of the Meeting Participants
paper_content:
We introduce a simple taxonomy of meeting states and participant roles. Our goal is to automatically detect the state of a meeting and the role of each meeting participant and to do so concurrent with a meeting. We trained a decision tree classifier that learns to detect these states and roles from simple speech–based features that are easy to compute automatically. This classifier detects meeting states 18% absolute more accurately than a random classifier, and detects participant roles 10% absolute more accurately than a majority classifier. The results imply that simple, easy to compute features can be used for this purpose.
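The sketch below illustrates the general recipe, assuming a handful of hypothetical speech-based features per participant (fraction of speaking time, turns per minute, mean turn length); it is not the feature set or taxonomy used in the cited paper.

```python
# Minimal sketch: decision-tree detection of meeting participant roles from
# simple speech-based features. Features and data are illustrative, not the
# actual feature set used in the paper.
from sklearn.tree import DecisionTreeClassifier

# Each row: [fraction_of_speaking_time, turns_per_minute, mean_turn_length_s]
X_train = [
    [0.55, 2.0, 16.0],   # talkative, long turns
    [0.25, 3.5,  4.0],   # frequent short interjections
    [0.05, 0.5,  3.0],   # mostly silent
]
y_train = ["presenter", "discussant", "observer"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[0.40, 1.5, 10.0]])[0])
```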
---
paper_title: Modeling Vocal Interaction for Text-Independent Participant Characterization in Multi-Party Conversation
paper_content:
An important task in automatic conversation understanding is the inference of social structure governing participant behavior. We explore the dependence between several social dimensions, including assigned role, gender, and seniority, and a set of low-level features descriptive of talkspurt deployment in a multiparticipant context. Experiments conducted on two large, publicly available meeting corpora suggest that our features are quite useful in predicting these dimensions, excepting gender. The classification experiments we present exhibit a relative error rate reduction of 37% to 67% compared to choosing the majority class.
---
paper_title: Automatic detection of group functional roles in face to face interactions
paper_content:
In this paper, we discuss a machine learning approach to automatically detect functional roles played by participants in a face to face interaction. We shortly introduce the coding scheme we used to classify the roles of the group members and the corpus we collected to assess the coding scheme reliability as well as to train statistical systems for automatic recognition of roles. We then discuss a machine learning approach based on multi-class SVM to automatically detect such roles by employing simple features of the visual and acoustical scene. The effectiveness of the classification is better than the chosen baselines and although the results are not yet good enough for a real application, they demonstrate the feasibility of the task of detecting group functional roles in face to face interactions.
---
paper_title: Automatic Role Recognition in Multiparty Recordings: Using Social Affiliation Networks for Feature Extraction
paper_content:
Automatic analysis of social interactions attracts increasing attention in the multimedia community. This letter considers one of the most important aspects of the problem, namely the roles played by individuals interacting in different settings. In particular, this work proposes an automatic approach for the recognition of roles in both production environment contexts (e.g., news and talk-shows) and spontaneous situations (e.g., meetings). The experiments are performed over roughly 90 h of material (one of the largest databases used for role recognition in the literature) and show that the recognition effectiveness depends on how much the roles influence the behavior of people. Furthermore, this work proposes the first approach for modeling mutual dependences between roles and assesses its effect on role recognition performance.
---
paper_title: Using the influence model to recognize functional roles in meetings
paper_content:
In this paper, an influence model is used to recognize functional roles played during meetings. Previous works on the same corpus demonstrated a high recognition accuracy using SVMs with RBF kernels. In this paper, we discuss the problems of that approach, mainly over-fitting, the curse of dimensionality and the inability to generalize to different group configurations. We present results obtained with an influence modeling method that avoid these problems and ensures both greater robustness and generalization capability.
---
paper_title: Role recognition for meeting participants: an approach based on lexical information and social network analysis
paper_content:
This paper presents experiments on the automatic recognition of roles in meetings. The proposed approach combines two sources of information: the lexical choices made by people playing different roles on one hand, and the Social Networks describing the interactions between the meeting participants on the other hand. Both sources lead to role recognition results significantly higher than chance when used separately, but the best results are obtained with their combination. Preliminary experiments obtained over a corpus of 138 meeting recordings (over 45 hours of material) show that around 70% of the time is labeled correctly in terms of role.
---
paper_title: Opinion mining and sentiment analysis
paper_content:
An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. ::: ::: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: Head Gestures for Perceptual Interfaces: The Role of Context in Improving Recognition
paper_content:
Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human-computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user-interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicate quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach to contextual prediction and multi-modal integration, performance of head gesture detection was improved with context features even when the topic of the test set was significantly different than the training set.
---
paper_title: Identifying Agreement And Disagreement In Conversational Speech: Use Of Bayesian Networks To Model Pragmatic Dependencies
paper_content:
We describe a statistical approach for modeling agreements and disagreements in conversational interaction. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational, and structural features that look both forward and backward in the discourse. We then classify utterances as agreement or disagreement using these adjacency pairs and features that represent various pragmatic influences of previous agreement or disagreement on the current utterance. Our approach achieves 86.9% accuracy, a 4.9% increase over previous work.
---
paper_title: The Chameleon Effect as Social Glue: Evidence for the Evolutionary Significance of Nonconscious Mimicry
paper_content:
The "chameleon effect" refers to the tendency to adopt the postures, gestures, and mannerisms of interaction partners (Chartrand & Bargh, 1999). This type of mimicry occurs outside of conscious awareness, and without any intent to mimic or imitate. Empirical evidence suggests a bi-directional relationship between nonconscious mimicry on the one hand, and liking, rapport, and affiliation on the other. That is, nonconscious mimicry creates affiliation, and affiliation can be ex- pressed through nonconscious mimicry. We argue that mimicry played an impor- tant role in human evolution. Initially, mimicry may have had survival value by helping humans communicate. We propose that the purpose of mimicry has now evolved to serve a social function. Nonconscious behavioral mimicry increases af- filiation, which serves to foster relationships with others. We review current re- search in light of this proposed framework and suggest future areas of research.
---
paper_title: Dominance Detection in Meetings Using Easily Obtainable Features
paper_content:
We show that, using a Support Vector Machine classifier, it is possible to determine with a 75% success rate who dominated a particular meeting on the basis of a few basic features. We discuss the corpus we have used, the way we had people judge dominance and the features that were used.
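As a rough illustration of this setup, the following sketch trains an SVM on a few per-participant features; the specific features (seconds spoken, interruptions, turn count) and data are hypothetical stand-ins for the "easily obtainable features" mentioned above.

```python
# Minimal sketch: SVM-based dominance estimation from a few easily obtainable
# per-participant features. The feature choice here (speaking time, number of
# interruptions, number of turns) is an assumption for illustration.
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X_train = [
    [620.0, 14, 85],   # seconds spoken, interruptions, turns
    [310.0,  5, 60],
    [120.0,  1, 22],
    [450.0,  9, 70],
]
y_train = ["dominant", "not_dominant", "not_dominant", "dominant"]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(model.predict([[500.0, 10, 75]])[0])
```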
---
paper_title: The Five-Factor Model of Personality
paper_content:
Study I in this chapter reports an Indian (Marathi) adaptation of the Revised NEO Personality Inventory (NEO-PI-R), its psychometric evaluation, and gender differences based on data from 214 subjects. Factor analyses supported the Five-Factor Model and indicated factorial invariance across Indian and American cultures. The study also demonstrated the utility of oblique and orthogonal Procrustes rotations and multiple group factor analysis in evaluating the Five-Factor Model. Study II, employing 300 subjects, examined the Eysenckian correlates of the Indian (Marathi) NEO Five-Factor Inventory (NEO-FFI). The obtained correlations provide validity evidence for the NEO-FFI and its parent instrument, the NEO-PI-R.
---
paper_title: A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances
paper_content:
A novel probabilistic framework is proposed for inferring the structure of conversation in face-to-face multiparty communication, based on gaze patterns, head directions and the presence/absence of utterances. As the structure of conversation, this study focuses on the combination of participants and their participation roles. First, we assess the gaze patterns that frequently appear in conversations, and define typical types of conversation structure, called conversational regime, and hypothesize that the regime represents the high-level process that governs how people interact during conversations. Next, assuming that the regime changes over time exhibit Markov properties, we propose a probabilistic conversation model based on Markov-switching; the regime controls the dynamics of utterances and gaze patterns, which stochastically yield measurable head-direction changes. Furthermore, a Gibbs sampler is used to realize the Bayesian estimation of regime, gaze pattern, and model parameters from observed head directions and utterances. Experiments on four-person conversations confirm the effectiveness of the framework in identifying conversation structures.
---
paper_title: Capturing Order in Social Interactions
paper_content:
Such an ambitious plan of filling the social intelligence gap between humans and machines starts from a fundamental problem, namely how to make social phenomena accessible to computers when the only evidence these have at their disposal about the world are signals captured with devices like microphones and cameras. The consequent question is: do social phenomena leave physical, machine-detectable traces in signals?
---
paper_title: Multimodal support to group dynamics
paper_content:
The complexity of group dynamics occurring in small group interactions often hinders the performance of teams. The availability of rich multimodal information about what is going on during the meeting makes it possible to explore the possibility of providing support to dysfunctional teams from facilitation to training sessions addressing both the individuals and the group as a whole. A necessary step in this direction is that of capturing and understanding group dynamics. In this paper, we discuss a particular scenario, in which meeting participants receive multimedia feedback on their relational behaviour, as a first step towards increasing self-awareness. We describe the background and the motivation for a coding scheme for annotating meeting recordings partially inspired by the Bales’ Interaction Process Analysis. This coding scheme was aimed at identifying suitable observable behavioural sequences. The study is complemented with an experimental investigation on the acceptability of such a service.
---
paper_title: Using linguistic cues for the automatic recognition of personality in conversation and text
paper_content:
It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker's personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using self-reports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.
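The sketch below illustrates, under assumed linguistic features (word count, emotion-word rate, pronoun rate), how the same feature vector can feed both a trait classifier and a trait regressor; it is not the models or feature set of the cited work.

```python
# Minimal sketch: personality trait prediction from linguistic features, shown
# as both a classifier (high/low trait) and a regressor (continuous score).
# Feature names and values are illustrative assumptions.
from sklearn.linear_model import LogisticRegression, Ridge

# Each row: [word_count, positive_emotion_words_pct, first_person_pronouns_pct]
X = [[1200, 3.1, 6.0], [300, 1.2, 2.5], [800, 2.4, 4.8], [450, 0.9, 3.2]]
y_class = [1, 0, 1, 0]            # 1 = rated high on extraversion (observer rating)
y_score = [0.8, 0.2, 0.7, 0.3]    # continuous extraversion score in [0, 1]

clf = LogisticRegression().fit(X, y_class)
reg = Ridge(alpha=1.0).fit(X, y_score)
sample = [[900, 2.8, 5.5]]
print(clf.predict(sample)[0], round(float(reg.predict(sample)[0]), 2))
```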
---
paper_title: Interpersonal Adaptation: Dyadic Interaction Patterns
paper_content:
List of figures and tables. Preface. Part I. Overview: 1. Introduction. Part II. Interaction Adaptation Theories and Models: 2. Biological approaches; 3. Arousal and affect approaches; 4. Social norm approaches; 5. Communication and cognitive approaches. Part III. Issues in Studying Interaction Adaptation: 6. Reconceptualising interaction adaptation patterns; 7. Operationalising adaptation patterns; 8. Analysing adaptation patterns. Part IV. Multimethod Tests of Reciprocity and Compensation: 9. A first illustration; 10. Further illustrations. Part V. Developing a New Interpersonal Adaptation Theory: 11. The theories revisited; 12. A research agenda. References. Index.
---
paper_title: Capturing Individual and Group Behavior with Wearable Sensors
paper_content:
We show how to obtain high-level descriptions of human behavior in terms of physical activity, speech activity, face-to-face interaction (f2f), physical proximity, and social network attributes from sensor data. We present experimental results that show that it is possible to identify individual personality traits as well as subjective and objective group performance metrics from low-level data collected using wearable sensors.
---
paper_title: Detection and application of influence rankings in small group meetings
paper_content:
We address the problem of automatically detecting participants' influence levels in meetings. The impact and social psychological background are discussed. The more influential a participant is, the more he or she influences the outcome of a meeting. Experiments on 40 meetings show that application of statistical (both dynamic and static) models using simply obtainable features results in a best prediction performance of 70.59% when using a static model, a balanced training set, and three discrete classes: high, normal and low. Applications of the detected levels are shown in various ways, e.g., in a virtual meeting environment as well as in a meeting browser system.
---
paper_title: The chameleon effect : The perception-behavior link and social interaction
paper_content:
The chameleon effect refers to nonconscious mimicry of the postures, mannerisms, facial expressions, and other behaviors of one's interaction partners, such that one's behavior passively and unintentionally changes to match that of others in one's current social environment. The authors suggest that the mechanism involved is the perception-behavior link, the recently documented finding (e.g., J. A. Bargh, M. Chen, & L. Burrows, 1996) that the mere perception of another's behavior automatically increases the likelihood of engaging in that behavior oneself. Experiment 1 showed that the motor behavior of participants unintentionally matched that of strangers with whom they worked on a task. Experiment 2 had confederates mimic the posture and movements of participants and showed that mimicry facilitates the smoothness of interactions and increases liking between interaction partners. Experiment 3 showed that dispositionally empathic individuals exhibit the chameleon effect to a greater extent than do other people.
---
paper_title: Spotting agreement and disagreement: A survey of nonverbal audiovisual cues and tools
paper_content:
While detecting and interpreting temporal patterns of non-verbal behavioral cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. Nevertheless, it is an important one to achieve if the goal is to realise a naturalistic communication between humans and machines. Machines that are able to sense social attitudes like agreement and disagreement and respond to them in a meaningful way are likely to be welcomed by users due to the more natural, efficient and human-centered interaction they are bound to experience. This paper surveys the nonverbal cues that could be present during agreement and disagreement behavioural displays and lists a number of tools that could be useful in detecting them, as well as a few publicly available databases that could be used to train these tools for analysis of spontaneous, audiovisual instances of agreement and disagreement.
---
paper_title: Thin Slices of Negotiation: Predicting outcomes from conversational dynamics within the first five minutes
paper_content:
In this research we examine whether conversational dynamics occurring within the first five minutes of a negotiation can predict negotiated outcomes. In a simulated employment negotiation, micro-coding conducted by a computer showed that activity level, conversational engagement, prosodic emphasis, and vocal mirroring predicted 30% of the variance in individual outcomes. The conversational dynamics associated with individual success among high-status parties were different from those associated with individual success among low-status parties. Results are interpreted in light of theory and research exploring the predictive power of "thin slices" (Ambady & Rosenthal, 1992). Implications include the development of new technology to diagnose and improve negotiation processes.
---
paper_title: A corpus for studying addressing behavior in multi-party dialogues
paper_content:
This paper describes a multi-modal corpus of hand-annotated meeting dialogues that was designed for studying addressing behavior in face-to-face conversations. The corpus contains annotated dialogue acts, addressees, adjacency pairs and gaze direction. First, we describe the corpus design where we present the annotation schema, annotation tools and annotation process itself. Then, we analyze the reproducibility and stability of the annotation schema.
---
paper_title: The behaviour markup language: recent developments and challenges
paper_content:
Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide, and continues to undergo further refinement. This paper reports on the progress made in the last year in further developing BML. It discusses some of the key challenges identified that the effort is facing, and reviews a number of projects that already are making use of BML or support its use.
---
paper_title: Automatic nonverbal analysis of social interaction in small groups : A review
paper_content:
An increasing awareness of the scientific and technological value of the automatic understanding of face-to-face social interaction has motivated in the past few years a surge of interest in the devising of computational techniques for conversational analysis. As an alternative to existing linguistic approaches for the automatic analysis of conversations, a relatively recent domain is using findings in social cognition, social psychology, and communication that have established the key role that nonverbal communication plays in the formation, maintenance, and evolution of a number of fundamental social constructs, which emerge from face-to-face interactions in time scales that range from short glimpses all the way to long-term encounters. Small group conversations are a specific case on which much of this work has been conducted. This paper reviews the existing literature on automatic analysis of small group conversations using nonverbal communication, and aims at bridging the current fragmentation of the work in this domain, currently split among half a dozen technical communities. The review is organized around the main themes studied in the literature and discusses, in a comparative fashion, about 100 works addressing problems related to the computational modeling of interaction management, internal states, personality traits, and social relationships in small group conversations, along with pointers to the relevant literature in social science. Some of the many open challenges and opportunities in this domain are also discussed.
---
paper_title: Canal9: A database of political debates for analysis of social interactions
paper_content:
Automatic analysis of social interactions attracts major attention in the computing community, but relatively few benchmarks are available to researchers active in the domain. This paper presents a new, publicly available, corpus of political debates including not only raw data, but a rich set of socially relevant annotations such as turn-taking (who speaks when and how much), agreement and disagreement between participants, and role played by people involved in each debate. The collection includes 70 debates for a total of 43 hours and 10 minutes of material.
---
paper_title: The SEMAINE API: Towards a standards-based framework for building emotion-oriented systems
paper_content:
This paper presents the SEMAINE API, an open source framework for building emotion-oriented systems. By encouraging and simplifying the use of standard representation formats, the framework aims to contribute to interoperability and reuse of system components in the research community. By providing a Java and C++ wrapper around a message-oriented middleware, the API makes it easy to integrate components running on different operating systems and written in different programming languages. The SEMAINE system 1.0 is presented as an example of a full-scale system built on top of the SEMAINE API. Three small example systems are described in detail to illustrate how integration between existing and new components is realised with minimal effort.
---
paper_title: Social Signal Processing: Survey of an Emerging Domain
paper_content:
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
---
paper_title: Detection and application of influence rankings in small group meetings
paper_content:
We address the problem of automatically detecting participants' influence levels in meetings. The impact and social psychological background are discussed. The more influential a participant is, the more he or she influences the outcome of a meeting. Experiments on 40 meetings show that application of statistical (both dynamic and static) models using simply obtainable features results in a best prediction performance of 70.59% when using a static model, a balanced training set, and three discrete classes: high, normal and low. Applications of the detected levels are shown in various ways, e.g., in a virtual meeting environment as well as in a meeting browser system.
---
paper_title: The AMI Meeting Corpus: A Pre-announcement
paper_content:
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.
---
paper_title: Automatic analysis of multimodal group actions in meetings
paper_content:
This paper investigates the recognition of group actions in meetings. A framework is employed in which group actions result from the interactions of the individual participants. The group actions are modeled using different HMM-based approaches, where the observations are provided by a set of audiovisual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modeling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis.
---
paper_title: Multimedia Database of Meetings and Informal Interactions for Tracking Participant Involvement and Discourse Flow
paper_content:
At ATR, we are collecting and analysing ‘meetings’ data using a table-top sensor device consisting of a small 360-degree camera surrounded by an array of high-quality directional microphones. This equipment provides a stream of information about the audio and visual events of the meeting which is then processed to form a representation of the verbal and non-verbal interpersonal activity, or discourse flow, during the meeting. This paper describes the resulting corpus of speech and video data which is being collected for the above research. It currently includes data from 12 monthly sessions, comprising 71 video and 33 audio modules. Collection is continuing monthly and is scheduled to include another ten sessions.
---
paper_title: VACE Multimodal Meeting Corpus
paper_content:
In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high quality multimodal corpus is being produced.
---
paper_title: Human computing and machine understanding of human behavior: a survey
paper_content:
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing, which we will call human computing, should be about anticipatory user interfaces that should be human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far are we from enabling computers to understand human behavior.
---
paper_title: Multimodal databases of everyday emotion: facing up to complexity.
paper_content:
In everyday life, speech is part of a multichannel system involved in conveying emotion. Understanding how it operates in that context requires suitable data, consisting of multimodal records of emotion drawn from everyday life. This paper reflects the experience of two teams active in collecting and labelling data of this type. It sets out the core reasons for pursuing a multimodal approach, reviews issues and problems for developing relevant databases, and indicates how we can move forward both in terms of data collection and approaches to labelling.
---
paper_title: Culture and social behavior
paper_content:
Topics covered: our culture influences who we are and our understanding of social behaviour; why bother to study culture-social behaviour relationships; how to study cultures; the analysis of subjective cultures; some interesting differences in subjective cultures; cultural differences in patterns of social behaviour; culture and communication; cultural differences in aggression, helping, dominance, conformity, obedience and intimacy; dealing with diversity and intercultural relations; intercultural training.
---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: A new image for fear and emotion
paper_content:
We experience many emotions every day, but as a topic of study emotion has been neglected until recently. Now, however, with the development of sophisticated techniques such as functional magnetic resonance imaging, the neural basis of emotion can be studied. Four new reports have used slightly different approaches, and conclude that the amygdala — a complex structure within the temporal lobes of the brain — is involved in the responses to fear.
---
paper_title: Speech-driven lip motion generation with a trajectory HMM
paper_content:
Automatic speech animation remains a challenging problem that can be described as finding the optimal sequence of animation parameter configurations given some speech. In this paper we present a novel technique to automatically synthesise lip motion trajectories from a speech signal. The developed system predicts lip motion units from the speech signal and generates animation trajectories automatically employing a "Trajectory Hidden Markov Model". Using the MLE criterion, its parameter generation algorithm produces the optimal smooth motion trajectories that are used to drive control points on the lips directly. Additionally, experiments were carried out to find a suitable model unit that produces the most accurate results. Finally, a perceptual evaluation was conducted, which showed that the developed motion units perform better than phonemes.
---
paper_title: Nonverbal Communication in Human Interaction
paper_content:
Preface. Part I: AN INTRODUCTION TO THE STUDY OF NONVERBAL COMMUNICATION. 1. Nonverbal Communication: Basic Perspectives. 2. The Roots of Nonverbal Behavior. 3. The Ability to Receive and Send Nonverbal Signals. Part II: THE COMMUNICATION ENVIRONMENT. 4. The Effects of the Environment on Human Communication. 5. The Effects of Territory and Personal Space on Human Communication. Part III: THE COMMUNICATORS. 6. The Effects of Physical Characteristics on Human Communication. Part IV: The Communicators' Behavior. 7. The Effects of Gesture and Posture on Human Communication. 8. The Effects of Touch on Human Communication. 9. The Effects of the Face on Human Communication. 10. The Effects of Eye Behavior on Human Communication. 11. The Effects of Vocal Cues That Accompany Spoken Words. Part V: COMMUNICATING IMPORTANT MESSAGES. 12. Using Nonverbal Behavior in Daily Interaction. 13. Nonverbal Messages in Special Contexts.
---
paper_title: The Significance of Posture in Communication Systems
paper_content:
Devices relying on the Inverse Wiedman Effect. Devices include a current conductive, magnetically anisotropic rod, through which an AC current flows. Wound about the rod is a conductive coil having output terminals. By varying the AC current, the anisotropy, or both, variations in the output across coil terminals can be obtained. Devices may be for current sensing, pressure sensing, fluid flow sensing or an electric push button.
---
paper_title: Social Signal Processing: Survey of an Emerging Domain
paper_content:
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: Social Signal Processing: Survey of an Emerging Domain
paper_content:
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
---
paper_title: Facial and Vocal Expressions of Emotion
paper_content:
A flurry of theoretical and empirical work concerning the production of and response to facial and vocal expressions has occurred in the past decade. That emotional expressions express emotions is a tautology but may not be a fact. Debates have centered on universality, the nature of emotion, and the link between emotions and expressions. Modern evolutionary theory is informing more models, emphasizing that expressions are directed at a receiver, that the interests of sender and receiver can conflict, that there are many determinants of sending an expression in addition to emotion, that expressions influence the receiver in a variety of ways, and that the receiver's response is more than simply decoding a message.
---
paper_title: A survey of affect recognition methods: audio, visual and spontaneous expressions
paper_content:
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions-despite the fact that deliberate behavior differs in visual and audio expressions from spontaneously occurring behavior. Recently efforts to develop algorithms that can process naturally occurring human affective behavior have emerged. This paper surveys these efforts. We first discuss human emotion perception from a psychological perspective. Next, we examine the available approaches to solving the problem of machine understanding of human affective behavior occurring in real-world settings. We finally outline some scientific and engineering challenges for advancing human affect sensing technology.
---
paper_title: Embodiment in conversational interfaces: Rea
paper_content:
In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.
---
paper_title: How (Not) to Add Laughter to Synthetic Speech
paper_content:
Laughter is a powerful means of emotion expression which has not yet been used in speech synthesis. The current paper reports on a pilot study in which differently created types of laughter were combined with synthetic speech in a dialogical situation. A perception test assessed the effect on perceived social bonding as well as the appropriateness of the laughter. Results indicate that it is crucial to carefully model the intensity of the laughter, whereas speaker identity and generation method appear less important.
---
paper_title: Social force model for pedestrian dynamics
paper_content:
It is suggested that the motion of pedestrians can be described as if they would be subject to "social forces." These "forces" are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion are nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically.
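A minimal sketch of one update step of such a model is shown below, keeping only the driving term towards the desired velocity and an exponential repulsion from neighbours; the parameter values are illustrative assumptions rather than calibrated constants.

```python
# Minimal sketch of a social force update step for one pedestrian, assuming
# exponential repulsion from neighbours and a relaxation term towards the
# desired velocity. Parameter values (tau, A, B) are illustrative only.
import numpy as np

def social_force_step(pos, vel, desired_vel, neighbour_pos, dt=0.1,
                      tau=0.5, A=2.0, B=0.3):
    # Driving term: relax towards the desired velocity within time tau.
    force = (desired_vel - vel) / tau
    # Repulsive terms: each neighbour pushes the pedestrian away, with a
    # magnitude decaying exponentially with distance.
    for other in neighbour_pos:
        diff = pos - other
        dist = np.linalg.norm(diff)
        if dist > 1e-6:
            force += A * np.exp(-dist / B) * (diff / dist)
    new_vel = vel + force * dt
    new_pos = pos + new_vel * dt
    return new_pos, new_vel

pos = np.array([0.0, 0.0])
vel = np.array([1.0, 0.0])
desired = np.array([1.3, 0.0])
neighbours = [np.array([1.0, 0.2])]
print(social_force_step(pos, vel, desired, neighbours))
```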
---
paper_title: Towards a virtual agent using similarity-based laughter production
paper_content:
Facial expressions of emotions are often described statically at their apex. Lately several researchers (Keltner, 1995) have shown through analysis of video corpora that emotions are expressed through a sequence of micro-behaviours. These micro-behaviours correspond to signals spread over the whole body (face, head, gaze, gesture, etc). All these signals do not have to occur simultaneously. Some of them occur more often at the beginning of expressions, some others at the end, some occur in a sequence, while others are synchronized (Keltner, 1995).
---
paper_title: D.: Incremental multimodal feedback for conversational agents
paper_content:
Just like humans, conversational computer systems should not listen silently to their input and then respond. Instead, they should enforce the speaker-listener link by attending actively and giving feedback on an utterance while perceiving it. Most existing systems produce direct feedback responses to decisive (e.g. prosodic) cues. We present a framework that conceives of feedback as a more complex system, resulting from the interplay of conventionalized responses to eliciting speaker events and the multimodal behavior that signals how internal states of the listener evolve. A model for producing such incremental feedback, based on multi-layered processes for perceiving, understanding, and evaluating input, is described.
---
paper_title: Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents
paper_content:
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gestures generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we will use examples from an actual synthesized, fully animated conversation.
---
paper_title: Automatic acoustic synthesis of human-like laughter.
paper_content:
A technique to synthesize laughter based on time-domain behavior of real instances of human laughter is presented. In the speech synthesis community, interest in improving the expressive quality of synthetic speech has grown considerably. While the focus has been on the linguistic aspects, such as precise control of speech intonation to achieve desired expressiveness, inclusion of nonlinguistic cues could further enhance the expressive quality of synthetic speech. Laughter is one such cue used for communicating, say, a happy or amusing context. It can be generated in many varieties and qualities: from a short exhalation to a long full-blown episode. Laughter is modeled at two levels, the overall episode level and at the local call level. The first attempts to capture the overall temporal behavior in a parametric model based on the equations that govern the simple harmonic motion of a mass-spring system is presented. By changing a set of easily available parameters, the authors are able to synthesize a var...
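The following sketch illustrates the flavour of the episode-level model: a damped oscillation generates an amplitude envelope whose successive peaks can be read as individual laugh calls. The parameterization is an assumption for illustration, not the authors' actual model.

```python
# Minimal sketch: an amplitude envelope for a laughter episode generated from
# damped simple harmonic motion, in the spirit of the mass-spring model
# described above. The parameter values are illustrative assumptions.
import numpy as np

def laughter_envelope(duration_s=1.5, sr=100, freq_hz=4.5, damping=1.2):
    """Return a non-negative envelope; each oscillation peak ~ one laugh call."""
    t = np.arange(0.0, duration_s, 1.0 / sr)
    # Damped oscillation: exp(-damping * t) * |sin(2 * pi * f * t)|
    envelope = np.exp(-damping * t) * np.abs(np.sin(2 * np.pi * freq_hz * t))
    return t, envelope

t, env = laughter_envelope()
print(f"{len(env)} frames, peak amplitude {env.max():.2f}")
```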
---
paper_title: Analyzing single episodes of interaction: an exercise in conversation analysis
paper_content:
A variety of analytic resources provided by past work in conversation analysis are brought to bear on the analysis of a single utterance in its sequential context, drawn from an ordinary conversation. Various facets of the organization of talk-in-interaction are thereby both introduced and exemplified. The result displays the capacity of this analytic modality to meet a fundamental responsibility of social analysis, namely the capacity to explicate single episodes of action in interaction as a basic locus of social order.
---
paper_title: Annotating meaning of listener vocalizations for speech synthesis
paper_content:
Generation of listener vocalizations is one of the major objectives of emotionally colored conversational speech synthesis. Success in this endeavor depends on the answers to three questions: What kinds of meaning are expressed through listener vocalizations? What form is suitable for a given meaning? And, in what context should which listener vocalizations be produced? In this paper, we address the first of these questions. We present a method to record natural and expressive listener vocalizations for synthesis, and describe our approach to identify a suitable categorical description of the meaning conveyed in the vocalizations. In our data, one actor produces a total of 967 listener vocalizations, in his natural speaking style and three acted emotion-specific personalities. In an open categorization scheme, we find that eleven categories occur on at least 5% of the vocalizations, and that most vocalizations are better described by two or three categories rather than a single one. Furthermore, an annotation of meaning reference, according to Bühler's Organon model, allows us to make interesting observations regarding the listener's own state, his stance towards the interlocutor, and his attitude towards the topic of the conversation.
---
paper_title: Human–machine interaction as a model of machine–machine interaction: how to make machines interact as humans do
paper_content:
Turn-taking is one of the main features of communicative systems. In particular, it is one of the bases allowing robust interactions in imitation, thanks to its two linked aspects, i.e., communication and learning. In this article, we propose a simple model based on the interaction of two neural oscillators inhibiting each other which explain how 'turn-taking' may emerge dynamically between two agents. An implementation of the model on a simple robotic platform made of one CCD camera and one simple arm (ADRIANA platform) is detailed. Results showing the emergence of a 'turn-taking' dynamics on this platform are discussed and an extension in simulation for a larger scale of parameters in order to validate robustness is given.
---
paper_title: Dynamic Movement and Positioning of Embodied Agents in Multiparty Conversations
paper_content:
For embodied agents to engage in realistic multiparty conversation, they must stand in appropriate places with respect to other agents and the environment. When these factors change, for example when an agent joins a conversation, the agents must dynamically move to a new location and/or orientation to accommodate. This paper presents an algorithm for simulating the movement of agents based on observed human behavior using techniques developed for pedestrian movement in crowd simulations. We extend a previous group conversation simulation to include an agent motion algorithm. We examine several test cases and show how the simulation generates results that mirror real-life conversation settings.
---
paper_title: Perception of non-verbal emotional listener feedback
paper_content:
This paper reports on a listening test assessing the perception of short non-verbal emotional vocalisations emitted by a listener as feedback to the speaker. We clarify the concepts backchannel and feedback, and investigate the use of affect bursts as a means of giving emotional feedback via the backchannel. Experiments with German and Dutch subjects confirm that the recognition of emotion from affect bursts in a dialogical context is similar to their perception in isolation. We also investigate the acceptability of affect bursts when used as listener feedback. Acceptability appears to be linked to display rules for emotion expression. While many ratings were similar between Dutch and German listeners, a number of clear differences was found, suggesting language-specific affect bursts.
---
paper_title: Natural Behavior of a Listening Agent
paper_content:
In contrast to the variety of listening behaviors produced in human-to-human interaction, most virtual agents sit or stand passively when a user speaks. This is a reflection of the fact that although the correct responsive behavior of a listener during a conversation is often related to the semantics, the state of current speech understanding technology is such that semantic information is unavailable until after an utterance is complete. This paper will illustrate that appropriate listening behavior can also be generated by other features of a speaker's behavior that are available in real time such as speech quality, posture shifts and head movements. This paper presents a mapping from these real-time obtainable features of a human speaker to agent listening behaviors.
---
paper_title: Predicting listener backchannels: A probabilistic multimodal approach
paper_content:
During face-to-face interactions, listeners use backchannel feedback such as head nods as a signal to the speaker that the communication is working and that they should continue speaking. Predicting these backchannel opportunities is an important milestone for building engaging and natural virtual humans. In this paper we show how sequential probabilistic models (e.g., Hidden Markov Model or Conditional Random Fields) can automatically learn from a database of human-to-human interactions to predict listener backchannels using the speaker multimodal output features (e.g., prosody, spoken words and eye gaze). The main challenges addressed in this paper are automatic selection of the relevant features and optimal feature representation for probabilistic models. For prediction of visual backchannel cues (i.e., head nods), our prediction model shows a statistically significant improvement over a previously published approach based on hand-crafted rules.
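As a simplified stand-in for the sequential probabilistic models used in this line of work, the sketch below trains a per-frame classifier that scores backchannel opportunities from a few speaker features; the features and data are hypothetical.

```python
# Minimal sketch: frame-level backchannel-opportunity prediction from speaker
# features. The cited work uses sequential models (HMMs / CRFs); this
# simplified stand-in scores each frame independently. Feature names and data
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each frame: [pitch_slope, energy, pause_length_s, gaze_at_listener]
X = np.array([
    [-0.8, 0.2, 0.6, 1],   # falling pitch + pause + gaze -> likely backchannel
    [ 0.3, 0.7, 0.0, 0],
    [-0.5, 0.3, 0.4, 1],
    [ 0.1, 0.6, 0.1, 0],
])
y = np.array([1, 0, 1, 0])   # 1 = listener backchannel observed shortly after

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[-0.6, 0.25, 0.5, 1]])[0, 1])
```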
---
paper_title: Expressing degree of activation in synthetic speech
paper_content:
This paper presents the design, implementation, and evaluation of a system capable of expressing a continuum of emotional states in synthetic speech. A review of the literature and an analysis of a naturalistic database of emotional speech provided detailed descriptions of the link between acoustic parameters and the three emotion dimensions activation, evaluation, and power. We formulated a set of emotional prosody rules and implemented them in a German text-to-speech (TTS) system. A perception study investigated how well the resulting synthesized prosody fits with emotional states defined through textual situation descriptions. Results show that degree of activation is perceived as intended
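A minimal sketch of what such a prosody rule can look like is given below: an activation value in [-1, 1] is mapped to relative changes in pitch level, pitch range, speech rate and loudness. The scaling factors are illustrative assumptions, not the rule set implemented in the cited TTS system.

```python
# Minimal sketch of an activation-to-prosody rule of the kind described above.
# The scaling factors are illustrative assumptions only.
def prosody_rules(activation):
    return {
        "pitch_shift_pct":  10.0 * activation,   # higher activation -> higher pitch
        "pitch_range_pct":  20.0 * activation,   # ... and wider pitch range
        "rate_pct":          8.0 * activation,   # ... and faster speech
        "loudness_db":       3.0 * activation,   # ... and louder voice
    }

print(prosody_rules(0.7))    # an aroused, high-activation state
print(prosody_rules(-0.5))   # a subdued, low-activation state
```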
---
paper_title: Modelling personality features by changing prosody in synthetic speech
paper_content:
This study explores how features of brand personalities can be modelled with the prosodic parameters pitch level, pitch range, articulation rate and loudness. Experiments with parametrical diphone synthesis showed that listeners rated the prosodically changed versions better than a baseline version for the dimensions "sincerity", "competence", "sophistication", "excitement" and "ruggedness". The contribution of prosodic features such as lower pitch and an enlarged pitch range is analyzed and discussed.
---
paper_title: Verification of Acoustical Correlates of Emotional Speech using Formant- Synthesis
paper_content:
This paper explores the perceptual relevance of acoustical correlates of emotional speech by means of speech synthesis. In addition, the research aims at the development of emotion rules which enable an optimized speech synthesis system to generate emotional speech. Two investigations using this synthesizer are described: 1) the systematic variation of selected acoustical features to gain a preliminary impression regarding the importance of certain acoustical features for emotional expression, and 2) the specific manipulation of a stimulus spoken under emotionally neutral conditions to investigate further the effect of certain features and the overall ability of the synthesizer to generate recognizable emotional expression. It is shown that this approach is indeed capable of generating emotional speech that is recognized almost as well as utterances realized by actors.
---
paper_title: Expressive Speech Synthesis: Past, Present, and Possible Futures
paper_content:
Approaches towards adding expressivity to synthetic speech have changed considerably over the last 20 years. Early systems, including formant and diphone systems, have been focused around “explicit control” models; early unit selection systems have adopted a “playback” approach. Currently, various approaches are being pursued to increase the flexibility in expression while maintaining the quality of state-of-the-art systems, among them a new “implicit control” paradigm in statistical parametric speech synthesis, which provides control over expressivity by combining and interpolating between statistical models trained on different expressive databases. The present chapter provides an overview of the past and present approaches, and ventures a look into possible future developments.
---
paper_title: Automatic Exploration of Corpus-Specific Properties for Expressive Text-to-Speech: A Case Study in Emphasis.
paper_content:
In this paper we explore an approach to expressive text-to-speech synthesis in which pre-existing expression-specific corpora are complemented with automatically generated labels to augment the search space of units the engine can exploit to increase its expressiveness. We motivate this data-discovery approach as an alternative to an approach guided by data collection, in order to harness the full usefulness of the expressiveness already contained in a synthesis corpus. We illustrate the approach with a case study that uses emphasis as its intended expression, describe algorithms for the automatic discovery of such instances in the database and how to make use of them during synthesis, and, finally, evaluate the benefits of the proposal to demonstrate the feasibility of the approach.
---
paper_title: Endowing spoken language dialogue systems with emotional intelligence
paper_content:
While most dialogue systems restrict themselves to the adjustment of the propositional contents, our work concentrates on the generation of stylistic variations in order to improve the user's perception of the interaction. To accomplish this goal, our approach integrates a social theory of politeness with a cognitive theory of emotions. We propose a hierarchical selection process for politeness behaviors in order to enable the refinement of decisions in case additional context information becomes available.
---
paper_title: Model Adaptation Approach to Speech Synthesis with Diverse Voices and Styles
paper_content:
In human computer interaction and dialogue systems, it is often desirable for text-to-speech synthesis to be able to generate natural sounding speech with an arbitrary speaker's voice and with varying speaking styles and/or emotional expressions. We have developed an average-voice-based speech synthesis method using statistical average voice models and model adaptation techniques for this purpose. In this paper, we describe an overview of the speech synthesis system and show the current performance with several experimental results.
---
paper_title: Simultaneous Modeling Of Spectrum, Pitch And Duration In HMM-Based Speech Synthesis
paper_content:
In this paper, we describe an HMM-based speech synthesis system in which spectrum, pitch and state duration are modeled simultaneously in a unified framework of HMM. In the system, pitch and state duration are modeled by multi-space probability distribution HMMs and multi-dimensional Gaussian distributions, respectively. The distributions for spectral parameter, pitch parameter and the state duration are clustered independently by using a decision-tree based context clustering technique. Synthetic speech is generated by using a speech parameter generation algorithm from HMM and a mel-cepstrum based vocoding technique. Through informal listening tests, we have confirmed that the proposed system successfully synthesizes natural-sounding speech which resembles the speaker in the training database.
---
paper_title: The IBM expressive text-to-speech synthesis system for American English
paper_content:
Expressive text-to-speech (TTS) synthesis should contribute to the pleasantness, intelligibility, and speed of speech-based human-machine interactions which use TTS. We describe a TTS engine which can be directed, via text markup, to use a variety of expressive styles, here, questioning, contrastive emphasis, and conveying good and bad news. Differences in these styles lead us to investigate two approaches for expressive TTS, a "corpus-driven" and a "prosodic-phonology" approach. Each speaker records 11 h (excluding silences) of "neutral" sentences. In the corpus-driven approach, the speaker also records 1-h corpora in each expressive style; these segments are tagged by style for use during search, and decision trees for determining f0 contours and timing are trained separately for each of the neutral and expressive corpora. In the prosodic-phonology approach, rules translating certain expressive markup elements to tones and break indices (ToBI) are manually determined, and the ToBI elements are used in single f0 and duration trees for all expressions. Tests show that listeners identify synthesis in particular styles ranging from 70% correctly for "conveying bad news" to 85% for "yes-no questions". Further improvements are demonstrated through the use of speaker-pooled f0 and duration models
---
paper_title: Politeness : some universals in language usage
paper_content:
Symbols and abbreviations Foreword John J. Gumperz Introduction to the reissue Notes 1. Introduction 2. Summarized argument 3. The argument: intuitive bases and derivative definitions 4. On the nature of the model 5. Realizations of politeness strategies in language 6. Derivative hypotheses 7. Sociological implications 8. Implications for language studies 9. Conclusions Notes References Author index Subject index.
---
paper_title: Modelling politeness in natural language generation
paper_content:
One of the main objectives of research in Natural Language generation (NLG) is to account for linguistic variation in a systematic way. Research on linguistic politeness provides important clues as to the possible causes of linguistic variation and the ways in which it may be modelled formally. In this paper we present a simple language generation model for choosing the appropriate surface realisations of tutoring responses based on the politeness notion of face. We adapt the existing definition of face to the demands of the educational genre and we demonstrate how a politeness driven NLG system may result in a more natural and a more varied form of linguistic output.
---
paper_title: Modelling prominence and emphasis improves unit-selection synthesis
paper_content:
We describe the results of large scale perception experiments showing improvements in synthesising two distinct kinds of prominence: standard pitch-accent and strong emphatic accents. Previously prominence assignment has been mainly evaluated by computing accuracy on a prominence-labelled test set. By contrast, we integrated an automatic pitch-accent classifier into the unit selection target cost and showed that listeners preferred these synthesised sentences. We also describe an improved recording script for collecting emphatic accents, and show that generating emphatic accents leads to further improvements in the fiction genre over incorporating pitch accent only. Finally, we show differences in the effects of prominence between child-directed speech and news and fiction genres.
---
paper_title: A style control technique for HMM-based speech synthesis.
paper_content:
This paper describes an approach to controlling the style of synthetic speech in HMM-based speech synthesis. The style is defined as one of the speaking styles and emotional expressions in speech. We model each speech synthesis unit by using a context-dependent HMM whose mean vector of the output distribution function is given by a function of a parameter vector called the style control vector. We assume that the mean vector is modeled by multiple regression with the style control vector. The multiple regression matrices are estimated by the EM algorithm, as well as the other model parameters of the HMMs. In the synthesis stage, the mean vectors are modified by transforming an arbitrarily given control vector which is associated with a desired style. The results of subjective tests show that we can control styles by choosing the style control vector appropriately.
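The style-control idea can be summarised in a few lines of code: each state mean is a multiple regression on the style control vector, so varying that vector interpolates between styles. The matrix and bias values below are toy numbers, not trained parameters (which the paper estimates with the EM algorithm).

import numpy as np

# Toy example: 4-dimensional "mean vector" of one HMM state, controlled by a
# 2-dimensional style vector (e.g. [joyful, rough]).
mu0 = np.array([1.0, 0.5, -0.2, 0.0])          # neutral-style mean
M = np.array([[ 0.3, -0.1],
              [ 0.2,  0.0],
              [ 0.0,  0.4],
              [-0.1,  0.2]])

def state_mean(style_vector):
    s = np.asarray(style_vector, dtype=float)
    return mu0 + M @ s                          # mean = mu0 + M * style

print(state_mean([0.0, 0.0]))   # neutral
print(state_mean([1.0, 0.0]))   # fully one style
print(state_mean([0.5, 0.5]))   # mixed / intermediate style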
---
paper_title: Generating Socially Appropriate Tutorial Dialog
paper_content:
Analysis of student-tutor coaching dialogs suggests that good human tutors attend to and attempt to influence the motivational state of learners. Moreover, they are sensitive to the social face of the learner, and seek to mitigate the potential face threat of their comments. This paper describes a dialog generator for pedagogical agents that takes motivation and face threat factors into account. This enables the agent to interact with learners in a socially appropriate fashion, and foster intrinsic motivation on the part of the learner, which in turn may lead to more positive learner affective states.
---
paper_title: Politeness and Alignment in Dialogues with a Virtual Guide
paper_content:
Language alignment is something that happens automatically in dialogues between human speakers. The ability to align is expected to increase the believability of virtual dialogue agents. In this paper we extend the notion of alignment to affective language use, describing a model for dynamically adapting the linguistic style of a virtual agent to the level of politeness and formality detected in the user's utterances. The model has been implemented in the Virtual Guide, an embodied conversational agent giving directions in a virtual environment. Evaluation shows that our formality model needs improvement, but that the politeness tactics used by the Guide are mostly interpreted as intended, and that the alignment to the user's language is noticeable.
---
paper_title: Unit selection in a concatenative speech synthesis system using a large speech database
paper_content:
One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.
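The unit-selection search sketched in this abstract is essentially a shortest-path problem over candidate units, with target costs on nodes and concatenation costs on edges. Below is a small, self-contained dynamic-programming (Viterbi-style) sketch; the toy cost functions and candidate lists stand in for real spectral and prosodic distances.

import numpy as np

def select_units(candidates, target_cost, concat_cost):
    """candidates[i] is the list of database units for target position i."""
    n = len(candidates)
    best = [np.array([target_cost(u, 0) for u in candidates[0]])]
    back = []
    for i in range(1, n):
        prev, cur = candidates[i - 1], candidates[i]
        cost = np.empty(len(cur))
        ptr = np.empty(len(cur), dtype=int)
        for j, u in enumerate(cur):
            trans = [best[-1][k] + concat_cost(prev[k], u) for k in range(len(prev))]
            ptr[j] = int(np.argmin(trans))
            cost[j] = trans[ptr[j]] + target_cost(u, i)
        best.append(cost)
        back.append(ptr)
    # Trace back the cheapest path of unit indices.
    path = [int(np.argmin(best[-1]))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return list(reversed(path))

# Toy costs: units are numbers, targets are indices; real costs would compare
# prosodic/spectral features of candidate units and predicted targets.
cands = [[1.0, 2.0], [1.5, 3.0], [2.5, 2.6]]
path = select_units(cands,
                    target_cost=lambda u, i: abs(u - (i + 1)),
                    concat_cost=lambda a, b: abs(a - b))
print(path)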
---
paper_title: Social role awareness in animated agents
paper_content:
This paper promotes social role awareness as a desirable capability of animated agents, which are by now strong affective reasoners but otherwise often lack the social competence observed with humans. In particular, humans may easily adjust their behavior depending on their respective role in a socio-organizational setting, whereas their synthetic pendants tend to be driven mostly by attitudes, emotions, and personality. Our main contribution is the incorporation of 'social filter programs' into mental models of animated agents. Those programs may qualify an agent's expression of its emotional state by the social context, thereby enhancing the agent's believability as a conversational partner or virtual teammate. Our implemented system is entirely web-based and demonstrates socially aware animated agents in an environment similar to Hayes-Roth's Cybercafé.
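A toy sketch of a 'social filter' that qualifies the expression of a felt emotion by the social context; the attenuation formula and the power/distance parameters are assumptions made for illustration, not the paper's actual filter programs.

def socially_filtered_intensity(felt_intensity, valence, power, distance):
    """Attenuate expressed emotion depending on the social context.

    felt_intensity: internal emotion strength in [0, 1]
    valence:        +1 for positive emotions, -1 for negative ones
    power:          social power of the interlocutor over the agent, in [0, 1]
    distance:       social distance to the interlocutor, in [0, 1]
    """
    # Negative emotions are suppressed more strongly towards powerful or
    # distant interlocutors; positive emotions are displayed almost fully.
    if valence < 0:
        suppression = 0.6 * power + 0.3 * distance
    else:
        suppression = 0.1 * distance
    return max(0.0, felt_intensity * (1.0 - suppression))

# Anger towards a superior is toned down, joy towards a friend is not.
print(socially_filtered_intensity(0.9, -1, power=0.9, distance=0.5))
print(socially_filtered_intensity(0.9, +1, power=0.1, distance=0.1))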
---
paper_title: Model of facial expressions management for an embodied conversational agent
paper_content:
In this paper we present a model of facial behaviour encompassing interpersonal relations for an Embodied Conversational Agent (ECA). Although previous solutions of this problem exist in ECA's domain, in our approach a variety of facial expressions (i.e. expressed, masked, inhibited, and fake expressions) is used for the first time. Moreover, our rules of facial behaviour management are consistent with the predictions of politeness theory as well as the experimental data (i.e. annotation of the video-corpus). Knowing the affective state of the agent and the type of relations between interlocutors the system automatically adapts the facial behaviour of an agent to the social context. We present also the evaluation study we have conducted of our model. In this experiment we analysed the perception of interpersonal relations from the facial behaviour of our agent.
---
paper_title: Limited domain synthesis of expressive military speech for animated characters
paper_content:
Text-to-speech synthesis can play an important role in interactive education and training applications, as voices for animated agents. Such agents need high-quality voices capable of expressing intent and emotion. This paper presents preliminary results in an effort aimed at synthesizing expressive military speech for training applications. Such speech has acoustic and prosodic characteristics that can differ markedly from ordinary conversational speech. A limited domain synthesis approach is used employing samples of expressive speech, classified according to speaking style. The resulting synthesizer was tested both in isolation and in the context of a virtual reality training scenario with animated characters.
---
paper_title: Generating Politeness in Task Based Interaction: an Evaluation of the Effect of Linguistic Form and Culture
paper_content:
Politeness is an integral part of human language variation, e.g. consider the difference in the pragmatic effect of realizing the same communicative goal with either "Get me a glass of water mate!" or "I wonder if I could possibly have some water please?" This paper presents POLLy (Politeness for Language Learning), a system which combines a natural language generator with an AI Planner to model Brown and Levinson's theory of politeness (B&L). Among the evaluation findings, our indirect strategies, which should be the politest forms, are seen as the rudest, and English and Indian native speakers of English have different perceptions of the level of politeness needed to mitigate particular face threats.
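Systems like this typically rank face threat using Brown and Levinson's weight W = D(S,H) + P(H,S) + R and pick a politeness strategy accordingly. The sketch below illustrates that selection step; the thresholds and surface templates are invented for illustration and are not POLLy's actual realisations.

def face_threat_weight(distance, power, imposition):
    # Brown & Levinson: W = D(S,H) + P(H,S) + R, each factor here normalised to [0, 1].
    return distance + power + imposition

def realise_request(action, distance, power, imposition):
    w = face_threat_weight(distance, power, imposition)
    if w < 0.8:        # bald on-record
        return f"{action.capitalize()}!"
    elif w < 1.6:      # positive politeness
        return f"How about you {action}, ok?"
    elif w < 2.4:      # negative politeness
        return f"Could you possibly {action}, please?"
    else:              # off-record hint
        return f"It would be nice if someone could {action}."

print(realise_request("open the window", distance=0.2, power=0.1, imposition=0.2))
print(realise_request("open the window", distance=0.9, power=0.9, imposition=0.8))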
---
paper_title: IDEAS4Games: Building Expressive Virtual Characters for Computer Games
paper_content:
In this paper we present two virtual characters in an interactive poker game using RFID-tagged poker cards for the interaction. To support the game creation process, we have combined models, methods, and technology that are currently investigated in the ECA research field in a unique way. A powerful and easy-to-use multimodal dialog authoring tool is used for the modeling of game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the art speech synthesizer. During the game, the characters show a consistent expressive behavior that reflects the individually simulated affect in speech and animations. As a result, users are provided with an engaging interactive poker experience.
---
paper_title: People as Flexible Interpreters: Evidence and Issues from Spontaneous Trait Inference
paper_content:
Publisher Summary This chapter investigates the ways in which readily inferences about others occur when inferences are not the focal task. The evidence and issues from spontaneous trait inference (STI) are also discussed in the chapter. STI occurs when attending to another person's behavior produces a trait inference in the absence of explicit intention to infer traits or form an impression of that person. Seven different paradigms have been employed to detect and investigate spontaneous trait inference: (1) cued recall under memory instructions; (2) cued recall of distractors; (3) recognition probe; (4) lexical decision; (5) delayed recognition; (6) word stem completion; and (7) relearning. Informational conditions review the ways in which the trait-relevant information presented in STI studies is systematically varied and its effects. The treatment of cognitive conditions focuses on the efficiency of STI and its minimal demands on cognitive capacity. The motivational conditions are divided into: proximal and distal goals. The chapter also explores whether STI refers to actors or merely to behaviors and the consequences of STI based on awareness, priming, prediction, and correspondence bias.
---
paper_title: Spontaneous Inferences, Implicit Impressions, and Implicit Theories
paper_content:
People make social inferences without intentions, awareness, or effort, i.e., spontaneously. We review recent findings on spontaneous social inferences (especially traits, goals, and causes) and closely related phenomena. We then describe current thinking on some of the most relevant processes, implicit knowledge, and theories. These include automatic and controlled processes and their interplay; embodied cognition, including mimicry; and associative versus rule-based processes. Implicit knowledge includes adult folk theories, conditions of personhood, self-knowledge to simulate others, and cultural and social class differences. Implicit theories concern Bayesian networks, recent attribution research, and questions about the utility of the disposition-situation dichotomy. Developmental research provides new insights. Spontaneous social inferences include a growing array of phenomena, but they have been insufficiently linked to other phenomena and theories. We hope the links suggested in this review begin to remedy this.
---
paper_title: Social Signal Processing: Survey of an Emerging Domain
paper_content:
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
---
paper_title: The behaviour markup language: recent developments and challenges
paper_content:
Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide, and continues to undergo further refinement. This paper reports on the progress made in the last year in further developing BML. It discusses some of the key challenges identified that the effort is facing, and reviews a number of projects that already are making use of BML or support its use.
---
paper_title: Comparing Rule-Based and Data-Driven Selection of Facial Displays
paper_content:
The non-verbal behaviour of an embodied conversational agent is normally based on recorded human behaviour. There are two main ways that the mapping from human behaviour to agent behaviour has been implemented. In some systems, human behaviour is analysed, and then rules for the agent are created based on the results of that analysis; in others, the recorded behaviour is used directly as a resource for decision-making, using data-driven techniques. In this paper, we implement both of these methods for selecting the conversational facial displays of an animated talking head and compare them in two user evaluations. In the first study, participants were asked for subjective preferences: they tended to prefer the output of the data-driven strategy, but this trend was not statistically significant. In the second study, the data-driven facial displays affected the ability of users to perceive user-model tailoring in synthesised speech, while the rule-based displays did not have any effect.
---
paper_title: The perception of emotions by ear and by eye
paper_content:
Emotions are expressed in the voice as well as on the face. As a first step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice. Experiment 1 showed that subjects can effectively combine information from the two sources, in that identification of the emotion in the face is biased in the direction of the simultaneously presented tone of voice. Experiment 2 showed that this effect occurs also under instructions to base the judgement exclusively on the face. Experiment 3 showed the reverse effect, a bias from the emotion in the face on judgement of the emotion in the voice. These results strongly suggest the existence of mandatory bidirectional links between affect detection structures in vision and audition.
---
paper_title: Honest Signals: How They Shape Our World
paper_content:
How can you know when someone is bluffing? Paying attention? Genuinely interested? The answer, writes Sandy Pentland in Honest Signals, is that subtle patterns in how we interact with other people reveal our attitudes toward them. These unconscious social signals are not just a back channel or a complement to our conscious language; they form a separate communication network. Biologically based "honest signaling," evolved from ancient primate signaling mechanisms, offers an unmatched window into our intentions, goals, and values. If we understand this ancient channel of communication, Pentland claims, we can accurately predict the outcomes of situations ranging from job interviews to first dates. Pentland, an MIT professor, has used a specially designed digital sensor worn like an ID badgea "sociometer"to monitor and analyze the back-and-forth patterns of signaling among groups of people. He and his researchers found that this second channel of communication, revolving not around words but around social relations, profoundly influences major decisions in our liveseven though we are largely unaware of it. Pentland presents the scientific background necessary for understanding this form of communication, applies it to examples of group behavior in real organizations, and shows how by "reading" our social networks we can become more successful at pitching an idea, getting a job, or closing a deal. Using this "network intelligence" theory of social signaling, Pentland describes how we can harness the intelligence of our social network to become better managers, workers, and communicators.
---
paper_title: Stuff I've seen: a system for personal information retrieval and re-use
paper_content:
Most information retrieval technologies are designed to facilitate information discovery. However, much knowledge work involves finding and re-using previously seen information. We describe the design and evaluation of a system, called Stuff I've Seen (SIS), that facilitates information re-use. This is accomplished in two ways. First, the system provides a unified index of information that a person has seen, whether it was seen as email, web page, document, appointment, etc. Second, because the information has been seen before, rich contextual cues can be used in the search interface. The system has been used internally by more than 230 employees. We report on both qualitative and quantitative aspects of system use. Initial findings show that time and people are important retrieval cues. Users find information more easily using SIS, and use other search tools less frequently after installation.
---
paper_title: RoleNet: Movie Analysis from the Perspective of Social Networks
paper_content:
With the idea of social network analysis, we propose a novel way to analyze movie videos from the perspective of social relationships rather than audiovisual features. To appropriately describe role's relationships in movies, we devise a method to quantify relations and construct role's social networks, called RoleNet. Based on RoleNet, we are able to perform semantic analysis that goes beyond conventional feature-based approaches. In this work, social relations between roles are used to be the context information of video scenes, and leading roles and the corresponding communities can be automatically determined. The results of community identification provide new alternatives in media management and browsing. Moreover, by describing video scenes with role's context, social-relation-based story segmentation method is developed to pave a new way for this widely-studied topic. Experimental results show the effectiveness of leading role determination and community identification. We also demonstrate that the social-based story segmentation approach works much better than the conventional tempo-based method. Finally, we give extensive discussions and state that the proposed ideas provide insights into context-based video analysis.
---
paper_title: Implicit Human-Centered Tagging
paper_content:
This paper provides a general introduction to the concept of Implicit Human-Centered Tagging (IHCT) — the automatic extraction of tags from nonverbal behavioral feedback of media users. The main idea behind IHCT is that nonverbal behaviors displayed when interacting with multimedia data (e.g., facial expressions, head nods, etc.) provide information useful for improving the tag sets associated with the data. As such behaviors are displayed naturally and spontaneously, no effort is required from the users, and this is why the resulting tagging process is said to be “implicit”. Tags obtained through IHCT are expected to be more robust than tags associated with the data explicitly, at least in terms of: generality (they make sense to everybody) and statistical reliability (all tags will be sufficiently represented). The paper discusses these issues in detail and provides an overview of pioneering efforts in the field.
---
paper_title: Integrating facial expressions into user profiling for the improvement of a multimodal recommender system
paper_content:
Over the years, recommender systems have been systematically applied in both industry and academia to assist users in dealing with information overload. One of the factors that determine the performance of a recommender system is user feedback, which has been traditionally communicated through the application of explicit and implicit feedback techniques. In this paper, we propose a novel video search interface that predicts the topical relevance of a video by analysing affective aspects of user behaviour. We, furthermore, present a method for incorporating such affective features into user profiling, to facilitate the generation of meaningful recommendations, of unseen videos. Our experiment shows that multimodal interaction feature is a promising way to improve the performance of recommendation.
---
paper_title: Mobile presence and intimacy—Reshaping social actions in mobile contextual configuration
paper_content:
Mobile communication has putatively affected our time–space relationship and the co-ordination of social action by weaving co-present interactions and mediated distant exchanges into a single, seamless web. In this article, we use Goodwin's notion of contextual configuration to review, elaborate and specify these processes. Goodwin defines contextual configuration as a local, interwoven set of language and material structures that frame social production of action and meaning. We explore how the mobile context is configured in mobile phone conversations. Based on the analysis of recordings of mobile conversation in Finland and Sweden, we analyze the ways in which ordinary social actions such as invitations and offers are carried out while people are mobile. We suggest that the mobile connection introduces a special kind of relationship to semiotic resources, creating its own conditions for emerging social actions. The reformation of social actions in mobility involves the possibility of intimate connection to the ongoing activities of the distant party. The particularities of mobile social actions are discerned here through sequential analysis that opens up contextually reconfigured actions as they are revealed in the details of mobile communication. In this way, we shed light on the reformation of social actions in mobile space–time.
---
paper_title: Smartphones An Emerging Tool for Social Scientists
paper_content:
Recent developments in mobile technologies have produced a new kind of device: a programmable mobile phone, the smartphone. In this article, the authors argue that the technological and social char...
---
paper_title: ClearBoard: a seamless medium for shared drawing and conversation with eye contact
paper_content:
This paper introduces a novel shared drawing medium called ClearBoard. It realizes (1) a seamless shared drawing space and (2) eye contact to support realtime and remote collaboration by two users. We devised the key metaphor: “talking through and drawing on a transparent glass window” to design ClearBoard. A prototype of ClearBoard is implemented based on the “Drafter-Mirror” architecture. This paper first reviews previous work on shared drawing support to clarify the design goals. We then examine three methaphors that fulfill these goals. The design requirements and the two possible system architectures of ClearBoard are described. Finally, some findings gained through the experimental use of the prototype, including the feature of “gaze awareness”, are discussed.
---
paper_title: Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship
paper_content:
Interfaces that talk and listen are populating computers, cars, call centers, and even home appliances and toys, but voice interfaces invariably frustrate rather than help. In Wired for Speech, Clifford Nass and Scott Brave reveal how interactive voice technologies can readily and effectively tap into the automatic responses all speech -- whether from human or machine -- evokes. Wired for Speech demonstrates that people are "voice-activated": we respond to voice technologies as we respond to actual people and behave as we would in any social situation. By leveraging this powerful finding, voice interfaces can truly emerge as the next frontier for efficient, user-friendly technology.Wired for Speech presents new theories and experiments and applies them to critical issues concerning how people interact with technology-based voices. It considers how people respond to a female voice in e-commerce (does stereotyping matter?), how a car's voice can promote safer driving (are "happy" cars better cars?), whether synthetic voices have personality and emotion (is sounding like a person always good?), whether an automated call center should apologize when it cannot understand a spoken request ("To Err is Interface; To Blame, Complex"), and much more. Nass and Brave's deep understanding of both social science and design, drawn from ten years of research at Nass's Stanford laboratory, produces results that often challenge conventional wisdom and common design practices. These insights will help designers and marketers build better interfaces, scientists construct better theories, and everyone gain better understandings of the future of the machines that speak with us.
---
paper_title: The 30-Sec Sale: Using Thin-Slice Judgments to Evaluate Sales Effectiveness
paper_content:
A successful sale depends on a customer's perception of the salesperson's personality, motivations, trustworthiness, and affect. Person perception research has shown that consistent and accurate assessments of these traits can be made based on very brief observations, or “thin slices.” Thus, examining impressions based on thin slices offers an effective approach to study how perceptions of salespeople translate into real-world results, such as sales performance and customer satisfaction. The literature on the accuracy of thin-slice judgments is briefly reviewed. Then, 2 studies are presented that investigated the predictive validity of judgments of salespeople based on thin slices of the vocal channel. Participants rated 20-sec audio clips extracted from interviews with a sample of sales managers, on variables gauging interpersonal skills, task-related skills, and anxiety. Results supported the hypothesis that observability of the rated variable is a key determinant in the criterion validity of thin-slice judgments. Implications for the use of thin-slice judgments in salesperson selection and customer satisfaction are discussed.
---
paper_title: A Conceptual Overview of the Self-Presentational Concerns and Response Tendencies of Focus Group Participants
paper_content:
Focus group respondents are often requested to perform tasks that require them to convey information about themselves. However, despite the potential for respondents to have self-presentational concerns, research on focus group productivity has virtually ignored extant scholarship on impression management. This shortcoming is addressed by presenting a conceptual overview of the effects of self-presentational concerns on focus group participation. A product of this overview is a conceptual model that posits that the amount and nature of information that people convey about themselves to others is a function of their eagerness to make desired impressions and their subjective probabilities of doing so. According to the model, when focus group participants are highly motivated to make desired impressions, they may be reluctant to present unbiased images of themselves. However, they are not likely to deceive unless they are confident in their abilities to ascertain and enact desired images. Those who are motivated to make desired impressions but are doubtful of doing so are likely to protect themselves by concealing self-relevant information or avoiding self-relevant issues. Implications of this model for research and practice are discussed.
---
paper_title: Automatic Generation of Non-verbal Behavior for Agents in Virtual Worlds: A System for Supporting Multimodal Conversations of Bots and Avatars
paper_content:
This paper presents a system capable of automatically adding gestures to an embodied virtual character processing information from a simple text input. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The system is embedded in the virtual world called second life and consists of an in world object and an off world server component that handles the analysis. Either a user controlled avatar or a non user controlled character can be used to display the gestures, that are timed with speech output from an Text-to-Speech system, and so show non verbal behavior without pushing the user to manually select it.
---
paper_title: Socially intelligent robots: dimensions of human–robot interaction
paper_content:
Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human–robot interaction (HRI) poses many challenges regarding the nature of interactivity and ‘social behaviour’ in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a ‘robotiquette’) that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human–child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human–robot experiments. The paper concludes by examining different paradigms regarding ‘social relationships’ of robots and people interacting with them.
---
| Title: Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing
Section 1: INTRODUCTION
Description 1: This section introduces the concept of Social Signal Processing (SSP) and its importance in bridging the gap between humans, who are inherently social, and machines, which lack social capabilities.
Section 2: NONVERBAL COMMUNICATION AND SOCIAL SIGNALS
Description 2: This section explores the history, components, and significance of nonverbal communication and how it relates to social signals.
Section 3: Toward a General Definition of Signals
Description 3: This section examines various definitions and theories of signals across multiple disciplines to develop a comprehensive understanding of what constitutes a social signal.
Section 4: Social Signals and Social Facts
Description 4: This section defines social signals and explains their roles in social interactions, social emotions, social attitudes, and social relations.
Section 5: AUTOMATIC ANALYSIS OF SOCIAL SIGNALS VIA NONVERBAL COMMUNICATION
Description 5: This section details the steps, current state-of-the-art techniques, challenges, and data resources involved in the automatic analysis of social signals through nonverbal communication.
Section 6: SYNTHESIS OF SOCIAL BEHAVIOR
Description 6: This section discusses the methodologies and challenges faced in the synthesis of social behaviors for artificial agents, including the generation of social actions, social emotions, social attitudes, and expressive speech.
Section 7: APPLICATIONS OF SSP
Description 7: This section highlights various application domains where Social Signal Processing can have significant impacts, such as multimedia indexing, mobile social interactions, human-computer interaction, marketing, and social robots.
Section 8: CONCLUSIONS
Description 8: This section summarizes the current state and future directions of Social Signal Processing, emphasizing the critical challenges and potential applications. |
A Survey of Codes for Optical Disk Recording | 10 | ---
paper_title: Optical Disc System for Digital Video Recording
paper_content:
We have developed a new error correction method (Picket: a combination of a long distance code (LDC) and a burst indicator subcode (BIS)), a new channel modulation scheme (17PP, or (1, 7) RLL parity preserve (PP)-prohibit repeated minimum transition runlength (RMTR) in full), and a new address format (zoned constant angular velocity (ZCAV) with headers and wobble, and practically constant linear density) for a digital video recording system (DVR) using a phase change disc with 9.2 GB capacity with the use of a red (λ=650 nm) laser and an objective lens with a numerical aperture (NA) of 0.85 in combination with a thin cover layer. Despite its high density, this new format is highly reliable and efficient. When extended for use with blue-violet (λ≈405 nm) diode lasers, the format is well suited to be the basis of a third-generation optical recording system with over 22 GB capacity on a single layer of a 12-cm-diameter disc.
---
paper_title: Codes for zero spectral density at zero frequency
paper_content:
In pulse-amplitude modulation (PAM) digital transmission systems line encoding is used for shaping the spectrum of the encoded symbol sequence to suit the frequency characteristics of the transmission channel. In particular, it is often required that the encoded symbol sequence have a zero mean and spectral density vanishing at zero frequency. We show that the finite running digital sum condition is a necessary and sufficient condition for this to occur. The result holds in particular for alphabetic codes, which are the most widely used line codes.
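The finite running digital sum (RDS) condition mentioned here is straightforward to check in code: map channel symbols to +1/-1, accumulate, and verify that the sum stays within a finite bound, which is what yields the spectral null at zero frequency. A minimal sketch for the binary case:

def running_digital_sum(bits):
    """Return the RDS trajectory of a binary sequence (bit 1 -> +1, bit 0 -> -1)."""
    rds, trajectory = 0, []
    for b in bits:
        rds += 1 if b else -1
        trajectory.append(rds)
    return trajectory

def is_dc_free(bits, bound):
    """A sequence whose RDS stays within a finite bound has zero power at DC."""
    return all(abs(s) <= bound for s in running_digital_sum(bits))

seq = [1, 0, 0, 1, 1, 0, 1, 0]          # balanced example
print(running_digital_sum(seq))         # [1, 0, -1, 0, 1, 0, 1, 0]
print(is_dc_free(seq, bound=2))         # True: RDS never leaves [-2, +2]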
---
paper_title: Upper bound on the efficiency of dc-constrained codes
paper_content:
We derive the limiting efficiencies of dc-constrained codes. Given bounds on the running digital sum (RDS), the best possible coding efficiency η, for a K-ary transmission alphabet, is η = log_2(λ_max) / log_2(K), where λ_max is the largest eigenvalue of a matrix which represents the transitions of the allowable states of RDS. Numerical results are presented for the three special cases of binary, ternary and quaternary alphabets.
---
paper_title: Channel capacity of charge-constrained run-length limited codes
paper_content:
The methods of information theory are applied to run-length limited codes with charge constraints. These self-clocking codes are useful in several areas of information storage, including magnetic recording, where it may be desirable to eliminate the dc component of the frequency spectrum. The channel capacity of run-length limited codes, with and without charge constraints, is derived and tabulated. The channel capacity specifies the maximum ratio of data/message bits achievable in implementing these codes and gives insight into the choice of codes for a particular task. The well-known frequency modulation (FM) code provides a simple example of these techniques.
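Capacities of the kind tabulated in such work can be reproduced numerically as log2 of the largest eigenvalue of the constraint's state-transition (adjacency) matrix. The sketch below does this for the plain (d, k) run-length constraint; adding a charge constraint would enlarge the state space with the allowed RDS values. This is illustrative code, not the paper's own tabulation.

import numpy as np

def rll_capacity(d, k):
    """Shannon capacity (bits per channel bit) of the (d, k) run-length constraint.

    States 0..k count the number of 0s written since the last 1; writing a 0
    moves from state i to i+1 (allowed while i < k), writing a 1 is allowed
    once at least d zeros have accumulated and returns to state 0.
    """
    n = k + 1
    A = np.zeros((n, n))
    for i in range(n):
        if i < k:
            A[i, i + 1] = 1          # emit a 0
        if i >= d:
            A[i, 0] = 1              # emit a 1
    lam_max = max(abs(np.linalg.eigvals(A)))
    return float(np.log2(lam_max))

print(round(rll_capacity(2, 10), 4))   # ~0.5418, the (2,10) constraint used by EFM
print(round(rll_capacity(1, 7), 4))    # ~0.6793, the (1,7) constraint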
---
paper_title: On modulation, coding and signal processing for optical and magnetic recording systems
paper_content:
A filter retaining device to prevent a unit filter from inadvertently falling out of a bottom access flow-through housing when the housing is opened to gain access to the interior. Bottom access flow-through filter housings typically include spaced apart elongated channels which slidably receive a unit filter. These channels serve to locate and hold a unit filter in a gas stream passing through the flow-through housing. The filter retaining means is removably attached to the bottom open end of each channel to support the unit filter in the flow-through housing when the bottom access is opened to gain access to the interior of the filter housing.
---
paper_title: Maximum Entropy Charge-Constrained Run-Length Codes
paper_content:
The authors present a study of run-length-limiting codes that have a null at zero frequency or DC. The class of codes or sequences considered is specified by three parameters: (d, k, c). The first two constraints, d and k, put lower and upper bounds on the run-lengths, while the charge constraint, c, is responsible for the spectral null. A description of the combined (d, k, c) constraints, in terms of a variable length graph, and its adjacency matrix, A(D), are presented. The maximum entropy description of the constraint described by a run-length graph is presented as well as the power spectral density. The results are used to study several examples of (d, k, c) constraints. The eigenvalues and eigenvectors of the classes of (d, k=2c-1, c) and (d, k=d+1, c) constraints for (c=1,2,...) are shown to satisfy certain second-order recursive equations. These equations are solved using the theory of Chebyshev polynomials.
---
paper_title: Optimization of low-frequency properties of eight-to-fourteen modulation
paper_content:
A description is given of the eight to fourteen modulation system (EFM) designed for the Compact Disc Digital Audio System with optical read-out. EFM combines high information density and immunity to tolerances in the light path with low power at the low-frequency end of the modulation bit stream spectrum. In this modulation scheme, blocks of eight data input bits are transformed into fourteen channel bits, which follow certain minimum and maximum run-length constraints by using a code book. To prevent violation of the minimum and maximum run-length constraints a certain number of merging bits are needed to concatenate the blocks. There are cases where the merging bits are not uniquely determined by the concatenation rules. This freedom of choice thus created is used for minimizing the power of the modulated bit sequence at low frequencies. The paper presents the results of algorithms that were used to minimize this low frequency content.
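The DC-control freedom described above (merging bits chosen to minimise low-frequency content) can be illustrated with a greedy rule that keeps the running digital sum close to zero. The three candidate patterns and the omission of the run-length check at the word boundary are simplifications for illustration, not the actual EFM algorithm.

def rds_after(bits, start_rds):
    rds = start_rds
    for b in bits:
        rds += 1 if b else -1
    return rds

def choose_merging_bits(rds, next_word, candidates=((0, 0, 0), (0, 0, 1), (1, 0, 0))):
    """Pick the merging pattern that keeps |RDS| smallest after the next word.

    Real EFM first discards candidates that would violate the (2,10) run-length
    constraint at the word boundary; that check is omitted here for brevity.
    """
    def final_rds(merge):
        return rds_after(next_word, rds_after(merge, rds))
    return min(candidates, key=lambda m: abs(final_rds(m)))

rds = 3                                # current running digital sum
word = [1, 0, 0, 1, 0, 0, 0, 1]        # next channel bits (toy example)
print(choose_merging_bits(rds, word))  # pattern that best pulls RDS back towards 0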
---
paper_title: Method for encoding binary data
paper_content:
Hydraulically powered percussive apparatus for breaking rock, drilling etc. in which a combined spool valve and hammer reciprocates in the bore of a cylinder structure. A high pressure fluid inlet leads to the bore and a low pressure outlet leads from the bore. A fluid flow passage communicates with the bore at two axially spaced locations. As the hammer reciprocates, it cyclically connects, via the bore, the inlet to the passage and the outlet to the passage. Pressures in the inlet and outlet fluctuate between fairly narrow limits whereas the pressure in the passage fluctuates between a pressure approximating inlet pressure and a pressure approximating outlet pressure. The hammer has two operating faces. The first of these is exposed to the relatively steady inlet pressure and the second is subjected to the widely varying pressure in the passage. The direction of the resultant force on the hammer reverses at the pressure in the passage varies from approximately inlet pressure to approximately outlet pressure.
---
paper_title: EFMplus: the coding format of the multimedia compact disc
paper_content:
Reports on an alternative to eight-to-fourteen modulation (EFM), called EFMPlus, which has been adopted as coding format of the multimedia compact disc proposal. The rate of the new code is 8/16, which means that a 6-7% higher information density can be obtained. EFMPlus is the spitting image of EFM (same minimum and maximum runlength, clock content, etc.). Computer simulations have shown that the low-frequency content of the new code is only slightly larger than its conventional EFM counterpart.
---
paper_title: A new approach to constructing optimal block codes for runlength-limited channels
paper_content:
The paper describes a technique for constructing fixed-length block codes for (d, k)-constrained channels. The codes described are of the simplest variety: codes for which the encoder restricted to any particular channel state is a one-to-one mapping and which is not permitted to "look ahead" to future messages. Such codes can be decoded with no memory and no anticipation and are thus an example of what Schouhamer Immink (1992) has referred to as block-decodable. For a given blocklength n and given values of (d, k), the procedure constructs a code with the highest possible rate among all such block codes, and it does so without the iterative search that is typically used (i.e., Franaszek's recursive elimination algorithm). The technique used is similar to Beenker and Immink's (1983) "Construction 2" in that every message is associated with a (d, k, l, r) sequence of length n-d; however, the values used in the present approach are l=k-d and r=k-1, as opposed to Beenker and Schouhamer Immink's values of l=r=k-d. Thus the present approach demonstrates that "Construction 2" is optimal for d=1 but is suboptimal for d>1. Furthermore, the structure of the present codes permits enumerative coding techniques to simplify encoding and decoding.
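Enumerative coding, mentioned at the end of this abstract, relies on counting constrained sequences; those counts become the weights of the encoder. The sketch below counts binary (d, k) sequences of length n under one common convention (as if a 1 immediately preceded the sequence); it is a generic illustration, not the paper's specific construction.

def count_dk_sequences(d, k, n):
    """Count binary sequences of length n obeying the (d, k) run-length constraint.

    state[i] = number of admissible sequences ending with exactly i zeros since
    the last 1. Such counts serve as the weights of an enumerative (un)ranking
    encoder.
    """
    state = [1] + [0] * k            # empty sequence: zero trailing zeros
    for _ in range(n):
        nxt = [0] * (k + 1)
        for i, ways in enumerate(state):
            if ways == 0:
                continue
            if i < k:                # append a 0
                nxt[i + 1] += ways
            if i >= d:               # append a 1 (at least d zeros in between)
                nxt[0] += ways
        state = nxt
    return sum(state)

for n in (4, 8, 16):
    print(n, count_dk_sequences(2, 10, n))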
---
paper_title: Constructions of almost block-decodable runlength-limited codes
paper_content:
Describes a new technique for constructing fixed-length (d,k) runlength-limited block codes. The new codes are very close to block-decodable codes, as decoding of the retrieved sequence can be accomplished by observing (part of) the received codeword plus a very small part (usually only a single bit) of the previous codeword. The basic idea of the new construction is to uniquely represent each source word by a (d,k) sequence with specific predefined properties, and to construct a bridge of β, 1 ≤ β ≤ d, merging bits between every pair of adjacent words. An essential element of the new coding principle is look ahead. The merging bits are governed by the state of the encoder (the history), the present source word to be translated, and by the upcoming source word. The new constructions have the virtue that only one look-up table is required for encoding and decoding.
---
paper_title: On runlength-limited coding with DC control
paper_content:
Constructions are presented of finite-state encoders for certain (d,k) runlength-limited (RLL) constraints with direct current control. In particular, an example is provided for a rate 8:16 encoder for the (2,10)-RLL constraint that requires no look-ahead in decoding, thus, performing favorably compared to the EFMPlus code used in the DVD standard.
---
paper_title: EFM coding: squeezing the last bits
paper_content:
Runlength-limited (RLL) codes have found widespread usage in optical and magnetic recording products. Specifically, the RLL codes EFM and its successor, EFMPlus, are used in the compact discs (CD) and the digital versatile discs (DVD), respectively. EFMPlus offers a 6% increase in storage capacity with respect to EFM. The work reports on the feasibility and limits of EFM-like codes that offer an even larger capacity. To this end, we provide an overview of the various limiting factors, such as runlength constraint, dc-content, and code complexity, and outline their relative effect on the code rate. In the second part of the article we show how the performance predicted by the tenets of information theory can be realized in practice. A worked example of a code whose rate is 7.5% larger than EFMPlus, namely a rate 256/476, (d=2, k=15) code, showing a 13 dB attenuation at f_b = 10^-3, is given to illustrate the theory.
---
paper_title: An Enumerative Coding Technique for DC-Free Runlength-Limited Sequences
paper_content:
We present an enumerative technique for encoding and decoding DC-free runlength-limited sequences. This technique enables the encoding and decoding of sequences approaching the maxentropic performance bounds very closely in terms of the code rate and low-frequency suppression capability. Use of finite-precision floating-point notation to express the weight coefficients results in channel encoders and decoders of moderate complexity. For channel constraints of practical interest, the hardware required for implementing such a quasi-maxentropic coding scheme consists mainly of a ROM of at most 5 kB.
---
paper_title: DC-free run-length-limited codes for magnetic recording
paper_content:
In this paper, a method to design high-efficiency codes with combined run-length and dc-null constraints is presented. Using the new method, two new dc-free (1,7) codes, with rates of 12/20 and 16/26 were developed. The code efficiencies are 89.31 and 91.6%, respectively. The proposed approach is more general than the technique used to design the existing dc-free d=1 code. The performances of the new codes are also compared with that of the rate 8/10 dc-free (0,4) code. The minimum distance analysis and the bit error rate simulation both show that while at low densities the proposed codes exhibit performance degradation compared to the higher rate (0,4) code, they perform better than the latter as density increases, despite the code rate disadvantage.
---
paper_title: Binary two-thirds rate code with full word look-ahead
paper_content:
A new 2/3-rate run-length-limited code with d = 1 and k = 7 is described in this paper. It is a state-dependent, look-ahead code that has advantages over the MFM and 3PM codes.
---
| Title: A Survey of Codes for Optical Disk Recording
Section 1: Properties of RLL and DCRLL Sequences
Description 1: Explain the basic characteristics of RLL and DCRLL sequences, their importance in optical disk recording, and their capacity and spectral properties.
Section 2: Capacity and Spectral Properties of DCRLL Sequences
Description 2: Discuss the capacity and spectral properties of DCRLL sequences in detail, including their power density functions and the impact of different parameters on the spectral performance.
Section 3: EFM
Description 3: Describe the EFM coding scheme, including its parameters, operation, and techniques used for minimizing low-frequency content.
Section 4: EFMPlus
Description 4: Outline the design and characteristics of EFMPlus, its advantages over EFM, and its application in DVD systems.
Section 5: Alternatives to EFM Schemes
Description 5: Explore the possibilities of redesigning EFM and EFMPlus codes, compare their performance and complexity, and propose alternative code designs.
Section 6: Performance of EFM-Like Coding Schemes
Description 6: Assess the spectral performance of various EFM-like coding schemes through computer simulation results and theoretical bounds.
Section 7: Other Examples of DCRLL Codes
Description 7: Present other examples of DCRLL codes, their systematic design approaches, and methods for enhancing dc-control.
Section 8: Dc-Control on Data Level versus Coded Level
Description 8: Compare the effectiveness of dc-control strategies performed at the source data level versus the channel data level, illustrating their impacts on low-frequency content suppression.
Section 9: Codes with Parity Preserving Word Assignment
Description 9: Describe the design and advantages of parity preserving codes, providing examples and performance comparisons with standard codes.
Section 10: Conclusion
Description 10: Summarize the survey, highlighting the key insights about channel codes for optical disk recording systems and their proximity to theoretical bounds. |
A Literature Review on Design Strategies and Methodologies of Low Power VLSI Circuits | 7 | ---
| Title: A Literature Review on Design Strategies and Methodologies of Low Power VLSI Circuits
Section 1: Introduction
Description 1: This section introduces the motivation for low power VLSI circuits, historical context, and the current importance of low power design in various applications.
Section 2: Sources of Power Dissipation
Description 2: This section discusses the major sources of power dissipation in CMOS circuits, including leakage current, short circuit current, and logic transitions.
Section 3: Low Power Design Space
Description 3: This section explains the various approaches to achieve low power design, focusing on reducing voltage, physical capacitance, and logic transitions.
Section 4: Power Minimization Techniques
Description 4: This section elaborates on different techniques for minimizing power consumption at the various levels of the design process, such as the system, logic synthesis, physical design, and circuit levels.
Section 5: CAD Methodologies
Description 5: This section explains how CAD tools and methodologies support power savings during the different phases of VLSI design implementation.
Section 6: Power Management Strategies
Description 6: This section discusses specific strategies for power management, including multiple threshold voltages, clock gating, multiple supply voltages, power gating, body biasing, and dynamic voltage and frequency scaling.
Section 7: Conclusion
Description 7: This section summarizes the various strategies and methodologies discussed for power reduction in VLSI circuits and highlights the importance and impact of these techniques. |
IMAGE BASED 3D FACE RECONSTRUCTION: A SURVEY | 14 | ---
paper_title: A Survey of 3D Face Recognition Methods
paper_content:
Much research in face recognition has dealt with the challenge of the great variability in head pose, lighting intensity and direction, facial expression, and aging. The main purpose of this overview is to describe recent 3D face recognition algorithms. In the last few years, more and more 2D face recognition algorithms have been improved and tested on less-than-perfect images. However, 3D models hold more information about the face, such as surface information, that can be used for face recognition or subject discrimination. Another major advantage is that 3D face recognition is pose invariant. A disadvantage of most presented 3D face recognition methods is that they still treat the human face as a rigid object. This means that these methods are not capable of handling facial expressions. Although 2D face recognition still seems to outperform 3D face recognition methods, it is expected that this will change in the near future.
---
paper_title: A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition
paper_content:
This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology.
---
paper_title: Facial feature extraction for quick 3D face modeling
paper_content:
There are two main processes to create a 3D animatable facial model from photographs. The first is to extract features such as the eyes, nose, mouth, and chin curves on the photographs. The second is to create a 3D individualized facial model using the extracted feature information. The final facial model is expected to have an individualized shape, photograph-realistic skin color, and animatable structures. Here, we describe our novel approach to detect features automatically using a statistical analysis of facial information. We are interested not only in the location of the features but also in the shape of local features. How to create 3D models from the detected features is also explained, and several resulting 3D facial models are illustrated and discussed.
---
paper_title: Automatic creation of 3D facial models
paper_content:
Model-based encoding of human facial features for narrowband visual communication is described. Based on an already prepared 3D human model, this coding method detects and understands a person's body motion and facial expressions. It expresses the essential information as compact codes and transmits it. At the receiving end, this code becomes the basis for modifying the 3D model of the person and thereby generating lifelike human images. The feature extraction used by the system to acquire data for regions or edges that express the eyes, nose, mouth, and outlines of the face and hair is discussed. The way in which the system creates a 3D model of the person by using the features extracted in the first part to modify a generic head model is also discussed.
---
paper_title: Recovering non-rigid 3D shape from image streams
paper_content:
The paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, C. Tomasi and T. Kanade's (1992) factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration, and shape. To the best of our knowledge, this is the first model-free approach that can recover non-rigid shape models from single-view video sequences. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.
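The rigid factorization baseline that this abstract generalizes can be sketched in a few lines: the registered 2F x P tracking matrix is (approximately) rank 3 under orthographic projection and splits into motion and shape via an SVD. The non-rigid method of the paper raises the rank to 3K for K basis shapes and uses a three-step factorization; only the rigid rank-3 step is shown here.

```python
import numpy as np

def factorize_rigid(W):
    """Rank-3 factorization of a 2F x P tracking matrix W (stacked image
    coordinates of P points over F frames) into affine motion M (2F x 3) and
    shape S (3 x P), up to a 3 x 3 ambiguity."""
    W = W - W.mean(axis=1, keepdims=True)       # register to the centroid
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    M = u[:, :3] * np.sqrt(s[:3])               # motion
    S = np.sqrt(s[:3])[:, None] * vt[:3]        # shape
    return M, S
```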
---
paper_title: Stereo matching with energy-minimizing snake grid for 3D face modeling
paper_content:
An energy minimizing snake algorithm that runs over a grid is designed and used to reconstruct high resolution 3D human faces from pairs of stereo images. The accuracy of reconstructed 3D data from stereo depends highly on how well stereo correspondences are established during the feature matching step. Establishing stereo correspondences on human faces is often ill posed and hard to achieve because of uniform texture, slow changes in depth, occlusion, and lack of gradient. We designed an energy minimizing algorithm that accurately finds correspondences on face images despite the aforementioned characteristics. The algorithm helps establish stereo correspondences unambiguously by applying a coarse-to-fine energy minimizing snake in grid format and yields a high resolution reconstruction at nearly every point of the image. Initially, the grid is stabilized using matches at a few selected high confidence edge points. The grid then gradually and consistently spreads over the low gradient regions of the image to reveal the accurate depths of object points. The grid applies its internal energy to approximate mismatches in occluded and noisy regions and to maintain smoothness of the reconstructed surfaces. The grid works in such a way that with every increment in reconstruction resolution, less time is required to establish correspondences. The snake used the curvature of the grid and gradient of image regions to automatically select its energy parameters and approximate the unmatched points using matched points from previous iterations, which also accelerates the overall matching process. The algorithm has been applied for the reconstruction of 3D human faces, and experimental results demonstrate the effectiveness and accuracy of the reconstruction.
---
paper_title: A morphable model for the synthesis of 3D faces
paper_content:
In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.
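The vector-space representation described above reduces face synthesis to linear algebra: a new face is the mean plus a weighted sum of principal components of the registered examples, and textures are handled the same way. A minimal sketch with random stand-in data and hypothetical dimensions:

```python
import numpy as np

# Hypothetical training data: m registered example faces, each stored as a
# 3n-vector of (x, y, z) vertex coordinates in dense correspondence.
m, n_vertices = 200, 5_000
shapes = np.random.randn(m, 3 * n_vertices)         # stand-in for real scans
mean_shape = shapes.mean(axis=0)

# PCA of the shape space: the principal components are the shape modes.
u, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
shape_modes = vt                                     # one mode per row
sigma = s / np.sqrt(m - 1)                           # per-mode std. deviations

def synthesize(alpha, n_modes=50):
    """New face shape = mean + sum_i alpha_i * sigma_i * mode_i."""
    coeffs = alpha[:n_modes] * sigma[:n_modes]
    return mean_shape + coeffs @ shape_modes[:n_modes]

new_face = synthesize(np.random.randn(50))           # a random plausible face
```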
---
paper_title: Shape reconstruction of 3D bilaterally symmetric surfaces
paper_content:
The paper presents a new approach for shape recovery based on integrating geometric and photometric information. We consider 3D objects which are symmetric with respect to a plane (e.g., faces) and their reconstruction from a single image. Both the viewpoint and the illumination are not necessarily frontal. In principle, no correspondence between symmetric points is required, but knowledge about a few corresponding pairs accelerates the process. The basic idea is that an image taken from a general, non-frontal viewpoint, under non-frontal illumination can be regarded as a pair of images of half of the object, taken from two different viewing positions and two different lighting directions. We show that integrating the photometric and geometric information yields the unknown lighting and viewing parameters, as well as dense correspondence between pairs of symmetric points. As a result, a dense shape recovery of the object is computed. The method has been implemented and tested experimentally on simulated and real data.
---
paper_title: Shape from recognition and learning: recovery of 3-D face shapes
paper_content:
In this paper, a novel framework for the recovery of 3D surfaces of faces from single images is developed. The underlying principle is shape from recognition, i.e., the idea that pre-recognizing face parts can constrain the space of possible solutions to the image irradiance equation, thus allowing robust recovery of the 3D structure of a specific part. Shape recovery of the recognized part is based on specialized backpropagation-based neural networks, each of which is employed in the recovery of a particular face part. Representation using principal components allows classes of objects such as the nose, lips, etc. to be encoded efficiently. The specialized networks are designed and trained to map the principal component coefficients of the shading images to another set of principal component coefficients that represent the corresponding 3D surface shapes. A method for integrating recovered 3D surface regions by minimizing the sum of squared errors in overlapping areas is also derived. Quantitative analysis of the reconstruction of the surface parts shows relatively small errors, indicating that the method is robust and accurate. The recovery of a complete face is performed by minimal-squared-error merging of face parts.
---
paper_title: Illumination-insensitive face recognition using symmetric shape-from-shading
paper_content:
Sensitivity to variations in illumination is a fundamental and challenging problem in face recognition. In this paper, we describe a new method based on symmetric shape-from-shading (SSFS) to develop a face recognition system that is robust to changes in illumination. The basic idea of this approach is to use the SSFS algorithm as a tool to obtain a prototype image which is illumination-normalized. It has been shown that the SSFS algorithm has a unique point-wise solution. But it is still difficult to recover accurate shape information given a single real face image with complex shape and varying albedo. Instead, we utilize the fact that all faces share a similar shape, making the direct computation of the prototype image from a given face image feasible. Finally, to demonstrate the efficacy of our method, we have applied it to several publicly available face databases.
---
paper_title: Statistical symmetric shape from shading for 3d structure recovery of faces
paper_content:
In this paper, we aim to recover the 3D shape of a human face using a single image. We use a combination of the symmetric shape from shading of Zhao and Chellappa and the statistical approach to facial shape reconstruction of Atick, Griffin, and Redlich. Given a single frontal image of a human face under a known directional illumination from a side, we represent the solution as a linear combination of basis shapes and recover the coefficients using a symmetry constraint on the facial shape and albedo. By solving a single least-squares system of equations, our algorithm provides a closed-form solution which satisfies both the symmetry and statistical constraints in the best possible way. Our procedure takes only a few seconds, accounts for varying facial albedo, and is simpler than the previous methods. In the special case of a horizontal illuminant direction, our algorithm runs even as fast as matrix-vector multiplication.
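Shape-from-shading formulations of this kind invert a Lambertian image-formation model; for reference, the forward model they assume can be written in a few lines (this is the rendering equation being inverted, not the recovery algorithm of the paper):

```python
import numpy as np

def lambertian_image(normals, albedo, light):
    """Forward Lambertian model assumed by these shape-from-shading methods:
    I(x, y) = albedo(x, y) * max(0, n(x, y) . l), with `normals` an
    H x W x 3 array of unit surface normals and `light` a unit 3-vector."""
    shading = np.clip(np.tensordot(normals, light, axes=([2], [0])), 0.0, None)
    return albedo * shading
```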
---
paper_title: A morphable model for the synthesis of 3D faces
paper_content:
In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.
---
paper_title: Model-based 3D face capture with shape-from-silhouettes
paper_content:
We present a method for 3D face acquisition using a set or sequence of 2D binary silhouettes. Since silhouette images depend only on the shape and pose of an object, they are immune to lighting and/or texture variations (unlike feature or texture-based shape-from-correspondence). Our prior 3D face model is a linear combination of "eigenheads" obtained by applying PCA to a training set of laser-scanned 3D faces. These shape coefficients are the parameters for a near-automatic system for capturing the 3D shape as well as the 2D texture-map of a novel input face. Specifically, we use back-projection and a boundary-weighted XOR-based cost function for binary silhouette matching, coupled with a probabilistic "downhill-simplex" optimization for shape estimation and refinement. Experiments with a multicamera rig as well as monocular video sequences demonstrate the advantages of our 3D modeling framework and ultimately, its utility for robust face recognition with built-in invariance to pose and illumination.
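The silhouette term mentioned above can be sketched as an XOR of binary masks weighted by distance to the observed contour; the exact weighting used by the authors may differ, so treat this as an illustrative variant rather than the paper's cost function.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def silhouette_cost(rendered, observed):
    """XOR-based mismatch between a rendered binary silhouette and the
    observed one, weighted so that mismatched pixels far from the observed
    boundary count more."""
    rendered = np.asarray(rendered, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    mismatch = np.logical_xor(rendered, observed)
    # unsigned distance of every pixel to the observed silhouette contour
    dist = distance_transform_edt(observed) + distance_transform_edt(~observed)
    return float((mismatch * dist).sum())
```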
---
paper_title: Interpreting face images using active appearance models
paper_content:
We demonstrate a fast, robust method of interpreting face images using an Active Appearance Model (AAM). An AAM contains a statistical model of shape and grey level appearance which can generalise to almost any face. Matching to an image involves finding model parameters which minimise the difference between the image and a synthesised face. We observe that displacing each model parameter from the correct value induces a particular pattern in the residuals. In a training phase, the AAM learns a linear model of the correlation between parameter displacements and the induced residuals. During search it measures the residuals and uses this model to correct the current parameters, leading to a better fit. A good overall match is obtained in a few iterations, even from poor starting estimates. We describe the technique in detail and show it matching to new face images.
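The search strategy described in this abstract is driven by a learned linear mapping from image residuals to parameter corrections. A compact sketch of that idea follows, with `measure_residual` standing in (hypothetically) for the image-sampling and synthesis machinery of a real AAM:

```python
import numpy as np

def learn_update_matrix(displacements, residuals):
    """Least-squares fit of the linear model displacement ~ residual @ A from
    training pairs gathered by perturbing known model parameters."""
    A, *_ = np.linalg.lstsq(residuals, displacements, rcond=None)
    return A

def aam_search(p0, measure_residual, A, n_iters=10):
    """Iteratively correct the parameters using the learned linear model.
    `measure_residual` is a hypothetical callable returning the grey-level
    difference between the image and the model synthesised at p."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(n_iters):
        r = measure_residual(p)
        p = p - r @ A            # predicted parameter correction
    return p
```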
---
paper_title: Active appearance models
paper_content:
We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.
---
paper_title: Face Reconstruction Across Different Poses and Arbitrary Illumination Conditions
paper_content:
In this paper, we present a novel method for face reconstruction from multi-posed face images taken under arbitrary unknown illumination conditions. Previous work shows that any face image can be represented by a set of low-dimensional parameters: shape parameters, spherical harmonic basis (SHB) parameters, pose parameters, and illumination coefficients. Thus, face reconstruction can be performed by recovering the set of parameters from the input images. In this paper, we demonstrate that the shape and SHB parameters can be estimated by minimizing the silhouette errors and image intensity errors in a fast and robust manner. We propose a new algorithm to detect the corresponding points between the 3D face model and the input images by using silhouettes. We also apply a model-based bundle adjustment technique to perform this minimization. We provide a series of experiments on both synthetic and real data, and the experimental results show that our method can achieve accurate face shape and texture reconstruction.
---
paper_title: Silhouette-Based 3D Face Shape Recovery
paper_content:
The creation of realistic 3D face models is still a fundamental problem in computer graphics. In this paper we present a novel method to obtain the 3D shape of an arbitrary human face using a sequence of silhouette images as input. Our face model is a linear combination of eigenheads, which are obtained by a Principal Component Analysis (PCA) of laser-scanned 3D human faces. The coefficients of this linear decomposition are used as our model parameters. We introduce a near-automatic method for reconstructing a 3D face model whose silhouette images match closest to the set of input silhouettes.
---
paper_title: Face image analysis using a multiple features fitting strategy
paper_content:
The main contribution of this thesis is a novel algorithm for fitting a Three-Dimensional Morphable Model of faces to a 2D input image. This fitting algorithm enables the estimation of the 3D shape, the texture, the 3D pose, and the light direction from a single input image. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixel intensity as input to drive the estimation process. This was previously achieved using a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm. Alternatively, this problem was addressed using a more precise model and minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm. However, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, as well as the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture, and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the Multi-Features Fitting algorithm, which has a wider radius of convergence and a higher level of precision. The new Multi-Features Fitting algorithm is applied for such tasks as face identification, facial expression transfer from one image to another image (of different individuals), and face tracking across 3D pose and expression variations. The second contribution of this thesis is a careful comparison of well-known fitting algorithms used in the context of face modelling and recognition. It is shown that these algorithms achieve high run-time efficiency at the cost of accuracy and generality (few face images may be analysed). The third and last contribution is the Matlab Morphable Model toolbox, a set of software tools developed in the Matlab programming environment. It allows (i) the generation of 3D faces from model parameters, (ii) the rendering of 3D faces, (iii) the fitting of an input image using the Multi-Features Fitting algorithm, and (iv) identification from model parameters. The toolbox has a modular design that allows anyone to build on it and, for instance, to improve the fitting algorithm by incorporating new features in the cost function.
---
paper_title: Estimation of 3D Faces and Illumination from Single Photographs Using a Bilinear Illumination Model
paper_content:
3D face modeling is still one of the biggest challenges in computer graphics. In this paper, we present a novel framework that acquires the 3D shape, texture, pose, and illumination of a face from a single photograph. Additionally, we show how we can recreate, or essentially relight, a face under varying illumination conditions. Using a custom-built face scanning system, we have collected 3D face scans and light reflection images of a large and diverse group of human subjects. We derive a morphable face model for 3D face shapes and accompanying textures by transforming the data into a linear vector sub-space. The acquired images of faces under variable illumination are then used to derive a bilinear illumination model that spans 3D face shape and illumination variations. Using both models, we, in turn, propose a novel fitting framework that estimates the parameters of the morphable model given a single photograph. Our framework can deal with complex face reflectance and lighting environments in an efficient and robust manner. In the results section of our paper, we compare our method to existing ones and demonstrate its efficacy in reconstructing 3D face models when provided with a single photograph. We also provide several examples of facial relighting (on 2D images) by performing adequate decomposition of the estimated illumination using our framework.
---
paper_title: Estimating Coloured 3D Face Models from Single Images: An Example Based Approach
paper_content:
In this paper we present a method to derive 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model which is “learned” from examples of individual 3D-face data (Cyberware-scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image.
---
paper_title: Automatic 3D face modeling from video
paper_content:
In this paper, we develop an efficient technique for fully automatic recovery of accurate 3D face shape from videos captured by a low-cost camera. The method is designed to work with a short video containing a face rotating from frontal view to profile view. The whole approach consists of three components. First, automatic initialization is performed in the first frame, which contains an approximately frontal face. Then, to handle the low-quality images captured by a low-cost camera, the 2D feature matching, head poses, and underlying 3D face shape are estimated and refined iteratively in an efficient way based on image sequence segmentation. Finally, to take advantage of the sparse structure of the proposed algorithm, a sparse bundle adjustment technique is further employed to speed up the computation. We demonstrate the accuracy and robustness of the algorithm using a set of experiments.
---
paper_title: Statistical symmetric shape from shading for 3d structure recovery of faces
paper_content:
In this paper, we aim to recover the 3D shape of a human face using a single image. We use a combination of the symmetric shape from shading of Zhao and Chellappa and the statistical approach to facial shape reconstruction of Atick, Griffin, and Redlich. Given a single frontal image of a human face under a known directional illumination from a side, we represent the solution as a linear combination of basis shapes and recover the coefficients using a symmetry constraint on the facial shape and albedo. By solving a single least-squares system of equations, our algorithm provides a closed-form solution which satisfies both the symmetry and statistical constraints in the best possible way. Our procedure takes only a few seconds, accounts for varying facial albedo, and is simpler than the previous methods. In the special case of a horizontal illuminant direction, our algorithm runs even as fast as matrix-vector multiplication.
---
paper_title: A morphable model for the synthesis of 3D faces
paper_content:
In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.
---
paper_title: Surfaces over Dirichlet tessellations
paper_content:
We develop a new class of surfaces, based on the concept of Bezier simplices expressed in terms of Sibson's natural neighbor coordinates. We also show how these surfaces may be utilized for scattered data interpolation.
---
paper_title: Automatic creation of 3D facial models
paper_content:
Model-based encoding of human facial features for narrowband visual communication is described. Based on an already prepared 3D human model, this coding method detects and understands a person's body motion and facial expressions. It expresses the essential information as compact codes and transmits it. At the receiving end, this code becomes the basis for modifying the 3D model of the person and thereby generating lifelike human images. The feature extraction used by the system to acquire data for regions or edges that express the eyes, nose, mouth, and outlines of the face and hair is discussed. The way in which the system creates a 3D model of the person by using the features extracted in the first part to modify a generic head model is also discussed.
---
paper_title: Which stereo matching algorithm for accurate 3d face creation
paper_content:
This paper compares the efficiency of several stereo matching algorithms in reconstructing 3D faces from both real and synthetic stereo pairs. The stereo image acquisition system setup and the creation of a face disparity map benchmark image are detailed. Ground truth is built by visual matching of corresponding nodes of a dense colour grid projected onto the faces. This experiment was also performed on a human face model created using OpenGL with mapped texture, to create as perfect an evaluation set as possible, instead of the real human faces used in our previous experiments. Performance of the algorithms is measured by deviations of the reconstructed surfaces from a ground truth prototype. This experiment shows that, contrary to expectations, there is seemingly very little difference between the best-known current stereo algorithms in the case of human face reconstruction. It is shown that by combining the most efficient but slow graph-cut algorithm with fast dynamic programming, more accurate reconstruction results can be obtained.
---
paper_title: From 2D images to 3D face geometry
paper_content:
This paper presents a global scheme for 3D face reconstruction and face segmentation into a limited number of analytical patches from stereo images. From a depth map, we generate a 3D model of the face which is iteratively deformed under stereo and shape-from-shading constraints as well as differential features. This model enables us to improve the quality of the depth map, from which we perform the segmentation and the approximation of the surface.
---
paper_title: Regularized Bundle-Adjustment to Model Heads from Image Sequences without Calibration Data
paper_content:
We address the structure-from-motion problem in the context of head modeling from video sequences for which calibration data is not available. This task is made challenging by the fact that correspondences are difficult to establish due to lack of texture and that a quasi-Euclidean representation is required for realism. We have developed an approach based on regularized bundle-adjustment. It takes advantage of our rough knowledge of the head's shape, in the form of a generic face model. It allows us to recover relative head-motion and epipolar geometry accurately and consistently enough to exploit a previously developed stereo-based approach to head modeling. In this way, complete and realistic head models can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera, with minimal manual intervention. We chose to demonstrate and evaluate our technique mainly in the context of head-modeling. We do so because it is the application for which all the tools required to perform the complete reconstruction are available to us. We will, however, argue that the approach is generic and could be applied to other tasks, such as body modeling, for which generic facetized models exist.
---
paper_title: Stereo matching with energy-minimizing snake grid for 3D face modeling
paper_content:
An energy minimizing snake algorithm that runs over a grid is designed and used to reconstruct high resolution 3D human faces from pairs of stereo images. The accuracy of reconstructed 3D data from stereo depends highly on how well stereo correspondences are established during the feature matching step. Establishing stereo correspondences on human faces is often ill posed and hard to achieve because of uniform texture, slow changes in depth, occlusion, and lack of gradient. We designed an energy minimizing algorithm that accurately finds correspondences on face images despite the aforementioned characteristics. The algorithm helps establish stereo correspondences unambiguously by applying a coarse-to-fine energy minimizing snake in grid format and yields a high resolution reconstruction at nearly every point of the image. Initially, the grid is stabilized using matches at a few selected high confidence edge points. The grid then gradually and consistently spreads over the low gradient regions of the image to reveal the accurate depths of object points. The grid applies its internal energy to approximate mismatches in occluded and noisy regions and to maintain smoothness of the reconstructed surfaces. The grid works in such a way that with every increment in reconstruction resolution, less time is required to establish correspondences. The snake used the curvature of the grid and gradient of image regions to automatically select its energy parameters and approximate the unmatched points using matched points from previous iterations, which also accelerates the overall matching process. The algorithm has been applied for the reconstruction of 3D human faces, and experimental results demonstrate the effectiveness and accuracy of the reconstruction.
---
paper_title: A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms
paper_content:
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
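The simplest local method in this taxonomy, a per-pixel matching cost with square-window aggregation and winner-take-all disparity selection, can be written compactly. The sketch below uses an SSD cost and is meant only as a baseline illustration, not as any of the evaluated algorithms.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_matching_disparity(left, right, max_disp=32, radius=3):
    """Winner-take-all block matching on a rectified grayscale pair:
    SSD matching cost, (2*radius+1)^2 window aggregation, per-pixel minimum."""
    H, W = left.shape
    best_cost = np.full((H, W), np.inf)
    disparity = np.zeros((H, W), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :W - d]
        if d:
            shifted[:, :d] = right[:, :1]          # crude border handling
        cost = uniform_filter((left - shifted) ** 2, size=2 * radius + 1)
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```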
---
paper_title: Realistic video avatar
paper_content:
We present a system for the implementation of a photorealistic avatar using video captured from a user. This is achieved by constructing the dynamic video texture map and combining it with the 3D mesh model of the user to render the photorealistic avatar. The dynamic video texture map reflects the user's facial expressions and is generated by composing the cylindrical texture map of the user with constant updates either directly from the input video or derived from a temporal and spatial interpolation scheme. To derive the 3D mesh model for the user, we define feature points on a generic model and use the feature points identified on the video sequence to deform and obtain the avatar model. The goal of this realistic video avatar project is to provide a vivid representation of participants with a more realistic quality avatar, as compared to using only the facial animation parameters defined in MPEG-4 without the corresponding image updates. This scheme is suitable for conferencing applications because it requires much lower bandwidth than live video, and yet provides a 3D avatar representation for any virtual environment.
---
paper_title: 3D face reconstruction from video using a generic model
paper_content:
Reconstructing a 3D model of a human face from a video sequence is an important problem in computer vision, with applications to recognition, surveillance, multimedia etc. However, the quality of 3D reconstructions using structure from motion (SfM) algorithms is often not satisfactory. One common method of overcoming this problem is to use a generic model of a face. Existing work using this approach initializes the reconstruction algorithm with this generic model. The problem with this approach is that the algorithm can converge to a solution very close to this initial value, resulting in a reconstruction which resembles the generic model rather than the particular face in the video which needs to be modeled. We propose a method of 3D reconstruction of a human face from video in which the 3D reconstruction algorithm and the generic model are handled separately. A 3D estimate is obtained purely from the video sequence using SfM algorithms without use of the generic model. The final 3D model is obtained after combining the SfM estimate and the generic model using an energy function that corrects for the errors in the estimate by comparing local regions in the two models. The optimization is done using a Markov chain Monte Carlo (MCMC) sampling strategy. The main advantage of our algorithm over others is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the generic model. The evolution of the 3D model through the various stages of the algorithm is presented.
---
paper_title: Rapid modeling of animated faces from video
paper_content:
Generating realistic 3D human face models and facial animations has been a persistent challenge in computer graphics. We have developed a system that constructs textured 3D face models from videos with minimal user interaction. Our system takes images and video sequences of a face with an ordinary video camera. After five manual clicks on two images to tell the system where the eye corners, nose top, and mouth corners are, the system automatically generates a realistic-looking 3D human head model, and the constructed model can be animated immediately. A user with a PC and an ordinary camera can use our system to generate his/her face model in a few minutes.
---
paper_title: Recovering non-rigid 3D shape from image streams
paper_content:
The paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, C. Tomasi and T. Kanade's (1992) factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration, and shape. To the best of our knowledge, this is the first model-free approach that can recover non-rigid shape models from single-view video sequences. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.
---
paper_title: Automatic 3D face modeling from video
paper_content:
In this paper, we develop an efficient technique for fully automatic recovery of accurate 3D face shape from videos captured by a low-cost camera. The method is designed to work with a short video containing a face rotating from frontal view to profile view. The whole approach consists of three components. First, automatic initialization is performed in the first frame, which contains an approximately frontal face. Then, to handle the low-quality images captured by a low-cost camera, the 2D feature matching, head poses, and underlying 3D face shape are estimated and refined iteratively in an efficient way based on image sequence segmentation. Finally, to take advantage of the sparse structure of the proposed algorithm, a sparse bundle adjustment technique is further employed to speed up the computation. We demonstrate the accuracy and robustness of the algorithm using a set of experiments.
---
paper_title: Extracting Structure from Optical Flow Using the Fast Error Search Technique
paper_content:
In this paper, we present a globally optimal and computationally efficient technique for estimating the focus of expansion (FOE) of an optical flow field, using fast partial search. For each candidate location on a discrete sampling of the image area, we generate a linear system of equations for determining the remaining unknowns, viz. rotation and inverse depth. We compute the least-squares error of the system without actually solving the equations, to generate an error surface that describes the goodness of fit across the hypotheses. Using Fourier techniques, we prove that given an N × N flow field, the FOE, and subsequently rotation and structure, can be estimated in O(N^2 log N) operations. Since the resulting system is linear, bounded perturbations in the data lead to bounded errors. We support the theoretical development and proof of our technique with experiments on synthetic and real data. Through a series of experiments on synthetic data, we prove the correctness, robustness, and operating envelope of our algorithm. We demonstrate the utility of our technique by applying it for detecting obstacles from a monocular sequence of images.
---
paper_title: Object tracking: A survey
paper_content:
The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.
---
paper_title: A morphable model for the synthesis of 3D faces
paper_content:
In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.
---
paper_title: From Regular Images to Animated Heads: A Least Squares Approach
paper_content:
We show that we can effectively fit arbitrarily complex animation models to noisy image data. Our approach is based on least-squares adjustment using a set of progressively finer control triangulations and takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2-D feature points.
---
paper_title: Model-based 3D face capture with shape-from-silhouettes
paper_content:
We present a method for 3D face acquisition using a set or sequence of 2D binary silhouettes. Since silhouette images depend only on the shape and pose of an object, they are immune to lighting and/or texture variations (unlike feature or texture-based shape-from-correspondence). Our prior 3D face model is a linear combination of "eigenheads" obtained by applying PCA to a training set of laser-scanned 3D faces. These shape coefficients are the parameters for a near-automatic system for capturing the 3D shape as well as the 2D texture-map of a novel input face. Specifically, we use back-projection and a boundary-weighted XOR-based cost function for binary silhouette matching, coupled with a probabilistic "downhill-simplex" optimization for shape estimation and refinement. Experiments with a multicamera rig as well as monocular video sequences demonstrate the advantages of our 3D modeling framework and ultimately, its utility for robust face recognition with built-in invariance to pose and illumination.
---
paper_title: Face Reconstruction Across Different Poses and Arbitrary Illumination Conditions
paper_content:
In this paper, we present a novel method for face reconstruction from multi-posed face images taken under arbitrary unknown illumination conditions. Previous work shows that any face image can be represented by a set of low-dimensional parameters: shape parameters, spherical harmonic basis (SHB) parameters, pose parameters, and illumination coefficients. Thus, face reconstruction can be performed by recovering the set of parameters from the input images. In this paper, we demonstrate that the shape and SHB parameters can be estimated by minimizing the silhouette errors and image intensity errors in a fast and robust manner. We propose a new algorithm to detect the corresponding points between the 3D face model and the input images by using silhouettes. We also apply a model-based bundle adjustment technique to perform this minimization. We provide a series of experiments on both synthetic and real data, and the experimental results show that our method can achieve accurate face shape and texture reconstruction.
---
paper_title: Silhouette-Based 3D Face Shape Recovery
paper_content:
The creation of realistic 3D face models is still a fundamental problem in computer graphics. In this paper we present a novel method to obtain the 3D shape of an arbitrary human face using a sequence of silhouette images as input. Our face model is a linear combination of eigenheads, which are obtained by a Principal Component Analysis (PCA) of laser-scanned 3D human faces. The coefficients of this linear decomposition are used as our model parameters. We introduce a near-automatic method for reconstructing a 3D face model whose silhouette images match closest to the set of input silhouettes.
---
paper_title: Animated Heads from Ordinary Images: A Least Squares Approach
paper_content:
We show that we can effectively fit arbitrarily complex animation models to noisy data extracted from ordinary face images. Our approach is based on least-squares adjustment, using a set of progressively finer control triangulations, and takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2D feature points. In this way, complete head models, including ears and hair, can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera. They can then be fed to existing animation software to produce synthetic sequences.
---
paper_title: A morphable model for the synthesis of 3D faces
paper_content:
In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.
---
paper_title: Automatic 3D reconstruction for face recognition
paper_content:
An analysis-by-synthesis framework for face recognition with variant pose, illumination and expression (PIE) is proposed in this paper. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: 1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) the proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with variant PIE.
---
paper_title: Illumination-insensitive face recognition using symmetric shape-from-shading
paper_content:
Sensitivity to variations in illumination is a fundamental and challenging problem in face recognition. In this paper, we describe a new method based on symmetric shape-from-shading (SSFS) to develop a face recognition system that is robust to changes in illumination. The basic idea of this approach is to use the SSFS algorithm as a tool to obtain a prototype image which is illumination-normalized. It has been shown that the SSFS algorithm has a unique point-wise solution. But it is still difficult to recover accurate shape information given a single real face image with complex shape and varying albedo. Instead, we utilize the fact that all faces share a similar shape, making the direct computation of the prototype image from a given face image feasible. Finally, to demonstrate the efficacy of our method, we have applied it to several publicly available face databases.
---
paper_title: Efficient 3D reconstruction for face recognition
paper_content:
Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient two-dimensional (2D)-to-three-dimensional (3D) integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and (3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.
---
paper_title: Face recognition from 2D and 3D images using 3D Gabor filters
paper_content:
To recognize faces with different facial expressions or varying views from only one stored prototype per person is challenging. This paper presents such a system based on both 3D range data as well as the corresponding 2D gray-level facial images. The traditional 3D Gabor filter (3D TGF) is explored in the face recognition domain to extract expression-invariant features. To extract view-invariant features, a rotation-invariant 3D spherical Gabor filter (3D SGF) is proposed. Furthermore, a two-dimensional (2D) Gabor histogram is employed to represent the Gabor responses of the 3D SGF for solving the missing-point problem caused by self-occlusions under large rotation angles. The choice of 3D Gabor filter parameters for face recognition is examined as well. To match a given test face with each model face, the Least Trimmed Square Hausdorff Distance (LTS-HD) is employed to tackle the possible partial-matching problem. Experimental results based on our face database involving 80 persons have demonstrated that our approach outperforms the standard Eigenface approach and the approach using the 2D Gabor-wavelets representation.
---
paper_title: Silhouette-Based 3D Face Shape Recovery
paper_content:
The creation of realistic 3D face models is still a fundamental problem in computer graphics. In this paper we present a novel method to obtain the 3D shape of an arbitrary human face using a sequence of silhouette images as input. Our face model is a linear combination of eigenheads, which are obtained by a Principal Component Analysis (PCA) of laser-scanned 3D human faces. The coefficients of this linear decomposition are used as our model parameters. We introduce a near-automatic method for reconstructing a 3D face model whose silhouette images match closest to the set of input silhouettes.
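Since the face model above is a linear combination of eigenheads obtained by PCA, the reconstruction step itself is a one-liner; the sketch below shows just that step, with placeholder mesh size, component count and random data.

```python
import numpy as np

def reconstruct_face(mean_head, eigenheads, coeffs):
    """Rebuild a 3-D face as the mean head plus a weighted sum of eigenheads.

    mean_head : (3N,) flattened mean vertex positions
    eigenheads: (K, 3N) principal components from PCA of laser-scanned faces
    coeffs    : (K,) model parameters (e.g. the ones fitted to input silhouettes)
    """
    return mean_head + coeffs @ eigenheads

# toy numbers: 5 eigenheads over a 1000-vertex mesh (illustrative only)
rng = np.random.default_rng(1)
mean_head = rng.normal(size=3000)
eigenheads = rng.normal(size=(5, 3000))
coeffs = np.array([0.8, -0.2, 0.1, 0.0, 0.5])
print(reconstruct_face(mean_head, eigenheads, coeffs).shape)
```

Fitting then amounts to searching for the coefficient vector whose rendered silhouettes best match the input silhouette images.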
---
paper_title: Prototyping and Transforming Facial Textures for Perception Research
paper_content:
Transforming facial images along perceived dimensions (such as age, gender, race, or health) has application in areas as diverse as psychology, medicine, and forensics. We can use prototype images to define the salient features of a particular face classification (for example, European female adult or East-Asian male child). We then use the differences between two prototypes to define an axis of transformation, such as younger to older. By applying these changes to a given input face, we can change its apparent age, race, or gender. Psychological investigations reveal a limitation with existing methods that's particularly apparent when changing the age of faces. We relate the problem to the loss of facial textures (such as stubble and wrinkles) in the prototypes due to the blending process. We review the existing face prototyping and transformation methods and present a new, wavelet-based method for prototyping and transforming facial textures.
---
| Title: IMAGE BASED 3D FACE RECONSTRUCTION: A SURVEY
Section 1: Introduction
Description 1: Provide an overview of the significance of face image processing and the motivation behind transitioning from 2D to 3D face reconstruction.
Section 2: Terminology
Description 2: Define essential terminology and concepts related to 3D face reconstruction, including parametric surfaces, triangle meshes, range images, and texture mapping methods.
Section 3: General 3D Face Reconstruction Topics
Description 3: Discuss general aspects encountered in different 3D face reconstruction algorithms that are frequently referenced in literature.
Section 4: 3D Face Databases
Description 4: Highlight the importance, usage, and requirements of 3D face databases in 3D face reconstruction methods.
Section 5: Establishing 3D Face Correspondences
Description 5: Review techniques used for establishing vertex correspondences among 3D faces in a database to achieve alignment of training samples.
Section 6: Feature Detection
Description 6: Describe the process and importance of locating key facial features in 2D images for use in 3D reconstruction algorithms.
Section 7: Assumptions
Description 7: Outline common assumptions made during 3D face reconstruction, such as types of projection, albedo, and light sources.
Section 8: Deformable 3D Models
Description 8: Cover different types of deformable models used in 3D face reconstruction, including wire-frame and statistical models.
Section 9: 3D Face Reconstruction Methodologies
Description 9: Categorize and detail various 3D face reconstruction methods, such as example-based, stereo, video, and silhouette-based methods.
Section 10: Performance Evaluation
Description 10: Examine methodologies and results for evaluating the performance and accuracy of 3D face reconstruction techniques.
Section 11: Discussion
Description 11: Discuss the strengths, limitations, and application potential of different 3D face reconstruction techniques, and their compliance with key criteria for real-life applications.
Section 12: Future Directions
Description 12: Identify the main research areas that need to be addressed to advance the field of image-based 3D face reconstruction.
Section 13: Conclusion
Description 13: Summarize the survey’s findings and the remaining challenges in the development of image-based 3D face reconstruction techniques.
Section 14: Acknowledgments
Description 14: Acknowledge the contributions and support received for the work presented in the survey. |
SURVEY ON EVOLUTIONARY COMPUTATION TECH TECHNIQUES AND ITS APPLICATION IN DIFFERENT FIELDS | 8 | ---
paper_title: Swarm Intelligence and Image Segmentation
paper_content:
Image segmentation plays an essential role in the interpretation of various kinds of images. Image segmentation techniques can be grouped into several categories such as edge-based segmentation, region-oriented segmentation, histogram thresholding, and clustering algorithms (Gonzalez & Woods, 1992). The aim of a clustering algorithm is to aggregate data into groups such that the data in each group share similar features while the data clusters are distinct from each other. The K-means algorithm is a widely used method for finding the structure of data (Tou & Gonzalez 1974). This unsupervised clustering technique has a strong tendency to get stuck in local minima when finding an optimal solution. Therefore, clustering results are heavily dependent on the initial cluster centers distribution. Hence, the search for good initial parameters is a challenging issue and the clustering algorithms require a great deal of experimentation to determine the input parameters for the optimal or suboptimal clustering results. The competitive learning model introduced in (Rumelhart & Zipser, 1986) is an interesting and powerful learning algorithm which can be used in unsupervised training for image classification (Hung, 1993). Simple Competitive Learning (SCL) shows stability over different run trials, but this stable result is not always the global optimum. In fact, in some cases the SCL converges to local optima over all run trials and the learning rate needs to be adjusted in the course of experimentation so that the global optimization can be achieved. There are a number of techniques, developed for optimization, inspired by the behaviours of natural systems (Pham & Karaboga, 2000). Swarm intelligence (SI) including Ant Colony Optimization (ACO) introduced in (Dorigo et al., 1996) and Particle Swarm Optimization (PSO) introduced in (Kennedy & Eberhart, 1995) has been introduced in the literature as an optimization technique. There are several SI approaches for data clustering in the literature which use clustering techniques such as the K-means algorithm. In most of these approaches ACO or PSO are used to obtain the initial cluster centers for the K-means algorithm. We propose a hybrid algorithm which combines SI with K-means. We also use the same method to combine SI with SCL. Our aim is to make segmentation results of both K-means and SCL less dependent on the initial cluster centers and learning rate respectively. Hence, their results are more accurate and stabilized by employing the ACO and PSO optimization techniques. This improvement is due to the larger search space provided by these techniques. In addition, our
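As a rough illustration of the hybrid idea described above (a swarm searches for good initial cluster centers, K-means then refines them), the following sketch runs a tiny PSO over candidate centers for 1-D pixel intensities. All parameters and the synthetic data are assumptions, and the paper's ACO and SCL variants are not covered.

```python
import numpy as np

def sse(centers, data):
    """Within-cluster sum of squared errors for 1-D pixel intensities."""
    d = np.abs(data[:, None] - centers[None, :])
    return float((d.min(axis=1) ** 2).sum())

def pso_kmeans(data, k=3, particles=20, iters=50, seed=0):
    """Tiny PSO searches for good initial centers, then K-means refines them."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(), data.max()
    x = rng.uniform(lo, hi, size=(particles, k))     # particle positions = candidate centers
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_f = np.array([sse(p, data) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):                           # PSO phase
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([sse(p, data) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    centers = np.sort(gbest)                         # K-means refinement phase
    for _ in range(10):
        labels = np.abs(data[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean()
    return centers, labels

# synthetic grey-level "image" with three intensity modes (illustrative only)
rng = np.random.default_rng(42)
pixels = np.concatenate([rng.normal(40, 5, 500), rng.normal(120, 8, 500), rng.normal(200, 6, 500)])
centers, labels = pso_kmeans(pixels, k=3)
print(np.round(centers, 1))
```

Seeding K-means from the best particle is what makes the final segmentation less dependent on the initial cluster center distribution.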
---
paper_title: A Taxonomy and Survey of Cloud Computing Systems
paper_content:
The computational world is becoming very large and complex. Cloud Computing has emerged as a popular computing model to support processing large volumetric data using clusters of commodity computers. According to J. Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data. It's some fascinating, large-scale processing of data that makes your head spin and appreciate the years of distributed computing fine-tuning applied to today's large problems. Cloud computing has evolved to handle such massive data as an on-demand service. Nowadays the computational world is opting for pay-for-use models. Hype and discussion aside, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. Then we use this taxonomy to survey several existing cloud computing services developed by various projects world-wide such as Google, force.com, Amazon. We use the taxonomy and survey results not only to identify similarities and differences of the architectural approaches of cloud computing, but also to identify areas requiring further research.
---
paper_title: Process selection and tool assignment in automated cellular manufacturing using Genetic Algorithms
paper_content:
The purpose of this study is to develop an efficient heuristic for the process selection and part cell assignment problem. The study assumes a production environment where each part has several process plans, each manifested by a required set of tools. These tools can be assigned to different machines based on a tool-machine compatibility matrix. An additional assumption is that all relevant data such as periodic demand, processing time, processing cost, tool magazine capacity, tool changing time, tool life and tool cost are fixed and known. A mixed integer linear program which takes all relevant data into account is developed to minimize the production cost. The suggested solution approach to solve this model makes use of Genetic Algorithms: a class of heuristic search and optimization techniques that imitate the natural selection and evolutionary process. First, the encoding of the solutions into integer strings is presented, as well as the genetic operators used by the algorithm. Next, the efficiency and robustness of the solution procedure is demonstrated through several different examples.
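A minimal sketch of the integer-string encoding and genetic operators described above: gene i holds the machine assigned to tool i. The cost matrix and the load-balance penalty below are stand-ins, not the paper's mixed integer linear programming objective or data.

```python
import random

TOOLS, MACHINES = 8, 3
random.seed(0)
# stand-in cost of running tool t on machine m (not the paper's data)
COST = [[random.randint(1, 9) for _ in range(MACHINES)] for _ in range(TOOLS)]

def fitness(chrom):
    """Lower is better: total processing cost plus a penalty for unbalanced machines."""
    loads = [0] * MACHINES
    total = 0
    for tool, machine in enumerate(chrom):
        total += COST[tool][machine]
        loads[machine] += 1
    return total + 2 * (max(loads) - min(loads))

def crossover(a, b):
    cut = random.randint(1, TOOLS - 1)               # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(MACHINES) if random.random() < rate else g for g in chrom]

pop = [[random.randrange(MACHINES) for _ in range(TOOLS)] for _ in range(30)]
for _ in range(100):                                 # generational loop
    pop.sort(key=fitness)
    parents = pop[:10]                               # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
print(best, fitness(best))
```

In the paper's setting the fitness would also account for tool magazine capacity, tool life and tool changing time; the integer encoding and operators stay the same.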
---
paper_title: Genetic algorithms: principles of natural selection applied to computation
paper_content:
A genetic algorithm is a form of evolution that occurs on a computer. Genetic algorithms are a search method that can be used for both solving problems and modeling evolutionary systems. With various mapping techniques and an appropriate measure of fitness, a genetic algorithm can be tailored to evolve a solution for many types of problems, including optimization of a function or determination of the proper order of a sequence. Mathematical analysis has begun to explain how genetic algorithms work and how best to use them. Recently, genetic algorithms have been used to model several natural evolutionary systems, including immune systems.
---
paper_title: Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization
paper_content:
Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.
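For reference, the sketch below implements one of the classic trial-vector generation strategies (DE/rand/1/bin) that SaDE chooses among. Here F and CR are fixed and the sphere function is only a toy objective, whereas SaDE adapts both the strategy and these control parameters from past success.

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """Build one trial vector for individual i with the DE/rand/1/bin strategy."""
    n, d = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])       # differential mutation
    cross = rng.random(d) < CR                       # binomial crossover mask
    cross[rng.integers(d)] = True                    # keep at least one mutant gene
    return np.where(cross, mutant, pop[i])

def sphere(x):                                       # toy objective (illustrative)
    return float((x ** 2).sum())

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, size=(20, 10))
for _ in range(200):                                 # greedy one-to-one selection
    for i in range(len(pop)):
        trial = de_rand_1_bin(pop, i, rng=rng)
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial
print(round(min(sphere(x) for x in pop), 6))
```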
---
paper_title: A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment
paper_content:
Current virtual machine (VM) resource scheduling in cloud computing environments mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resource scheduling, this paper presents a scheduling strategy for load balancing of VM resources based on a genetic algorithm. Using historical data and the current state of the system, and applying a genetic algorithm, this strategy computes in advance the influence that deploying the needed VM resources will have on the system and then chooses the solution with the least effect, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy solves the problem of load imbalance and high migration cost caused by traditional algorithms after scheduling. Experimental results prove that this method is able to achieve load balancing and reasonable resource utilization both when the system load is stable and when it varies.
---
paper_title: Cloud Task Scheduling Based on Load Balancing Ant Colony Optimization
paper_content:
Cloud computing is the development of distributed computing, parallel computing and grid computing, or can be defined as the commercial implementation of these computer science concepts. One of the fundamental issues in this environment is related to task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. This paper proposes a cloud task scheduling policy based on the Load Balancing Ant Colony Optimization (LBACO) algorithm. The main contribution of our work is to balance the entire system load while trying to minimize the makespan of a given task set. The new scheduling strategy was simulated using the CloudSim toolkit package. Experimental results showed the proposed LBACO algorithm outperformed FCFS (First Come First Serve) and the basic ACO (Ant Colony Optimization).
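A compact sketch of the core ACO mechanics such a scheduler builds on: an ant assigns each task to a VM with probability proportional to pheromone^alpha times heuristic^beta, and pheromone is evaporated and then reinforced by schedules with a short makespan. The execution-time matrix and all constants below are illustrative, not the LBACO settings.

```python
import numpy as np

rng = np.random.default_rng(7)
N_TASKS, N_VMS = 6, 3
exec_time = rng.uniform(1, 10, size=(N_TASKS, N_VMS))   # stand-in task/VM run times
pheromone = np.ones((N_TASKS, N_VMS))
alpha, beta, rho = 1.0, 2.0, 0.1                         # weights and evaporation rate

def choose_vm(task):
    """Pick a VM for `task` with probability ~ pheromone^alpha * heuristic^beta."""
    heuristic = 1.0 / exec_time[task]                    # faster VM = more desirable
    weights = (pheromone[task] ** alpha) * (heuristic ** beta)
    return rng.choice(N_VMS, p=weights / weights.sum())

for _ in range(50):                                      # one ant per iteration
    assignment = [choose_vm(t) for t in range(N_TASKS)]
    loads = np.zeros(N_VMS)
    for t, v in enumerate(assignment):
        loads[v] += exec_time[t, v]
    makespan = loads.max()
    pheromone *= (1 - rho)                               # evaporation
    for t, v in enumerate(assignment):
        pheromone[t, v] += 1.0 / makespan                # reinforce short schedules
print([int(choose_vm(t)) for t in range(N_TASKS)])
```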
---
paper_title: Honey bee behavior inspired load balancing of tasks in cloud computing environments
paper_content:
Scheduling of tasks in cloud computing is an NP-hard optimization problem. Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and remaining VMs are under loaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing (HBB-LB), which aims to achieve well balanced load across virtual machines for maximizing the throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the amount of waiting time of the tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach illustrates that there is a significant improvement in average execution time and reduction in waiting time of tasks on queue.
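The balancing step can be pictured as below: VMs whose queued load exceeds the average are treated as overloaded and tasks migrate to underloaded VMs. This is only a schematic of the idea and ignores task priorities and the foraging/scout-bee details of HBB-LB; the example loads are made up.

```python
def rebalance(vm_tasks):
    """Move tasks from overloaded VMs to underloaded ones until loads approach the average.

    vm_tasks: dict mapping vm_id -> list of task lengths currently queued on that VM.
    """
    avg = sum(sum(t) for t in vm_tasks.values()) / len(vm_tasks)
    overloaded = [v for v, t in vm_tasks.items() if sum(t) > avg]
    underloaded = [v for v, t in vm_tasks.items() if sum(t) < avg]
    for src in overloaded:
        for dst in underloaded:
            while vm_tasks[src] and sum(vm_tasks[src]) > avg and sum(vm_tasks[dst]) < avg:
                vm_tasks[dst].append(vm_tasks[src].pop())  # migrate one task
    return vm_tasks

# illustrative load: VM 0 heavily loaded, VM 2 idle
print(rebalance({0: [5, 5, 4, 6], 1: [3, 2], 2: []}))
```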
---
| Title: SURVEY ON EVOLUTIONARY COMPUTATION TECHNIQUES AND ITS APPLICATION IN DIFFERENT FIELDS
Section 1: INTRODUCTION
Description 1: Provide an introduction to evolutionary computation, explaining its basis in biological evolution and its applications in solving computational problems.
Section 2: EVOLUTIONARY ALGORITHM
Description 2: Describe the concept of evolutionary algorithms, their steps, and various types such as Gene Expressing Programming, Genetic Algorithm, Genetic Programming, Evolutionary Programming, Evolution Strategy, Differential Evolution, and Differential Search Algorithm.
Section 3: IMAGE PROCESSING, CLOUD COMPUTING AND GRID COMPUTING
Description 3: Discuss the applications of evolutionary computation techniques in the fields of image processing, cloud computing, and grid computing, including specific algorithms used and their benefits.
Section 4: PARTICLE SWARM OPTIMIZATION (PSO)
Description 4: Explain the particle swarm optimization technique, its inspiration from flocking behavior, and its application areas.
Section 5: HONEYBEE ALGORITHM
Description 5: Describe the honeybee algorithm, its basis in the foraging behavior of honey bees, the steps involved in the algorithm, and its applications.
Section 6: CUCKOO SEARCH (CS)
Description 6: Discuss the cuckoo search algorithm, its inspiration from cuckoo birds' brood parasitism, and its rules and applications.
Section 7: CONTRIBUTION OF SWARM INTELLIGENCE ALGORITHM IN CLOUD COMPUTING, GRID COMPUTING AND IMAGE PROCESSING
Description 7: Explore how swarm intelligence algorithms contribute to solving problems in cloud computing, grid computing, and image processing, including specific examples and research findings.
Section 8: CONCLUSION
Description 8: Summarize the discussions, highlighting the effectiveness and widespread use of evolutionary computation techniques in various fields. |
Mobile Augmented Reality Survey: From Where We Are to Where We Go | 20 | ---
paper_title: Fast Feature Pyramids for Object Detection
paper_content:
Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).
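The key approximation is that a channel computed at one octave scale can be extrapolated to nearby scales with a power law, roughly f(s) ≈ f(s0) · (s/s0)^(-λ). The snippet below applies that relation to a single aggregated channel statistic; the λ value and the numbers are placeholders that would in practice be estimated per channel type from training images.

```python
def extrapolate_channel(f_s0, s0, s, lam):
    """Approximate an aggregated channel statistic at scale s from the one at scale s0,
    using the power-law relation f(s) ~ f(s0) * (s / s0) ** (-lam)."""
    return f_s0 * (s / s0) ** (-lam)

# illustrative: a channel statistic computed at the octave scale 1.0 is
# approximated at two intermediate scales instead of being recomputed
f_octave = 12.5
for s in (0.84, 0.71):
    print(s, round(extrapolate_channel(f_octave, 1.0, s, lam=0.11), 3))
```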
---
paper_title: Mobile Collaborative Augmented Reality: the Augmented Stroll
paper_content:
The paper focuses on Augmented Reality systems in which interaction with the real world is augmented by the computer, the task being performed in the real world. We first define what mobile AR systems, collaborative AR systems and finally mobile and collaborative AR systems are. We then present the augmented stroll and its software design as one example of a mobile and collaborative AR system. The augmented stroll is applied to Archaeology in the MAGIC (Mobile Augmented Group Interaction in Context) project.
---
paper_title: Location-based Mobile Augmented Reality Applications - Challenges, Examples, Lessons Learned
paper_content:
The technical capabilities of modern smart mobile devices increasingly enable us to run desktop-like applications with demanding resource requirements in mobile environments. Along this trend, numerous concepts, techniques, and prototypes have been introduced, focusing on basic implementation issues of mobile applications. However, little work exists that deals with the design and implementation (i.e., the engineering) of advanced smart mobile applications and reports on the lessons learned in this context. In this paper, we give profound insights into the design and implementation of such an advanced mobile application, which enables location-based mobile augmented reality on two different mobile operating systems (i.e., iOS and Android). In particular, this kind of mobile application is characterized by high resource demands since various sensors must be queried at run time and numerous virtual objects may have to be drawn in real time on the screen of the smart mobile device (i.e., causing a high frame count per second). We focus on the efficient implementation of a robust mobile augmented reality engine, which provides location-based functionality, as well as the implementation of mobile business applications based on this engine. In the latter context, we also discuss the lessons learned when implementing mobile business applications with our mobile augmented reality engine.
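A minimal sketch of the kind of geometry such a location-based engine evaluates per frame: the bearing from the GPS fix to a point of interest is compared with the compass heading and mapped to a horizontal screen coordinate. The field of view, screen width and coordinates below are illustrative assumptions, not the engine's actual parameters.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def screen_x(user_lat, user_lon, heading, poi_lat, poi_lon, fov=60.0, width_px=1080):
    """Horizontal pixel position of a POI, or None if it lies outside the camera FOV."""
    delta = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon) - heading + 540.0) % 360.0 - 180.0
    if abs(delta) > fov / 2:
        return None
    return int(width_px / 2 + delta / (fov / 2) * (width_px / 2))

# illustrative fix in Vienna with the device heading roughly north-east (45 degrees)
print(screen_x(48.2082, 16.3738, 45.0, 48.2097, 16.3800))
```

The vertical coordinate would be derived analogously from the pitch sensor; a full engine must also filter noisy sensor readings to keep the overlays stable.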
---
paper_title: CloudRidAR: a cloud-based architecture for mobile augmented reality
paper_content:
Mobile augmented reality (MAR) has exploded in popularity on mobile devices in various fields. However, building a MAR application from scratch on mobile devices is complicated and time-consuming. In this paper, we propose CloudRidAR, a framework for MAR developers to facilitate the development, deployment, and maintenance of MAR applications with little effort. Despite advances in mobile devices as a computing platform, their performance for MAR applications is still very limited due to the poor computing capability of mobile devices. In order to alleviate the problem, our CloudRidAR is designed with cloud computing at the core. Computationally intensive tasks are offloaded to the cloud to accelerate computation in order to guarantee run-time performance. We also present two MAR applications built on CloudRidAR to evaluate our design.
---
paper_title: Telegeoinformatics: Location-based Computing and Services
paper_content:
THEORIES AND TECHNOLOGIES Telegeoinformatics: Current Trends and Future Direction Introduction Architecture Internet-Based GIS Spatial Databases Intelligent Query Analyzer (IQA) Predictive Computing Adaptation Final Remarks References Remote Sensing Introductory Concepts Remote Sensing Systems Imaging Characteristics of Remote Sensing Systems Active Microwave Remote Sensing Extraction of Thematic Information from Remotely Sensed Imagery Extraction of Metric Information from Remotely Sensed Imagery Remote Sensing in Telegeoinformatics References Positioning and Tracking Approaches and Technologies Introduction Global Positioning System Positioning Methods Based on Cellular Networks Other Positioning and Tracking Techniques: An Overview Hybrid Systems Summary References Wireless Communications Introduction Overview of Wireless Systems Radio Propagation and Physical Layer Issues Medium Access in Wireless Networks Network Planning, Design and Deployment Wireless Network Operations Conclusions and the Future References INTEGRATED DATA AND TECHNOLOGIES Chapter Five: Location-Based Computing Introduction LBC Infrastructure Location-Based Interoperability Location-Based Data Management Adaptive Location-Based Computing Location-Based Routing as Adaptive LBC Concluding Remarks References Location-Based Services Introduction Types of Location-Based Services What is Unique About Location-Based Services? Enabling Technologies Market for Location-Based Services Importance of Architecture and Standards Example Location-Based Services: J-Phone J-Navi (Japan) Conclusions References Wearable Tele-Informatic Systems for Personal Imaging Introduction Humanistic Intelligence as a Basis for Intelligent Image Processing Humanistic Intelligence 'WEARCOMP' as a Means of Realizing Humanistic Intelligence Where on the Body Should a Visual Tele-Informatic Device be Placed? 
Telepointer: Wearable Hands-Free Completely Self Contained Visual Augmented Reality Without Headwear and Without any Infrastructural Reliance Portable Personal Pulse Doppler Radar Vision System When Both the Camera and Display are Headworn: Personal Imaging and Mediated Reality Personal Imaging for Location-Based Services Reality Window Manager (RWM) Personal Telegeoinformatics: Blocking Spam with a Photonic Filter Conclusion References Mobile Augmented Reality Introduction MARS: Promises, Applications, and Challenges Components and Requirements MARS UI Concepts Conclusions Acknowledgements References APPLICATIONS Emergency Response Systems Overview of Emergency Response Systems State-of-the-Art ERSs Examples of Developing ERSs for Earthquakes and Other Disasters Future Aspects of Emergency Response Systems Concluding Remarks References Location-Based Computing for Infrastructure Field Tasks Introduction LBC-Infra Concept Technological Components of LBC-Infra General Requirements of LBC-Infra Interaction Patterns and Framework of LBC-Infra Prototype System and Case Study Conclusions References The Role of Telegeoinformatics in ITS Introduction to Intelligent Transportation Systems Telegeoinformatics Within ITS The Role of Positioning Systems In ITS Geospatial Data for ITS Communication Systems in ITS ITS-Telegeoinformatics Applications Non-Technical Issues Impacting on ITS Concluding Remarks The Impact and Penetration of Location-Based Services The Definition of Technologies LBSs: Definitions, Software, and Usage The Market for LBSs: A Model of the Development of LBSs Penetration of Mobile Devices: Predictions of Future Markets Impacts of LBSs on Geographical Locations Conclusions References
---
paper_title: ReadMe: A Real-Time Recommendation System for Mobile Augmented Reality Ecosystems
paper_content:
We introduce ReadMe, a real-time recommendation system (RS) and an online algorithm for Mobile Augmented Reality (MAR) ecosystems. A MAR ecosystem is one that contains mobile users and virtual objects. The role of ReadMe is to detect and present the most suitable virtual objects on the mobile user's screen. The selection of the proper virtual objects depends on the mobile users' context. We consider the user's context as a set of variables that can be drawn directly by the user's device, inferred by it, or collected in collaboration with other mobile devices.
---
paper_title: A Survey of Augmented Reality
paper_content:
This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s, there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.
---
paper_title: Real-time computer vision with OpenCV
paper_content:
Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.
---
paper_title: OverLay: Practical Mobile Augmented Reality
paper_content:
The idea of augmented reality - the ability to look at a physical object through a camera and view annotations about the object - is certainly not new. Yet, this apparently feasible vision has not yet materialized into a precise, fast, and comprehensively usable system. This paper asks: What does it take to enable augmented reality (AR) on smartphones today? To build a ready-to-use mobile AR system, we adopt a top-down approach cutting across smartphone sensing, computer vision, cloud offloading, and linear optimization. Our core contribution is in a novel location-free geometric representation of the environment - from smartphone sensors - and using this geometry to prune down the visual search space. Metrics of success include both accuracy and latency of object identification, coupled with the ease of use and scalability in uncontrolled environments. Our converged system, OverLay, is currently deployed in the engineering building and open for use to regular public; ongoing work is focussed on campus-wide deployment to serve as a "historical tour guide" of UIUC. Performance results and user responses thus far have been promising, to say the least.
---
paper_title: Offloading Guidelines for Augmented Reality Applications on Wearable Devices
paper_content:
As Augmented Reality (AR) becomes popular on wearable devices such as Google Glass, various AR applications have been developed by leveraging synergetic benefits beyond the single technologies. However, the poor computational capability and limited power capacity of current wearable devices degrade runtime performance and sustainability. Computational offloading strategies have been proposed to outsource computation to a remote cloud for improving performance. Nevertheless, compared with mobile devices, wearable devices have their own specific limitations, which induce additional problems and require new thinking about computational offloading. In this paper, we propose several guidelines for computational offloading for AR applications on wearable devices based on our practical experiences of designing and developing AR applications on Google Glass. The guidelines have been adopted and proven in our application prototypes.
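One common guideline can be expressed as a simple latency test: offload a task only if the estimated transfer time plus remote compute time beats local execution. The sketch below encodes that rule; the speed-up factor, bandwidth and round-trip time are assumptions, and energy consumption is ignored.

```python
def should_offload(input_bytes, local_time_s, remote_speedup=8.0,
                   uplink_bps=2_000_000, rtt_s=0.05):
    """Return True if sending the task to a server is expected to be faster.

    local_time_s   : measured on-device processing time for this task
    remote_speedup : assumed ratio of server to wearable compute speed
    uplink_bps     : assumed available uplink bandwidth
    rtt_s          : assumed network round-trip time
    """
    transfer_s = input_bytes * 8 / uplink_bps
    remote_s = rtt_s + transfer_s + local_time_s / remote_speedup
    return remote_s < local_time_s

# e.g. a 200 kB camera frame whose local recognition step takes 1.2 s
print(should_offload(200_000, 1.2))
```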
---
paper_title: Real-time visual tracking of less textured three-dimensional objects on mobile platforms
paper_content:
Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background’s distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.
---
paper_title: From Virtuality to Reality and Back
paper_content:
There has been a growing research interest in investigating techniques to combine real and virtual spaces. A variety of “reality” concepts such as Virtual Reality and Augmented Reality and their supporting technologies have emerged in the field of design to adopt the task of replacing or merging our physical world with the virtual world. The different realities can be tailored to enhance comprehension for specific design activities along a design life-cycle. This paper presents stateof-the-art applications of these “reality” concepts in design and related areas, and proposes a classification of these realities to address suitability issues for the effective utilization of the concepts and technologies. Their potentials and implications in certain design activities are also discussed.
---
paper_title: Augmented reality navigation systems
paper_content:
The augmented reality (AR) research community has been developing a manifold of ideas and concepts to improve the depiction of virtual objects in a real scene. In contrast, current AR applications require the use of unwieldy equipment which discourages their use. In order to essentially ease the perception of digital information and to naturally interact with the pervasive computing landscape, the required AR equipment has to be seamlessly integrated into the user’s natural environment. Considering this basic principle, this paper proposes the car as an AR apparatus and presents an innovative visualization paradigm for navigation systems that is anticipated to enhance user interaction.
---
paper_title: Recent Advances in Augmented Reality
paper_content:
In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
---
paper_title: General-purpose modular hardware and software framework for mobile outdoor augmented reality applications in engineering
paper_content:
This paper presents a reusable, general-purpose, mobile augmented reality (AR) framework developed to address the critical and repetitive challenges specific to visualization in outdoor AR. In all engineering applications of AR developed thus far, basic functionality that supports accurate user registration, maximizes the range of user motion, and enables data input and output has had to be repeatedly re-implemented. This is primarily due to the fact that designed methods have been traditionally custom created for their respective applications and are not generic enough to be readily shared and reused by others. The objective of this research was to remedy this situation by designing and implementing a reusable and pluggable hardware and software framework that can be used in any AR application without the need to re-implement low-level communication interfaces with selected hardware. The underlying methods of hardware communication as well as the object-oriented design (OOD) of the reusable interface are presented. Details on the validation of framework reusability and pluggability are also described.
---
paper_title: Managing Complex Augmented Reality Models
paper_content:
Mobile augmented reality requires georeferenced data to present world-registered overlays. To cover a wide area and all artifacts and activities, a database containing this information must be created, stored, maintained, delivered, and finally used by the client application. We present a data model and a family of techniques to address these needs.
---
paper_title: Relevant Aspects for the Integration of Linked Data in Mobile Augmented Reality Applications for Tourism
paper_content:
Mobile augmented reality applications have seen tremendous growth in recent years and tourism is one of the fields in which this set of technologies has been proved to be a natural fit. Augmented reality has the potential of enhancing the surroundings of the tourist in a meaningful way. In order to provide personalized and rich content for the augmented reality application, researchers have explored the use of Semantic Web and especially Linked Data principles and technologies. In this paper we review existing projects at the intersection of these technologies and current aspects, not necessarily specific, but highly relevant to the integration of Linked Open Data in mobile augmented reality applications for tourism. In this respect, we discuss approaches in the area of geodata integration, quality of the open data, provenance information and trust. We conclude with recommendations regarding future research in this area.
---
paper_title: PhoneGuide: museum guidance supported by on-device object recognition on mobile phones
paper_content:
We present PhoneGuide -- an enhanced museum guidance system that uses camera-equipped mobile phones and on-device object recognition. Our main technical achievement is a simple and light-weight object recognition approach that is realized with single-layer perceptron neural networks. In contrast to related systems which perform computationally intensive image processing tasks on remote servers, our intention is to carry out all computations directly on the phone. This ensures little or even no network traffic and consequently decreases costs for online time. Our laboratory experiments and field surveys have shown that photographed museum exhibits can be recognized with a probability of over 90%. We have evaluated different feature sets to optimize the recognition rate and performance. Our experiments revealed that normalized color features are most effective for our method. Choosing such a feature set allows recognizing an object in under one second on up-to-date phones. The amount of data that is required for differentiating 50 objects from multiple perspectives is less than 6 KBytes.
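A minimal sketch of the kind of on-device classifier described above: one single-layer perceptron per exhibit, trained on a small normalized-colour feature vector. The feature descriptor, training rule details and toy data are placeholders, not the PhoneGuide implementation.

```python
import numpy as np

def color_features(rgb_image):
    """Very small illustrative descriptor: mean of each normalized colour channel."""
    rgb = rgb_image.reshape(-1, 3).astype(float)
    chroma = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-6)
    return chroma.mean(axis=0)

def train_perceptrons(features, labels, classes, epochs=50, lr=0.1):
    """One single-layer perceptron (weight vector + bias) per exhibit class."""
    w = np.zeros((classes, features.shape[1] + 1))
    x = np.hstack([features, np.ones((len(features), 1))])  # append bias input
    for _ in range(epochs):
        for xi, yi in zip(x, labels):
            target = -np.ones(classes)
            target[yi] = 1.0
            pred = np.sign(w @ xi)
            w += lr * np.outer(target - pred, xi)            # perceptron update
    return w

def predict(w, feature):
    return int(np.argmax(w @ np.append(feature, 1.0)))

# toy data: 3 "exhibits", 30 fake normalized-colour feature vectors each (illustrative only)
rng = np.random.default_rng(5)
means = ([0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.33, 0.33, 0.34])
feats = np.vstack([rng.normal(m, 0.02, size=(30, 3)) for m in means])
labels = np.repeat([0, 1, 2], 30)
w = train_perceptrons(feats, labels, classes=3)
print(predict(w, np.array([0.21, 0.49, 0.30])))              # expected: exhibit 1
```

With only a handful of weights per object, the model stays small enough to store and evaluate entirely on the phone, which is the point made in the abstract.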
---
paper_title: Archeoguide: system architecture of a mobile outdoor augmented reality system
paper_content:
We present the system architecture of a mobile outdoor augmented reality system for the Archeoguide project. We begin with a short introduction to the project. Then we present the hardware we chose for the mobile system and we describe the system architecture we designed for the software implementation. We conclude this paper with the first results obtained from experiments we made during our trials at ancient Olympia in Greece.
---
paper_title: A Novel Individual Location Recommendation System Based on Mobile Augmented Reality
paper_content:
In this paper, a mobile augmented reality (AR) based system for individual location recommendation is proposed. The system consists of modules for user location, marker detection, 3D display and location guidance, combining AR with navigation technology. Experimental results demonstrate that the proposed system is both efficient and effective in helping people locate their position; it also enhances users' sensory experience by providing a more intuitive, three-dimensional, dynamic information display and sharing capabilities.
---
paper_title: 3DVN: A Mixed Reality Platform for Mobile Navigation Assistance
paper_content:
We present 3DVN, a Mixed Reality platform for navigation assistance in indoor environments. Built on top of a PC-based wearable computer running the Windows operating system, the platform provides a multimodal user interface for navigating in existing physical buildings, including wireless networking, data glove gestural input, voice communication, head-mounted display, and a 3DOF head tracker. Local positioning is performed using a software-only positioning engine making use of WLAN features for accurately pinpointing the mobile device in three dimensions. The 3DVN system has been implemented as a proof-of-concept of a navigation guidance support device for visitors, and has been field-tested with human subjects in the student center of our university. Informal evaluation of subject ratings indicate a strong interest for the new system, both for guests as well as people familiar with the campus and the building.
---
paper_title: User Interface Management Techniques for Collaborative Mobile Augmented Reality
paper_content:
Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user's overlaid virtual environment. The augmented view of the user's surroundings presents an interface to context-dependent operations, many of which are related to the objects in view: the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user's view of more important physical or virtual objects in the scene. We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.
---
paper_title: Face to face collaborative AR on mobile phones
paper_content:
Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications.
---
paper_title: Implementation of Augmented Reality System for Smartphone Advertisements
paper_content:
The recent development and popularization of personal mobile devices have brought many changes to how information is searched for and offered. In particular, the newly emerged augmented reality systems have made possible an innovative way of acquiring information by adding information from the virtual world to the real world. This work used a markerless augmented reality system on smartphones to design and implement a smartphone application service aimed at efficiently conveying advertisement information to users. Conventional advertising applications simply introduce and explain goods to attract consumer interest, whereas the markerless application service developed in this work interacts with the augmented reality system and a database management system to quickly provide more accurate and diversified information to users.
---
paper_title: Do you remember that building? Exploring old Zakynthos through an augmented reality mobile game
paper_content:
This paper presents a mobile augmented reality application that was designed with the objective of allowing visitors to experience the historical center of old Zakynthos, Greece, which was destroyed by an earthquake, and to re-live the atmosphere and life of the historic place. Special attention is given to the mental model of the landmarks developed by the users after interacting with the application, discussing some of the observed flaws of this model.
---
paper_title: Expected user experience of mobile augmented reality services: a user study in the context of shopping centres
paper_content:
The technical enablers for mobile augmented reality (MAR) are becoming robust enough to allow the development of MAR services that are truly valuable for consumers. Such services would provide a novel interface to the ubiquitous digital information in the physical world, hence serving in great variety of contexts and everyday human activities. To ensure the acceptance and success of future MAR services, their development should be based on knowledge about potential end users' expectations and requirements. We conducted 16 semi-structured interview sessions with 28 participants in shopping centres, which can be considered as a fruitful context for MAR services. We aimed to elicit new knowledge about (1) the characteristics of the expected user experience and (2) central user requirements related to MAR in such a context. From a pragmatic viewpoint, the participants expected MAR services to catalyse their sense of efficiency, empower them with novel context-sensitive and proactive functionalities and raise their awareness of the information related to their surroundings with an intuitive interface. Emotionally, MAR services were expected to offer stimulating and pleasant experiences, such as playfulness, inspiration, liveliness, collectivity and surprise. The user experience categories and user requirements that were identified can serve as targets for the design of user experience of future MAR services.
---
paper_title: Enabling smart retail settings via mobile augmented reality shopping apps
paper_content:
Retail settings are being challenged to become smarter and provide greater value to both consumers and retailers. An increasingly recognised approach having potential for enabling smart retail is mobile augmented reality (MAR) apps. In this research, we seek to describe and discover how, why and to what extent MAR apps contribute to smart retail settings by creating additional value to customers as well as benefiting retailers. In particular, by adopting a retail customer experience perspective on value creation, analysing the content of MAR shopping apps currently available, and conducting large-scale surveys on United States smartphone users representing early technology adopters, we assess level of use, experiential benefits offered, and retail consequences. Our findings suggest that take-up is set to go mainstream as user satisfaction is relatively high and their use provides systematic experiential benefits along with advantages to retailers. Despite some drawbacks, their use is positively associated with multiple retail consequences. MAR apps are seen as changing consumer behaviour and are associated with increasingly high user valuations of retailers offering them. Implications for more effective use to enable smart retail settings are discussed.
---
paper_title: Like bees around the hive: a comparative study of a mobile augmented reality map
paper_content:
We present findings from field trials of MapLens, a mobile augmented reality (AR) map using a magic lens over a paper map. Twenty-six participants used MapLens to play a location-based game in a city centre. Comparisons to a group of 11 users with a standard 2D mobile map uncover phenomena that arise uniquely when interacting with AR features in the wild. The main finding is that AR features facilitate place-making by creating a constant need for referencing to the physical, and in that it allows for ease of bodily configurations for the group, encourages establishment of common ground, and thereby invites discussion, negotiation and public problem-solving. The main potential of AR maps lies in their use as a collaborative tool.
---
paper_title: Human pacman: A mobile entertainment system with ubiquitous computing and tangible interaction over a wide outdoor area
paper_content:
Human Pacman is an interactive role-playing game that envisions bringing the computer gaming experience to a new level of emotional and sensory gratification by setting the real world as a playground. This is a physical fantasy game that integrates human-social and mobile gaming and emphasizes collaboration and competition between players. By setting the game in a wide outdoor area, natural human-physical movements have become an integral part of the game. Pacmen and Ghosts are now human players in the real world experiencing mixed reality visualization from the wearable computers on them. Virtual cookies and actual physical objects are incorporated to provide novel experiences of seamless transitions between real and virtual worlds and a tangible human-computer interface, respectively. We believe Human Pacman is pioneering a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.
---
paper_title: A Survey of Augmented Reality
paper_content:
This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.
---
paper_title: Augmented Reality in the Classroom
paper_content:
Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use. A related video can be seen here: http://youtu.be/ndUjLwcBIOw. It shows examples of augmented reality experiences in an educational setting.
---
paper_title: The Use of Augmented Reality Games in Education: A Review of the Literature.
paper_content:
This paper provides a review of the literature about the use of augmented reality in education and specifically in the context of formal and informal environments. It examines the research that has been conducted to date on the use of those games through mobile technology devices such as mobile phones and tablets, both in primary and secondary education. The review of the relevant literature was carried out in the period 2000 to early 2014 in ScienceDirect and ERIC. The outcomes of this review illustrated a set of studies that provide evidence of positive outcomes regarding student learning. These studies, which focused mainly on the natural sciences and took place within informal learning environments, used both qualitative and quantitative data collection methods. The earliest study that was conducted about this topic used personal digital assistants, while the more recent one used smart (mobile) phones and tablets. The services of augmented reality focused on markers/quick response codes, virtual it...
---
paper_title: Construct3D: an augmented reality application for mathematics and geometry education
paper_content:
Construct3D is a three dimensional geometry construction tool specifically designed for mathematics and geometry education. It is based on the mobile collaborative augmented reality system "Studierstube". We describe our efforts in developing a system for the improvement of spatial abilities and maximization of transfer of learning. Means of application and integration in mathematics and geometry education at high school as well as university level are being discussed. Anecdotal evidence supports our claim that Construct3D is easy to learn, encourages experimentation with geometric constructions and improves spatial skills.
---
paper_title: Augmented Reality Trends in Education: A Systematic Review of Research and Applications
paper_content:
In recent years, there has been an increasing interest in applying Augmented Reality (AR) to create unique educational settings. So far, however, there is a lack of review studies with focus on investigating factors such as: the uses, advantages, limitations, effectiveness, challenges and features of augmented reality in educational settings. Personalization for promoting an inclusive learning using AR is also a growing area of interest. This paper reports a systematic review of literature on augmented reality in educational settings considering the factors mentioned before. In total, 32 studies published between 2003 and 2013 in 6 indexed journals were analyzed. The main findings from this review provide the current state of the art on research in AR in education. Furthermore, the paper discusses trends and the vision towards the future and opportunities for further research in augmented reality for educational settings.
---
paper_title: Current status, opportunities and challenges of augmented reality in education
paper_content:
Although augmented reality (AR) has gained much research attention in recent years, the term AR was given different meanings by varying researchers. In this article, we first provide an overview of definitions, taxonomies, and technologies of AR. We argue that viewing AR as a concept rather than a type of technology would be more productive for educators, researchers, and designers. Then we identify certain features and affordances of AR systems and applications. Yet, these compelling features may not be unique to AR applications and can be found in other technological systems or learning environments (e.g., ubiquitous and mobile learning environments). The instructional approach adopted by an AR system and the alignment among technology design, instructional approach, and learning experiences may be more important. Thus, we classify three categories of instructional approaches that emphasize the ''roles,'' ''tasks,'' and ''locations,'' and discuss what and how different categories of AR approaches may help students learn. While AR offers new learning opportunities, it also creates new challenges for educators. We outline technological, pedagogical, learning issues related to the implementation of AR in education. For example, students in AR environments may be cognitively overloaded by the large amount of information they encounter, the multiple technological devices they are required to use, and the complex tasks they have to complete. This article provides possible solutions for some of the challenges and suggests topics and issues for future research.
---
paper_title: Augmented Reality Teaching and Learning
paper_content:
This literature review focuses on augmented realities (AR) for learning that utilize mobile, context-aware technologies (e.g., smartphones, tablets), which enable participants to interact with digital information embedded within the physical environment. We summarize research findings about AR in formal and informal learning environments (i.e., schools, universities, museums, parks, zoos, etc.), with an emphasis on the affordances and limitations associated with AR as it relates to teaching, learning, and instructional design. As a cognitive tool and pedagogical approach, AR is primarily aligned with situated and constructivist learning theory, as it positions the learner within a real-world physical and social context while guiding, scaffolding and facilitating participatory and metacognitive learning processes such as authentic inquiry, active observation, peer coaching, reciprocal teaching and legitimate peripheral participation with multiple modes of representation.
---
paper_title: “Studierstube”: An environment for collaboration in augmented reality
paper_content:
We propose an architecture for multi-user augmented reality with applications in visualisation, presentation and education, which we call "Studierstube". Our system presents three-dimensional stereoscopic graphics simultaneously to a group of users wearing light weight see-through head mounted displays. The displays do not affect natural communication and interaction, making working together very effective. Users see the same spatially aligned model, but can independently control their viewpoint and different layers of the data to be displayed. The setup serves computer supported cooperative work and enhances cooperation of visualisation experts. This paper presents the client-server software architecture underlying this system and details that must be addressed to create a high-quality augmented reality setup.
---
paper_title: Mystery at the Museum – A Collaborative Game for Museum Education
paper_content:
Through an iterative design process involving museum educators, learning scientists and technologists, and drawing upon our previous experiences in handheld game design and a growing body of knowledge on learning through gaming, we designed an interactive mystery game called Mystery at the Museum (the High Tech Whodunnit), intended for synchronous play by groups of parents and children over a two- to three-hour period. The primary design goals were to engage visitors more deeply in the museum, engage visitors more broadly across museum exhibits, and encourage collaboration between visitors. The feedback from the participants suggested that the combination of depth and breadth was engaging and effective in encouraging them to think about the museum's exhibits. The roles that were an integral part of the game turned out to be extremely effective in engaging pairs of participants with one another. Feedback from parents was quite positive in terms of how they felt it engaged them and their children. These results suggest that further explorations of technology-based museum experiences of this type are wholly appropriate.
---
paper_title: Augmented Reality and Mobile Learning: the State of the Art
paper_content:
In this paper, the authors examine the state of the art in augmented reality (AR) for mobile learning. Previous work in the field of mobile learning has included AR as a component of a wider toolkit, but little has been done to discuss the phenomenon in detail or to examine in a balanced fashion its potential for learning, identifying both positive and negative aspects. The authors seek to provide a working definition of AR and to examine how it can be embedded within situated learning in outdoor settings. The authors classify it according to key aspects: device/technology, mode of interaction/learning design, type of media, personal or shared experiences, whether the experience is portable or static, and the learning activities/outcomes. The authors discuss the technical and pedagogical challenges presented by AR, before looking at ways in which it can be used for learning. Finally, the paper looks ahead to AR technologies that may be employed in the future.
---
paper_title: Augmented reality in education: a meta-review and cross-media analysis
paper_content:
Augmented reality (AR) is an educational medium increasingly accessible to young users such as elementary school and high school students. Although previous research has shown that AR systems have the potential to improve student learning, the educational community remains unclear regarding the educational usefulness of AR and regarding contexts in which this technology is more effective than other educational mediums. This paper addresses these topics by analyzing 26 publications that have previously compared student learning in AR versus non-AR applications. It identifies a list of positive and negative impacts of AR experiences on student learning and highlights factors that are potentially underlying these effects. This set of factors is argued to cause differences in educational effectiveness between AR and other media. Furthermore, based on the analysis, the paper presents a heuristic questionnaire generated for judging the educational potential of AR experiences.
---
paper_title: An Augmented Reality-Based Mobile Learning System to Improve Students' Learning Achievements and Motivations in Natural Science Inquiry Activities
paper_content:
Recently, the advancement and popularity of handheld devices and sensing technologies have enabled researchers to implement more effective learning methods (Ogata, Li, Hou, Uosaki, El-Bishouty, & Yano, 2011). Several studies have reported the importance of conducting contextual learning and experiential learning in real-world environments, encouraging the use of mobile and sensing technologies in outdoor learning activities (Chu, Hwang, Tsai, & Tseng, 2010; Hung, Hwang, Lin, Wu, & Su, 2013; Yang, 2006). For example, Chu, Hwang, Huang, and Wu (2008) developed a learning system that guided students to learn about the characteristics and life cycle of plants on a school campus using mobile communication and RFID (Radio Frequency Identification). Most mobile learning studies emphasize the adoption of digital learning aids in real-life scenarios (Sharples, Milrad, Arnedillo-Sanchez, & Vavoula, 2009; Ogata & Yano, 2004; Wong & Looi, 2011). However, regarding supplementary mobile learning aids, the interaction between digital learning aids and the actual environment needs to be emphasized to enable students to effectively manage and incorporate personal knowledge (Wu, Lee, Chang, & Liang, 2013). For example, it is expected that students can select a virtual learning object from the actual environment using a mobile learning aid, which allows them to obtain a first-hand understanding of the learning environment and, subsequently, increases their learning motivations and experiences. Such a learning support technology is achievable through the use of Augmented Reality (AR), which combines human senses (e.g., sight, sound, and touch) with virtual objects to facilitate real-world environment interactions for users to achieve an authentic perception of the environment (Azuma, 1997). For example, users who employ mobile devices with AR facilities to seek a target building on a street are able to see additional information surrounding individual buildings when they browse the buildings via the camera of their mobile device. Researchers have documented the potential of employing such facilities to assist students in learning in real-world environments in comparison with traditional instruction (Andujar, Mejias, & Marquez, 2011; Chen, Chi, Hung, & Kang, 2011; Kamarainen, Metcalf, Grotzer, Browne, Mazzuca, Tutwiler, & Dede, 2013; Platonov, Heibel, Meier, & Grollmann, 2006), showing that AR technology contributed to improving academic achievement compared to traditional teaching methods. On the other hand, numerous educators have contended that computer technology cannot support the learning process entirely; instead, the primary function of computer technology is to serve as a knowledge-building tool for students (Jonassen, Carr, & Yueh, 1998). Effective learning strategies remain the most crucial factor for increasing learning motivation. Therefore, effective learning strategies supplemented with appropriate computer technology can greatly enhance learning motivation (Chu, Hwang, & Tsai, 2010; Jonassen, 1999; Hwang, Tsai, Chu, Kinshuk, & Chen, 2012; Jonassen et al., 1998). Previous studies have highlighted that inquiry-based learning strategies supplemented by computer technology in a scenario-based learning environment can effectively increase learning motivation (Shih, Chuang, & Hwang, 2010; Soloway & Wallace, 1997).
Inquiry-based learning strategies are student-centric knowledge exploration activities; the teacher serves as a guide, employing structured methods that train and encourage students to learn proactively (Hwang, Wu, Zhuang, & Huang, 2013; Soloway & Wallace, 1997). When students acquire the methods for problem-solving, they use the obtained information to establish a hypothesis or to plan solutions to the problem (Looi, 1998). Consequently, in this study, an innovative learning approach is proposed to support inquiry-based learning activities with mobile AR. …
---
paper_title: Mixed reality training application for an oil refinery: user requirements
paper_content:
Introducing mixed reality (MR) into a safety-critical environment such as an oil refinery is difficult, since the environment and the organization impose demanding restrictions on the application. In order to develop a usable and safe MR application, we need to study the context of use and derive user requirements from it. This paper describes the user requirements for an MR-based oil refinery training tool. The application is aimed at training employees of a specific process unit in the refinery. Training is currently done mainly in a classroom and on-site only when the process is closed down. On-site training is necessary, but expensive and rarely possible. The use of mixed reality offers a way to train employees on-site while the process is running. Users can virtually see "inside" the columns and can virtually modify the process.
---
paper_title: Impact of an augmented reality system on students' motivation for a visual art course
paper_content:
In this paper, the authors show that augmented reality technology has a positive impact on the motivation of middle-school students. The Instructional Materials Motivation Survey (IMMS) (Keller, 2010) based on the ARCS motivation model (Keller, 1987a) was used to gather information; it considers four motivational factors: attention, relevance, confidence, and satisfaction. Motivational factors of attention and satisfaction in an augmented-reality-based learning environment were better rated than those obtained in a slides-based learning environment. When the impact of the augmented reality system was analyzed in isolation, the attention and confidence factors were the best rated. The usability study showed that although this technology is not mature enough to be used massively in education, enthusiasm of middle-school students diminished most of the barriers found.
---
paper_title: Review of Augmented Paper Systems in Education: An Orchestration Perspective
paper_content:
Augmented paper has been proposed as a way to integrate more easily ICTs in settings like formal education, where paper has a strong presence. However, despite the multiplicity of educational applications using paper-based computing, their deployment in authentic settings is still marginal. To better understand this gap between research proposals and everyday classroom application, we surveyed the field of augmented paper systems applied to education, using the notion of "classroom orchestration" as a conceptual tool to understand its potential for integration in everyday educational practice. Our review organizes and classifies the affordances of these systems, and reveals that comparatively few studies provide evidence about the learning effects of system usage, or perform evaluations in authentic setting conditions. The analysis of those proposals that have performed authentic evaluations reveals how paper based-systems can accommodate a variety of contextual constraints and pedagogical approaches, but also highlights the need for further longitudinal, in-the-wild studies, and the existence of design tensions that make the conception, implementation and appropriation of this kind of systems still challenging.
---
paper_title: Development and behavioral pattern analysis of a mobile guide system with augmented reality for painting appreciation instruction in an art museum
paper_content:
A mobile guide system that integrates art appreciation instruction with augmented reality (AR) was designed as an auxiliary tool for painting appreciation, and the learning performance of three groups of visiting participants was explored: AR-guided, audio-guided, and nonguided (i.e., without carrying auxiliary devices). The participants were 135 college students, and a quasi-experimental research design was employed. Several learning performance factors of the museum visitors aided with different guided modes were evaluated, including their learning effectiveness, flow experience, the amount of time spent focusing on the paintings, behavioral patterns, and attitude of using the guide systems. The results showed that compared to the audio- and nonguided participants, the AR guide effectively enhanced visitors' learning effectiveness, promoted their flow experience, and extended the amount of time the visitors spent focusing on the paintings. In addition, the visitors' behavioral patterns were dependent upon the guided mode that they used; the visitors who were the most engaged in the gallery experience were those who were using the AR guide. Most of the visitors using the mobile AR-guide system elicited positive responses and acceptance attitudes.
---
paper_title: BARS: Battlefield Augmented Reality System
paper_content:
Situational awareness needs cannot be met using traditional approaches such as radios, maps, and handheld displays; more powerful display paradigms are needed. We are researching mobile augmented reality (AR) through the development of the Battlefield Augmented Reality System (BARS) in collaboration with Columbia University. The system consists of a wearable computer, a wireless network system, and a tracked see-through head-mounted display (HMD). The user's perception of the environment is enhanced by superimposing graphics onto the user's field of view. The graphics are registered (aligned) with the actual environment. For example, an augmented view of a building could include a wireframe plan of its interior, icons to represent reported locations of snipers, and the names of adjacent streets.
---
paper_title: Augmented reality in Education — Cases, places, and potentials
paper_content:
Augmented Reality is poised to profoundly transform Education as we know it. The capacity to overlay rich media onto the real world for viewing through web-enabled devices such as phones and tablet devices means that information can be made available to students at the exact time and place of need. This has the potential to reduce cognitive overload by providing students with “perfectly situated scaffolding”, as well as enable learning in a range of other ways. This paper will review uses of Augmented Reality both in mainstream society and in education, and discuss the pedagogical potentials afforded by the technology. Based on the prevalence of information delivery uses of Augmented Reality in Education, we argue the merit of having students design Augmented Reality experiences in order to develop their higher order thinking capabilities. A case study of “learning by design” using Augmented Reality in high school Visual Art is presented, with samples of student work and their feedback indicating that the...
---
paper_title: Augmented reality in medical education?
paper_content:
Learning in the medical domain is to a large extent workplace learning and involves mastery of complex skills that require performance up to professional standards in the work environment. Since training in this real-life context is not always possible for reasons of safety, costs, or didactics, alternative ways are needed to achieve clinical excellence. Educational technology and more specifically augmented reality (AR) has the potential to offer a highly realistic situated learning experience supportive of complex medical learning and transfer. AR is a technology that adds virtual content to the physical real world, thereby augmenting the perception of reality. Three examples of dedicated AR learning environments for the medical domain are described. Five types of research questions are identified that may guide empirical research into the effects of these learning environments. Up to now, empirical research mainly appears to focus on the development, usability and initial implementation of AR for learning. Limited review results reflect the motivational value of AR, its potential for training psychomotor skills and the capacity to visualize the invisible, possibly leading to enhanced conceptual understanding of complex causality.
---
paper_title: Smartphones, Smart Objects, and Augmented Reality
paper_content:
Two major types of augmented reality seem most likely to see academic use in the coming five years: markerless and marked. Markerless augmented reality uses the location determined by a cell phone to serve as a basis for adding local information to the camera view. Marked augmented reality uses a two-dimensional barcode to connect a cell phone or personal computer to information, usually on a web site. Both approaches are already being used in museums and college libraries. Marked augmented reality is especially powerful because it makes physical objects clickable, much like links on a web page. Augmented reality creates some exciting new opportunities for libraries.
---
paper_title: A Design of Augmented Reality System based on Real-World Illumination Environment for Edutainment
paper_content:
Recently, edutainment systems using augmented reality and transparent displays have been researched. However, in AR-based edutainment systems, the incongruity between virtual and real-world lighting conditions means that virtual objects do not blend into the real environment in a natural-looking way. In this paper, we design a tangible edutainment system that synthesizes a render layer containing real-world background illumination information in order to address the virtual objects' lack of natural appearance.
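As an illustration of the render-layer idea summarised above, the following minimal Python sketch estimates a coarse ambient colour from a camera frame and uses it to modulate a virtual object's base colour. The function names, the frame-averaging heuristic, and the example values are illustrative assumptions, not the system described in the paper.

```python
import numpy as np

def estimate_scene_illumination(frame_rgb: np.ndarray) -> np.ndarray:
    """Estimate a coarse ambient light colour from one camera frame.
    frame_rgb: HxWx3 uint8 array; returns an RGB vector in [0, 1]."""
    return frame_rgb.reshape(-1, 3).mean(axis=0) / 255.0

def shade_virtual_object(base_albedo: np.ndarray, ambient: np.ndarray) -> np.ndarray:
    """Modulate a virtual object's base colour by the estimated ambient light
    so the rendered object roughly matches the background's brightness and
    colour cast (a crude stand-in for a full illumination-aware render layer)."""
    return np.clip(base_albedo * ambient, 0.0, 1.0)

# Example: a red virtual object shaded against a dim, synthetic camera frame.
frame = (np.random.default_rng(0).random((480, 640, 3)) * 120).astype(np.uint8)
ambient = estimate_scene_illumination(frame)
print("ambient:", ambient,
      "shaded albedo:", shade_virtual_object(np.array([1.0, 0.2, 0.2]), ambient))
```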
---
paper_title: SMART: a SysteM of Augmented Reality for Teaching 2 nd Grade Students
paper_content:
In this paper, we describe the design and evaluation of SMART, an educational system that uses augmented reality for teaching 2nd-grade-level concepts, appropriate for and integrated with national curriculum guidelines. SMART has children explore concepts such as means of transportation, types of animals, and similar semantic categories through a set of racquets used to manipulate a TV-show-style game with 3D models superimposed on the real-time video feed of the whole class. Experiments were performed with several classes of students in three different local primary schools. Results suggest that SMART is effective in maintaining high levels of motivation among children, and also that SMART has a positive impact on the students' learning experience, especially among the weaker students.
---
paper_title: In-place 3D sketching for authoring and augmenting mechanical systems
paper_content:
We present a framework for authoring three-dimensional virtual scenes for Augmented Reality (AR) which is based on hand sketching. Sketches consisting of multiple components are used to construct a 3D virtual scene augmented on top of the real drawing. Model structure and properties can be modified by editing the sketch itself and printed content can be combined with hand sketches to form a single scene. Authoring by sketching opens up new forms of interaction that have not been previously explored in Augmented Reality. To demonstrate the technology, we implemented an application that constructs 3D AR scenes of mechanical systems from freehand sketches, and animates the scenes using a physics engine. We provide examples of scenes composed from trihedral solid models, forces, and springs. Finally, we describe how sketch interaction can be used to author complicated physics experiments in a natural way.
---
paper_title: Authoring of physical models using mobile computers
paper_content:
Context-aware computers rely on user and physical models to describe the context of a user. In this paper, we focus on the problem of developing and maintaining a physical model of the environment using a mobile computer. We describe a set of tools for automatically creating and modifying three-dimensional contextual information. The tools can be utilized across multiple hardware platforms, with different capabilities, and operating in collaboration with one another. We demonstrate the capabilities of the tools using two mobile platforms. One of them, a mobile augmented reality system, is used to construct a geometric model of an indoor environment which is then visualized on the same platform.
---
paper_title: In-Place Sketching for content authoring in Augmented Reality games
paper_content:
Sketching leverages human skills for various purposes. In-Place Augmented Reality Sketching experiences build on the intuitiveness and flexibility of hand sketching for tasks like content creation. In this paper we explore the design space of In-Place Augmented Reality Sketching, with particular attention to content authoring in games. We propose a contextual model that offers a framework for the exploration of this design space by the research community. We describe a sketch-based AR racing game we developed to demonstrate the proposed model. The game is developed on top of our shape recognition and 3D registration library for mobile AR.
---
paper_title: APRIL: a high-level framework for creating augmented reality presentations
paper_content:
While augmented reality (AR) technology is steadily maturing, application development is still lacking advanced authoring tools - even the simple presentation of information, which should not require any programming, is not systematically addressed by development tools. Moreover, there is also a severe lack of agreed techniques or best practices for the structuring of AR content. In this paper we present APRIL, the Augmented Presentation and Interaction Language, an authoring platform for AR presentations which provides concepts and techniques that are independent of specific applications or target hardware platforms, and should be suitable to raise the level of abstraction on which AR content creators can operate.
---
paper_title: Mobile phone based AR scene assembly
paper_content:
In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities on such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application.
---
paper_title: Augmented maintenance of powerplants: a prototyping case study of a mobile AR system
paper_content:
Augmented reality (AR) research has progressed in great strides over the past few years. Most current demonstrations focus on providing robust tracking solutions since this is the most critical issue when demonstrating AR systems. An issue that is typically neglected concerns the online access, analysis and visualization of information. The information required by AR demonstration systems is kept to a minimum, is prepared ahead of time, and is stored locally in the form of three-dimensional geometric descriptions. In complex mobile settings, these simplifying assumptions do not work. The authors report on recent efforts at the TU Munich to analyze the information generation, retrieval, transmission, and visualization process in the context of maintenance procedures that are performed in nuclear power plants. The use of AR to present such information online has significant implications for the way information must be acquired, stored, and transmitted. The paper focuses on pointing out open questions, discussing options for addressing them, and evaluating them in prototypical implementations.
---
paper_title: Augmented reality in the psychomotor phase of a procedural task
paper_content:
Procedural tasks are common to many domains, ranging from maintenance and repair, to medicine, to the arts. We describe and evaluate a prototype augmented reality (AR) user interface designed to assist users in the relatively under-explored psychomotor phase of procedural tasks. In this phase, the user begins physical manipulations, and thus alters aspects of the underlying task environment. Our prototype tracks the user and multiple components in a typical maintenance assembly task, and provides dynamic, prescriptive, overlaid instructions on a see-through head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete psychomotor aspects of the assembly task significantly faster and with significantly greater accuracy than when using 3D-graphics-based assistance presented on a stationary LCD. Qualitative questionnaire results indicate that participants overwhelmingly preferred the AR condition, and ranked it as more intuitive than the LCD condition.
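To make concrete the idea of prescriptive overlays that respond to the user's ongoing activity, here is a hedged Python sketch of a step-advancing state machine driven by tracked component poses. The step names, pose format, and tolerances are hypothetical and far simpler than the prototype described above.

```python
import math

# Hypothetical target poses (x, y, z in metres, yaw in radians) per assembly step.
TARGET_POSES = {
    "attach_bracket": (0.10, 0.00, 0.05, 0.0),
    "insert_bolt":    (0.10, 0.02, 0.05, math.pi / 2),
}
STEP_ORDER = list(TARGET_POSES)
POSITION_TOL = 0.01   # metres
ANGLE_TOL = 0.15      # radians

def step_completed(tracked_pose, target_pose):
    """True when the tracked component sits close enough to its target pose."""
    dist = math.dist(tracked_pose[:3], target_pose[:3])
    return dist < POSITION_TOL and abs(tracked_pose[3] - target_pose[3]) < ANGLE_TOL

def update_instruction(current_step, tracked_pose):
    """Advance the overlaid instruction once the current step is completed."""
    if step_completed(tracked_pose, TARGET_POSES[current_step]):
        idx = STEP_ORDER.index(current_step)
        return STEP_ORDER[idx + 1] if idx + 1 < len(STEP_ORDER) else "procedure_complete"
    return current_step

print(update_instruction("attach_bracket", (0.101, 0.001, 0.049, 0.02)))  # -> insert_bolt
```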
---
paper_title: An augmented reality training platform for assembly and maintenance skills
paper_content:
Training technicians to acquire new maintenance and assembly skills is important for various industries. Because maintenance and assembly tasks can be very complex, training technicians to efficiently perform new skills is challenging. Training of this type can be supported by Augmented Reality, a powerful industrial training technology that directly links instructions on how to perform the service tasks to the machine parts that require processing. Because of the increasing complexity of maintenance tasks, it is not sufficient to train the technicians in task execution. Instead, technicians must be trained in the underlying sensorimotor and cognitive skills that are necessary for the efficient acquisition and performance of new maintenance operations. These facts illustrate the need for efficient training systems for maintenance and assembly skills that accelerate the technicians' acquisition of new maintenance procedures. Furthermore, these systems should improve the adjustment of the training process for new training scenarios and enable the reuse of worthwhile existing training material. In this context, we have developed a novel concept and platform for multimodal Augmented Reality-based training of maintenance and assembly skills, which includes sub-skill training and the evaluation of the training system. Because procedural skills are considered the most important skills for maintenance and assembly operations, we focus on these skills and the appropriate methods for improving them.
---
paper_title: Augmented assembly using a mobile phone
paper_content:
We present a mobile phone based augmented reality (AR) assembly system that enables users to view complex models on their mobile phones. It is based on a client-server architecture, where complex model information is located on a PC, and a mobile phone with a camera is used as a thin client access device to this information. With this system users are able to see an AR view that provides step by step guidance for a real world assembly task. We also present results from a pilot user study evaluating the system, showing that people felt the interface was intuitive and very helpful in supporting the assembly task.
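The client-server split described above can be illustrated with a small Python sketch in which the phone-side thin client requests only the current step's lightweight overlay description from a PC-side model store. The data fields, step numbering, and in-process "server" function are illustrative stand-ins for the real networked system.

```python
import json

# PC-side store holding lightweight per-step overlay descriptions; the heavy
# CAD models stay on the server and only small references reach the phone.
SERVER_MODEL_STORE = {
    1: {"part": "base_plate", "mesh": "base_plate_lod0.obj",
        "hint": "Place the base plate onto the marker."},
    2: {"part": "gear_axle", "mesh": "gear_axle_lod0.obj",
        "hint": "Slide the axle into the centre hole."},
}

def handle_step_request(step_id: int) -> str:
    """Server handler: serialise one assembly step's overlay description."""
    return json.dumps(SERVER_MODEL_STORE.get(step_id, {"error": "unknown step"}))

def fetch_step_overlay(step_id: int) -> dict:
    """Phone-side thin client: ask for the current step only.  In a real
    deployment this call would travel over HTTP or a socket to the PC."""
    return json.loads(handle_step_request(step_id))

print(fetch_step_overlay(1)["hint"])
```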
---
paper_title: Comparative effectiveness of augmented reality in object assembly
paper_content:
Although there has been much speculation about the potential of Augmented Reality (AR), there are very few empirical studies about its effectiveness. This paper describes an experiment that tested the relative effectiveness of AR instructions in an assembly task. Task information was displayed in the user's field of view and registered with the workspace as 3D objects to explicitly demonstrate the exact execution of a procedure step. Three instructional media were compared with the AR system: a printed manual, computer assisted instruction (CAI) using a monitor-based display, and CAI utilizing a head-mounted display. Results indicate that overlaying 3D instructions on the actual work pieces reduced the error rate for an assembly task by 82%, particularly diminishing cumulative errors, i.e. errors due to previous assembly mistakes. Measurement of mental effort indicated decreased mental effort in the AR condition, suggesting some of the mental calculation of the assembly task is offloaded to the system.
---
paper_title: Mobile augmented reality in the data center
paper_content:
Recent advances in compute power, graphics power, and cameras in mobile computing devices have facilitated the development of new augmented reality applications in the mobile device space. Mobile augmented reality provides a straightforward and natural way for users to understand complex data by overlaying visualization on top of a live video feed on their mobile device. In our data center mobile augmented reality project, the user points a mobile device camera at a rack of data center assets, and additional content about these assets is visually overlaid on top of each asset in the video stream from the mobile device camera. This correspondence between digital content and physical things or locations makes mobile augmented reality an intuitive user interface for interacting with the world around us. This paper describes augmented reality techniques for mobile devices and the motivations around using a multimarker computer vision-based technique for visualizing asset data in the data center. Our mobile augmented reality project enables system administrators to easily interact with hardware assets while they are in the data center, providing them with an additional tool to use in managing the data center.
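A minimal Python sketch of the marker-to-asset overlay idea follows: detected marker IDs are looked up in an asset inventory and turned into screen-positioned labels for the AR view. The inventory contents, field names, and detection format are assumptions for illustration, not the actual system.

```python
# Hypothetical inventory keyed by fiducial-marker ID; each rack asset is
# assumed to carry its own printed marker.
ASSET_DB = {
    17: {"name": "web-01", "model": "1U server", "status": "OK"},
    42: {"name": "db-03",  "model": "2U server", "status": "degraded"},
}

def build_overlays(detections):
    """detections: (marker_id, screen_x, screen_y) tuples from a marker tracker.
    Returns label dicts with screen positions for the AR view to draw."""
    overlays = []
    for marker_id, x, y in detections:
        asset = ASSET_DB.get(marker_id)
        if asset is None:
            continue  # unknown marker: nothing to overlay
        text = f"{asset['name']} ({asset['model']}): {asset['status']}"
        overlays.append({"text": text, "x": x, "y": y - 20})  # label just above marker
    return overlays

print(build_overlays([(17, 320, 240), (42, 500, 300), (99, 10, 10)]))
```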
---
paper_title: Virtual Vouchers: Prototyping a Mobile Augmented Reality User Interface for Botanical Species Identification
paper_content:
The tools that botanists require for field-work must evolve and take on new forms. Of particular importance is the ability to identify existing and new species in the field. Mobile augmented reality systems can make it possible to access, view, and inspect a large database of virtual species examples side-by-side with physical specimens. In this paper, we present prototypes of a mobile augmented reality electronic field guide and techniques for displaying and inspecting computer vision-based visual search results in the form of virtual vouchers. Our work addresses head-movement controlled augmented reality for hands-free interaction and tangible augmented reality. We describe results from our design and investigation process and discuss observations and feedback from lab trials by botanists.
---
paper_title: An Augmented Reality System for MR Image – guided Needle Biopsy : Initial Results in a Swine Model 1
paper_content:
Purpose: To evaluate an augmented reality (AR) system in combination with a 1.5-T closed-bore magnetic resonance (MR) imager as a navigation tool for needle biopsies. Materials and Methods: The experimental protocol had institutional animal care and use committee approval. Seventy biopsies were performed in phantoms by using 20 tube targets, each with a diameter of 6 mm, and 50 virtual targets. The position of the needle tip in AR and MR space was compared in multiple imaging planes, and virtual and real needle tip localization errors were calculated. Ten AR-guided biopsies were performed in three pigs, and the duration of each procedure was determined. After successful puncture, the distance to the target was measured on MR images. The confidence limits for the achieved in-plane hit rate and for lateral deviation were calculated. A repeated measures analysis of variance was used to determine whether the placement error in a particular dimension (x, y, or z) differed from the others. Results: For the 50 v...
---
paper_title: When Augmented Reality meets Big Data
paper_content:
With computing and sensing woven into the fabric of everyday life, we live in an era where we are awash in a flood of data from which we can gain rich insights. Augmented reality (AR) is able to collect and help analyze the growing torrent of data about user engagement metrics within our personal mobile and wearable devices. This enables us to blend information from our senses and the digitalized world in a myriad of ways that was not possible before. AR and big data have a logical maturity that inevitably converges them. The trend of harnessing AR and big data to breed new interesting applications is starting to have a tangible presence. In this paper, we explore the potential to capture value from the marriage between AR and big data technologies, following with several challenges that must be addressed to fully realize this potential.
---
paper_title: User Interface Management Techniques for Collaborative Mobile Augmented Reality
paper_content:
Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user's overlaid virtual environment. The augmented view of the user's surroundings presents an interface to context-dependent operations, many of which are related to the objects in view; the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user's view of more important physical or virtual objects in the scene. We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.
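The information-filtering and view-management techniques summarised above can be sketched, in a highly simplified form, as follows; the relevance heuristic (priority, then distance) and the vertical label-stacking rule are illustrative assumptions rather than the algorithms used in the cited prototype.

```python
import math

def filter_annotations(annotations, user_pos, max_labels=5, max_dist=200.0):
    """Information filtering: keep only nearby items, ranked by priority and
    then by proximity, and cap how many reach the display."""
    nearby = [a for a in annotations if math.dist(user_pos, a["world_pos"]) <= max_dist]
    nearby.sort(key=lambda a: (a["priority"], -math.dist(user_pos, a["world_pos"])),
                reverse=True)
    return nearby[:max_labels]

def place_labels(selected, row_height=18):
    """View management (greatly simplified): stack labels downwards so that
    consecutive labels end up at least one row apart on screen."""
    placed = []
    for i, a in enumerate(sorted(selected, key=lambda a: a["screen_y"])):
        placed.append({"text": a["text"], "x": a["screen_x"],
                       "y": a["screen_y"] + i * row_height})
    return placed

annots = [
    {"text": "Cafe",     "priority": 1, "world_pos": (10, 0), "screen_x": 100, "screen_y": 120},
    {"text": "Hospital", "priority": 3, "world_pos": (50, 5), "screen_x": 105, "screen_y": 122},
]
print(place_labels(filter_annotations(annots, user_pos=(0, 0))))
```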
---
paper_title: Recent Advances in Augmented Reality
paper_content:
In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
---
paper_title: Collaborative use of mobile augmented reality with paper maps
paper_content:
The popularity of augmented reality (AR) applications on mobile devices is increasing, but there is as yet little research on their use in real-settings. We review data from two pioneering field trials where MapLens, a magic lens that augments paper-based city maps, was used in small-group collaborative tasks. The first study compared MapLens to a digital version akin to Google Maps, the second looked at using one shared mobile device vs. using multiple devices. The studies find place-making and use of artefacts to communicate and establish common ground as predominant modes of interaction in AR-mediated collaboration with users working on tasks together despite not needing to.
---
paper_title: InfoSPOT: A mobile Augmented Reality method for accessing building information through a situation awareness approach
paper_content:
The Architecture, Engineering, Construction, and Owner/Operator (AECO) industry is constantly searching for new methods for increasing efficiency and productivity. Facility Managers (FMs), as a part of the owner/operator role, work in complex and dynamic environments where critical decisions are constantly made. This decision-making process and its consequent performance can be improved by enhancing the Situation Awareness (SA) of the FMs through new digital technologies. In this paper, InfoSPOT (Information Surveyed Point for Observation and Tracking) is recommended to FMs as a mobile Augmented Reality (AR) tool for accessing information about the facilities they maintain. AR has been considered as a viable option to reduce inefficiencies of data overload by providing FMs with an SA-based tool for visualizing their "real-world" environment with added interactive data. A prototype of the AR application was developed and a user participation experiment and analysis conducted to evaluate the features of InfoSPOT. This innovative application of AR has the potential to improve construction practices, and in this case, facility management.
---
paper_title: Gesture-based interaction via finger tracking for mobile augmented reality
paper_content:
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.
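The canonical two-finger manipulations evaluated in this study (translation, rotation, and scaling) can be derived from fingertip positions in consecutive frames roughly as in the Python sketch below; the coordinate conventions and example values are assumed for illustration and do not reproduce the paper's tracking pipeline.

```python
import math

def two_finger_transform(prev, curr):
    """Derive translation, rotation, and scale updates for a virtual object
    from two tracked fingertips.  prev and curr are ((x1, y1), (x2, y2))
    screen positions in the previous and current video frame."""
    (p1, p2), (c1, c2) = prev, curr
    prev_mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    curr_mid = ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)
    translation = (curr_mid[0] - prev_mid[0], curr_mid[1] - prev_mid[1])

    prev_vec = (p2[0] - p1[0], p2[1] - p1[1])
    curr_vec = (c2[0] - c1[0], c2[1] - c1[1])
    rotation = math.atan2(curr_vec[1], curr_vec[0]) - math.atan2(prev_vec[1], prev_vec[0])
    scale = math.hypot(*curr_vec) / max(math.hypot(*prev_vec), 1e-6)
    return translation, rotation, scale

# Fingers drift right, spread apart, and rotate slightly between two frames.
print(two_finger_transform(((100, 100), (200, 100)), ((110, 100), (225, 120))))
```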
---
paper_title: Media studies, mobile augmented reality, and interaction design
paper_content:
You are walking in the Sweetwater Creek State Park near Atlanta and using the Augmented Reality (AR) Trail Guide, a mobile application designed by Isaac Kulka for the Argon Browser (Figure 1). The application offers two views: a now familiar Google-style map, with points of interest marked on its surface, and an AR view, which shows these points located in space. You see the map view when you hold the screen parallel to the ground; when you turn the phone up to look at the world, you get the AR view with the points of interest floating in space in front of you. This simple gesture of raising the phone changes your relationship to the information. You pass from a fully symbolic form of representation to a form of perceiving symbolic information as part of your visual environment. The AR Trail Guide, developed in the Augmented Environments Lab at Georgia Tech [1], illustrates a new realm in AR design that goes beyond current commercial applications. In this article, we discuss some of these new areas, such as designing for experiences in cultural heritage, personal expression, and entertainment. At the same time, we want to address a larger issue. ACM interactions has often been a place for exploring new paradigms and the relevance for interaction design of unusual approaches from other disciplines. In that spirit, we pose the question: Can the humanistic discipline of media studies play a useful role in interaction design? Media studies looks at the history of media and their relationship to culture, and we will focus here on digital media and their relationship to other media, both present and past. Looking at digital media in a historical context is relevant because of the dynamic relationship between "traditional" media (film, television, radio, print) and their digital remediations. How can media studies be made to contribute to the productive work of interaction design? We believe one answer lies in using the historical understanding gained through media studies to develop a kind of media aesthetics that can guide designers as they explore new forms of digital media such as the mobile augmented reality application described above.
---
paper_title: Designing mobile augmented reality
paper_content:
The development of mobile Augmented Reality applications has become increasingly popular over the last few years. However, many of the existing solutions build on the reuse of available standard metaphors for visualization and interaction without considering the manifold contextual factors of their use. Within this workshop we want to discuss theoretical design approaches and practical tools which should help developers to make more informed choices when exploring the design space of Augmented Reality interfaces in mobile contexts.
---
paper_title: Augmenting Human Cognition with Adaptive Augmented Reality
paper_content:
Wearable Augmented Reality (AR) combines research in AR, mobile/ubiquitous computing, and human ergonomics in which a video or optical see-through head mounted display (HMD) facilitates multi-modal delivery of contextually relevant and computer generated visual and auditory data over a physical, real-world environment. Wearable AR has the capability of delivering on-demand assistance and training across a variety of domains. A primary challenge presented by such advanced HCI technologies is the development of scientifically-grounded methods for identifying appropriate information presentation, user input, and feedback modalities in order to optimize performance and mitigate cognitive overload. A proposed framework and research methodology are described to support instantiation of physiologically-driven, adaptive AR to assess and contextually adapt to an individual’s environmental and cognitive state in real time. Additionally a use case within the medical domain is presented, and future research is discussed.
---
paper_title: User Experience Evaluation of Mobile AR services
paper_content:
Mobile Augmented Reality (MAR) is a state-of-the-art technology that has modernized the way of accessing and interacting with information, thus invoking new experiences for users all around the world. However, User Experience (UX) evaluation for mobile AR is still a widely unexplored area. In order to identify the potential problems faced by end users while using MAR applications, it is important to evaluate the UX of current mobile AR services. This paper presents the findings and analysis from a pilot study. We report a cross-sectional survey for evaluating the user experience of the latest mobile AR applications. A framework of qualities related to user experience is applied to the acquired results to understand and articulate different aspects of UX. The outcomes show a positive attitude of users towards mobile AR services. The results help in considering key acceptance issues and potential user expectations in the development of future mobile AR services.
---
paper_title: Preliminary user experience framework for designing mobile augmented reality technologies
paper_content:
User eXperience (UX) is identified as the characteristics of the designed system, the result of the user's internal state, and the context within which the interaction between the system and user occurs. UX is becoming an increasingly diverse and well-established field, especially in the context of its usage. Nevertheless, this field of research lacks conceptual and practical frameworks to be followed while designing for emerging technologies like Augmented Reality (AR). This paper presents an early framework for designing and evaluating the UX of Mobile Augmented Reality (MAR) applications. The credibility of this work-in-progress UX framework is supported by recent validated research on UX and MAR studies.
---
paper_title: AR UX design: Applying AEIOU to handheld augmented reality browser
paper_content:
With maturing technologies and the availability of sensor-rich devices, the driving force behind handheld augmented reality (HAR) will increasingly be the experience the technology can bring. Although attention has been focused on usability and conventions for this technology, the user experience cannot be ignored. HAR will be more commonly available in the hands of the public and will no longer be a technology used solely by experts. It will therefore be important to approach the technology with a friendlier and more user-centered focus to bring it out of research labs and into people's lives. In this paper, we introduce design exploration constructs inspired by a method commonly used in the field of industrial design to guide designers of HAR applications in exploring different aspects of user experience. The purpose of proposing AEIOU is to provide a platform for creating an encompassing user experience that goes beyond usability considerations and existing design conventions. The advantage of having such a platform is that it provides a starting ground for discussion and improvement of HAR applications. The use of these constructs is discussed in relation to the design process of an AR browser.
---
paper_title: Towards a shared large-area mixed reality system
paper_content:
In this paper we present a large-area interactive mixed reality system where multiple users can experience an event simultaneously. Through the combination of a number of innovative methods, the system can tackle common problems that are inherent in most existing mixed reality solutions, such as robustness against lighting conditions, static occlusion, illumination correction, registration and tracking etc. Most importantly, with our proposed experience server, a shared event among multiple users is seamless. The experience server tracks every user's position and experience state and presents a unique viewpoint of the event to multiple users simultaneously. The effectiveness of the system is demonstrated through an example application at a heritage site, where we perform user testing through multiple focus groups.
---
paper_title: Design Implications for Quality User eXperience in Mobile Augmented Reality Applications
paper_content:
Mobile augmented reality (MAR) technology is a new trend that provides users with an augmented view of digital information in the real world. This paper contains design implications for improving the User eXperience (UX) of MAR applications. The scope is limited to MAR and its application in the advertising industry. Results from an experimental study are used to draw guidelines that can be followed by designers and practitioners to ensure a quality experience for their users.
---
paper_title: Multi-layered Mobile Augmented Reality Framework for Positive User Experience
paper_content:
Emerging technologies like Mobile Augmented Reality (MAR) are becoming increasingly diverse and well established. However, this research field needs conceptual and practical frameworks to be followed while designing for a positive User Experience (UX). This paper presents a multi-layered conceptual MAR framework. It highlights the components needed to design MAR products and demonstrates different types of experiences invoked by the use of such products in a particular timespan. This work presents the important product aspects that need to be considered while designing for MAR to enhance its user experience.
---
paper_title: Demystifying the design of mobile augmented reality applications
paper_content:
This research proposes a set of interaction design principles for the development of mobile augmented reality (MAR) applications. The design recommendations adopt a user-centered perspective and, thus, they focus on the necessary actions to ensure high-quality MAR user experiences. To formulate our propositions, we relied on theoretical grounding and an evaluation of eight MAR applications that provide published records of their design properties. The design principles have then been applied to guide the development of a MAR travel application. We performed a field study with 33 tourists in order to assess whether our design choices effectively lead to enhanced satisfaction and overall user experience. Results suggest that the proposed principles contribute to ensuring high usability and performance of the MAR application as well as evoking positive feelings during user and system interactions. Our prescriptions may be employed either as a guide during the initial stages of the design process (ex-ante usage) or as a benchmark to assess the performance (ex-post usage) of MAR applications.
---
paper_title: Investigating the balance between virtuality and reality in mobile mixed reality UI design: user perception of an augmented city
paper_content:
Examples of mixed reality mobile applications and research combining virtual and real world data in the same view have emerged during recent years. However, currently there is little knowledge of users' perceptions comparing the role of virtual and real world representations in mobile user interfaces (UIs). In this paper, we investigate the initial user perceptions when comparing augmented reality and augmented virtuality UIs in a mobile application. To chart this, we conducted a field study with 35 participants, where they interacted with a simulated mobile mixed reality (MMR) application with two alternative UI designs, and an online survey completed by over a hundred people. Our findings reveal perceived differences, e.g., in immersion, recognition, clarity and overall pleasantness, and provide insight into user interface design and methodological challenges of research in the area of mobile mixed reality.
---
paper_title: Integrating Linked Data in Mobile Augmented Reality Applications
paper_content:
Mobile devices are currently the most popular way of delivering ubiquitous augmented reality experiences. Traditionally, content sources for mobile augmented reality applications can be seen as isolated silos of information, being designed specifically for the intended purpose of the application. Recently, due to the rise in popularity and usage of Semantic Web technologies and Linked Data, some efforts have been made to overcome the limitations of current augmented reality content sources by integrating Linked Data principles and taking advantage of the significant increase in size and quality of the Linked Open Data cloud. This paper presents a literature review of the previous efforts in this respect, while highlighting in detail the limitations of current approaches, the advantages of integrating Linked Data principles in mobile augmented reality applications, and current challenges regarding this still novel approach. The authors conclude by suggesting some future research directions in this area.
---
paper_title: Designing backpacks for high fidelity mobile outdoor augmented reality
paper_content:
This paper presents the design for our latest backpack to support mobile outdoor augmented reality, and how it evolved from lessons learned with our previous designs. We present a number of novel features which help to reduce size and weight, improve reliability and ease of configuration, and reduce CPU usage on laptop computers.
---
paper_title: Structured visual markers for indoor pathfinding
paper_content:
We present a mobile augmented reality (AR) system to guide a user through an unfamiliar building to a destination room. The system presents a world-registered wireframe model of the building labeled with directional information in a see-through heads-up display, and a three-dimensional world-in-miniature (WIM) map on a wrist-worn pad that also acts as an input device. Tracking is done using a combination of wall-mounted ARToolkit markers observed by a head-mounted camera, and an inertial tracker. To allow coverage of arbitrarily large areas with a limited set of markers, a structured marker re-use scheme based on graph coloring has been developed.
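The abstract above mentions a structured marker re-use scheme based on graph coloring. The following is a minimal sketch of that idea under stated assumptions: zones whose markers could be simultaneously visible are modeled as adjacent nodes and must receive different "colors" (marker sets), while non-adjacent zones may reuse the same physical marker IDs. The zone graph, greedy coloring strategy, and marker-set size are illustrative assumptions, not the paper's exact algorithm.

    # A minimal sketch of marker re-use via greedy graph coloring.
    # Zones that are mutually visible (adjacent) must use disjoint marker sets,
    # so they are given different "colors"; non-adjacent zones can share a color
    # and therefore reuse the same physical marker IDs.
    # The zone graph below is hypothetical; the paper's actual scheme may differ.

    def greedy_coloring(adjacency):
        """Assign the smallest color not used by an already-colored neighbor."""
        colors = {}
        for zone in sorted(adjacency):          # deterministic order
            taken = {colors[n] for n in adjacency[zone] if n in colors}
            color = 0
            while color in taken:
                color += 1
            colors[zone] = color
        return colors

    # Hypothetical building layout: three rooms connected by a corridor.
    zone_graph = {
        "corridor": {"room_a", "room_b", "room_c"},
        "room_a": {"corridor", "room_b"},
        "room_b": {"corridor", "room_a"},
        "room_c": {"corridor"},
    }

    zone_color = greedy_coloring(zone_graph)
    markers_per_set = 4   # physical markers available per color class

    for zone, color in sorted(zone_color.items()):
        ids = [color * markers_per_set + i for i in range(markers_per_set)]
        print(zone, "-> marker set", color, "marker IDs", ids)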
---
paper_title: Human pacman: A mobile entertainment system with ubiquitous computing and tangible interaction over a wide outdoor area
paper_content:
Human Pacman is an interactive role-playing game that envisions bringing the computer gaming experience to a new level of emotional and sensory gratification by setting the real world as a playground. This is a physical fantasy game integrated with human-social and mobile-gaming that emphasizes collaboration and competition between players. By setting the game in a wide outdoor area, natural human-physical movements have become an integral part of the game. Pacmen and Ghosts are now human players in the real world experiencing mixed reality visualization from the wearable computers on them. Virtual cookies and actual physical objects are incorporated to provide novel experiences of seamless transitions between real and virtual worlds and a tangible human-computer interface, respectively. We believe Human Pacman is pioneering a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.
---
paper_title: TimeWarp: interactive time travel with a mobile Mixed Reality game
paper_content:
Mobile location-aware applications have become quite popular across a range of new areas such as pervasive games and mobile edutainment applications. However, it is only recently that approaches have been presented which combine gaming and education with mobile Augmented Reality systems. These, however, typically lack close cross-media integration of the surroundings, and often annotate or extend the environment rather than modifying and altering it. In this paper we present a mobile outdoor mixed reality game for exploring the history of a city in the spatial and the temporal dimension. We introduce the design and concept of the game and present a universal mechanism to define and set up multi-modal user interfaces for the game challenges. Finally, we discuss the results of the user tests.
---
paper_title: Development of a real time image based object recognition method for mobile AR-devices
paper_content:
In this paper we describe an image-based object recognition and tracking method for mobile AR devices and the correlative process to generate the required data. The object recognition and tracking are based on the 3D geometries of the related objects. Correspondences between live camera images and 3D models are generated and used to determine the location and orientation of objects in the current scene. The required data for the object recognition is generated from common 3D CAD files using a dedicated process model. The object recognition and tracking method as well as the correlative generation process for the needed data are developed within the AR-PDA project. The AR-PDA is a personal digital assistant (e.g. PDA or 3rd generation mobile phone with an integrated camera), which uses AR technology to efficiently support consumers and service forces during their daily tasks.
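Once 2D-3D correspondences between image features and points on the CAD-derived model are available, the object (or camera) pose can be recovered with a standard perspective-n-point solve. The sketch below uses OpenCV's solvePnP for that last step only; establishing the correspondences is out of scope here, and all numeric values (planar model points, pixel positions, intrinsics) are made up for illustration rather than taken from the paper.

    # Minimal sketch: recover pose from assumed 2D-3D correspondences between
    # image features and points on a (planar patch of a) CAD-derived model.
    # The correspondences and camera intrinsics below are illustrative only.
    import numpy as np
    import cv2

    object_points = np.array([[0.0, 0.0, 0.0],     # model points (metres)
                              [0.2, 0.0, 0.0],
                              [0.2, 0.1, 0.0],
                              [0.0, 0.1, 0.0]], dtype=np.float64)
    image_points = np.array([[300.0, 260.0],       # matching pixel positions
                             [420.0, 258.0],
                             [418.0, 200.0],
                             [302.0, 198.0]], dtype=np.float64)

    K = np.array([[800.0, 0.0, 320.0],             # assumed pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                             # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
        print("object rotation:\n", R)
        print("object translation:", tvec.ravel())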
---
paper_title: ARToolKit on the PocketPC platform
paper_content:
In this paper, we describe the port of ARToolKit onto the PocketPC platform including optimizations that led to a three-fold speedup over the native cross-compiled version. The ported ARToolKit module was successfully used in the Handheld AR project.
---
paper_title: Implementation of an augmented reality system on a PDA
paper_content:
We present a client/server implementation for running demanding mobile AR application on a PDA device. The system incorporates various data compression methods to make it run as fast as possible on a wide range of communication networks, from GSM to WLAN.
---
paper_title: An Augmented Reality Presentation System for Remote Cultural Heritage Sites
paper_content:
Museums often lack the possibility to present archaeological or cultural heritage sites in a realistic and interesting way. Thus, we propose a new way to show augmented reality applications of cultural heritage sites at remote places like museums. In the exhibition space, large wall-filling photographs of the real site are superimposed with interactive contextual annotations like 3D reconstructions, images and movies. For visualization we use two different hardware setups: standard UMPCs and a custom-made revolving display. The setup has been installed and tested at SIGGRAPH 2008, Allard Pierson Museum in Amsterdam and CeBIT 2009. Museum visitors could experience Forum Romanum and Satricum in an informative and intuitive way by pointing the video see-through devices at different areas of the photographs. The result is a more realistic and entertaining way for presenting cultural heritage sites in museums. Furthermore, our solution is less expensive than comparable installations regarding content and hardware.
---
paper_title: Generic Interaction Techniques for Mobile Collaborative Mixed Systems
paper_content:
The main characteristic of a mobile collaborative mixed system is that augmentation of the physical environment of one user occurs through available knowledge of where the user is and what the other users are doing. Links between the physical and digital worlds are no longer static but dynamically defined by users to create a collaborative augmented environment. In this article we present generic interaction techniques for smoothly combining the physical and digital worlds of a mobile user in the context of a collaborative situation. We illustrate the generic nature of the techniques with two systems that we developed: MAGIC for archaeological fieldwork and TROC a mobile collaborative game.
---
paper_title: Wide Area Tracking Tools for Augmented Reality
paper_content:
We have developed a hand-held augmented reality platform exploiting a combination of multiple sensors built around an ultra-wideband tracking system. We demonstrate two applications illustrating how an environment exploiting this platform can be set up. Firstly, a technician-support application provides intuitive in-situ instructions on how a wide area tracking system should be configured. The use of 3D registered graphics greatly assists in the debugging of common awkward use cases involving reflections off metal surfaces. Secondly, a navigation application utilises this newly configured and calibrated tracker, as well as other sensors, adapting to whatever is available in a given locale.
---
paper_title: EyeGuardian: a framework of eye tracking and blink detection for mobile device users
paper_content:
Computer Vision Syndrome (CVS) is a common problem in the "Information Age", and it is becoming more serious as mobile devices (e.g. smartphones and tablet PCs) with small, low-resolution screens are outnumbering the home computers. The simplest way to avoid CVS is to blink frequently. However, most people do not realize that they blink less and some do not blink at all in front of the screen. In this paper, we present a mobile application that keeps track of the reader's blink rate and prods the user to blink if an exceptionally low blink rate is detected. The proposed eye detection and tracking algorithm is designed for mobile devices and can keep track of the eyes in spite of camera motion. The main idea is to predict the eye position in the camera frame using the feedback from the built-in accelerometer. The eye tracking system was built on a commercial Tablet PC. The experimental results consistently show that the scheme can withstand very aggressive mobility scenarios.
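The abstract describes predicting the eye position in the next camera frame from the built-in accelerometer so that the detector only scans a small region. The sketch below illustrates that idea under stated assumptions: the previous eye center is shifted by a crudely integrated displacement and a fixed search window is placed around it. The motion-to-pixel gain and all sample values are invented for illustration and do not come from the paper.

    # Minimal sketch of accelerometer-aided eye tracking: the search window for
    # the next frame is shifted by a displacement predicted from device motion,
    # so the (expensive) eye detector only scans a small region.
    # The motion-to-pixel gain and all sample values are assumptions.

    def predict_window(prev_center, accel_xy, dt, gain=2000.0):
        """Shift the previous eye center by a displacement integrated from
        acceleration (very crude double integration over one frame interval)."""
        dx = 0.5 * accel_xy[0] * dt * dt * gain   # metres -> pixels via gain
        dy = 0.5 * accel_xy[1] * dt * dt * gain
        # Camera motion moves the image content in the opposite direction.
        return (prev_center[0] - dx, prev_center[1] - dy)

    def search_region(center, half_size=40):
        """Rectangle (x0, y0, x1, y1) the eye detector should be run on."""
        cx, cy = center
        return (int(cx - half_size), int(cy - half_size),
                int(cx + half_size), int(cy + half_size))

    prev_eye = (310.0, 180.0)                 # eye center found in the last frame
    accel = (1.5, -0.7)                       # m/s^2 from the accelerometer
    window = search_region(predict_window(prev_eye, accel, dt=1 / 30))
    print("run eye detection inside", window)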
---
paper_title: Face to face collaborative AR on mobile phones
paper_content:
Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications.
---
paper_title: Sketching up the world: in situ authoring for mobile Augmented Reality
paper_content:
We present a novel system allowing in situ content creation for mobile Augmented Reality in unprepared environments. This system targets smartphones and therefore allows spontaneous authoring while in place. We describe two different scenarios, which depend on the size of the working environment and consequently use different tracking techniques. A natural feature-based approach for planar targets is used for small working spaces, whereas for larger working environments, such as in outdoor scenarios, panoramic-based orientation tracking is deployed. Both are integrated into one system, allowing the user to create content with the same interaction by applying a set of simple yet powerful modeling functions. The resulting content for Augmented Reality can be shared with other users using a dedicated content server or kept in a private inventory for later use.
---
paper_title: Location-based augmented reality on mobile phones
paper_content:
The computational capability of mobile phones has been rapidly increasing, to the point where augmented reality has become feasible on cell phones. We present an approach to indoor localization and pose estimation in order to support augmented reality applications on a mobile phone platform. Using the embedded camera, the application localizes the device in a familiar environment and determines its orientation. Once the 6 DOF pose is determined, 3D virtual objects from a database can be projected into the image and displayed for the mobile user. Off-line data acquisition consists of acquiring images at different locations in the environment. The online pose estimation is done by a feature-based matching between the cell phone image and an image selected from the precomputed database using the phone's sensors (accelerometer and magnetometer). The application enables the user both to visualize virtual objects in the camera image and to localize the user in a familiar environment. We describe in detail the process of building the database and the pose estimation algorithm used on the mobile phone. We evaluate the algorithm performance as well as its accuracy in terms of reprojection distance of the 3D virtual objects in the cell phone image.
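The abstract describes a two-stage lookup: the phone's sensors (accelerometer and magnetometer) select a candidate image from the precomputed database, and feature-based matching against that image then supports pose estimation. The sketch below illustrates only the retrieval stage under stated assumptions: compass heading prunes the database, and ORB features (a generic stand-in for whatever descriptor the paper uses) rank the remaining candidates. The database layout and all data are synthetic.

    # Minimal sketch of sensor-pruned image retrieval followed by feature matching.
    # ORB is a stand-in descriptor and the database structure is hypothetical.
    import numpy as np
    import cv2

    def angular_diff(a, b):
        """Smallest absolute difference between two headings in degrees."""
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def best_match(query_img, heading_deg, database, heading_tol=30.0):
        orb = cv2.ORB_create(nfeatures=500)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        q_kp, q_des = orb.detectAndCompute(query_img, None)
        best = None
        for entry in database:                       # [{'heading':..., 'image':...}]
            if angular_diff(entry["heading"], heading_deg) > heading_tol:
                continue                             # sensor pruning step
            kp, des = orb.detectAndCompute(entry["image"], None)
            if des is None or q_des is None:
                continue
            score = len(matcher.match(q_des, des))  # count of mutual matches
            if best is None or score > best[0]:
                best = (score, entry)
        return best

    # Synthetic stand-ins for a camera frame and two database images.
    rng = np.random.default_rng(0)
    query = rng.integers(0, 255, (240, 320), dtype=np.uint8)
    db = [{"heading": 90.0, "image": query.copy()},
          {"heading": 270.0, "image": rng.integers(0, 255, (240, 320), dtype=np.uint8)}]

    result = best_match(query, heading_deg=85.0, database=db)
    if result:
        print("best entry heading:", result[1]["heading"], "matches:", result[0])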
---
paper_title: UMAR: Ubiquitous Mobile Augmented Reality
paper_content:
In this paper we discuss the prospects of using marker-based Augmented Reality for context-aware applications on mobile phones. We also present UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality. As a step towards this, we have successfully ported the ARToolkit to consumer mobile phones running on the Symbian platform and present results around this. We also present three sample applications based on UMAR and future case study work planned.
---
paper_title: Collaborative use of mobile augmented reality with paper maps
paper_content:
The popularity of augmented reality (AR) applications on mobile devices is increasing, but there is as yet little research on their use in real settings. We review data from two pioneering field trials where MapLens, a magic lens that augments paper-based city maps, was used in small-group collaborative tasks. The first study compared MapLens to a digital version akin to Google Maps, and the second looked at using one shared mobile device vs. using multiple devices. The studies find place-making and use of artefacts to communicate and establish common ground as predominant modes of interaction in AR-mediated collaboration, with users working on tasks together despite not needing to.
---
paper_title: Like bees around the hive: a comparative study of a mobile augmented reality map
paper_content:
We present findings from field trials of MapLens, a mobile augmented reality (AR) map using a magic lens over a paper map. Twenty-six participants used MapLens to play a location-based game in a city centre. Comparisons to a group of 11 users with a standard 2D mobile map uncover phenomena that arise uniquely when interacting with AR features in the wild. The main finding is that AR features facilitate place-making by creating a constant need for referencing to the physical, and in that it allows for ease of bodily configurations for the group, encourages establishment of common ground, and thereby invites discussion, negotiation and public problem-solving. The main potential of AR maps lies in their use as a collaborative tool.
---
paper_title: Video see-through AR on consumer cell-phones
paper_content:
We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
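The abstract mentions a weak perspective projection camera model. The sketch below illustrates what that model computes under stated assumptions: all points of an object are treated as lying at roughly the same depth, so a single scale factor f / z_mean replaces the per-point perspective division. The intrinsic values and sample points are invented for illustration.

    # Minimal sketch of weak-perspective projection.
    import numpy as np

    def project_weak_perspective(points_cam, f=800.0, cx=320.0, cy=240.0):
        """points_cam: Nx3 array in camera coordinates (z pointing forward)."""
        pts = np.asarray(points_cam, dtype=float)
        s = f / pts[:, 2].mean()                  # one common scale for the object
        u = s * pts[:, 0] + cx
        v = s * pts[:, 1] + cy
        return np.stack([u, v], axis=1)

    object_points = np.array([[0.1, 0.0, 2.0],
                              [0.1, 0.1, 2.05],
                              [-0.1, 0.1, 1.95]])
    print(project_weak_perspective(object_points))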
---
paper_title: Mobile phone based AR scene assembly
paper_content:
In this paper we describe a mobile phone-based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities on such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera-tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone, we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed, and initial user responses to trying the application.
---
paper_title: An Augmented Reality Presentation System for Remote Cultural Heritage Sites
paper_content:
Museums often lack the possibility to present archaeological or cultural heritage sites in a realistic and interesting way. Thus, we propose a new way to show augmented reality applications of cultural heritage sites at remote places like museums. In the exhibition space, large wall-filling photographs of the real site are superimposed with interactive contextual annotations like 3D reconstructions, images and movies. For visualization we use two different hardware setups: standard UMPCs and a custom-made revolving display. The setup has been installed and tested at SIGGRAPH 2008, Allard Pierson Museum in Amsterdam and CeBIT 2009. Museum visitors could experience Forum Romanum and Satricum in an informative and intuitive way by pointing the video see-through devices at different areas of the photographs. The result is a more realistic and entertaining way for presenting cultural heritage sites in museums. Furthermore, our solution is less expensive than comparable installations regarding content and hardware.
---
paper_title: Telegeoinformatics: Location-based Computing and Services
paper_content:
Table of contents of the book, organized in three parts.
PART I, THEORIES AND TECHNOLOGIES: Telegeoinformatics: Current Trends and Future Direction (Introduction; Architecture; Internet-Based GIS; Spatial Databases; Intelligent Query Analyzer (IQA); Predictive Computing; Adaptation; Final Remarks; References). Remote Sensing (Introductory Concepts; Remote Sensing Systems; Imaging Characteristics of Remote Sensing Systems; Active Microwave Remote Sensing; Extraction of Thematic Information from Remotely Sensed Imagery; Extraction of Metric Information from Remotely Sensed Imagery; Remote Sensing in Telegeoinformatics; References). Positioning and Tracking Approaches and Technologies (Introduction; Global Positioning System; Positioning Methods Based on Cellular Networks; Other Positioning and Tracking Techniques: An Overview; Hybrid Systems; Summary; References). Wireless Communications (Introduction; Overview of Wireless Systems; Radio Propagation and Physical Layer Issues; Medium Access in Wireless Networks; Network Planning, Design and Deployment; Wireless Network Operations; Conclusions and the Future; References).
PART II, INTEGRATED DATA AND TECHNOLOGIES: Location-Based Computing (Introduction; LBC Infrastructure; Location-Based Interoperability; Location-Based Data Management; Adaptive Location-Based Computing; Location-Based Routing as Adaptive LBC; Concluding Remarks; References). Location-Based Services (Introduction; Types of Location-Based Services; What is Unique About Location-Based Services?; Enabling Technologies; Market for Location-Based Services; Importance of Architecture and Standards; Example Location-Based Services: J-Phone J-Navi (Japan); Conclusions; References). Wearable Tele-Informatic Systems for Personal Imaging (Introduction; Humanistic Intelligence as a Basis for Intelligent Image Processing; Humanistic Intelligence; 'WEARCOMP' as a Means of Realizing Humanistic Intelligence; Where on the Body Should a Visual Tele-Informatic Device be Placed?; Telepointer: Wearable Hands-Free Completely Self-Contained Visual Augmented Reality Without Headwear and Without any Infrastructural Reliance; Portable Personal Pulse Doppler Radar Vision System; When Both the Camera and Display are Headworn: Personal Imaging and Mediated Reality; Personal Imaging for Location-Based Services; Reality Window Manager (RWM); Personal Telegeoinformatics: Blocking Spam with a Photonic Filter; Conclusion; References). Mobile Augmented Reality (Introduction; MARS: Promises, Applications, and Challenges; Components and Requirements; MARS UI Concepts; Conclusions; Acknowledgements; References).
PART III, APPLICATIONS: Emergency Response Systems (Overview of Emergency Response Systems; State-of-the-Art ERSs; Examples of Developing ERSs for Earthquakes and Other Disasters; Future Aspects of Emergency Response Systems; Concluding Remarks; References). Location-Based Computing for Infrastructure Field Tasks (Introduction; LBC-Infra Concept; Technological Components of LBC-Infra; General Requirements of LBC-Infra; Interaction Patterns and Framework of LBC-Infra; Prototype System and Case Study; Conclusions; References). The Role of Telegeoinformatics in ITS (Introduction to Intelligent Transportation Systems; Telegeoinformatics Within ITS; The Role of Positioning Systems in ITS; Geospatial Data for ITS; Communication Systems in ITS; ITS-Telegeoinformatics Applications; Non-Technical Issues Impacting on ITS; Concluding Remarks). The Impact and Penetration of Location-Based Services (The Definition of Technologies; LBSs: Definitions, Software, and Usage; The Market for LBSs: A Model of the Development of LBSs; Penetration of Mobile Devices: Predictions of Future Markets; Impacts of LBSs on Geographical Locations; Conclusions; References).
---
paper_title: Location based Applications for Mobile Augmented Reality
paper_content:
In this work we investigate building indoor location-based applications for a mobile augmented reality system. We believe that augmented reality is a natural interface to visualize spatial information such as position or direction of locations and objects for location-based applications that process and present information based on the user's position in the real world. To enable such applications, we construct an indoor tracking system that covers a substantial part of a building. It is based on visual tracking of fiducial markers enhanced with an inertial sensor for fast rotational updates. To scale such a system to a whole building, we introduce a space partitioning scheme to reuse fiducial markers throughout the environment. Finally, we demonstrate two location-based applications built upon this facility: an indoor navigation aid and a library search application.
---
paper_title: Sensor fusion and occlusion refinement for tablet-based AR
paper_content:
This paper presents a set of technologies which enable robust, accurate, high resolution augmentation of live video, delivered via a tablet PC to which a video camera has been attached. By combining several technologies, this is achieved without the use of contrived markers in the environment: An outside-in tracker observes the tablet to generate robust, low-accuracy pose estimates. An inside-out tracker running on the tablet observes the video feed from the tablet-mounted camera and provides high-accuracy pose estimates by tracking natural features in the environment. Information from both of these trackers is combined in an extended Kalman filter. Finally, to maximise the quality of the augmented imagery, boundaries where the real world occludes the virtual imagery are identified and another tracker is used to refine the boundaries between real and virtual imagery so that their synthesis is as convincing as possible.
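The abstract describes combining a robust but low-accuracy outside-in tracker with a high-accuracy inside-out tracker in an extended Kalman filter. The sketch below is a drastically simplified, scalar, linear stand-in for that fusion step: each frame, one coordinate is predicted and then corrected by the two measurements, weighted by their assumed variances. The real system filters the full 6-DOF pose with an EKF, and all noise figures and readings here are invented.

    # Heavily simplified, scalar stand-in for the two-tracker pose fusion.
    # All variances and measurements below are invented for illustration.

    def kalman_update(x, p, z, r):
        """One measurement update: state x with variance p, measurement z with variance r."""
        k = p / (p + r)            # Kalman gain
        return x + k * (z - x), (1.0 - k) * p

    x, p = 0.0, 1.0                # initial estimate of one pose coordinate (metres)
    process_noise = 0.01

    measurements = [               # (outside-in reading, inside-out reading) per frame
        (0.52, 0.498),
        (0.47, 0.501),
        (0.55, 0.499),
    ]
    for z_out, z_in in measurements:
        p += process_noise                          # predict: pose may have drifted
        x, p = kalman_update(x, p, z_out, r=0.05)   # coarse outside-in tracker
        x, p = kalman_update(x, p, z_in, r=0.001)   # precise inside-out tracker
        print(f"fused estimate {x:.3f}  variance {p:.4f}")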
---
paper_title: ARToolKit on the PocketPC platform
paper_content:
In this paper, we describe the port of ARToolKit onto the PocketPC platform including optimizations that led to a three-fold speedup over the native cross-compiled version. The ported ARToolKit module was successfully used in the Handheld AR project.
---
paper_title: Video see-through AR on consumer cell-phones
paper_content:
We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
---
paper_title: Mobile collaborative augmented reality
paper_content:
The combination of mobile computing and collaborative augmented reality into a single system makes the power of computer enhanced interaction and communication in the real world accessible anytime and everywhere. The paper describes our work to build a mobile collaborative augmented reality system that supports true stereoscopic 3D graphics, a pen and pad interface and direct interaction with virtual objects. The system is assembled from off-the-shelf hardware components and serves as a basic test bed for user interface experiments related to computer supported collaborative work in augmented reality. A mobile platform implementing the described features and collaboration between mobile and stationary users are demonstrated.
---
paper_title: “Studierstube”: An environment for collaboration in augmented reality
paper_content:
We propose an architecture for multi-user augmented reality with applications in visualisation, presentation and education, which we call "Studierstube". Our system presents three-dimensional stereoscopic graphics simultaneously to a group of users wearing light weight see-through head mounted displays. The displays do not affect natural communication and interaction, making working together very effective. Users see the same spatially aligned model, but can independently control their viewpoint and different layers of the data to be displayed. The setup serves computer supported cooperative work and enhances cooperation of visualisation experts. This paper presents the client-server software architecture underlying this system and details that must be addressed to create a high-quality augmented reality setup.
---
paper_title: Sketching up the world: in situ authoring for mobile Augmented Reality
paper_content:
We present a novel system allowing in situ content creation for mobile Augmented Reality in unprepared environments. This system targets smartphones and therefore allows spontaneous authoring while in place. We describe two different scenarios, which depend on the size of the working environment and consequently use different tracking techniques. A natural feature-based approach for planar targets is used for small working spaces, whereas for larger working environments, such as in outdoor scenarios, panoramic-based orientation tracking is deployed. Both are integrated into one system, allowing the user to create content with the same interaction by applying a set of simple yet powerful modeling functions. The resulting content for Augmented Reality can be shared with other users using a dedicated content server or kept in a private inventory for later use.
---
paper_title: Ubiquitous animated agents for augmented reality
paper_content:
Most of today's Augmented Reality (AR) systems operate as passive information browsers relying on a finite and deterministic world model and a predefined hardware and software infrastructure. We propose an AR framework that dynamically and proactively exploits hitherto unknown applications and hardware devices, and adapts the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts.
---
paper_title: An open software architecture for virtual reality interaction
paper_content:
This article describes OpenTracker, an open software architecture that provides a framework for the different tasks involved in tracking input devices and processing multi-modal input data in virtual environments and augmented reality applications. The OpenTracker framework eases the development and maintenance of hardware setups in a more flexible manner than what is typically offered by virtual reality development packages. This goal is achieved by using an object-oriented design based on XML, taking full advantage of this new technology by allowing the use of standard XML tools for development, configuration and documentation. The OpenTracker engine is based on a data flow concept for multi-modal events. A multi-threaded execution model takes care of tunable performance. Transparent network access allows easy development of decoupled simulation models. Finally, the application developer's interface features both a time-based and an event-based model that can be used simultaneously, to serve a large range of applications. OpenTracker is a first attempt towards a "write once, input anywhere" approach to virtual reality application development. To support these claims, integration into an existing augmented reality system is demonstrated. We also show how prototype tracking equipment for mobile augmented reality can be assembled from consumer input devices with the aid of OpenTracker. Once development is sufficiently mature, it is planned to make OpenTracker available to the public under an open source software license.
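The data-flow concept mentioned above can be pictured as a graph of nodes through which tracking events travel. The following is a tiny, self-contained illustration of that idea only; the node classes and their API are invented for this sketch and do not mirror OpenTracker's actual C++/XML interface.

    # Tiny illustration of a tracking data-flow graph: source -> filter -> sink.
    # Node types and method names are invented for this sketch.

    class ConsoleSink:
        """End of the graph: hands the event to the application."""
        def push(self, position):
            print("tracked position:", tuple(round(c, 3) for c in position))

    class AveragingFilter:
        """Smooths positions with a moving average before passing them on."""
        def __init__(self, child, window=3):
            self.child, self.window, self.buf = child, window, []

        def push(self, position):
            self.buf = (self.buf + [position])[-self.window:]
            n = len(self.buf)
            avg = tuple(sum(p[i] for p in self.buf) / n for i in range(3))
            self.child.push(avg)

    class ReplaySource:
        """Stands in for a real tracker driver; replays recorded samples."""
        def __init__(self, child, samples):
            self.child, self.samples = child, samples

        def run(self):
            for s in self.samples:
                self.child.push(s)

    # Wire up the graph and run it.
    graph = ReplaySource(AveragingFilter(ConsoleSink()),
                         samples=[(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.2, 0.1, 1.1)])
    graph.run()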
---
paper_title: APRIL: a high-level framework for creating augmented reality presentations
paper_content:
While augmented reality (AR) technology is steadily maturing, application development is still lacking advanced authoring tools - even the simple presentation of information, which should not require any programming, is not systematically addressed by development tools. Moreover, there is also a severe lack of agreed techniques or best practices for the structuring of AR content. In this paper we present APRIL, the Augmented Presentation and Interaction Language, an authoring platform for AR presentations which provides concepts and techniques that are independent of specific applications or target hardware platforms, and should be suitable to raise the level of abstraction on which AR content creators can operate.
---
paper_title: Muddleware for Prototyping Mixed Reality Multiuser Games
paper_content:
We present Muddleware, a communication platform designed for mixed reality multi-user games for mobile, lightweight clients. An approach inspired by Tuplespaces, which provides decoupling of sender and receiver, is used to address the requirements of a potentially large number of mobile clients. A hierarchical database built on XML technology allows convenient prototyping and simple, yet powerful queries. Server-side extensions address persistence and autonomous behaviors through hierarchical state machines. The architecture has been tested with a number of multi-user games and is also used for non-entertainment applications.
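The Tuplespace-inspired decoupling mentioned above can be illustrated with a few lines of code: clients write tuples without knowing who will consume them, and other clients read or take tuples by pattern. The sketch below is a deliberately minimal, in-memory simplification; Muddleware itself exposes a hierarchical XML database with XPath-style queries, which this does not reproduce.

    # Minimal in-memory tuplespace sketch (None acts as a wildcard in patterns).

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def write(self, tup):
            self.tuples.append(tup)

        def _matches(self, tup, pattern):
            return len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup))

        def read(self, pattern):
            """Return a matching tuple without removing it, or None."""
            return next((t for t in self.tuples if self._matches(t, pattern)), None)

        def take(self, pattern):
            """Return and remove a matching tuple, or None."""
            t = self.read(pattern)
            if t is not None:
                self.tuples.remove(t)
            return t

    space = TupleSpace()
    space.write(("player", "alice", "position", (1.0, 2.0)))
    space.write(("event", "door_opened", "room_3"))

    print(space.read(("player", "alice", None, None)))   # peek at Alice's state
    print(space.take(("event", None, "room_3")))         # consume the game event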
---
paper_title: OpenTracker-an open software architecture for reconfigurable tracking based on XML
paper_content:
This paper describes OpenTracker, an open software architecture that provides a generic solution to the different tasks involved in tracking input devices and processing tracking data for virtual environments. It combines a highly modular design with a configuration syntax based on XML, thus taking full advantage of this new technology. OpenTracker is a first attempt towards a "write once, track anywhere" approach to virtual reality application development.
---
paper_title: Nexus - an open global infrastructure for spatial-aware applications
paper_content:
Due to the lack of a generic platform for location- and spatial-aware systems, many basic services have to be reimplemented in each application that uses spatial-awareness. A cooperation among different applications is also difficult to achieve without a common platform. In this paper we present a platform that solves these problems. It provides an infrastructure that is based on computer models of regions of the physical world, which are augmented by virtual objects. We show how virtual objects make the integration of existing information systems and services in spatial-aware systems easier. Furthermore, our platform supports interactions between the computer models and the real world and integrates single models in a global 'Augmented World'.
---
paper_title: Marker tracking and HMD calibration for a video-based augmented reality conferencing system
paper_content:
We describe an augmented reality conferencing system which uses the overlay of virtual images on the real world. Remote collaborators are represented on virtual monitors which can be freely positioned about a user in space. Users can collaboratively view and interact with virtual objects using a shared virtual whiteboard. This is possible through precise virtual image registration using fast and accurate computer vision techniques and head mounted display (HMD) calibration. We propose a method for tracking fiducial markers and a calibration method for optical see-through HMD based on the marker tracking.
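The fiducial-marker tracking described above (detect a square marker, then recover its pose relative to the camera) can be illustrated with OpenCV's ArUco module as a readily available analogue; this is not ARToolKit itself, the ArUco API names differ between OpenCV versions (the sketch follows the 4.7+ interface), and the synthetic frame is purely for self-containment.

    # Hedged sketch of square fiducial-marker detection using OpenCV ArUco
    # (an analogue of the ARToolKit pipeline, not ARToolKit itself).
    import numpy as np
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    # Render one marker into a synthetic frame so the example is self-contained.
    marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)   # OpenCV >= 4.7
    frame = np.full((480, 640), 255, dtype=np.uint8)
    frame[140:340, 220:420] = marker

    corners, ids, _ = cv2.aruco.ArucoDetector(dictionary).detectMarkers(frame)
    if ids is not None:
        print("detected marker id", int(ids.ravel()[0]))
        print("corner pixels:\n", corners[0].reshape(-1, 2))
        # The 6-DOF marker pose could now be recovered from these corners and the
        # camera intrinsics, e.g. with a planar perspective-n-point solve.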
---
paper_title: UMAR: Ubiquitous Mobile Augmented Reality
paper_content:
In this paper we discuss the prospects of using marker-based Augmented Reality for context-aware applications on mobile phones. We also present UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality. As a step towards this, we have successfully ported the ARToolkit to consumer mobile phones running on the Symbian platform and present results around this. We also present three sample applications based on UMAR and future case study work planned.
---
paper_title: Tinmith-Metro: new outdoor techniques for creating city models with an augmented reality wearable computer
paper_content:
This paper presents new techniques for capturing and viewing on-site 3D graphical models of large outdoor objects. Using an augmented reality wearable computer, we have developed a software system, known as Tinmith-Metro. Tinmith-Metro allows users to control a 3D constructive solid geometry modeller for building graphical objects of large physical artefacts, for example buildings, in the physical world. The 3D modeller is driven by a new user interface known as Tinmith-Hand, which allows the user to control the modeller using a set of pinch gloves and hand tracking. These techniques allow users to supply their AR renderers with models that would previously have to be captured with manual, time-consuming, and/or expensive methods.
---
paper_title: InfoSPOT: A mobile Augmented Reality method for accessing building information through a situation awareness approach
paper_content:
The Architecture, Engineering, Construction, and Owner/Operator (AECO) industry is constantly searching for new methods for increasing efficiency and productivity. Facility Managers (FMs), as a part of the owner/operator role, work in complex and dynamic environments where critical decisions are constantly made. This decision-making process and its consequent performance can be improved by enhancing Situation Awareness (SA) of the FMs through new digital technologies. In this paper, InfoSPOT (Information Surveyed Point for Observation and Tracking) is recommended to FMs as a mobile Augmented Reality (AR) tool for accessing information about the facilities they maintain. AR has been considered as a viable option to reduce inefficiencies of data overload by providing FMs with a SA-based tool for visualizing their "real-world" environment with added interactive data. A prototype of the AR application was developed and a user participation experiment and analysis conducted to evaluate the features of InfoSPOT. This innovative application of AR has the potential to improve construction practices, and in this case, facility management.
---
paper_title: KHARMA: An open KML/HTML architecture for mobile augmented reality applications
paper_content:
Widespread future adoption of augmented reality technology will rely on a broadly accessible standard for authoring and distributing content with, at a minimum, the flexibility and interactivity provided by current web authoring technologies. We introduce KHARMA, an open architecture based on KML for geospatial and relative referencing combined with HTML, JavaScript and CSS technologies for content development and delivery. This architecture uses lightweight representations that decouple infrastructure and tracking sources from authoring and content delivery. Our main contribution is a re-conceptualization of KML that turns HTML content formerly confined to balloons into first-class elements in the scene. We introduce the KARML extension that gives authors increased control over the presentation of HTML content and its spatial relationship to other content.
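Geospatial referencing of the kind KML provides ultimately has to be turned into offsets relative to the viewer before content can be rendered. The sketch below shows one common way this could be done, under stated assumptions: an equirectangular approximation converts a latitude/longitude anchor into local east/north distances from the user, which is adequate over short ranges. The coordinates are hypothetical and the computation is not taken from the KHARMA implementation.

    # Minimal sketch: convert a geo-anchored point into local east/north offsets
    # (metres) from the user, using an equirectangular approximation that is
    # fine over a few hundred metres. Coordinates below are hypothetical.
    import math

    EARTH_RADIUS = 6_371_000.0  # metres

    def geo_to_local_enu(user_lat, user_lon, anchor_lat, anchor_lon):
        lat0 = math.radians(user_lat)
        east = math.radians(anchor_lon - user_lon) * EARTH_RADIUS * math.cos(lat0)
        north = math.radians(anchor_lat - user_lat) * EARTH_RADIUS
        return east, north

    user = (33.7756, -84.3963)           # hypothetical viewer position
    anchor = (33.7762, -84.3955)         # hypothetical content anchor
    east, north = geo_to_local_enu(*user, *anchor)
    print(f"place content {east:.1f} m east, {north:.1f} m north of the user")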
---
paper_title: Developing a Mobile, Service-Based Augmented Reality Tool for Modern Maintenance Work
paper_content:
In the VTT PLAMOS (Plant Model Services for Mobile Process Maintenance Engineer) project, new tools were developed for modern maintenance work carried out in industrial plants by either the plant personnel or personnel of an industrial service provider. To formulate the requirements for the new tools, the work of a maintenance worker was studied with a particular method, the Core-Task Analysis, which has its roots in the study and development of work in complex settings. The aim was to develop and create concepts for novel tools that would support the development of good work practices in a situation where the work is concurrently undergoing several transformations. Hence, the new tools should have the potential to enable and affect new ways of working.
---
paper_title: Augmented reality for plant lifecycle management
paper_content:
Augmented reality is a technology expected to have a big impact as a natural and intuitive user interface to industrial applications. A lot of research has been done in industrial augmented reality but real life applications are still rare. One reason for the slow adoption seems to be that augmented reality applications have been developed without taking sufficiently into account the existing industrial processes in information management. This paper describes a prototype software which uses augmented reality as a user interface to plant lifecycle management applications in different lifecycle phases. Several use cases implemented show that augmented reality can be implemented cost efficiently without specific authoring tools if the design information is correctly structured already in the design phase, and if the design, maintenance and real-time information sources are accessed via standard application interfaces.
---
paper_title: Implementation of an augmented reality system on a PDA
paper_content:
We present a client/server implementation for running demanding mobile AR application on a PDA device. The system incorporates various data compression methods to make it run as fast as possible on a wide range of communication networks, from GSM to WLAN.
---
paper_title: CloudRidAR: a cloud-based architecture for mobile augmented reality
paper_content:
Mobile augmented reality (MAR) has exploded in popularity on mobile devices in various fields. However, building a MAR application from scratch on mobile devices is complicated and time-consuming. In this paper, we propose CloudRidAR, a framework for MAR developers to facilitate the development, deployment, and maintenance of MAR applications with little effort. Despite advances in mobile devices as a computing platform, their performance for MAR applications is still very limited due to the poor computing capability of mobile devices. In order to alleviate the problem, our CloudRidAR is designed with cloud computing at the core. Computationally intensive tasks are offloaded to the cloud to accelerate computation in order to guarantee run-time performance. We also present two MAR applications built on CloudRidAR to evaluate our design.
---
paper_title: ARTiFICe-Augmented Reality Framework for Distributed Collaboration
paper_content:
This paper introduces a flexible and powerful software framework based on an off the shelf game engine which is used to develop distributed and collaborative virtual and augmented reality applications. We describe ARTiFICe's flexible design and implementation and demonstrate its use in research and teaching where 97 students in two lab courses developed AR applications with it. Applications are presented on mobile, desktop and immersive systems using low cost 6-DOF input devices (Microsoft Kinect, Razer Hydra, SpaceNavigator), that we integrated into our framework.
---
paper_title: Mobile Collaborative Augmented Reality: the Augmented Stroll
paper_content:
The paper focuses on Augmented Reality systems in which interaction with the real world is augmented by the computer, the task being performed in the real world. We first define what mobile AR systems, collaborative AR systems and finally mobile and collaborative AR systems are. We then present the augmented stroll and its software design as one example of a mobile and collaborative AR system. The augmented stroll is applied to Archaeology in the MAGIC (Mobile Augmented Group Interaction in Context) project.
---
paper_title: Data Management Strategies for Mobile Augmented Reality
paper_content:
Any significant real-world application of mobile augmented reality will require a large model of location-bound data. While it may appear that a natural approach is to develop application-specific data formats and management strategies, we have found that such an approach actually prevents reuse of the data and ultimately produces additional complexity in developing the application. In contrast, we describe a three-tier architecture to manage a common data model for a set of applications. It is inspired by current Internet application frameworks and consists of a central storage layer using a common data model, a transformation layer responsible for filtering and adapting the data to the requirements of a particular application on request, and finally the applications themselves. We demonstrate our architecture in a scenario consisting of two multi-user capable mobile AR applications for collaborative navigation and annotation in a city environment.
---
paper_title: Human pacman: A mobile entertainment system with ubiquitous computing and tangible interaction over a wide outdoor area
paper_content:
Human Pacman is an interactive role-playing game that envisions bringing the computer gaming experience to a new level of emotional and sensory gratification by setting the real world as a playground. This is a physical fantasy game integrated with human-social and mobile-gaming that emphasizes collaboration and competition between players. By setting the game in a wide outdoor area, natural human-physical movements have become an integral part of the game. Pacmen and Ghosts are now human players in the real world experiencing mixed reality visualization from the wearable computers on them. Virtual cookies and actual physical objects are incorporated to provide novel experiences of seamless transitions between real and virtual worlds and a tangible human-computer interface, respectively. We believe Human Pacman is pioneering a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.
---
paper_title: Video see-through AR on consumer cell-phones
paper_content:
We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
---
paper_title: Map torchlight: a mobile augmented reality camera projector unit
paper_content:
The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.
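To highlight a point of interest precisely, a system like the one described above needs to know where a map location falls in the projector image; for a flat map, a planar homography between map coordinates and projector pixels captures exactly that relationship. The sketch below illustrates the mapping step only, under stated assumptions: the four reference correspondences are made up, whereas in practice they would come from tracking the map in the camera image and from projector calibration.

    # Minimal sketch: map paper-map coordinates (millimetres) to projector pixels
    # via a planar homography, so a point of interest can be lit precisely.
    import numpy as np
    import cv2

    map_pts = np.array([[0, 0], [200, 0], [200, 150], [0, 150]], dtype=np.float32)
    proj_pts = np.array([[80, 60], [720, 90], [700, 520], [60, 500]], dtype=np.float32)

    H, _ = cv2.findHomography(map_pts, proj_pts)

    poi_on_map = np.array([[[120.0, 85.0]]], dtype=np.float32)   # point of interest (mm)
    poi_on_projector = cv2.perspectiveTransform(poi_on_map, H)
    print("light up projector pixel", poi_on_projector.ravel())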
---
paper_title: Mobile augmented reality in the data center
paper_content:
Recent advances in compute power, graphics power, and cameras in mobile computing devices have facilitated the development of new augmented reality applications in the mobile device space. Mobile augmented reality provides a straightforward and natural way for users to understand complex data by overlaying visualization on top of a live video feed on their mobile device. In our data center mobile augmented reality project, the user points a mobile device camera at a rack of data center assets, and additional content about these assets is visually overlaid on top of each asset in the video stream from the mobile device camera. This correspondence between digital content and physical things or locations makes mobile augmented reality an intuitive user interface for interacting with the world around us. This paper describes augmented reality techniques for mobile devices and the motivations around using a multimarker computer vision-based technique for visualizing asset data in the data center. Our mobile augmented reality project enables system administrators to easily interact with hardware assets while they are in the data center, providing them with an additional tool to use in managing the data center.
---
paper_title: Comparison of optical and video see-through, head-mounted displays
paper_content:
One of the most promising and challenging future uses of head-mounted displays (HMDs) is in applications where virtual environments enhance rather than replace real environments. To obtain an enhanced view of the real environment, the user wears a see-through HMD to see 3D computer-generated objects superimposed on his/her real-world view. This see-through capability can be accomplished using either an optical or a video see-through HMD. We discuss the tradeoffs between optical and video see-through HMDs with respect to technological, perceptual, and human factors issues, and discuss our experience designing, building, using, and testing these HMDs.
---
paper_title: GroundCam: A Tracking Modality for Mobile Mixed Reality
paper_content:
Anywhere augmentation pursues the goal of lowering the initial investment of time and money necessary to participate in mixed reality work, bridging the gap between researchers in the field and regular computer users. Our paper contributes to this goal by introducing the GroundCam, a cheap tracking modality with no significant setup necessary. By itself, the GroundCam provides high frequency, high resolution relative position information similar to an inertial navigation system, but with significantly less drift. When coupled with a wide area tracking modality via a complementary Kalman filter, the hybrid tracker becomes a powerful base for indoor and outdoor mobile mixed reality work
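The coupling described above (high-frequency relative motion from the GroundCam, occasional absolute fixes from a wide-area tracker) can be illustrated with a plain complementary filter: integrate the relative deltas, then pull the estimate toward an absolute fix whenever one arrives. This is a simplified stand-in for the complementary Kalman filter named in the abstract, and the blend factor and data are invented.

    # Simplified complementary-filter stand-in for the hybrid tracker.

    def complementary_step(estimate, delta, absolute_fix, alpha=0.95):
        """Integrate the relative motion, then pull the result toward the
        absolute fix when one is available."""
        predicted = (estimate[0] + delta[0], estimate[1] + delta[1])
        if absolute_fix is None:
            return predicted
        return (alpha * predicted[0] + (1 - alpha) * absolute_fix[0],
                alpha * predicted[1] + (1 - alpha) * absolute_fix[1])

    position = (0.0, 0.0)
    steps = [                      # (GroundCam delta in metres, wide-area fix or None)
        ((0.30, 0.02), None),
        ((0.29, 0.01), None),
        ((0.31, 0.00), (0.95, 0.00)),   # occasional absolute correction
        ((0.30, -0.01), None),
    ]
    for delta, fix in steps:
        position = complementary_step(position, delta, fix)
        print(f"fused position ({position[0]:.2f}, {position[1]:.2f})")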
---
paper_title: BRIDGING THE GAPS : HYBRID TRACKING FOR ADAPTIVE MOBILE AUGMENTED REALITY
paper_content:
Tracking accuracy in a location-aware mobile system can change dynamically as a function of the user's location and other variables specific to the tracking technologies used. This is especially problematic for mobile augmented reality systems, which ideally require extremely precise position tracking for the user's head, but which may not always be able to achieve that level of accuracy. While it is possible to ignore variable positional accuracy in an augmented reality user interface, this can make for a confusing system; for example, when accuracy is low, virtual objects that are nominally registered with real ones may be too far off to be of use. To address this problem, we describe an experimental mobile augmented reality system that: (1) employs multiple position-tracking technologies, including ones that apply heuristics based on environmental knowledge; (2) coordinates these concurrently monitored tracking systems; and (3) automatically adapts the user interface to varying degrees of confidence in...
---
paper_title: Virtual Vouchers: Prototyping a Mobile Augmented Reality User Interface for Botanical Species Identification
paper_content:
The tools that botanists require for fieldwork must evolve and take on new forms. Of particular importance is the ability to identify existing and new species in the field. Mobile augmented reality systems can make it possible to access, view, and inspect a large database of virtual species examples side-by-side with physical specimens. In this paper, we present prototypes of a mobile augmented reality electronic field guide and techniques for displaying and inspecting computer vision-based visual search results in the form of virtual vouchers. Our work addresses head-movement controlled augmented reality for hands-free interaction and tangible augmented reality. We describe results from our design and investigation process and discuss observations and feedback from lab trials by botanists.
---
paper_title: User Interface Management Techniques for Collaborative Mobile Augmented Reality
paper_content:
Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user's overlaid virtual environment. The augmented view of the user's surroundings presents an interface to context-dependent operations, many of which are related to the objects in view—the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user's view of more important physical or virtual objects in the scene. We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.
---
paper_title: Mobile Collaborative Augmented Reality: the Augmented Stroll
paper_content:
The paper focuses on Augmented Reality systems in which interaction with the real world is augmented by the computer, the task being performed in the real world. We first define what mobile AR systems, collaborative AR systems and finally mobile and collaborative AR systems are. We then present the augmented stroll and its software design as one example of a mobile and collaborative AR system. The augmented stroll is applied to Archaeology in the MAGIC (Mobile Augmented Group Interaction in Context) project.
---
paper_title: Bringing user-generated content from Internet services to mobile augmented reality clients
paper_content:
In this paper we describe a system for bringing user-generated content to mobile augmented reality clients, taking into consideration the metadata required for visualizing it in a sensor-based tracking solution. Our proposal assumes that content is stored in multiple external Internet services, simply treated as a "cloud", thus making the mobile client service-agnostic. A prototype implementation was created for the Image Space service, and the lessons learned from integrating with the popular Flickr service are discussed.
---
paper_title: Indoor location sensing using geo-magnetism
paper_content:
We present an indoor positioning system that measures location using disturbances of the Earth's magnetic field caused by structural steel elements in a building. The presence of these large steel members warps the geomagnetic field in a way that is spatially varying but temporally stable. To localize, we measure the magnetic field using an array of e-compasses and compare the measurement with a previously obtained magnetic map. We demonstrate accuracy within 1 meter 88% of the time in experiments in two buildings and across multiple floors within the buildings. We discuss several constraint techniques that can maintain accuracy as the sample space increases.
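For readers unfamiliar with fingerprint-style positioning, a minimal sketch of the matching step follows: a live magnetometer reading is compared against a pre-surveyed magnetic map by nearest-neighbour search. The map layout, field values, and single-sensor reading are illustrative assumptions; the paper's own estimator uses an array of e-compasses and is not reproduced here.

```python
import numpy as np

# Hypothetical pre-surveyed map: one row per grid cell,
# columns = (x, y, Bx, By, Bz) recorded during the survey walk.
magnetic_map = np.array([
    [0.0, 0.0, 22.1, -4.3, 41.0],
    [1.0, 0.0, 25.6, -2.1, 39.5],
    [0.0, 1.0, 19.8, -6.7, 43.2],
    [1.0, 1.0, 27.3, -1.0, 38.8],
])

def localize(live_reading):
    """Return the (x, y) of the map cell whose stored field vector is
    closest (in Euclidean distance) to the live magnetometer reading."""
    diffs = magnetic_map[:, 2:] - np.asarray(live_reading, dtype=float)
    idx = int(np.argmin(np.linalg.norm(diffs, axis=1)))
    return magnetic_map[idx, :2]

print(localize([26.0, -1.5, 39.0]))  # -> [1. 0.]
```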
---
paper_title: Polaris: getting accurate indoor orientations for mobile devices using ubiquitous visual patterns on ceilings
paper_content:
Ubiquitous computing applications commonly use digital compass sensors to obtain orientation of a device relative to the magnetic north of the earth. However, these compass readings are always prone to significant errors in indoor environments due to presence of metallic objects in close proximity. Such errors can adversely affect the performance and quality of user experience of the applications utilizing digital compass sensors. In this paper, we propose Polaris, a novel approach to provide reliable orientation information for mobile devices in indoor environments. Polaris achieves this by aggregating pictures of the ceiling of an indoor environment and applies computer vision based pattern matching techniques to utilize them as orientation references for correcting digital compass readings. To show the feasibility of the Polaris system, we implemented the Polaris system on mobile devices, and field tested the system in multiple office buildings. Our results show that Polaris achieves 4.5° average orientation accuracy, which is about 3.5 times better than what can be achieved through sole use of raw digital compass readings.
---
paper_title: Aided eyes: eye activity sensing for daily life
paper_content:
Our eyes collect a considerable amount of information when we use them to look at objects. In particular, eye movement allows us to gaze at an object and shows our level of interest in the object. In this research, we propose a method that involves real-time measurement of eye movement for human memory enhancement; the method employs gaze-indexed images captured using a video camera that is attached to the user's glasses. We present a prototype system with an infrared-based corneal limbus tracking method. Although the existing eye tracker systems track eye movement with high accuracy, they are not suitable for daily use because the mobility of these systems is incompatible with a high sampling rate. Our prototype has small phototransistors, infrared LEDs, and a video camera, which make it possible to attach the entire system to the glasses. Additionally, the accuracy of this method is compensated by combining image processing methods and contextual information, such as eye direction, for information extraction. We develop an information extraction system with real-time object recognition in the user's visual attention area by using the prototype of an eye tracker and a head-mounted camera. We apply this system to (1) fast object recognition by using a SURF descriptor that is limited to the gaze area and (2) descriptor matching of a past-images database. Face recognition by using Haar-like object features and text logging by using OCR technology are also implemented. The combination of a low-resolution camera and a high-resolution, wide-angle camera is studied for high daily usability. The possibility of gaze-guided computer vision is discussed in this paper, as is the topic of communication by the photo transistor in the eye tracker and the development of a sensor system that has a high transparency.
---
paper_title: Data Management Strategies for Mobile Augmented Reality
paper_content:
Any significant real-world application of mobile augmented reality will require a large model of location-bound data. While it may appear that a natural approach is to develop application-specific data formats and management strategies, we have found that such an approach actually prevents reuse of the data and ultimately produces additional complexity in developing the application. In contrast, we describe a three-tier architecture to manage a common data model for a set of applications. It is inspired by current Internet application frameworks and consists of a central storage layer using a common data model, a transformation layer responsible for filtering and adapting the data to the requirements of a particular application on request, and finally of the applications themselves. We demonstrate our architecture in a scenario consisting of two multi-user capable mobile AR applications for collaborative navigation and annotation in a city environment.
---
paper_title: Bluetooth and WAP push based location-aware mobile advertising system
paper_content:
Advertising on mobile devices has large potential due to the very personal and intimate nature of the devices and high targeting possibilities. We introduce a novel B-MAD system for delivering permission-based location-aware mobile advertisements to mobile phones using Bluetooth positioning and Wireless Application Protocol (WAP) Push. We present a thorough quantitative evaluation of the system in a laboratory environment and qualitative user evaluation in form of a field trial in the real environment of use. Experimental results show that the system provides a viable solution for realizing permission-based mobile advertising.
---
paper_title: Mobile reality: A pda-based multimodal framework for synchronizing a hybrid tracking solution with 3d-graphics and location-sensitive speech interaction
paper_content:
A maintenance engineer who talks to pumps and pipes may not seem like the ideal person to entrust with keeping a factory running smoothly, but we hope that our Mobile Reality framework will enable such behavior in the future to be anything but suspicious! Described in this paper is how the Mobile Reality framework, running entirely on a Pocket PC, synchronizes a hybrid tracking solution to offer the user a seamless, location-dependent, mobile multimodal interface. The user interface juxtaposes a three-dimensional graphical view with a context-sensitive speech dialog centered upon objects located in the immediate vicinity of the mobile user. In addition, support for collaboration enables shared VRML browsing with annotation and a full-duplex voice channel.
---
paper_title: Indoor wayfinding: developing a functional interface for individuals with cognitive impairments.
paper_content:
PURPOSE: Assistive technology for wayfinding will significantly improve the quality of life for many individuals with cognitive impairments. The user interface of such a system is as crucial as the underlying implementation and localisation technology. We studied the user interface of an indoor navigation system for individuals with cognitive impairments. METHOD: We built a system using the Wizard-of-Oz technique that let us experiment with many guidance strategies and interface modalities. Through user studies, we evaluated various configurations of the user interface for accuracy of route completion, time to completion, and user preferences. We used a counter-balanced design that included different modalities (images, audio, and text) and different routes. RESULTS: We found that although users were able to use all types of modalities to find their way indoors, they varied significantly in their preferred modalities. We also found that timing of directions requires careful attention, as does providing users with confirmation messages at appropriate times. CONCLUSIONS: Our findings suggest that the ability to adapt indoor wayfinding devices for specific users' preferences and needs will be particularly important.
---
paper_title: Scan and Tilt – Towards Natural Interaction for Mobile Museum Guides
paper_content:
This paper presents a new interaction technique - scan and tilt - aiming to enable a more natural interaction with mobile museum guides. Our work combines multiple modalities - gestures, physical selection, location, graphical and voice. In particular, physical selection is obtained by scanning RFID tags associated with the artworks, and tilt gestures are used to control and navigate the user interface and multimedia information. We report on how it has been applied to a mobile museum guide in order to enhance the user experience, providing details on a first user test carried out on our prototype.
---
paper_title: Mobile augmented reality in the data center
paper_content:
Recent advances in compute power, graphics power, and cameras in mobile computing devices have facilitated the development of new augmented reality applications in the mobile device space. Mobile augmented reality provides a straightforward and natural way for users to understand complex data by overlaying visualization on top of a live video feed on their mobile device. In our data center mobile augmented reality project, the user points a mobile device camera at a rack of data center assets, and additional content about these assets is visually overlaid on top of each asset in the video stream from the mobile device camera. This correspondence between digital content and physical things or locations makes mobile augmented reality an intuitive user interface for interacting with the world around us. This paper describes augmented reality techniques for mobile devices and the motivations around using a multimarker computer vision-based technique for visualizing asset data in the data center. Our mobile augmented reality project enables system administrators to easily interact with hardware assets while they are in the data center, providing them with an additional tool to use in managing the data center.
---
paper_title: Combination of UWB and GPS for indoor-outdoor vehicle localization
paper_content:
GPS receivers are satellite-based devices widely used for vehicle localization that, given their limitations, are not suitable for performing within indoor or dense urban environments. On the other hand, ultra-wide band (UWB), a technology used for efficient wireless communication, has recently been used for vehicle localization in indoor environments with promising results. This paper focuses on the combination of both technologies for accurate positioning of vehicles in a mixed scenario (both indoor and outdoor situations), which is typical in some industrial applications. Our approach is based on combining sensor information in a Monte Carlo localization algorithm (also known as particle filter), which has revealed its suitability for probabilistically coping with a variety of sensory data. The performance of our approach has been satisfactorily tested on a real robot, endowed with a UWB master antenna and a GPS receiver, within an indoor-outdoor scenario where three UWB slave antennas were placed in the indoor area.
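As a rough illustration of the Monte Carlo localization idea referred to above, the sketch below weights a particle set by whichever measurements are available (a GPS fix outdoors, a UWB anchor range indoors) and then resamples. The workspace size, noise parameters, and measurement models are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0.0, 20.0, size=(N, 2))  # assumed 20 m x 20 m workspace
weights = np.full(N, 1.0 / N)

def update(gps_xy=None, uwb_anchor=None, uwb_range=None,
           gps_sigma=3.0, uwb_sigma=0.3):
    """Weight particles by the available measurements (GPS outdoors,
    UWB range indoors), then resample and jitter."""
    global particles, weights
    w = np.ones(N)
    if gps_xy is not None:  # Gaussian likelihood around the GPS fix
        d = np.linalg.norm(particles - gps_xy, axis=1)
        w *= np.exp(-0.5 * (d / gps_sigma) ** 2)
    if uwb_range is not None:  # likelihood of the measured anchor range
        d = np.linalg.norm(particles - uwb_anchor, axis=1)
        w *= np.exp(-0.5 * ((d - uwb_range) / uwb_sigma) ** 2)
    weights = (w + 1e-12) / (w + 1e-12).sum()
    idx = rng.choice(N, size=N, p=weights)  # simple multinomial resampling
    particles = particles[idx] + rng.normal(0.0, 0.1, size=(N, 2))

update(gps_xy=np.array([5.0, 5.0]),
       uwb_anchor=np.array([0.0, 0.0]), uwb_range=7.0)
print(particles.mean(axis=0))  # fused position estimate
```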
---
paper_title: Archeoguide: system architecture of a mobile outdoor augmented reality system
paper_content:
We present the system architecture of a mobile outdoor augmented reality system for the Archeoguide project. We begin with a short introduction to the project. Then we present the hardware we chose for the mobile system and we describe the system architecture we designed for the software implementation. We conclude this paper with the first results obtained from experiments we made during our trials at ancient Olympia in Greece.
---
paper_title: An accurate ultra wideband (UWB) ranging for precision asset location
paper_content:
This paper investigates a ranging method employing ultra wideband (UWB) pulses under the existence of the line of sight (LOS) path in a multipath environment. Our method is based on the estimation of time of arrival of the first multipath. It averages the received pulses over multiple time frames, performs a correlation operation on the averaged signal, and detects the peak of the correlated signal. Our method reduces the ranging error compared with conventional methods, and its accuracy is close to the Cramer-Rao lower bound (CRLB) even for a low SNR.
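A toy version of the averaging, correlation, and peak-picking pipeline the abstract outlines is sketched below. The sampling rate, pulse template, and simple argmax peak pick are assumptions; the paper's first-path detection is more careful than taking the global correlation maximum.

```python
import numpy as np

C = 3e8     # speed of light, m/s
FS = 10e9   # assumed 10 GS/s sampling rate

def estimate_range(frames, template):
    """frames: (n_frames, n_samples) received UWB frames.  Average the frames
    to suppress noise, correlate with the pulse template, and treat the
    correlation peak as the line-of-sight time of arrival."""
    avg = frames.mean(axis=0)
    corr = np.correlate(avg, template, mode="valid")
    toa = np.argmax(np.abs(corr)) / FS
    return C * toa  # one-way range in metres

# Synthetic example: pulse arriving 100 samples (10 ns, i.e. ~3 m) into the frame.
template = np.array([0.0, 1.0, -1.0, 0.0])
frame = np.zeros(1024)
frame[100:104] = template
frames = frame + 0.05 * np.random.default_rng(0).normal(size=(50, 1024))
print(round(estimate_range(frames, template), 2), "m")
```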
---
paper_title: Implementation of an augmented reality system on a PDA
paper_content:
We present a client/server implementation for running demanding mobile AR application on a PDA device. The system incorporates various data compression methods to make it run as fast as possible on a wide range of communication networks, from GSM to WLAN.
---
paper_title: Structured visual markers for indoor pathfinding
paper_content:
We present a mobile augmented reality (AR) system to guide a user through an unfamiliar building to a destination room. The system presents a world-registered wireframe model of the building labeled with directional information in a see-through heads-up display, and a three-dimensional world-in-miniature (WIM) map on a wrist-worn pad that also acts as an input device. Tracking is done using a combination of wall-mounted ARToolkit markers observed by a head-mounted camera, and an inertial tracker. To allow coverage of arbitrarily large areas with a limited set of markers, a structured marker re-use scheme based on graph coloring has been developed.
---
paper_title: LANDMARC: indoor location sensing using active RFID
paper_content:
Growing convergence among mobile computing devices and embedded technology sparks the development and deployment of “context-aware” applications, where location is the most essential context. In this paper we present LANDMARC, a location sensing prototype system that uses Radio Frequency Identification (RFID) technology for locating objects inside buildings. The major advantage of LANDMARC is that it improves the overall accuracy of locating objects by utilizing the concept of reference tags. Based on experimental analysis, we demonstrate that active RFID is a viable and cost-effective candidate for indoor location sensing. Although RFID is not designed for indoor location sensing, we point out three major features that should be added to make RFID technologies competitive in this new and growing market.
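A compact sketch of the reference-tag idea follows: the tracked tag's RSSI vector (one value per reader) is compared with those of the reference tags, and the k closest reference tags in signal space vote for the position with weights proportional to 1/E^2, the weighting reported for LANDMARC. The reader count, RSSI values, and positions below are invented for illustration.

```python
import numpy as np

# Hypothetical data: RSSI of each reference tag as seen by 4 readers,
# plus each reference tag's known (x, y) position in metres.
ref_rssi = np.array([[-60, -72, -55, -80],
                     [-65, -70, -58, -75],
                     [-70, -60, -72, -66],
                     [-55, -78, -50, -85]], dtype=float)
ref_pos = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 3.0], [0.5, 0.5]])

def landmarc_locate(tag_rssi, k=2):
    """Find the k reference tags nearest in RSSI space and average their
    positions, weighting each by 1 / E^2 (E = Euclidean RSSI distance)."""
    e = np.linalg.norm(ref_rssi - np.asarray(tag_rssi, dtype=float), axis=1)
    nearest = np.argsort(e)[:k]
    w = 1.0 / (e[nearest] ** 2 + 1e-9)
    w /= w.sum()
    return (w[:, None] * ref_pos[nearest]).sum(axis=0)

print(landmarc_locate([-62, -71, -56, -78]))
```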
---
paper_title: Human pacman: A mobile entertainment system with ubiquitous computing and tangible interaction over a wide outdoor area
paper_content:
Human Pacman is an interactive role-playing game that envisions to bring the computer gaming experience to a new level of emotional and sensory gratification by setting the real world as a playground. This is a physical fantasy game integrated with human-social and mobile-gaming that emphasizes on collaboration and competition between players. By setting the game in a wide outdoor area, natural human-physical movements have become an integral part of the game. Pacmen and Ghosts are now human players in the real world experiencing mixed reality visualization from the wearable computers on them. Virtual cookies and actual physical objects are incorporated to provide novel experiences of seamless transitions between real and virtual worlds and tangible human computer interface respectively. We believe Human Pacman is pioneering a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.
---
paper_title: Video see-through AR on consumer cell-phones
paper_content:
We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
---
paper_title: Augmented reality in a wide area sentient environment
paper_content:
Augmented reality (AR) both exposes and supplements the user's view of the real world. Previous AR work has focussed on the close registration of real and virtual objects, which requires very accurate real-time estimates of head position and orientation. Most of these systems have been tethered and restricted to small volumes. In contrast, we have chosen to concentrate on allowing the AR user to roam freely within an entire building. At AT&T Laboratories Cambridge we provide personnel with AR services using data from an ultrasonic tracking system, called the Bat system, which has been deployed building-wide. We have approached the challenge of implementing a wide-area, in-building AR system in two different ways. The first uses a head-mounted display connected to a laptop, which combines sparse position measurements from the Bat system with more frequent rotational information from an inertial tracker to render annotations and virtual objects that relate to or coexist with the real world. The second uses a PDA to provide a convenient portal with which the user can quickly view the augmented world. These systems can be used to annotate the world in a more-or-less seamless way, allowing a richer interaction with both real and virtual objects.
---
paper_title: Managing Complex Augmented Reality Models
paper_content:
Mobile augmented reality requires georeferenced data to present world-registered overlays. To cover a wide area and all artifacts and activities, a database containing this information must be created, stored, maintained, delivered, and finally used by the client application. We present a data model and a family of techniques to address these needs.
---
paper_title: The Cricket location-support system
paper_content:
This paper presents the design, implementation, and evaluation of Cricket , a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration.
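The distance inference rests on the different propagation speeds of the concurrent radio and ultrasonic signals: the RF packet arrives essentially instantly over room scales, so the ultrasound lag multiplied by the speed of sound gives range. The sketch below illustrates that computation and a nearest-beacon decision; the timestamps and space names are invented, and the real listener additionally filters multipath and interference.

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)

def beacon_distance(t_rf_arrival, t_ultrasound_arrival):
    """RF propagation is negligible over room scales, so the RF arrival
    approximates the transmission time; the ultrasound lag gives distance."""
    return SPEED_OF_SOUND * (t_ultrasound_arrival - t_rf_arrival)

def nearest_space(arrivals):
    """arrivals: {space_name: (t_rf, t_us)}.  Associate the listener with
    the beacon, and hence the space, that is physically closest."""
    return min(arrivals, key=lambda s: beacon_distance(*arrivals[s]))

arrivals = {"Room 511": (0.000, 0.0061), "Corridor": (0.000, 0.0104)}
print(nearest_space(arrivals),
      round(beacon_distance(*arrivals["Room 511"]), 2), "m")
```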
---
paper_title: ARToolKitPlus for Pose Tracking on Mobile Devices
paper_content:
In this paper we present ARToolKitPlus, a successor to the popular ARToolKit pose tracking library. ARToolKitPlus has been optimized and extended for the usage on mobile devices such as smartphones, PDAs and Ultra Mobile PCs (UMPCs). We explain the need and specific requirements of pose tracking on mobile devices and how we met those requirements. To prove the applicability we performed an extensive benchmark series on a broad range of off-the-shelf handhelds.
---
paper_title: Mobile collaborative augmented reality
paper_content:
The combination of mobile computing and collaborative augmented reality into a single system makes the power of computer enhanced interaction and communication in the real world accessible anytime and everywhere. The paper describes our work to build a mobile collaborative augmented reality system that supports true stereoscopic 3D graphics, a pen and pad interface and direct interaction with virtual objects. The system is assembled from off-the-shelf hardware components and serves as a basic test bed for user interface experiments related to computer supported collaborative work in augmented reality. A mobile platform implementing the described features and collaboration between mobile and stationary users are demonstrated.
---
paper_title: Marker tracking and HMD calibration for a video-based augmented reality conferencing system
paper_content:
We describe an augmented reality conferencing system which uses the overlay of virtual images on the real world. Remote collaborators are represented on virtual monitors which can be freely positioned about a user in space. Users can collaboratively view and interact with virtual objects using a shared virtual whiteboard. This is possible through precise virtual image registration using fast and accurate computer vision techniques and head mounted display (HMD) calibration. We propose a method for tracking fiducial markers and a calibration method for optical see-through HMD based on the marker tracking.
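The registration step in marker-based AR of this kind amounts to recovering the camera pose from the four detected corners of a square fiducial of known size. The sketch below uses OpenCV's generic PnP solver rather than the paper's own pose estimation and HMD calibration procedure; the intrinsics, marker size, and corner pixel coordinates are placeholder values.

```python
import numpy as np
import cv2

MARKER_SIZE = 0.08  # assumed 8 cm square marker
# 3D corner coordinates in the marker's own frame (z = 0 plane).
object_pts = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]],
                      dtype=np.float32) * (MARKER_SIZE / 2)

# Hypothetical intrinsics and detected corner pixels (normally obtained from
# camera calibration and from the marker detector, respectively).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
image_pts = np.array([[300, 220], [340, 222], [338, 262], [298, 260]],
                     dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print("camera->marker translation (m):", tvec.ravel())
```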
---
paper_title: Shape recognition and pose estimation for mobile augmented reality
paper_content:
In this paper we present Nestor, a system for real-time recognition and camera pose estimation from planar shapes. The system allows shapes that carry contextual meanings for humans to be used as Augmented Reality (AR) tracking fiducials. The user can teach the system new shapes at runtime by showing them to the camera. The learned shapes are then maintained by the system in a shape library. Nestor performs shape recognition by analyzing contour structures and generating projective invariant signatures from their concavities. The concavities are further used to extract features for pose estimation and tracking. Pose refinement is carried out by minimizing the reprojection error between sample points on each image contour and its library counterpart. Sample points are matched by evolving an active contour in real time. Our experiments show that the system provides stable and accurate registration, and runs at interactive frame rates on a Nokia N95 mobile phone.
---
paper_title: UMAR: Ubiquitous Mobile Augmented Reality
paper_content:
In this paper we discuss the prospects of using marker-based Augmented Reality for context-aware applications on mobile phones. We also present UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality. As a step towards this, we have successfully ported the ARToolkit to consumer mobile phones running on the Symbian platform and present results around this. We also present three sample applications based on UMAR and planned future case study work.
---
paper_title: OpenTracker-an open software architecture for reconfigurable tracking based on XML
paper_content:
This paper describes OpenTracker, an open software architecture that provides a generic solution to the different tasks involved in tracking input devices and processing tracking data for virtual environments. It combines a highly modular design with a configuration syntax based on XML, thus taking full advantage of this new technology. OpenTracker is a first attempt towards a "write once, track anywhere" approach to virtual reality application development.
---
paper_title: Efficient Extraction of Robust Image Features on Mobile Devices
paper_content:
Recent convergence of imaging sensors and general purpose processors on mobile phones creates an opportunity for a new class of augmented reality applications. Robust image feature extraction is a crucial enabler of this type of systems. In this article, we discuss an efficient mobile phone implementation of a state-of-the-art algorithm for computing robust image features called SURF. We implement several improvements to the basic algorithm that significantly improve its performance and reduce its memory footprint making the use of this algorithm on the mobile phone practical. Our prototype implementation has been applied to several practical applications such as image search, object recognition and augmented reality applications.
---
paper_title: Fast Feature Pyramids for Object Detection
paper_content:
Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).
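The core trick is extrapolating channel features across nearby scales with a power law instead of recomputing them at every pyramid level, roughly as sketched below. The exponent value is an assumption here; in practice it is estimated per feature channel as the paper describes.

```python
import numpy as np

LAMBDA = 0.11  # assumed power-law exponent for the feature channel

def approximate_feature(feature_at_octave, octave_scale, target_scale):
    """Approximate the mean feature response at target_scale from the
    response computed explicitly at the nearest octave, using the power
    law f(s2) ~ f(s1) * (s2 / s1) ** (-lambda)."""
    ratio = target_scale / octave_scale
    return feature_at_octave * ratio ** (-LAMBDA)

# Feature computed explicitly at scale 1.0; approximate intermediate scales.
f1 = np.array([0.42, 0.37, 0.51])
for s in (1.19, 1.41, 1.68):
    print(s, approximate_feature(f1, 1.0, s))
```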
---
paper_title: BRIEF: Binary robust independent elementary features
paper_content:
We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.
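A stripped-down sketch of the descriptor and its matching follows: fixed pairs of test points are compared by intensity, the outcomes form a bit string, and two descriptors are compared by Hamming distance. Patch smoothing and the particular sampling geometry used in the paper are omitted; the patch size, bit count, and test data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
N_BITS = 256
PATCH = 31
# Fixed random test-point pairs inside the patch, sampled once and reused.
pairs = rng.integers(0, PATCH, size=(N_BITS, 4))  # columns: x1, y1, x2, y2

def brief_descriptor(patch):
    """Bit i is 1 if the intensity at point A_i is less than at point B_i."""
    x1, y1, x2, y2 = pairs.T
    return (patch[y1, x1] < patch[y2, x2]).astype(np.uint8)

def hamming(d1, d2):
    """Descriptor distance is simply the number of differing bits."""
    return int(np.count_nonzero(d1 != d2))

a = rng.integers(0, 256, size=(PATCH, PATCH))
b = a.copy()
b[5:10, 5:10] += 30  # slightly altered patch
print(hamming(brief_descriptor(a), brief_descriptor(b)), "of", N_BITS, "bits differ")
```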
---
paper_title: Outdoors augmented reality on mobile phone using loxel-based visual feature organization
paper_content:
We have built an outdoors augmented reality system for mobile phones that matches camera-phone images against a large database of location-tagged images using a robust image retrieval algorithm. We avoid network latency by implementing the algorithm on the phone and deliver excellent performance by adapting a state-of-the-art image retrieval algorithm based on robust local descriptors. Matching is performed against a database of highly relevant features, which is continuously updated to reflect changes in the environment. We achieve fast updates and scalability by pruning of irrelevant features based on proximity to the user. By compressing and incrementally updating the features stored on the phone we make the system amenable to low-bandwidth wireless connections. We demonstrate system robustness on a dataset of location-tagged images and show a smart-phone implementation that achieves a high image matching rate while operating in near real-time.
---
paper_title: Robust Visual Tracking via Structured Multi-Task Sparse Learning
paper_content:
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing $\ell_{p,q}$ mixed norms (specifically $p \in \{2, \infty\}$ and $q = 1$), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular $L_1$ tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation (denoted as the $L_{11}$ tracker) when $p = q = 1$. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.
---
paper_title: Pose tracking from natural features on mobile phones
paper_content:
In this paper we present two techniques for natural feature tracking in real-time on mobile phones. We achieve interactive frame rates of up to 20 Hz for natural feature tracking from textured planar targets on current-generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns. While SIFT is known to be a strong, but computationally expensive feature descriptor, Ferns classification is fast, but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We give detailed descriptions on how we modified both approaches to make them suitable for mobile phones. We present evaluations on robustness and performance on various devices and finally discuss their appropriateness for augmented reality applications.
---
paper_title: Distinctive Image Features from Scale-Invariant Keypoints
paper_content:
The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade.
---
paper_title: Surf: Speeded up robust features
paper_content:
In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.
---
paper_title: Machine learning for high-speed corner detection
paper_content:
Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.
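For reference, the segment test that the learned detector accelerates can be written directly, as in the naive sketch below; the circle offsets and thresholds follow the commonly published formulation rather than being taken verbatim from the paper, and the example image is a toy case.

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """Segment test: the pixel is a corner if at least n contiguous circle
    pixels are all brighter than centre+t or all darker than centre-t.
    (The learned decision tree in the paper is an optimisation of this test.)"""
    c = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for sign in (1, -1):
        mask = (sign * (ring - c)) > t
        doubled = np.concatenate([mask, mask])  # handle wrap-around runs
        run, best = 0, 0
        for m in doubled:
            run = run + 1 if m else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

img = np.full((9, 9), 50, dtype=np.uint8)
img[4, 4] = 200  # isolated bright pixel: all 16 ring pixels are darker
print(is_fast_corner(img, 4, 4))  # True
```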
---
paper_title: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
paper_content:
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.
---
paper_title: Natural Feature Tracking for Augmented Reality
paper_content:
Natural scene features stabilize and extend the tracking range of augmented reality (AR) pose-tracking systems. We develop robust computer vision methods to detect and track natural features in video images. Point and region features are automatically and adaptively selected for properties that lead to robust tracking. A multistage tracking algorithm produces accurate motion estimates, and the entire system operates in a closed-loop that stabilizes its performance and accuracy. We present demonstrations of the benefits of using tracked natural features for AR applications that illustrate direct scene annotation, pose stabilization, and extendible tracking range. Our system represents a step toward integrating vision with graphics to produce robust wide-area augmented realities.
---
paper_title: A hybrid pose tracking approach for handheld augmented reality
paper_content:
With the rapid advances in mobile computing, handheld Augmented Reality draws increasing attention. Pose tracking of handheld devices is of fundamental importance to register virtual information with the real world and is still a crucial challenge. In this paper, we present a low-cost, accurate and robust approach combining fiducial tracking and inertial sensors for handheld pose tracking. Two LEDs are used as fiducial markers to indicate the position of the handheld device. They are detected by an adaptive thresholding method which is robust to illumination changes, and then tracked by a Kalman filter. By combining inclination information provided by the on-device accelerometer, 6 degree-of-freedom (DoF) pose is estimated. Handheld devices are freed from computer vision processing, leaving most computing power available for applications. When one LED is occluded, the system is still able to recover the 6-DoF pose. Performance evaluation of the proposed tracking approach is carried out by comparing with the ground truth data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved an accuracy of 1.77 cm in position estimation and 4.15 degrees in orientation estimation.
---
paper_title: Object Tracking by Oversampling Local Features
paper_content:
In this paper, we present the ALIEN tracking method that exploits oversampling of local invariant representations to build a robust object/context discriminative classifier. To this end, we use multiple instances of scale invariant local features weakly aligned along the object template. This allows taking into account the 3D shape deviations from planarity and their interactions with shadows, occlusions, and sensor quantization for which no invariant representations can be defined. A non-parametric learning algorithm based on the transitive matching property discriminates the object from the context and prevents improper object template updating during occlusion. We show that our learning rule has asymptotic stability under mild conditions and confirms the drift-free capability of the method in long-term tracking. A real-time implementation of the ALIEN tracker has been evaluated in comparison with the state-of-the-art tracking systems on an extensive set of publicly available video sequences that represent most of the critical conditions occurring in real tracking environments. We have reported superior or equal performance in most of the cases and verified tracking with no drift in very long video sequences.
---
paper_title: PCA-SIFT: a more distinctive representation for local image descriptors
paper_content:
Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching.
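The descriptor construction reduces to learning a PCA basis from many flattened gradient patches offline and projecting each new patch onto it, roughly as sketched below with random vectors standing in for real gradient patches; the dimensions and component count are illustrative, not the paper's.

```python
import numpy as np

def fit_pca(patch_vectors, n_components=36):
    """patch_vectors: (n_patches, d) flattened, normalised gradient patches.
    Returns the mean vector and the top principal directions."""
    mean = patch_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(patch_vectors - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_sift_descriptor(patch_vector, mean, basis):
    """Project a new gradient patch onto the learned eigenspace."""
    return basis @ (patch_vector - mean)

# Toy data standing in for gradient patches around detected keypoints.
rng = np.random.default_rng(1)
train = rng.normal(size=(500, 128))
mean, basis = fit_pca(train, n_components=20)
print(pca_sift_descriptor(train[0], mean, basis).shape)  # (20,)
```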
---
paper_title: Caffe: Convolutional Architecture for Fast Feature Embedding
paper_content:
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ($\approx$ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
---
paper_title: BRISK: Binary Robust invariant scalable keypoints
paper_content:
Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.
---
paper_title: Streaming mobile augmented reality on mobile phones
paper_content:
Continuous recognition and tracking of objects in live video captured on a mobile device enables real-time user interaction. We demonstrate a streaming mobile augmented reality system with 1 second latency. User interest is automatically inferred from camera movements, so the user never has to press a button. Our system is used to identify and track book and CD covers in real time on a phone's viewfinder. Efficient motion estimation is performed at 30 frames per second on a phone, while fast search through a database of 20,000 images is performed on a server.
---
paper_title: Feature Tracking for Mobile Augmented Reality Using Video Coder Motion Vectors
paper_content:
We propose a novel, low-complexity, tracking scheme that uses motion vectors directly from a video coder. We compare our tracking algorithm against ground truth data, and show that we can achieve a high level of accuracy, even though the motion vectors are rate-distortion optimized and do not represent true motion. We develop a framework for tracking in video sequences with various GOP structures. Such a scheme would find applications in the context of Mobile Augmented Reality. The proposed feature tracking algorithm can significantly reduce the required rate of feature extraction and matching.
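The idea can be illustrated in a few lines: instead of re-detecting and re-matching features, each tracked point is shifted by the decoded motion vector of the macroblock it falls in. The sketch below assumes vectors already expressed as block displacements from the previous frame to the current one (sign conventions differ across codecs) and a fixed 16x16 block size.

```python
import numpy as np

def propagate_features(features, motion_vectors, block_size=16):
    """features: (n, 2) feature positions (x, y) in the previous frame.
    motion_vectors: (rows, cols, 2) per-macroblock displacement from the
    previous frame to the current frame, decoded from the bitstream.
    Each feature is moved by the vector of the block containing it, so no
    fresh feature extraction or descriptor matching is required."""
    out = []
    for x, y in features:
        bx, by = int(x) // block_size, int(y) // block_size
        mvx, mvy = motion_vectors[by, bx]
        out.append((x + mvx, y + mvy))
    return np.array(out)

mv = np.zeros((4, 4, 2))
mv[...] = (3.0, -2.0)  # hypothetical uniform pan of the whole frame
print(propagate_features(np.array([[20.0, 35.0]]), mv))  # -> [[23. 33.]]
```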
---
paper_title: SURFTrac: Efficient tracking and continuous object recognition using local feature descriptors
paper_content:
We present an efficient algorithm for continuous image recognition and feature descriptor tracking in video which operates by reducing the search space of possible interest points inside of the scale space image pyramid. Instead of performing tracking in 2D images, we search and match candidate features in local neighborhoods inside the 3D image pyramid without computing their feature descriptors. The candidates are further validated by fitting to a motion model. The resulting tracked interest points are more repeatable and resilient to noise, and descriptor computation becomes much more efficient because only those areas of the image pyramid that contain features are searched. We demonstrate our method on real-time object recognition and label augmentation running on a mobile device.
---
paper_title: Scene modelling, recognition and tracking with invariant image features
paper_content:
We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object, and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.
---
paper_title: Real-Time Detection and Tracking for Augmented Reality on Mobile Phones
paper_content:
In this paper, we present three techniques for 6DOF natural feature tracking in real time on mobile phones. We achieve interactive frame rates of up to 30 Hz for natural feature tracking from textured planar targets on current generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns plus a template-matching-based tracker. While SIFT is known to be a strong, but computationally expensive feature descriptor, Ferns classification is fast, but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We give detailed descriptions on how we modified both approaches to make them suitable for mobile phones. The template-based tracker further increases the performance and robustness of the SIFT- and Ferns-based approaches. We present evaluations on robustness and performance and discuss their appropriateness for Augmented Reality applications.
---
paper_title: Multiple target detection and tracking with guaranteed framerates on mobile phones
paper_content:
In this paper we present a novel method for real-time pose estimation and tracking on low-end devices such as mobile phones. The presented system can track multiple known targets in real-time and simultaneously detect new targets for tracking. We present a method to automatically and dynamically balance the quality of detection and tracking to adapt to a variable time budget and ensure a constant frame rate. Results from real data of a mobile phone Augmented Reality system demonstrate the efficiency and robustness of the described approach. The system can track 6 planar targets on a mobile phone simultaneously at framerates of 23fps.
---
paper_title: CNN Features Off-the-Shelf: An Astounding Baseline for Recognition
paper_content:
Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
---
paper_title: A Mobile Vision System for Urban Detection with Informative Local Descriptors
paper_content:
We present a computer vision system for the detection and identification of urban objects from mobile phone imagery, e.g., for the application of tourist information services. Recognition is based on MAP decision making over weak object hypotheses from local descriptor responses in the mobile imagery. We present an improvement over the standard SIFT key detector [7] by selecting only informative (i-SIFT) keys for descriptor matching. Selection is applied first to reduce the complexity of the object model and second to accelerate detection by selective filtering. We present results on the MPG-20 mobile phone imagery with severe illumination, scale and viewpoint changes in the images, performing with ≈ 98% accuracy in identification, efficient (100%) background rejection, efficient (0%) false alarm rate, and reliable quality of service under extreme illumination conditions, significantly improving standard SIFT based recognition in every sense, providing - important for mobile vision - runtimes which are ≈ 8 (≈24) times faster for the MPG-20 (ZuBuD) database.
---
paper_title: ORB: An efficient alternative to SIFT or SURF
paper_content:
Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.
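The OpenCV implementation makes the detector and descriptor easy to try; the snippet below (an illustrative use of that library, not the authors' original code) extracts ORB keypoints from two frames and matches the binary descriptors with the Hamming distance. The file names are placeholders for real frames.

```python
import cv2

# Load two consecutive frames as grayscale images (placeholder paths).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are matched with the Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches; best distance:", matches[0].distance)
```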
---
paper_title: Online tracking and reacquisition using co-trained generative and discriminative trackers
paper_content:
Visual tracking is a challenging problem, as an object may change its appearance due to viewpoint variations, illumination changes, and occlusion. Also, an object may leave the field of view and then reappear. In order to track and reacquire an unknown object with limited labeling data, we propose to learn these changes online and build a model that describes all seen appearance while tracking. To address this semi-supervised learning problem, we propose a co-training based approach to continuously label incoming data and online update a hybrid discriminative generative model. The generative model uses a number of low dimension linear subspaces to describe the appearance of the object. In order to reacquire an object, the generative model encodes all the appearance variations that have been seen. A discriminative classifier is implemented as an online support vector machine, which is trained to focus on recent appearance variations. The online co-training of this hybrid approach accounts for appearance changes and allows reacquisition of an object after total occlusion. We demonstrate that under challenging situations, this method has strong reacquisition ability and robustness to distracters in background.
---
paper_title: InfoSPOT: A mobile Augmented Reality method for accessing building information through a situation awareness approach
paper_content:
The Architecture, Engineering, Construction, and Owner/Operator (AECO) industry is constantly searching for new methods for increasing efficiency and productivity. Facility Managers (FMs), as a part of the owner/operator role, work in complex and dynamic environments where critical decisions are constantly made. This decision-making process and its consequent performance can be improved by enhancing Situation Awareness (SA) of the FMs through new digital technologies. In this paper, InfoSPOT (Information Surveyed Point for Observation and Tracking) is recommended to FMs as a mobile Augmented Reality (AR) tool for accessing information about the facilities they maintain. AR has been considered as a viable option to reduce inefficiencies of data overload by providing FMs with a SA-based tool for visualizing their "real-world" environment with added interactive data. A prototype of the AR application was developed and a user participation experiment and analysis conducted to evaluate the features of InfoSPOT. This innovative application of AR has the potential to improve construction practices, and in this case, facility management.
---
paper_title: GroundCam: A Tracking Modality for Mobile Mixed Reality
paper_content:
Anywhere augmentation pursues the goal of lowering the initial investment of time and money necessary to participate in mixed reality work, bridging the gap between researchers in the field and regular computer users. Our paper contributes to this goal by introducing the GroundCam, a cheap tracking modality with no significant setup necessary. By itself, the GroundCam provides high frequency, high resolution relative position information similar to an inertial navigation system, but with significantly less drift. When coupled with a wide area tracking modality via a complementary Kalman filter, the hybrid tracker becomes a powerful base for indoor and outdoor mobile mixed reality work
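To make the fusion idea concrete, here is a toy complementary filter in Python: the high-rate relative tracker is integrated every frame, and an occasional absolute fix pulls the estimate back to bound drift. The gain and the update model are illustrative assumptions, not the paper's exact Kalman filter.

```python
# Toy complementary fusion of a relative tracker with a wide-area absolute fix.
def fuse_step(position, odom_delta, absolute_fix=None, alpha=0.05):
    # Dead-reckon with the high-frequency relative measurement (GroundCam-style).
    position = [p + d for p, d in zip(position, odom_delta)]
    # When a low-rate absolute fix is available, blend toward it to cancel drift.
    if absolute_fix is not None:
        position = [(1.0 - alpha) * p + alpha * a
                    for p, a in zip(position, absolute_fix)]
    return position

# Example: integrate odometry at frame rate, correct once after 30 frames.
pos = [0.0, 0.0]
for step in range(30):
    pos = fuse_step(pos, odom_delta=[0.01, 0.0],
                    absolute_fix=[0.3, 0.0] if step == 29 else None)
```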
---
paper_title: Model-based visual tracking for outdoor augmented reality applications
paper_content:
Outdoor augmented reality (AR) applications rely on hybrid tracking (GPS, digital compass, visual) for registration. RSC has developed a real-time visual tracking system that uses visual cues of buildings in an urban environment for correcting the results of a conventional tracking system. This approach relies on knowledge of a CAD model of the building. It not only provides motion estimation, but also absolute orientation/position. It is based on the "visual servoing" approach, originally developed for robotics tasks. We have demonstrated this approach in real-time at a building on the NRL campus. This poster shows the approach and results. The concept can be generalized to any scenario where a CAD model is available. This system is being prepared for integration into the NRL system BARS (Battlefield Augmented Reality System).
---
paper_title: Sketching up the world: in situ authoring for mobile Augmented Reality
paper_content:
We present a novel system allowing in situ content creation for mobile Augmented Reality in unprepared environments. This system targets smartphones and therefore allows a spontaneous authoring while in place. We describe two different scenarios, which are depending on the size of the working environment and consequently use different tracking techniques. A natural feature-based approach for planar targets is used for small working spaces whereas for larger working environments, such as in outdoor scenarios, a panoramic-based orientation tracking is deployed. Both are integrated into one system allowing the user to use the same interaction for creating the content applying a set of simple, yet powerful modeling functions for content creation. The resulting content for Augmented Reality can be shared with other users using a dedicated content server or kept in a private inventory for later use.
---
paper_title: Going out: robust model-based tracking for outdoor augmented reality
paper_content:
This paper presents a model-based hybrid tracking system for outdoor augmented reality in urban environments enabling accurate, realtime overlays for a handheld device. The system combines several well-known approaches to provide a robust experience that surpasses each of the individual components alone: an edge-based tracker for accurate localisation, gyroscope measurements to deal with fast motions, measurements of gravity and magnetic field to avoid drift, and a back store of reference frames with online frame selection to re-initialize automatically after dynamic occlusions or failures. A novel edge-based tracker dispenses with the conventional edge model, and uses instead a coarse, but textured, 3D model. This yields several advantages: scale-based detail culling is automatic, appearance-based edge signatures can be used to improve matching and the models needed are more commonly available. The accuracy and robustness of the resulting system is demonstrated with comparisons to map-based ground truth data.
---
paper_title: A robust hybrid tracking system for outdoor augmented reality
paper_content:
We present a real-time hybrid tracking system that integrates gyroscopes and line-based vision tracking technology. Gyroscope measurements are used to predict orientation and image line positions. Gyroscope drift is corrected by vision tracking. System robustness is achieved by using a heuristic control system to evaluate measurement quality and select measurements accordingly. Experiments show that the system achieves robust, accurate, and real-time performance for outdoor augmented reality.
---
paper_title: Location-based augmented reality on mobile phones
paper_content:
The computational capability of mobile phones has been rapidly increasing, to the point where augmented reality has become feasible on cell phones. We present an approach to indoor localization and pose estimation in order to support augmented reality applications on a mobile phone platform. Using the embedded camera, the application localizes the device in a familiar environment and determines its orientation. Once the 6 DOF pose is determined, 3D virtual objects from a database can be projected into the image and displayed for the mobile user. Off-line data acquisition consists of acquiring images at different locations in the environment. The online pose estimation is done by a feature-based matching between the cell phone image and an image selected from the precomputed database using the phone's sensors (accelerometer and magnetometer). The application enables the user both to visualize virtual objects in the camera image and to localize the user in a familiar environment. We describe in detail the process of building the database and the pose estimation algorithm used on the mobile phone. We evaluate the algorithm performance as well as its accuracy in terms of reprojection distance of the 3D virtual objects in the cell phone image.
---
paper_title: Fusion of vision, GPS and 3D gyro data in solving camera registration problem for direct visual navigation
paper_content:
This paper presents a precise and robust camera registration solution for the novel vision-based road navigation system - VICNAS, which superimposes virtual 3D navigation indicators and traffic signs upon the real road view in an Augmented Reality (AR) space. Traditional vision-based or inertial-sensor-based solutions to the registration problem are mostly designed for well-structured environments, which are unavailable in a wide-open, uncontrolled road environment for navigation purposes. This paper proposes a hybrid system that combines computer vision, GPS and 3D inertial gyroscope technologies to provide precise and robust camera pose estimation. The fusion approach is based on our PMM (parameterized model matching) algorithm, in which the road shape model is derived from the digital map data, and matched with road features extracted from real images. Inertial data estimates the initial possible motion, and also serves as a relative tolerance to stabilize the pose output. The algorithms proposed in this paper are validated with the experimental results of real road tests under different road conditions.
---
paper_title: Towards wearable cognitive assistance
paper_content:
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
---
paper_title: CloudRidAR: a cloud-based architecture for mobile augmented reality
paper_content:
Mobile augmented reality (MAR) has exploded in popularity on mobile devices in various fields. However, building a MAR application from scratch on mobile devices is complicated and time-consuming. In this paper, we propose CloudRidAR, a framework for MAR developers to facilitate the development, deployment, and maintenance of MAR applications with little effort. Despite advances in mobile devices as a computing platform, their performance for MAR applications is still very limited due to the poor computing capability of mobile devices. In order to alleviate the problem, CloudRidAR is designed with cloud computing at its core. Computationally intensive tasks are offloaded to the cloud to accelerate computation in order to guarantee run-time performance. We also present two MAR applications built on CloudRidAR to evaluate our design.
---
paper_title: Performance evaluation of computation offloading from mobile device to the edge of mobile network
paper_content:
Small Cell Cloud (SCC) consists of Cloud-enabled Small Cells (CeSCs), which serve as radio end-points for mobile user equipments (UEs) and host computation offloaded from mobile UEs. SCC hereby brings the advantages of centralized cloud computation to the users' vicinity. The SCC architecture provides a mechanism for distribution of computation demand across the CeSCs. The effectiveness of the offloading is determined based on the quality of the radio channel between the UEs and the CeSC and the predicted computation complexity. In this paper, we introduce an implementation of an offloading framework to facilitate adaptation of mobile apps for the SCC and to handle low-level communication between the app and the SCC. An evaluation of the offloading framework is conducted using an Augmented Reality (AR) app, which requires intensive computations and low latency. The offloading framework and the AR app form the basis of the SCC testbed used to prove the concept of computation offloading. Various computation and radio parameters are investigated to reveal the benefits of the SCC. According to the performed measurements, computation offloading can decrease latency by up to 88% and energy consumption of the UEs by up to 93%.
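The core trade-off such offloading frameworks evaluate can be written as a simple cost comparison: offloading pays off when upload, remote execution, and download together take less time than local execution. The sketch below is a back-of-the-envelope illustration with made-up parameter values, not the framework's actual decision logic.

```python
# Illustrative offloading decision: compare end-to-end remote cost with local cost.
def should_offload(input_bytes, output_bytes, uplink_bps, downlink_bps,
                   local_time_s, remote_time_s):
    transfer_s = (input_bytes * 8.0) / uplink_bps + (output_bytes * 8.0) / downlink_bps
    return transfer_s + remote_time_s < local_time_s

# Example: 200 kB frame up, 1 kB result down, 20 Mbit/s up, 40 Mbit/s down,
# 300 ms on the phone vs. 40 ms on the edge server -> offloading wins.
print(should_offload(200_000, 1_000, 20e6, 40e6, 0.300, 0.040))  # True
```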
---
paper_title: OverLay: Practical Mobile Augmented Reality
paper_content:
The idea of augmented reality - the ability to look at a physical object through a camera and view annotations about the object - is certainly not new. Yet, this apparently feasible vision has not yet materialized into a precise, fast, and comprehensively usable system. This paper asks: What does it take to enable augmented reality (AR) on smartphones today? To build a ready-to-use mobile AR system, we adopt a top-down approach cutting across smartphone sensing, computer vision, cloud offloading, and linear optimization. Our core contribution is in a novel location-free geometric representation of the environment - from smartphone sensors - and using this geometry to prune down the visual search space. Metrics of success include both accuracy and latency of object identification, coupled with the ease of use and scalability in uncontrolled environments. Our converged system, OverLay, is currently deployed in the engineering building and open for use to regular public; ongoing work is focussed on campus-wide deployment to serve as a "historical tour guide" of UIUC. Performance results and user responses thus far have been promising, to say the least.
---
paper_title: Glimpse: Continuous, Real-Time Object Recognition on Mobile Devices
paper_content:
Glimpse is a continuous, real-time object recognition system for camera-equipped mobile devices. Glimpse captures full-motion video, locates objects of interest, recognizes and labels them, and tracks them from frame to frame for the user. Because the algorithms for object recognition entail significant computation, Glimpse runs them on server machines. When the latency between the server and mobile device is higher than a frame-time, this approach lowers object recognition accuracy. To regain accuracy, Glimpse uses an active cache of video frames on the mobile device. A subset of the frames in the active cache is used to track objects on the mobile, using (stale) hints about objects that arrive from the server from time to time. To reduce network bandwidth usage, Glimpse computes trigger frames to send to the server for recognizing and labeling. Experiments with Android smartphones and Google Glass over Verizon and AT&T cellular networks show that without Glimpse, continuous detection is non-functional (0.2%-1.9% precision).
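The "stale hint" idea can be sketched with standard optical-flow tracking: the client keeps propagating the last server-provided object points across frames until the next (delayed) recognition result arrives. This is an illustrative approximation using OpenCV, not Glimpse's actual pipeline.

```python
# Illustrative client-side propagation of stale server labels via optical flow.
import cv2
import numpy as np

def propagate_points(prev_gray, cur_gray, pts):
    """pts: Nx1x2 float32 points from the last server-labelled frame."""
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    keep = status.flatten() == 1
    return new_pts[keep]          # points shifted to the current frame
```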
---
paper_title: Augmenting 3D urban environment using mobile devices
paper_content:
We describe an augmented reality prototype for exploring a 3D urban environment on mobile devices. Our system utilizes the location and orientation sensors on the mobile platform as well as computer vision techniques to register the live view of the device with the 3D urban data. In particular, the system recognizes the buildings in the live video, tracks the camera pose, and augments the video with relevant information about the buildings in the correct perspective. The 3D urban data consist of 3D point clouds and corresponding geo-tagged RGB images of the urban environment. We also discuss the processing steps to make such 3D data scalable and usable by our system.
---
paper_title: Outdoors augmented reality on mobile phone using loxel-based visual feature organization
paper_content:
We have built an outdoors augmented reality system for mobile phones that matches camera-phone images against a large database of location-tagged images using a robust image retrieval algorithm. We avoid network latency by implementing the algorithm on the phone and deliver excellent performance by adapting a state-of-the-art image retrieval algorithm based on robust local descriptors. Matching is performed against a database of highly relevant features, which is continuously updated to reflect changes in the environment. We achieve fast updates and scalability by pruning of irrelevant features based on proximity to the user. By compressing and incrementally updating the features stored on the phone we make the system amenable to low-bandwidth wireless connections. We demonstrate system robustness on a dataset of location-tagged images and show a smart-phone implementation that achieves a high image matching rate while operating in near real-time.
---
paper_title: Managing Complex Augmented Reality Models
paper_content:
Mobile augmented reality requires georeferenced data to present world-registered overlays. To cover a wide area and all artifacts and activities, a database containing this information must be created, stored, maintained, delivered, and finally used by the client application. We present a data model and a family of techniques to address these needs.
---
paper_title: UMAR: Ubiquitous Mobile Augmented Reality
paper_content:
In this paper we discuss the prospects of using marker based Augmented Reality for context aware applications on mobile phones. We also present UMAR, a conceptual framework for developing Ubiquitous Mobile Augmented Reality applications, which consists of research areas identified as relevant for successfully bridging the physical world and the digital domain using Mobile Augmented Reality. As a step towards this, we have successfully ported the ARToolkit to consumer mobile phones running on the Symbian platform and present results around this. We also present three sample applications based on UMAR and planned future case study work.
---
paper_title: Data Management Strategies for Mobile Augmented Reality
paper_content:
Any significant real-world application of mobile augmented reality will require a large model of location-bound data. While it may appear that a natural approach is to develop application-specific data formats and management strategies, we have found that such an approach actually prevents reuse of the data and ultimately produces additional complexity in developing the application. In contrast we describe a three-tier architecture to manage a common data model for a set of applications. It is inspired by current Internet application frameworks and consists of a central storage layer using a common data model, a transformation layer responsible for filtering and adapting the data to the requirements of a particular applications on request, and finally of the applications itself. We demonstrate our architecture in a scenario consisting of two multi-user capable mobile AR applications for collaborative navigation and annotation in a city environment.
---
paper_title: Streaming mobile augmented reality on mobile phones
paper_content:
Continuous recognition and tracking of objects in live video captured on a mobile device enables real-time user interaction. We demonstrate a streaming mobile augmented reality system with 1 second latency. User interest is automatically inferred from camera movements, so the user never has to press a button. Our system is used to identify and track book and CD covers in real time on a phone's viewfinder. Efficient motion estimation is performed at 30 frames per second on a phone, while fast search through a database of 20,000 images is performed on a server.
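The "no button press" behaviour amounts to motion gating: a frame is only sent as a query once inter-frame motion drops, suggesting the user is dwelling on an object. The snippet below is a simplified, assumed formulation of that trigger; the actual system uses a more elaborate motion estimator.

```python
# Simplified motion-gated trigger for sending query frames to the server.
import numpy as np

def is_query_frame(prev_gray, cur_gray, settle_threshold=2.0):
    diff = np.abs(cur_gray.astype(np.float32) - prev_gray.astype(np.float32))
    return float(diff.mean()) < settle_threshold   # low motion => user is dwelling
```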
---
paper_title: Face to face collaborative AR on mobile phones
paper_content:
Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications.
---
paper_title: The need for real time consistency management in P2P mobile gaming environments
paper_content:
The introduction of more powerful, feature rich, portable handsets is enabling more engaging mobile multimedia entertainment. Improvements in wireless technology infrastructure are enabling access to ubiquitous, always-on data networks. We believe that based on the popularity of multiplayer games over IP networks, there will be significant demand for mobile multiplayer games. Based upon our early trials, we also believe that the current trend for using centralised client/server models will not offer the most suitable architecture to enable them. To support our research we have built a novel augmented reality real time game as a requirements gatherer for gaming in a wireless environment. In this paper we indicate why a centralised approach will not be suitable for real-time interactions. As an alternative we propose a decentralised approach which includes support for consistency and interest management over heterogeneous networked wireless environments.
---
paper_title: Map torchlight: a mobile augmented reality camera projector unit
paper_content:
The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.
---
paper_title: Mobile phone based AR scene assembly
paper_content:
In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities on such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application.
---
paper_title: Collecting big datasets of human activity one checkin at a time
paper_content:
A variety of cutting edge applications for mobile phones exploit the availability of phone sensors to accurately infer the user activity and location to offer more effective services. To validate and evaluate these new applications, appropriate and extensive datasets are needed: in particular, large sets of traces of sensor data (accelerometer, GPS, microphone, etc.), labelled with corresponding user activities. So far, such traces have only been collected in short-lived, small-scale setups. The primary reason for this is the difficulty in establishing accurate ground truth information outside the laboratory. Here, we present our vision of a system for large-scale sensor data capturing, leveraging all sensors of today's smart phones, with the aim of generating a large dataset that is augmented with appropriate ground-truth information. The primary challenges that we address consider the energy cost on the mobile device and the incentives for users to keep running the system on their device for longer. We argue for leveraging the concept of the checkin - as successfully introduced in online social networks (e.g. Foursquare) - for collecting activity and context related datasets. With a checkin, a user deliberately provides a small piece of data about their behaviour while enabling the system to adjust sensing and data collection around important activities. In this work we present up2, a mobile app letting users check in to their current activity (e.g., "waiting for the bus", "riding a bicycle", "having dinner"). After a checkin, we use the phone's sensors (GPS, accelerometer, microphone, etc.) to gather data about the user's activity and surrounding. This makes up2 a valuable tool for research in sensor based activity detection.
---
paper_title: Location based Applications for Mobile Augmented Reality
paper_content:
In this work we investigate building indoor location based applications for a mobile augmented reality system. We believe that augmented reality is a natural interface to visualize spacial information such as position or direction of locations and objects for location based applications that process and present information based on the user's position in the real world. To enable such applications we construct an indoor tracking system that covers a substantial part of a building. It is based on visual tracking of fiducial markers enhanced with an inertial sensor for fast rotational updates. To scale such a system to a whole building we introduce a space partitioning scheme to reuse fiducial markers throughout the environment. Finally we demonstrate two location based applications built upon this facility, an indoor navigation aid and a library search application.
---
paper_title: User Interface Management Techniques for Collaborative Mobile Augmented Reality
paper_content:
Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user’s overlaid virtual environment. The augmented view of the user’s surroundings presents an interface to context-dependent operations, many of which are related to the objects in view—the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user’s view of more important physical or virtual objects in the scene. We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.
---
paper_title: Muddleware for Prototyping Mixed Reality Multiuser Games
paper_content:
We present Muddleware, a communication platform designed for mixed reality multi-user games for mobile, lightweight clients. An approach inspired by Tuplespaces, which provides decoupling of sender and receiver, is used to address the requirements of a potentially large number of mobile clients. A hierarchical database built on XML technology allows convenient prototyping and simple, yet powerful queries. Server-side extensions address persistence and autonomous behaviors through hierarchical state machines. The architecture has been tested with a number of multi-user games and is also used for non-entertainment applications.
---
paper_title: An adaptive training-free feature tracker for mobile phones
paper_content:
While tracking technologies based on fiducial markers have dominated the development of Augmented Reality (AR) applications for almost a decade, various real-time capable approaches to markerless tracking have recently been presented. However, most existing approaches do not yet achieve sufficient frame rates for AR on mobile phones or at least require an extensive training phase in advance. In this paper we present our approach to feature-based tracking applying robust SURF features. The implementation is more than an order of magnitude faster than previous ones, allowing it to run even on mobile phones at highly interactive rates. In contrast to other feature based approaches on mobile phones, our implementation may immediately track features captured from a photo without any training. Further, the approach is not restricted to planar surfaces, but may use features of 3D objects.
---
paper_title: 3D mobile augmented reality in urban scenes
paper_content:
In this paper, we present a large-scale mobile augmented reality system that recognizes the buildings in the mobile device's live video and registers this live view with the 3-dimensional models of the buildings. Having the camera pose estimated and tracked, the system adds relevant information about the buildings to the video in the correct perspective. We demonstrate the system on a large database of geo-tagged panoramic images of an urban environment with associated 3-dimensional planar models. The system uses the capabilities of emerging mobile platforms such as location and orientation sensors, and computational power to detect, track, and augment buildings in urban scenes.
---
paper_title: Design and optimization of image processing algorithms on mobile GPU
paper_content:
The advent of GPUs with programmable shaders on mobile phones has motivated developers to utilize the GPU to offload computationally intensive tasks and relieve the burden on the embedded CPU. In this paper, we present a set of metrics to measure characteristics of a mobile phone GPU with the focus on image processing algorithms. These measures assist users in the design and implementation stage and in classifying bottlenecks. We propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), belief propagation (BP) stereo matching [Yang et al. 2006], and speeded up robust features (SURF) detection [Bay et al. 2008] as our example algorithms.
---
paper_title: OpenCL embedded profile prototype in mobile device
paper_content:
Programmable Graphics Processing Unit (GPU) has over the years become an integral part of today's computing systems. The GPU use-cases have gradually been extended from graphics towards a wide range of applications. Since the programmable GPU is now making its way to mobile devices, it is interesting to study these new use-cases also there. To test this, we created a programming environment based on the embedded profile of the fresh Khronos OpenCL standard and ran it against an image processing workload in a mobile device with CPU and GPU back-ends. The early results on performance and energy consumption with CPU+GPU configuration were promising but also suggest there is room for optimization.
---
paper_title: CPU and GPU parallel processing for mobile Augmented Reality
paper_content:
This paper introduces a parallel processing scheme using CPU and GPU for the mobile Augmented Reality applications. Most of AR applications have to perform intensive image processing algorithms to detect specified objects on which virtual information is overlaid. The object detection generally consists of a feature extraction module and a feature description module. The proposed scheme distributes the feature extraction module and the feature description module to CPU and GPU respectively, and performs the modules in parallel. In experimental results, the proposed scheme outperforms the CPU only scheme and the sequential CPU-GPU execution scheme.
---
paper_title: Implementation and optimization of image processing algorithms on handheld GPU
paper_content:
The advent of GPUs with programmable shaders on handheld devices has motivated embedded application developers to utilize GPU to offload computationally intensive tasks and relieve the burden from embedded CPU. In this work, we propose an image processing toolkit on handheld GPU with programmable shaders using OpenGL ES 2.0 API. By using the image processing toolkit, we show that a range of image processing algorithms map readily to handheld GPU. We employ real-time video scaling, cartoon-style non-photorealistic rendering, and Harris corner detector as our example applications. In addition, we propose techniques to achieve increased performance with optimized shader design and efficient sharing of GPU workload between vertex and fragment shaders. Performance is evaluated in terms of frames per second at varying video stream resolution.
---
paper_title: Implementation of an augmented reality system on a PDA
paper_content:
We present a client/server implementation for running demanding mobile AR application on a PDA device. The system incorporates various data compression methods to make it run as fast as possible on a wide range of communication networks, from GSM to WLAN.
---
paper_title: Augmented assembly using a mobile phone
paper_content:
We present a mobile phone based augmented reality (AR) assembly system that enables users to view complex models on their mobile phones. It is based on a client-server architecture, where complex model information is located on a PC, and a mobile phone with a camera is used as a thin client access device to this information. With this system users are able to see an AR view that provides step by step guidance for a real world assembly task. We also present results from a pilot user study evaluating the system, showing that people felt the interface was intuitive and very helpful in supporting the assembly task.
---
paper_title: Video see-through AR on consumer cell-phones
paper_content:
We present a first running video see-through augmented reality system on a consumer cell-phone. It supports the detection and differentiation of different markers, and correct integration of rendered 3D graphics into the live video stream via a weak perspective projection camera model and an OpenGL rendering pipeline.
---
paper_title: Extraction of Natural Feature Descriptors on Mobile GPUs
paper_content:
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is multiple times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state of the art methods have been undertaken. Insights into the modifications necessary to adapt and modify the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
---
paper_title: Using mobile GPU for general-purpose computing – a case study of face recognition on smartphones
paper_content:
As GPU becomes an integrated component in handheld devices like smartphones, we have been investigating the opportunities and limitations of utilizing the ultra-low-power GPU in a mobile platform as a general-purpose accelerator, similar to its role in desktop and server platforms. The special focus of our investigation has been on mobile GPU's role for energy-optimized real-time applications running on battery-powered handheld devices. In this work, we use face recognition as an application driver for our study. Our implementations on a smartphone reveals that, utilizing the mobile GPU as a co-processor can achieve significant speedup in performance as well as substantial reduction in total energy consumption, in comparison with a mobile-CPU-only implementation on the same platform.
---
paper_title: Boosting mobile GPU performance with a decoupled access/execute fragment processor
paper_content:
Smartphones represent one of the fastest growing markets, providing significant hardware/software improvements every few months. However, supporting these capabilities reduces the operating time per battery charge. The CPU/GPU component is only left with a shrinking fraction of the power budget, since most of the energy is consumed by the screen and the antenna. In this paper, we focus on improving the energy efficiency of the GPU, since graphical applications constitute an important part of the existing market. Moreover, the trend towards better screens will inevitably lead to a higher demand for improved graphics rendering. We show that the main bottleneck for these applications is the texture cache and that traditional techniques for hiding memory latency (prefetching, multithreading) do not work well or come at a high energy cost. We thus propose the migration of GPU designs towards the decoupled access-execute concept. Furthermore, we significantly reduce bandwidth usage in the decoupled architecture by exploiting inter-core data sharing. Using commercial Android applications, we show that the end design can achieve 93% of the performance of a heavily multithreaded GPU while providing energy savings of 34%.
---
paper_title: Extraction of Natural Feature Descriptors on Mobile GPUs
paper_content:
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is multiple times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state of the art methods have been undertaken. Insights into the modifications necessary to adapt and modify the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
---
paper_title: ARToolKitPlus for Pose Tracking on Mobile Devices
paper_content:
In this paper we present ARToolKitPlus, a successor to the popular ARToolKit pose tracking library. ARToolKitPlus has been optimized and extended for the usage on mobile devices such as smartphones, PDAs and Ultra Mobile PCs (UMPCs). We explain the need and specific requirements of pose tracking on mobile devices and how we met those requirements. To prove the applicability we performed an extensive benchmark series on a broad range of off-the-shelf handhelds.
---
paper_title: Design and optimization of image processing algorithms on mobile GPU
paper_content:
The advent of GPUs with programmable shaders on mobile phones has motivated developers to utilize the GPU to offload computationally intensive tasks and relieve the burden on the embedded CPU. In this paper, we present a set of metrics to measure characteristics of a mobile phone GPU with the focus on image processing algorithms. These measures assist users in the design and implementation stage and in classifying bottlenecks. We propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), belief propagation (BP) stereo matching [Yang et al. 2006], and speeded up robust features (SURF) detection [Bay et al. 2008] as our example algorithms.
---
paper_title: The State of the Art in Mobile Graphics Research
paper_content:
High-quality computer graphics let mobile-device users access more compelling content. Still, the devices' limitations and requirements differ substantially from those of a PC. This survey of mobile graphics research describes current solutions in terms of specialized hardware (including 3D displays), rendering and transmission, visualization, and user interfaces.
---
paper_title: iPACKMAN: high-quality, low-complexity texture compression for mobile phones
paper_content:
We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.
---
paper_title: Graphics Processing Units for Handhelds
paper_content:
During the past few years, mobile phones and other handheld devices have gone from only handling dull text-based menu systems to, on an increasing number of models, being able to render high-quality three-dimensional graphics at high frame rates. This paper is a survey of the special considerations that must be taken when designing graphics processing units (GPUs) on such devices. Starting off by introducing desktop GPUs as a reference, the paper discusses how mobile GPUs are designed, often with power consumption rather than performance as the primary goal. Lowering the bus traffic between the GPU and the memory is an efficient way of reducing power consumption, and therefore some high-level algorithms for bandwidth reduction are presented. In addition, an overview of the different APIs that are used in the handheld market to handle both two-dimensional and three-dimensional graphics is provided. Finally, we present our outlook for the future and discuss directions of future research on handheld GPUs.
---
paper_title: Graphics for the masses: a hardware rasterization architecture for mobile phones
paper_content:
The mobile phone is one of the most widespread devices with rendering capabilities. Those capabilities have been very limited because the resources on such devices are extremely scarce; small amounts of memory, little bandwidth, little chip area dedicated for special purposes, and limited power consumption. The small display resolutions present a further challenge; the angle subtended by a pixel is relatively large, and therefore reasonably high quality rendering is needed to generate high fidelity images.To increase the mobile rendering capabilities, we propose a new hardware architecture for rasterizing textured triangles. Our architecture focuses on saving memory bandwidth, since an external memory access typically is one of the most energy-consuming operations, and because mobile phones need to use as little power as possible. Therefore, our system includes three new key innovations: I) an inexpensive multisampling scheme that gives relatively high quality at the same cost of previous inexpensive schemes, II) a texture minification system, including texture compression, which gives quality relatively close to trilinear mipmapping at the cost of 1.33 32-bit memory accesses on average, III) a scanline-based culling scheme that avoids a significant amount of z-buffer reads, and that only requires one context. Software simulations show that these three innovations together significantly reduce the memory bandwidth, and thus also the power consumption.
---
paper_title: Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories
paper_content:
This paper introduces the architecture and initial algorithms for Pixel-Planes 5, a heterogeneous multi-computer designed both for high-speed polygon and sphere rendering (1M Phong-shaded triangles/second) and for supporting algorithm and application research in interactive 3D graphics. Techniques are described for volume rendering at multiple frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form-factors. The hardware consists of up to 32 math-oriented processors, up to 16 rendering units, and a conventional 1280 × 1024-pixel frame buffer, interconnected by a 5 gigabit ring network. Each rendering unit consists of a 128 × 128-pixel array of processors-with-memory with parallel quadratic expression evaluation for every pixel. Implemented on 1.6 micron CMOS chips designed to run at 40MHz, this array has 208 bits/pixel on-chip and is connected to a video RAM memory system that provides 4,096 bits of off-chip memory. Rendering units can be independently reassigned to any part of the screen or to non-screen-oriented computation. As of April 1989, both hardware and software are still under construction, with initial system operation scheduled for fall 1989.
---
paper_title: Flexible point-based rendering on mobile devices
paper_content:
We have seen the growing deployment of ubiquitous computing devices and the proliferation of complex virtual environments. As demand for detailed and high-quality geometric models increases, typical scene size (often including scanned 3D objects) easily reaches millions of geometric primitives. Traditionally, vertices and polygons (faces) represent 3D objects. These representations, coupled with the traditional rendering pipeline, don't adequately support display of complex scenes on different types of platforms with heterogeneous rendering capabilities. To accommodate these constraints, we use a packed hierarchical point-based representation for rendering. Point-based rendering offers a simple-to-use level-of-detail mechanism in which we can adapt the number of points rendered to the underlying object's screen size. Our work strives for flexible rendering - that is, rendering only the interior hierarchy nodes as representatives of the subtree. In particular, we avoid traversal of the entire hierarchy and reconstruction of model attributes (such as normals and color information) for interior nodes because both operations can be prohibitively expensive. Flexible rendering also lets us traverse the hierarchy in a specific order, resulting in a fast, one-pass shadow-mapping algorithm.
---
paper_title: Retargeting vector animation for small displays
paper_content:
We present a method that preserves the recognizability of key object interactions in a vector animation. The method allows an artist to author an animation once, and then output it to any display device. We specifically target mobile devices with small screen sizes. In order to adapt an animation, the author specifies an importance value for objects in the animation. The algorithm then identifies and categorizes the vector graphics objects that comprise the animation, leveraging the implicit relationship between extensible Markup Language (XML) and scalable vector graphics (SVG). Based on importance, the animation can then be automatically retargeted for any display using artistically motivated resizing and grouping algorithms that budget size and spatial detail for each object.
---
paper_title: PCU: the programmable culling unit
paper_content:
Culling techniques have always been a central part of computer graphics, but graphics hardware still lack efficient and flexible support for culling. To improve the situation, we introduce the programmable culling unit, which is as flexible as the fragment program unit and capable of quickly culling entire blocks of fragments. Furthermore, it is very easy for the developer to use the PCU as culling programs can be automatically derived from fragment programs containing a discard instruction. Our PCU can be integrated into an existing fragment program unit with a modest hardware overhead of only about 10%. Using the PCU, we have observed shader speedups between 1.4 and 2.1 for relevant scenes.
---
paper_title: Enhancing 3D Graphics on Mobile Devices by Image-Based Rendering
paper_content:
Compared to a personal computer, mobile devices typically have weaker processing power, less memory capacity, and lower resolution of display. While the former two factors are clearly disadvantages for 3D graphics applications running on mobile devices, the display factor could be turned into an advantage instead. However the traditional 3D graphics pipeline cannot take advantage of the smaller display because its run time depends mostly on the number of polygons to be rendered. In contrast, the run time of image-based rendering methods depends mainly on the display resolution. Therefore it is well suited for mobile devices. Furthermore, we may use the network connection to build a client-server framework, which allows us to integrate with nonimage-based rendering programs. We present our system framework and the experiment results on PocketPC® based devices in this work.
---
paper_title: Streaming mobile augmented reality on mobile phones
paper_content:
Continuous recognition and tracking of objects in live video captured on a mobile device enables real-time user interaction. We demonstrate a streaming mobile augmented reality system with 1 second latency. User interest is automatically inferred from camera movements, so the user never has to press a button. Our system is used to identify and track book and CD covers in real time on a phone's viewfinder. Efficient motion estimation is performed at 30 frames per second on a phone, while fast search through a database of 20,000 images is performed on a server.
---
paper_title: Recent Trends of Mobile Collaborative Augmented Reality Systems
paper_content:
The use of mobile collaborative AR has expanded rapidly in recent years, due to major advances in hardware and networking. The application areas are diverse and multidisciplinary. Recent Trends of Mobile Collaborative Augmented Reality Systems provides a historical overview of previous mobile collaborative AR systems, presents case studies of the latest developments in current mobile collaborative AR systems, and describes the latest technologies and system architectures used in this field. Recent Trends of Mobile Collaborative Augmented Reality Systems is designed for a professional audience composed of practitioners and researchers working in the field of augmented reality and human-computer interaction. Advanced-level students in computer science and electrical engineering focused on this topic will also find this book useful as a secondary text or reference.
---
paper_title: A Mobile Vision System for Urban Detection with Informative Local Descriptors
paper_content:
We present a computer vision system for the detection and identification of urban objects from mobile phone imagery, e.g., for the application of tourist information services. Recognition is based on MAP decision making over weak object hypotheses from local descriptor responses in the mobile imagery. We present an improvement over the standard SIFT key detector [7] by selecting only informative (i-SIFT) keys for descriptor matching. Selection is applied first to reduce the complexity of the object model and second to accelerate detection by selective filtering. We present results on the MPG-20 mobile phone imagery with severe illumination, scale and viewpoint changes in the images, performing with ≈ 98% accuracy in identification, efficient (100%) background rejection, efficient (0%) false alarm rate, and reliable quality of service under extreme illumination conditions, significantly improving standard SIFT based recognition in every sense, providing - important for mobile vision - runtimes which are ≈ 8 (≈24) times faster for the MPG-20 (ZuBuD) database.
---
paper_title: Parametric analysis for adaptive computation offloading
paper_content:
Many programs can be invoked under different execution options, input parameters and data files. Such different execution contexts may lead to strikingly different execution instances. The optimal code generation may be sensitive to the execution instances. In this paper, we show how to use parametric program analysis to deal with this issue for the optimization problem of computation offloading.Computation offloading has been shown to be an effective way to improve performance and energy saving on mobile devices. Optimal program partitioning for computation offloading depends on the tradeoff between the computation workload and the communication cost. The computation workload and communication requirement may change with different execution instances. Optimal decisions on program partitioning must be made at run time when sufficient information about workload and communication requirement becomes available.Our cost analysis obtains program computation workload and communication cost expressed as functions of run-time parameters, and our parametric partitioning algorithm finds the optimal program partitioning corresponding to different ranges of run-time parameters. At run time, the transformed program self-schedules its tasks on either the mobile device or the server, based on the optimal program partitioning that corresponds to the current values of run-time parameters. Experimental results on an HP IPAQ handheld device show that different run-time parameters can lead to quite different program partitioning decisions.
---
paper_title: A Streaming-Based Solution for Remote Visualization of 3D Graphics on Mobile Devices
paper_content:
Mobile devices such as personal digital assistants, tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, personal digital assistants (PDAs), and tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients computing a video stream for each one; resolution and quality of each stream is tailored according to screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.
---
paper_title: Virtualized Screen: A Third Element for Cloud–Mobile Convergence
paper_content:
Mobile and cloud computing have emerged as the new computing platforms and are converging into a powerful cloud-mobile computing platform. This article envisions a virtualized screen as a new dimension in such a platform to further optimize the overall computing experience for users. In a virtualized screen, screen rendering is done in the cloud, and delivered as images to the client for interactive display. This enables thin-client mobile devices to enjoy many computationally intensive and graphically rich services. Technical challenges are discussed and addressed. Two novel cloud-mobile applications, Cloud Browser and Cloud Phone, are presented to demonstrate the advantages of such a virtualized screen.
---
paper_title: Using bandwidth data to make computation offloading decisions
paper_content:
We present a framework for making computation offloading decisions in computational grid settings in which schedulers determine when to move parts of a computation to more capable resources to improve performance. Such schedulers must predict when an offloaded computation will outperform one that is local by forecasting the local cost (execution time for computing locally) and remote cost (execution time for computing remotely and transmission time for the input/output of the computation to/from the remote system). Typically, this decision amounts to predicting the bandwidth between the local and remote systems to estimate these costs. Our framework unifies such decision models by formulating the problem as a statistical decision problem that can either be treated "classically" or using a Bayesian approach. Using an implementation of this framework, we evaluate the efficacy of a number of different decision strategies (several of which have been employed by previous systems). Our results indicate that a Bayesian approach employing automatic change-point detection when estimating the prior distribution is the best-performing approach.
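A minimal sketch of such a decision model is shown below. The bandwidth prior is summarised by a simple running mean over recent observations, which stands in for the Bayesian and change-point estimators the paper evaluates, and the task sizes, speeds and measurements are invented for illustration only.

    from collections import deque

    class OffloadDecider:
        """Toy local-vs-remote decision based on an estimated bandwidth (illustrative)."""
        def __init__(self, window=20):
            self.samples = deque(maxlen=window)   # recent bandwidth observations (bytes/s)

        def observe_bandwidth(self, bytes_per_s):
            self.samples.append(bytes_per_s)

        def estimated_bandwidth(self):
            return sum(self.samples) / len(self.samples) if self.samples else None

        def should_offload(self, local_s, remote_compute_s, payload_bytes):
            bw = self.estimated_bandwidth()
            if bw is None:                         # no measurements yet: stay local
                return False
            remote_s = remote_compute_s + payload_bytes / bw
            return remote_s < local_s

    decider = OffloadDecider()
    for bw in (1.5e6, 2.0e6, 1.8e6):               # hypothetical bandwidth probes
        decider.observe_bandwidth(bw)
    print(decider.should_offload(local_s=4.0, remote_compute_s=0.5, payload_bytes=5e6))

Replacing the running mean with a posterior over bandwidth (updated on change-points) is where the Bayesian treatment described above differs from this naive estimator.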
---
paper_title: From Augmented Reality to Augmented Computing: A Look at Cloud-Mobile Convergence
paper_content:
There has been a considerable number of virtual and augmented reality applications designed and developed for mobile devices. However, the state-of-the-art systems are commonly confined by several limitations. In this position paper the concept "Cloud-Mobile Convergence for Virtual Reality (CMCVR)" is presented. CMCVR envisions effective and user-friendly integration of the mobile device and cloud-based resources. Through the proposed framework, mobile devices could be augmented to deliver some user experiences comparable to those offered by fixed systems. Preliminary research that follows the CMCVR paradigm is also described.
---
paper_title: Implementation of an augmented reality system on a PDA
paper_content:
We present a client/server implementation for running a demanding mobile AR application on a PDA device. The system incorporates various data compression methods to make it run as fast as possible on a wide range of communication networks, from GSM to WLAN.
---
paper_title: An accelerated remote graphics architecture for PDAs
paper_content:
A new category of devices, known as Personal Digital Assistants (PDAs), has become increasingly widespread since the end of the nineties. A large number of software applications have been developed for PDAs, but high-quality 3D graphics still remain beyond the computational capability of these devices. This paper tackles this issue by proposing a generic solution for hardware-accelerated remote rendering on a cluster. The rendering task is submitted to a PC/workstation cluster (each cluster machine is equipped with a graphics accelerator) by means of the Chromium architecture. Each machine renders a part of the image, which is then reassembled and sent to the PDA via a software bridge. On the PDA side, the user can explore the scene using an ad-hoc navigation interface. The proposed solution allows the display of extremely realistic and complex models in an interactive way. Moreover, our architecture does not depend on commercial solutions/products and can be easily modified in order to better fulfill the requirements of specific applications.
---
paper_title: Approximate Computing: A Survey
paper_content:
As one of the most promising energy-efficient computing paradigms, approximate computing has gained a lot of research attention in the past few years. This paper presents a survey of state-of-the-art work in all aspects of approximate computing and highlights future research challenges in this field.
---
paper_title: Approximate computing: An emerging paradigm for energy-efficient design
paper_content:
Approximate computing has recently emerged as a promising approach to energy-efficient design of digital systems. Approximate computing relies on the ability of many systems and applications to tolerate some loss of quality or optimality in the computed result. By relaxing the need for fully precise or completely deterministic operations, approximate computing techniques allow substantially improved energy efficiency. This paper reviews recent progress in the area, including design of approximate arithmetic blocks, pertinent error and quality measures, and algorithm-level techniques for approximate computing.
---
paper_title: SNNAP: Approximate computing on programmable SoCs via neural acceleration
paper_content:
Many applications that can take advantage of accelerators are amenable to approximate execution. Past work has shown that neural acceleration is a viable way to accelerate approximate code. In light of the growing availability of on-chip field-programmable gate arrays (FPGAs), this paper explores neural acceleration on off-the-shelf programmable SoCs. We describe the design and implementation of SNNAP, a flexible FPGA-based neural accelerator for approximate programs. SNNAP is designed to work with a compiler workflow that configures the neural network's topology and weights instead of the programmable logic of the FPGA itself. This approach enables effective use of neural acceleration in commercially available devices and accelerates different applications without costly FPGA reconfigurations. No hardware expertise is required to accelerate software with SNNAP, so the effort required can be substantially lower than custom hardware design for an FPGA fabric and possibly even lower than current “C-to-gates” high-level synthesis (HLS) tools. Our measurements on a Xilinx Zynq FPGA show that SNNAP yields a geometric mean of 3.8× speedup (as high as 38.1×) and 2.8× energy savings (as high as 28×) with less than 10% quality loss across all applications but one. We also compare SNNAP with designs generated by commercial HLS tools and show that SNNAP has similar performance overall, with better resource-normalized throughput on 4 out of 7 benchmarks.
---
paper_title: Analysis and characterization of inherent application resilience for approximate computing
paper_content:
Approximate computing is an emerging design paradigm that enables highly efficient hardware and software implementations by exploiting the inherent resilience of applications to in-exactness in their computations. Previous work in this area has demonstrated the potential for significant energy and performance improvements, but largely consists of ad hoc techniques that have been applied to a small number of applications. Taking approximate computing closer to mainstream adoption requires (i) a deeper understanding of inherent application resilience across a broader range of applications, (ii) tools that can quantitatively establish the inherent resilience of an application, and (iii) methods to quickly assess the potential of various approximate computing techniques for a given application. We make two key contributions in this direction. Our primary contribution is the analysis and characterization of inherent application resilience present in a suite of 12 widely used applications from the domains of recognition, data mining, and search. Based on this analysis, we present several new insights into the nature of resilience and its relationship to various key application characteristics. To facilitate our analysis, we propose a systematic framework for Application Resilience Characterization (ARC) that (a) partitions an application into resilient and sensitive parts and (b) characterizes the resilient parts using approximation models that abstract a wide range of approximate computing techniques. We believe that the key insights that we present can help shape further research in the area of approximate computing, while automatic resilience characterization frameworks such as ARC can greatly aid designers in the adoption of approximate computing.
---
paper_title: Exploiting Significance of Computations for Energy-Constrained Approximate Computing
paper_content:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 kernels and applications shows that the proposed framework performs very close to an oracle always selecting the optimal configuration, both in terms of energy efficiency and quality of results. Also, a comparison with loop perforation (a well-known compile-time approximation technique), shows that the proposed framework results in significantly higher quality for the same energy budget.
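The selection step of such a runtime can be sketched as a greedy procedure: run every task, but switch the least-significant tasks to their cheaper approximate versions until the projected energy fits the budget. The task list, significance values and energy costs below are hypothetical, and the sketch ignores the core-count and frequency decisions the actual runtime also makes.

    # Hypothetical tasks: (name, significance, accurate_energy_J, approximate_energy_J)
    tasks = [
        ("edge_detect",  0.9, 5.0, 2.0),
        ("denoise",      0.4, 4.0, 1.0),
        ("colour_grade", 0.2, 3.0, 0.5),
        ("encode",       0.8, 6.0, 3.0),
    ]

    def plan_execution(tasks, energy_budget_j):
        plan = {name: "accurate" for name, *_ in tasks}
        total = sum(acc for _, _, acc, _ in tasks)
        # degrade the least significant tasks first until the budget is met
        for name, sig, acc, approx in sorted(tasks, key=lambda t: t[1]):
            if total <= energy_budget_j:
                break
            plan[name] = "approximate"
            total -= (acc - approx)
        return plan, total

    plan, energy = plan_execution(tasks, energy_budget_j=12.0)
    print(plan, energy)

Because degradation starts from the lowest-significance tasks, output quality is sacrificed where it matters least, which is the intent of significance-aware execution.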
---
paper_title: The State of the Art in Mobile Graphics Research
paper_content:
High-quality computer graphics let mobile-device users access more compelling content. Still, the devices' limitations and requirements differ substantially from those of a PC. This survey of mobile graphics research describes current solutions in terms of specialized hardware (including 3D displays), rendering and transmission, visualization, and user interfaces.
---
paper_title: Performing computation offloading on multiple platforms
paper_content:
An offloading framework designed for supporting multiple platforms is proposed. The solution supports Android and Windows Phone mobile applications. Developers can use static or dynamic offloading decisions. The offloading technique improves the performance of mobile applications. The type of serialization used impacts the offloading performance. Mobile devices such as smart phones and tablets are increasingly important tools in daily routine. These devices generally interact with more powerful machines usually hosted on public clouds. In this context, this paper presents MpOS (Multiplatform Offloading System), a framework that supports a method-based offloading technique for applications of different mobile platforms (Android and Windows Phone). In addition, details of MpOS main components and services as well as code examples are presented. To evaluate the proposed solution and to analyse the impact of different serialization types on the offloading performance, we developed two applications and performed several experiments on both Android and Windows Phone platforms using WiFi and 4G/LTE connections to access the remote execution environments. Our results show that offloading to a cloudlet provided the best performance for both Android and Windows Phone platforms, and also show that the type of serialization used by the framework directly impacts the offloading performance.
---
paper_title: Boosting mobile GPU performance with a decoupled access/execute fragment processor
paper_content:
Smartphones represent one of the fastest growing markets, providing significant hardware/software improvements every few months. However, supporting these capabilities reduces the operating time per battery charge. The CPU/GPU component is only left with a shrinking fraction of the power budget, since most of the energy is consumed by the screen and the antenna. In this paper, we focus on improving the energy efficiency of the GPU since graphical applications constitute an important part of the existing market. Moreover, the trend towards better screens will inevitably lead to a higher demand for improved graphics rendering. We show that the main bottleneck for these applications is the texture cache and that traditional techniques for hiding memory latency (prefetching, multithreading) do not work well or come at a high energy cost. We thus propose the migration of GPU designs towards the decoupled access-execute concept. Furthermore, we significantly reduce bandwidth usage in the decoupled architecture by exploiting inter-core data sharing. Using commercial Android applications, we show that the end design can achieve 93% of the performance of a heavily multithreaded GPU while providing energy savings of 34%.
---
paper_title: ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading
paper_content:
Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks starting from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease two orders of magnitude for an N-queens puzzle application and one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner such as to achieve greater reduction on execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.
---
paper_title: Offloading in Mobile Cloudlet Systems with Intermittent Connectivity
paper_content:
The emergence of mobile cloud computing enables mobile users to offload applications to nearby mobile resource-rich devices (i.e., cloudlets) to reduce energy consumption and improve performance. However, due to mobility and cloudlet capacity, the connections between a mobile user and mobile cloudlets can be intermittent. As a result, offloading actions taken by the mobile user may fail (e.g., the user moves out of communication range of cloudlets). In this paper, we develop an optimal offloading algorithm for the mobile user in such an intermittently connected cloudlet system, considering the users’ local load and availability of cloudlets. We examine users’ mobility patterns and cloudlets’ admission control, and derive the probability of successful offloading actions analytically. We formulate and solve a Markov decision process (MDP) model to obtain an optimal policy for the mobile user with the objective to minimize the computation and offloading costs. Furthermore, we prove that the optimal policy of the MDP has a threshold structure. Subsequently, we introduce a fast algorithm for energy-constrained users to make offloading decisions. The numerical results show that the analytical form of the successful offloading probability is a good estimation in various mobility cases. Furthermore, the proposed MDP offloading algorithm for mobile users outperforms conventional baseline schemes.
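The threshold structure of the policy can be illustrated with a toy one-step decision: offload only while the expected cost of attempting it (accounting for the probability that the cloudlet contact fails and the job falls back to local execution) stays below the purely local cost. All numbers are placeholders, and this one-shot rule is a simplification for illustration, not the solution of the MDP formulated in the paper.

    def expected_offload_cost(p_success, offload_cost, local_cost, penalty=0.0):
        """One-step expected cost of attempting to offload (illustrative model)."""
        # success: pay the offloading cost; failure: pay a penalty and redo the job locally
        return p_success * offload_cost + (1.0 - p_success) * (penalty + local_cost)

    def policy(p_success, offload_cost=2.0, local_cost=5.0, penalty=0.5):
        cheaper = expected_offload_cost(p_success, offload_cost, local_cost, penalty) < local_cost
        return "offload" if cheaper else "local"

    # sweeping the contact/success probability exposes the threshold behaviour
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(p, policy(p))

Below some success probability the expected fallback cost dominates and the policy stays local; above it, offloading wins, mirroring the threshold result proved for the full MDP.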
---
paper_title: Graphics Processing Units for Handhelds
paper_content:
During the past few years, mobile phones and other handheld devices have gone from only handling dull text-based menu systems to, on an increasing number of models, being able to render high-quality three-dimensional graphics at high frame rates. This paper is a survey of the special considerations that must be taken when designing graphics processing units (GPUs) on such devices. Starting off by introducing desktop GPUs as a reference, the paper discusses how mobile GPUs are designed, often with power consumption rather than performance as the primary goal. Lowering the bus traffic between the GPU and the memory is an efficient way of reducing power consumption, and therefore some high-level algorithms for bandwidth reduction are presented. In addition, an overview of the different APIs that are used in the handheld market to handle both two-dimensional and three-dimensional graphics is provided. Finally, we present our outlook for the future and discuss directions of future research on handheld GPUs.
---
paper_title: Signature-based workload estimation for mobile 3D graphics
paper_content:
Until recently, most 3D graphics applications had been regarded as too computationally intensive for devices other than desktop computers and gaming consoles. This notion is rapidly changing due to improving screen resolutions and computing capabilities of mass-market handheld devices such as cellular phones and PDAs. As the mobile 3D gaming industry is poised to expand, significant innovations are required to provide users with high-quality 3D experience under limited processing, memory and energy budgets that are characteristic of the mobile domain. Energy saving schemes such as dynamic voltage and frequency scaling (DVFS), as well as system-level power and performance optimization methods for mobile devices require accurate and fast workload prediction. In this paper, we address the problem of workload prediction for mobile 3D graphics. We propose and describe a signature-based estimation technique for predicting 3D graphics workloads. By analyzing a gaming benchmark, we show that monitoring specific parameters of the 3D pipeline provides better prediction accuracy over conventional approaches. We describe how signatures capture such parameters concisely to make accurate workload predictions. Signature-based prediction is computationally efficient because first, signatures are compact, and second, they do not require elaborate model evaluations. Thus, they are amenable to efficient, real-time prediction. A fundamental difference between signatures and standard history-based predictors is that signatures capture previous outcomes as well as the cause that led to the outcome, and use both to predict future outcomes. We illustrate the utility of signature-based workload estimation technique by using it as a basis for DVFS in 3D graphics pipelines.
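A stripped-down version of the signature idea is a lookup table keyed by a coarse quantisation of the pipeline parameters that caused past workloads, used to predict the next frame's workload and pick the lowest frequency that still meets the frame deadline. The parameter names, quantisation steps and frequency levels below are invented for illustration and are not taken from the paper.

    FREQS_HZ = [200e6, 400e6, 600e6]   # hypothetical GPU frequency levels

    def signature(num_triangles, avg_triangle_area, textured_fraction):
        # coarse quantisation of the parameters assumed to drive the 3D workload
        return (num_triangles // 10_000,
                int(avg_triangle_area // 50),
                round(textured_fraction, 1))

    history = {}   # signature -> last observed workload (cycles)

    def predict_cycles(sig, default=5e6):
        return history.get(sig, default)

    def choose_frequency(predicted_cycles, frame_time_s=1 / 30):
        for f in FREQS_HZ:                       # lowest frequency meeting the deadline
            if predicted_cycles / f <= frame_time_s:
                return f
        return FREQS_HZ[-1]

    def after_frame(sig, measured_cycles):
        history[sig] = measured_cycles           # remember the cause and the outcome together

    sig = signature(42_000, 120.0, 0.8)
    print(choose_frequency(predict_cycles(sig)))     # before any history
    after_frame(sig, measured_cycles=7.5e6)
    print(choose_frequency(predict_cycles(sig)))     # after observing one frame with this signature

Unlike a purely history-based predictor, the table is indexed by the cause of the workload (the signature), so frames with similar parameters reuse the prediction even if they are not adjacent in time.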
---
paper_title: Low-power 3D graphics processors for mobile terminals
paper_content:
A full 3D graphics pipeline is investigated, and optimizations of graphics architecture are assessed for satisfying the performance requirements and overcoming the limited system resources found in mobile terminals. Two mobile 3D graphics processor architectures, RAMP and DigiAcc, are proposed based on the analysis, and a prototype development platform (REMY) is implemented. REMY includes a software graphics library and simulation environment developed for more flexible realization of mobile 3D graphics. The experimental results demonstrate the feasibility of mobile 3D graphics with 3.6 Mpolygons/s at 155 mW power consumption for full 3D operation.
---
paper_title: OPENRP: a reputation middleware for opportunistic crowd computing
paper_content:
The concepts of wisdom of crowd and collective intelligence have been utilized by mobile application developers to achieve large-scale distributed computation, known as crowd computing. The profitability of this method heavily depends on users' social interactions and their willingness to share resources. Thus, different crowd computing applications need to adopt mechanisms that motivate peers to collaborate and defray the costs of participating ones who share their resources. In this article, we propose OPENRP, a novel, lightweight, and scalable system middleware that provides a unified interface to crowd computing and opportunistic networking applications. When an application wants to perform a device-to-device task, it delegates the task to the middleware, which takes care of choosing the best peers with whom to collaborate and sending the task to these peers. OPENRP evaluates and updates the reputation of participating peers based on their mutual opportunistic interactions. To show the benefits of the middleware, we simulated the behavior of two representative crowdsourcing applications: message forwarding and task offloading. Through extensive simulations on real human mobility traces, we show that the traffic generated by the applications is lower compared to two benchmark strategies. As a consequence, we show that when using our middleware, the energy consumed by the nodes is reduced. Finally, we show that when dividing the nodes into selfish and altruistic, the reputation scores of the altruistic peers increase with time, while those of the selfish ones decrease.
---
paper_title: Energy Management Techniques in Modern Mobile Handsets
paper_content:
Managing energy efficiently is paramount in modern smartphones. The diverse range of wireless interfaces and sensors, and the increasing popularity of power-hungry applications that take advantage of these resources can reduce the battery life of mobile handhelds to few hours of operation. The research community, and operating system and hardware vendors found interesting optimisations and techniques to extend the battery life of mobile phones. However, the state of the art of lithium-ion batteries clearly indicates that energy efficiency must be achieved both at the hardware and software level. In this survey, we will cover the software solutions that can be found in the research literature between 1999 and May 2011 at six different levels: energy-aware operating systems, efficient resource management, the impact of users' interaction patterns with mobile devices and applications, wireless interfaces and sensors management, and finally the benefits of integrating mobile devices with cloud computing services.
---
paper_title: Comet: Code offload by migrating execution transparently
paper_content:
In this paper we introduce a runtime system to allow unmodified multi-threaded applications to use multiple machines. The system allows threads to migrate freely between machines depending on the workload. Our prototype, COMET (Code Offload by Migrating Execution Transparently), is a realization of this design built on top of the Dalvik Virtual Machine. COMET leverages the underlying memory model of our runtime to implement distributed shared memory (DSM) with as few interactions between machines as possible. Making use of a new VM-synchronization primitive, COMET imposes little restriction on when migration can occur. Additionally, enough information is maintained so one machine may resume computation after a network failure. ::: ::: We target our efforts towards augmenting smartphones or tablets with machines available in the network. We demonstrate the effectiveness of COMET on several real applications available on Google Play. These applications include image editors, turn-based games, a trip planner, and math tools. Utilizing a server-class machine, COMET can offer significant speed-ups on these real applications when run on a modern smartphone. With WiFi and 3G networks, we observe geometric mean speed-ups of 2.88× and 1.27× relative to the Dalvik interpreter across the set of applications with speed-ups as high as 15× on some applications.
---
paper_title: PCU: the programmable culling unit
paper_content:
Culling techniques have always been a central part of computer graphics, but graphics hardware still lack efficient and flexible support for culling. To improve the situation, we introduce the programmable culling unit, which is as flexible as the fragment program unit and capable of quickly culling entire blocks of fragments. Furthermore, it is very easy for the developer to use the PCU as culling programs can be automatically derived from fragment programs containing a discard instruction. Our PCU can be integrated into an existing fragment program unit with a modest hardware overhead of only about 10%. Using the PCU, we have observed shader speedups between 1.4 and 2.1 for relevant scenes.
---
paper_title: Using mobile GPU for general-purpose computing – a case study of face recognition on smartphones
paper_content:
As GPU becomes an integrated component in handheld devices like smartphones, we have been investigating the opportunities and limitations of utilizing the ultra-low-power GPU in a mobile platform as a general-purpose accelerator, similar to its role in desktop and server platforms. The special focus of our investigation has been on mobile GPU's role for energy-optimized real-time applications running on battery-powered handheld devices. In this work, we use face recognition as an application driver for our study. Our implementations on a smartphone reveals that, utilizing the mobile GPU as a co-processor can achieve significant speedup in performance as well as substantial reduction in total energy consumption, in comparison with a mobile-CPU-only implementation on the same platform.
---
paper_title: An adaptive training-free feature tracker for mobile phones
paper_content:
While tracking technologies based on fiducial markers have dominated the development of Augmented Reality (AR) applications for almost a decade, various real-time capable approaches to markerless tracking have recently been presented. However, most existing approaches do not yet achieve sufficient frame rates for AR on mobile phones or at least require an extensive training phase in advance. In this paper we will present our approach on feature based tracking applying robust SURF features. The implementation is more than one magnitude faster than previous ones, allowing running even on mobile phones at highly interactive rates. In contrast to other feature based approaches on mobile phones, our implementation may immediately track features captured from a photo without any training. Further, the approach is not restricted to planar surfaces, but may use features of 3D objects.
---
paper_title: Extraction of Natural Feature Descriptors on Mobile GPUs
paper_content:
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is multiple times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
---
paper_title: Recent Advances in Augmented Reality
paper_content:
In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
---
paper_title: Next Generation 5G Wireless Networks: A Comprehensive Survey
paper_content:
The vision of next generation 5G wireless communications lies in providing very high data rates (typically of Gbps order), extremely low latency, manifold increase in base station capacity, and significant improvement in users’ perceived quality of service (QoS), compared to current 4G LTE networks. Ever increasing proliferation of smart devices, introduction of new emerging multimedia applications, together with an exponential rise in wireless data (multimedia) demand and usage is already creating a significant burden on existing cellular networks. 5G wireless systems, with improved data rates, capacity, latency, and QoS are expected to be the panacea of most of the current cellular networks’ problems. In this survey, we make an exhaustive review of wireless evolution toward 5G networks. We first discuss the new architectural changes associated with the radio access network (RAN) design, including air interfaces, smart antennas, cloud and heterogeneous RAN. Subsequently, we make an in-depth survey of underlying novel mm-wave physical layer technologies, encompassing new channel model estimation, directional antenna design, beamforming algorithms, and massive MIMO technologies. Next, the details of MAC layer protocols and multiplexing schemes needed to efficiently support this new physical layer are discussed. We also look into the killer applications, considered as the major driving force behind 5G. In order to understand the improved user experience, we provide highlights of new QoS, QoE, and SON features associated with the 5G evolution. For alleviating the increased network energy consumption and operating expenditure, we make a detail review on energy awareness and cost efficiency. As understanding the current status of 5G implementation is important for its eventual commercialization, we also discuss relevant field trials, drive tests, and simulation experiments. Finally, we point out major existing research issues and identify possible future research directions.
---
paper_title: Cardea: Context-Aware Visual Privacy Protection from Pervasive Cameras
paper_content:
The growing popularity of mobile and wearable devices with built-in cameras, the bright prospect of camera related applications such as augmented reality and life-logging system, the increased ease of taking and sharing photos, and advances in computer vision techniques have greatly facilitated people's lives in many aspects, but have also inevitably raised people's concerns about visual privacy at the same time. Motivated by recent user studies that people's privacy concerns are dependent on the context, in this paper, we propose Cardea, a context-aware and interactive visual privacy protection framework that enforces privacy protection according to people's privacy preferences. The framework provides people with fine-grained visual privacy protection using: i) personal privacy profiles, with which people can define their context-dependent privacy preferences; and ii) visual indicators: face features, for devices to automatically locate individuals who request privacy protection; and iii) hand gestures, for people to flexibly interact with cameras to temporarily change their privacy preferences. We design and implement the framework consisting of the client app on Android devices and the cloud server. Our evaluation results confirm this framework is practical and effective with 86% overall accuracy, showing promising future for context-aware visual privacy protection from pervasive cameras.
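The enforcement logic can be pictured as a small rule evaluator: for each face recognised in a frame, look up that person's context-dependent preference, apply any temporary gesture override, and blur when protection is requested. The profile format, context labels and gesture names below are illustrative stand-ins, not the actual format used by Cardea.

    # Hypothetical privacy profiles: person -> contexts in which blurring is requested
    PROFILES = {
        "alice": {"bar", "hospital"},
        "bob":   {"office"},
    }

    def wants_blur(person, context, gesture=None):
        """Decide whether a recognised face should be blurred (illustrative rules only)."""
        if gesture == "cover_face":      # gesture override: always protect in this frame
            return True
        if gesture == "ok_sign":         # gesture override: allow capture this time
            return False
        return context in PROFILES.get(person, set())

    frame_faces = [("alice", None), ("bob", "ok_sign"), ("carol", None)]
    context = "bar"
    for person, gesture in frame_faces:
        print(person, "blur" if wants_blur(person, context, gesture) else "keep")

The real system additionally has to recognise the faces and gestures from pixels and resolve unknown people; the sketch only shows how preferences, context and overrides combine once those are known.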
---
paper_title: ReadMe: A Real-Time Recommendation System for Mobile Augmented Reality Ecosystems
paper_content:
We introduce ReadMe, a real-time recommendation system (RS) and an online algorithm for Mobile Augmented Reality (MAR) ecosystems. A MAR ecosystem is the one that contains mobile users and virtual objects. The role of ReadMe is to detect and present the most suitable virtual objects on the mobile user's screen. The selection of the proper virtual objects depends on the mobile users' context. We consider the user's context as a set of variables that can be either drawn directly by user's device or can be inferred by it or can be collected in collaboration with other mobile devices.
---
paper_title: When Augmented Reality meets Big Data
paper_content:
With computing and sensing woven into the fabric of everyday life, we live in an era where we are awash in a flood of data from which we can gain rich insights. Augmented reality (AR) is able to collect and help analyze the growing torrent of data about user engagement metrics within our personal mobile and wearable devices. This enables us to blend information from our senses and the digitalized world in a myriad of ways that was not possible before. AR and big data have reached a level of maturity that inevitably converges them. The trend of harnessing AR and big data to breed new interesting applications is starting to have a tangible presence. In this paper, we explore the potential to capture value from the marriage between AR and big data technologies, following with several challenges that must be addressed to fully realize this potential.
---
paper_title: Demo: Interactive Visual Privacy Control with Gestures
paper_content:
Built-in cameras of mobile and wearable devices enable a variety of applications such as augmented reality, continuous sensing, and life-logging systems, which bring joy and convenience to human lives. However, being recorded by unauthorized or unnoticed cameras have raised people's concerns about visual privacy. To address this problem, we propose a novel interactive method to control visual privacy. We allow individuals to interact with cameras using static tags and more flexible hand gestures for privacy control. By delivering privacy control messages via visual indicators, devices will automatically perform control operations according to privacy indicators detected and control rules.
---
| Title: Mobile Augmented Reality Survey: From Where We Are to Where We Go
Section 1: INTRODUCTION
Description 1: Introduce the concept of Mobile Augmented Reality (MAR), its historical development, and significance in various fields, along with the scope of this survey.
Section 2: APPLICATION FIELDS
Description 2: Discuss the various application domains of MAR, including Tourism and Navigation, Entertainment and Advertisement, Training and Education, Geometry Modeling and Scene Construction, Assembly and Maintenance, and Information Assistant Management.
Section 3: TOURISM AND NAVIGATION
Description 3: Provide examples and discussions of MAR applications in tourism and navigation, detailing specific projects and their implementations.
Section 4: ENTERTAINMENT AND ADVERTISEMENT
Description 4: Explore the use of MAR in entertainment and advertisement, including games and promotional activities.
Section 5: TRAINING AND EDUCATION
Description 5: Assess the impact of MAR on training and education, with specific examples and studies showing how MAR enhances learning experiences.
Section 6: GEOMETRY MODELING AND SCENE CONSTRUCTION
Description 6: Examine the role of MAR in geometry modeling and scene construction, detailing case studies and methodologies employed.
Section 7: ASSEMBLY AND MAINTENANCE
Description 7: Discuss MAR applications in assembly and maintenance, including studies that show its effectiveness in these areas.
Section 8: INFORMATION ASSISTANT MANAGEMENT
Description 8: Show how MAR is used in information assistant management, providing details on system implementations and use cases.
Section 9: BIG DATA DRIVEN MAR
Description 9: Explore how big data integration enhances MAR applications across various fields like Retail, Tourism, Healthcare, and Public Services.
Section 10: REPRESENTATIVE MAR SYSTEMS
Description 10: Present and analyze representative MAR systems, highlighting their components, functionalities, and enabling technologies.
Section 11: UI/UX
Description 11: Examine user interface (UI) and user experience (UX) considerations in MAR, discussing design principles and challenges.
Section 12: SYSTEM COMPONENTS
Description 12: Detail the components of MAR systems, including mobile computing platforms, software frameworks, and display technologies.
Section 13: TRACKING AND REGISTRATION
Description 13: Discuss tracking and registration methods in MAR, dividing them into sensor-based and vision-based methods, and exploring their applications.
Section 14: NETWORK CONNECTIVITY
Description 14: Analyze the role of network connectivity in MAR, detailing the types of networks used and their impact on performance.
Section 15: DATA MANAGEMENT
Description 15: Discuss data management strategies in MAR, focusing on data acquisition, modeling, and storage.
Section 16: SYSTEM PERFORMANCE AND SUSTAINABILITY
Description 16: Address runtime performance and energy efficiency in MAR applications, presenting various strategies and technological advancements.
Section 17: CHALLENGING PROBLEMS
Description 17: Identify and discuss the main challenges in MAR development, including energy efficiency, low-level MAR libraries, killer MAR applications, networking, data management, and UI/UX concerns.
Section 18: SECURITY AND PRIVACY
Description 18: Discuss security and privacy issues related to MAR applications and propose guidelines to address these concerns.
Section 19: SOCIAL ACCEPTANCE
Description 19: Address the social acceptance of MAR technologies, discussing factors such as device intrusion, privacy, and safety considerations.
Section 20: CONCLUSION
Description 20: Summarize the survey's findings, highlight future directions, and discuss the potential of MAR to transform user interactions with the real world. |
A Review of Three-Dimensional Imaging Technologies for Pavement Distress Detection and Measurements | 5 | ---
paper_title: Automated Imaging Technologies for Pavement Distress Surveys
paper_content:
This circular documents state-of-the-art techniques and technologies in the acquisition of pavement surface images, and basic requirements needed to automatically identify and classify pavement surface distresses. The basics of film or magnetic tape–based image acquisition are presented. The circular also discusses digital acquisition and reports on the new laser-based imaging system with its high-quality image and low-power usage, as well as the potential for using 3-D laser imaging technology for pavement surveys.
---
paper_title: Detection of Pavement Distresses Using 3D Laser Scanning Technology
paper_content:
3D laser scanning is an exceptionally versatile and efficient technology for accurately capturing large sets of 3D coordinates. A 3D laser scanner uses reflected laser pulses to create accurate digital models of existing objects. For 3D surveys, the detection of pavement distresses, such as potholes and large-area utility cuts or patches, is a possible application where laser scanner technology excels. Traditional surveying and evaluation of pavement distresses are extremely rough and restrictive, as they imply lane or even entire road closures. In this study, accurate 3D point-cloud points with their elevations were captured during scanning and extracted with a focus on specific distress features by means of a grid-based processing approach. The experimental results indicate that the severity and coverage of distresses can be accurately and automatically quantified to calculate the needed amounts of fill material. This application is a first attempt and can assist pavement engineers in monitoring pavement performance and estimating repair funding.
---
paper_title: Elements of automated survey of pavements and a 3D methodology
paper_content:
Sound transportation infrastructure is critical for economic development and sustainability. Pavement condition is a primary concern among agencies of the roadway infrastructure. Automation has become possible in recent years on collecting data and producing results for certain aspects of pavement performance, while challenges remain in several other categories, such as automated cracking survey. This paper reviews the technological advances on automated survey of pavements, and discusses the most recent breakthroughs by the team led by the author in using 3D laser imaging for capturing 1 mm surface images of pavements.
---
paper_title: Pavement cracking measurements using 3D laser-scan images
paper_content:
Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface as it is carried by a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum at 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents the field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
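A simplified version of depth-based crack detection on a single transverse profile is sketched below: points that drop below a locally smoothed surface by more than a threshold are flagged as crack candidates. The window size, threshold and synthetic profile are illustrative and are not the parameters used by the system described above.

    def moving_median(values, window=15):
        half = window // 2
        out = []
        for i in range(len(values)):
            seg = sorted(values[max(0, i - half): i + half + 1])
            out.append(seg[len(seg) // 2])
        return out

    def crack_candidates(profile_mm, depth_threshold_mm=2.0, window=15):
        """Indices where the surface falls below the local median by more than the threshold."""
        baseline = moving_median(profile_mm, window)
        return [i for i, (z, b) in enumerate(zip(profile_mm, baseline))
                if b - z > depth_threshold_mm]

    # synthetic transverse profile (heights in mm) with a narrow 4 mm-deep notch
    profile = [10.0] * 40
    for i in range(18, 22):
        profile[i] = 6.0
    print(crack_candidates(profile))   # -> indices of the notch

In practice candidates from successive profiles are then linked along the travel direction to classify the crack as transverse, longitudinal or alligator, which this single-profile sketch omits.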
---
paper_title: Accurate and Robust Image Alignment for Road Profile Reconstruction
paper_content:
In this paper, we propose a novel approach to the two-image alignment problem based on a functional representation of images. This allows us to derive a one-to-several correspondence, multi-scale algorithm. At the same time, it also formalizes the problem as a robust estimation problem between possible matches. We then derive an accurate, robust and faster version for the alignment of edge images. The proposed algorithm is developed and tested in the context of off-line longitudinal road profile reconstruction from stereo images.
---
paper_title: Stereo-vision applications to reconstruct the 3D texture of pavement surface
paper_content:
Characterisation of pavement surface texture has significant effects on ride comfort and road safety. Pavement texture is typically reported as a single attribute, such as mean profile depth, root mean square roughness or hydraulic radius, which limits the usefulness of information extracted from texture measurements. Therefore, advanced methods that characterise pavement texture in three dimensions are needed. This paper reviews recent advances in the development of two imaging-based texture evaluation methods. The main objective of these methods is to recover the 3D heights of the pavement surface. Also, the validation of the proposed image-based texture indicators is examined. Results show that image-based techniques can be successfully applied to recover the 3D heights of pavement surface textures and provide substantial information on the friction and noise characteristics of the surface.
---
paper_title: 3D surface profile equipment for the characterization of the pavement texture – TexScan
paper_content:
Loads from vehicles alter the functional and structural characteristics of road pavements, directly affecting the loss of pavement resistance and the users' comfort and safety. These alterations require constant observation and analysis of an extensive area of road surface with high precision. To this end, a new scanning prototype machine was developed, capable of acquiring 3D road surface data and characterizing road texture through two algorithms that calculate the Estimated Texture Depth (ETD) and Texture Profile Level (L) indicators. The experimental results obtained from nine road samples validate the developed algorithms for texture analysis and show good agreement between the scanning prototype equipment and the traditional Sand Patch Method.
---
paper_title: Pavement Crack Detection Using High-Resolution 3D Line Laser Imaging Technology
paper_content:
With the advancement of 3D sensor and information technology, a high-resolution, high-speed 3D line laser imaging system has become available for pavement surface condition data collection. This paper presents preliminary results of a research project sponsored by the U.S. Department of Transportation (DOT) Research and Innovative Technology Administration (RITA) and the Commercial Remote Sensing and Spatial Information (CRS&SI) program. However, the data resolution limits the detection of hairline cracks to approximately 1 mm. The findings are crucial for transportation agencies to use when determining their automated pavement survey policies. Recommendations for future research are discussed in the paper.
---
paper_title: Laser Scanning on Road Pavements: A New Approach for Characterizing Surface Texture
paper_content:
The surface layer of a road pavement is particularly important in relation to satisfying the primary demands of locomotion, such as safety and eco-compatibility. Among these pavement surface characteristics, the "texture" appears to be one of the most interesting with regard to the attainment of skid resistance. Specifications and regulations, providing a wide range of functional indicators, act as guidelines to satisfy the performance requirements. This paper describes an experiment on the use of laser scanner techniques on various types of asphalt for texture characterization. The use of high-precision laser scanners, such as the triangulation types, is proposed to expand the analysis of road pavement from the commonly and currently used two-dimensional method to a three-dimensional one, with the aim of extending the range of the most important parameters for these kinds of applications. Laser scanners can be used in an innovative way to obtain information on the areal surface layer through a single measurement, with data homogeneity and representativeness. The described experience highlights how the laser scanner is used for both laboratory experiments and tests in situ, with particular attention paid to factors that could potentially affect the survey.
---
paper_title: Pothole Properties Measurement through Visual 2D Recognition and 3D Reconstruction
paper_content:
Current pavement condition assessment methods are predominantly manual and time consuming. Existing pothole recognition and assessment methods rely on 3D surface reconstruction that requires high equipment and computational costs or relies on acceleration data which provides preliminary results. This paper presents an inexpensive solution that automatically detects and assesses the severity of potholes using vision-based data for both 2D recognition and for 3D reconstruction. The combination of these two techniques is used to improve recognition results by using visual and spatial characteristics of potholes and measure properties (width, number, and depth) that are used to assess severity of potholes. The number of potholes is deduced with 2D recognition whereas the width and depth of the potholes is obtained with 3D reconstruction. The proposed method is validated on several actual potholes. The results show that the proposed inexpensive and visual method holds promise to improve automated pothole detection and severity assessment.
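Once a pothole region has been located in the imagery, its severity measures can be read off a reconstructed height grid roughly as follows: fit a reference level to the intact surface around the region, take depth as the largest drop below that level, and take width from the region's extent. The grid, mask, cell size and synthetic values below are made up for illustration and are not the procedure or data from the paper.

    def pothole_metrics(height_grid_mm, region_mask, cell_size_mm=5.0):
        """Depth and width of a pothole on a regular height grid (illustrative)."""
        rows, cols = len(height_grid_mm), len(height_grid_mm[0])
        # reference level: mean height of the surrounding, intact cells
        surround = [height_grid_mm[r][c] for r in range(rows) for c in range(cols)
                    if not region_mask[r][c]]
        reference = sum(surround) / len(surround)
        inside = [(r, c) for r in range(rows) for c in range(cols) if region_mask[r][c]]
        depth = max(reference - height_grid_mm[r][c] for r, c in inside)
        cols_hit = {c for _, c in inside}
        rows_hit = {r for r, _ in inside}
        width = max(len(cols_hit), len(rows_hit)) * cell_size_mm
        return depth, width

    # 6x6 synthetic patch: flat at 0 mm with a 30 mm-deep pothole in the middle
    grid = [[0.0] * 6 for _ in range(6)]
    mask = [[False] * 6 for _ in range(6)]
    for r in range(2, 4):
        for c in range(2, 5):
            grid[r][c] = -30.0
            mask[r][c] = True
    print(pothole_metrics(grid, mask))   # -> (30.0, 15.0)

The 2D recognition step supplies the region mask and the count of potholes; the 3D reconstruction supplies the height grid from which depth and width are measured, as in this sketch.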
---
paper_title: Measurement and Characterization of Asphalt Pavement Surface Macrotexture Using Three Dimensional Laser Scanning Technology
paper_content:
Pavement macrotexture plays a critical role in highway users' safety. It is also closely related to tire-pavement interaction noise. In recent years, continuous effort has been made toward three-dimensional (3D) macrotexture measurement in the field to accommodate pavement management. This study presents a newly developed macrotexture measuring system with 3D laser scanning technology. The underlying system demonstrates a series of desirable properties including: (1) 3D surface feature, (2) high speed data collection, (3) large sampling capacity, (4) high resolution and accuracy, and (5) ease of operation. Based on data collected on a real-world asphalt pavement by the system, it is demonstrated that the existing pavement macrotexture evaluation indexes are extended from two-dimensional (2D) to 3D. By comparison, it is shown that the 3D indexes are able to characterize pavement macrotexture in a more comprehensive and realistic manner. The correlation among the 3D indexes is also evaluated, which provides pavement researchers and engineers more flexibility in selecting relevant indexes in macrotexture property evaluation.
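The 2D-to-3D extension mentioned above amounts to computing the same statistic over an areal height map instead of a single profile; root-mean-square roughness is a convenient example. The synthetic surface below is illustrative only, and the paper's own 3D indexes are not limited to this one statistic.

    import math

    def rms_2d(profile_mm):
        """RMS roughness of a single profile about its mean (2D index)."""
        mean = sum(profile_mm) / len(profile_mm)
        return math.sqrt(sum((z - mean) ** 2 for z in profile_mm) / len(profile_mm))

    def rms_3d(surface_mm):
        """Same statistic taken over all points of an areal height map (3D index)."""
        flat = [z for row in surface_mm for z in row]
        return rms_2d(flat)

    # synthetic areal height map (mm): a small periodic texture pattern
    surface = [[0.1 * ((r + c) % 4) for c in range(50)] for r in range(20)]
    print(rms_2d(surface[0]), rms_3d(surface))

Because the areal version averages over many transverse positions as well as along the travel direction, it is less sensitive to where a single profile happens to be drawn, which is one argument the paper makes for 3D indexes.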
---
paper_title: Aggregate Surface Areas Quantified through Laser Measurements for South African Asphalt Mixtures
paper_content:
For several decades, efforts have been made by engineers and researchers in road and airfield pavements and railroads to develop methods/procedures for accurate quantification of aggregate shape and packing properties. The difficult part of the process has been the fact that aggregate particles have irregular and nonideal shapes. New research capabilities, including laser-based technology, can effectively address the difficulties associated with aggregate shape measurements to optimize asphalt mix design. This paper introduces the use of a three-dimensional (3D) laser scanning method to directly measure the surface area of aggregates used in road pavements in South Africa. As an application of the laser-based measurements, the asphalt film thicknesses of five typical South African mixtures were calculated and compared with the film thicknesses calculated from traditional methods. Based on the laser scanning method, new surface area factors were developed for coarse aggregates used in the asphalt mixtures. Overall, the study demonstrated applicability of 3D laser scanning method to characterize coarse aggregates.
---
paper_title: PRIMITIVE-BASED CLASSIFICATION OF PAVEMENT CRACKING IMAGES
paper_content:
Collection and analysis of pavement distress data are receiving attention for their potential to improve the quality of information on pavement condition. We present an approach for the automated classification of asphalt pavement distresses recorded on video or photographic film. Based on a model that describes the statistical properties of pavement images, we develop algorithms for image enhancement, segmentation, and distress classification. Image enhancement is based on subtraction of an “average” background; segmentation assigns one of four possible values to pixels based on their likelihood of belonging to the object. The classification approach proceeds in two steps: in the first step, the presence of primitives (building blocks of the various distresses) is identified, and in the second step, classification of images to a distress type (using the results from the first step) takes place. The system addresses the following distress types: longitudinal, transverse, block, alligator cracking, and plai...
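The enhancement and segmentation steps can be sketched as follows: estimate an "average background" with a large smoothing window, subtract it, and label pixels by how strongly they fall below that background (darker pixels are crack candidates). The window size, the two thresholds and the three-level labelling are illustrative simplifications; the paper's own model assigns one of four likelihood-based values.

    def box_blur(img, radius=5):
        rows, cols = len(img), len(img[0])
        out = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                vals = [img[rr][cc]
                        for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                        for cc in range(max(0, c - radius), min(cols, c + radius + 1))]
                out[r][c] = sum(vals) / len(vals)
        return out

    def segment(img, strong=30, weak=15):
        """Label pixels 2 (likely crack), 1 (possible crack) or 0 (background)."""
        background = box_blur(img)                    # crude 'average background'
        labels = [[0] * len(img[0]) for _ in img]
        for r, row in enumerate(img):
            for c, v in enumerate(row):
                diff = background[r][c] - v           # cracks are darker than the background
                labels[r][c] = 2 if diff > strong else 1 if diff > weak else 0
        return labels

    # synthetic 20x20 grey image: pavement at value 120 with a dark vertical crack at value 60
    img = [[120] * 20 for _ in range(20)]
    for r in range(20):
        img[r][10] = 60
    print(sum(v == 2 for row in segment(img) for v in row))   # number of strong crack pixels

The labelled pixels are then grouped into primitives (short oriented segments), whose orientation and arrangement drive the longitudinal/transverse/block/alligator classification described above.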
---
paper_title: An Unmanned Aerial Vehicle-Based Imaging System for 3D Measurement of Unpaved Road Surface Distresses
paper_content:
Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. However, the assessment of unpaved road conditions has rarely been addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, timely identification and rectification of deformation on such roads is important. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a three-dimensional (3D) surface model over a road distress area for distress measurement. The system consists of a low-cost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver, an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites with roads of various surface distresses. The experiments show that the system is capable of providing 3D information on surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that sub-centimeter measurement accuracy is readily achieved. The comparison of the derived 3D information with on-site manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.
---
paper_title: Comparison of pavement surface texture determination by sand patch test and 3D laser scanning
paper_content:
A modern highway must be capable of providing traffic safety and comfort to passengers, as well as efficient and economical transportation. In view of the increase in the number of traffic accidents due to developments in the automotive industry, traffic safety has received considerable attention in recent years. Skid resistance, on which road safety depends, is closely related to the pavement surface texture. Deterioration due to traffic loads, especially the polishing effect, involves a change in surface texture. Efforts are therefore needed to develop more advanced technologies for evaluating pavement surface texture. In this study, a 3D laser scanner was utilized to quantify the mean profile depth (MPD) of a pavement at a static location. The surface texture of asphalt concrete pavements was scanned at 31 different locations and the results were compared with the results of the sand patch test. It was found that there is a good correlation between the MPD as measured by 3D laser scanning and the mean texture depth (MTD) as measured by the volumetric method (sand patch test).
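To make the two indicators being compared concrete, here is a small illustrative sketch (not code from the study; the sample values and function names are invented) of the volumetric mean texture depth from the sand patch test, MTD = 4V/(πD²), and a simplified mean-profile-depth computation over a scanned line profile:

```python
import numpy as np

def mtd_sand_patch(volume_mm3: float, diameter_mm: float) -> float:
    """Mean texture depth from the sand patch test: MTD = 4*V / (pi * D^2)."""
    return 4.0 * volume_mm3 / (np.pi * diameter_mm ** 2)

def mpd_from_profile(heights_mm: np.ndarray, spacing_mm: float,
                     baseline_mm: float = 100.0) -> float:
    """Simplified mean profile depth over a scanned line profile.

    Each baseline segment is split into two halves; the average of the two
    half-peaks, minus the segment mean, is that segment's profile depth,
    and MPD is the mean over all complete segments.
    """
    pts = int(round(baseline_mm / spacing_mm))
    depths = []
    for start in range(0, len(heights_mm) - pts + 1, pts):
        seg = heights_mm[start:start + pts] - heights_mm[start:start + pts].mean()
        half = pts // 2
        depths.append(0.5 * (seg[:half].max() + seg[half:].max()))
    return float(np.mean(depths))

# Synthetic 1 m profile sampled every 0.5 mm, plus a typical sand patch reading
profile = 0.4 * np.sin(np.linspace(0, 60 * np.pi, 2000))        # heights in mm
print("MPD:", round(mpd_from_profile(profile, spacing_mm=0.5), 3), "mm")
print("MTD:", round(mtd_sand_patch(volume_mm3=25000, diameter_mm=160), 3), "mm")
```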
---
paper_title: Texas Department of Transportation 3D Transverse Profiling System for High-Speed Rut Measurement
paper_content:
Pavement rutting is a critical measure of road condition. Severe rutting indicates road structure deformation and exposes drivers to hazards, especially when it holds rainwater. The Pavement Management Information System requires ruts to be measured regularly for pavement condition score calculation. In the past few decades, a number of automated rut-measurement devices have been developed and used for highway speed data collection. However, all these devices exhibit limitations on measuring ground truth in practice. This paper introduces a high-speed true three-dimensional (3D) pavement surface measurement tool that can produce accurate rut data at highway speeds. Network-level data collections were conducted and the results analyzed to evaluate and verify the system.
---
paper_title: CHARACTERIZATION OF ROAD MICROTEXTURE BY MEANS OF IMAGE ANALYSIS
paper_content:
Abstract: Road surface microtexture (sub-millimeter scale) is essential for pavement skid resistance. However, its measurement is only possible in the laboratory on cores taken from trafficked roads, and is time-consuming. For efficient road monitoring, it is necessary to develop faster methods usable on-site. A collaboration has been running for 2 years between LCPC and the laboratory Signal, Image and Communication (SIC) to develop a measurement and characterization method for road microtexture based on image analysis. This paper deals with two complementary works: • The measurement of road microtexture. Research is focused on the image measurement and extraction of roughness information from images. The prototype using a high-resolution camera is described. The procedure separating relief from aspect information using a photometric model for the surface is given. Image-based relief variation is compared to relief variation obtained through a laser sensor. • The characterization of road microtexture. This characterization, obtained through a geometrical and frequential analysis of images, leads to descriptors related to the shape and the density of surface asperities. Experimental programmes were carried out to validate the feasibility of measuring on-site images and to correlate surface descriptors to friction. Results are presented and discussed. Perspectives for future work are given.
---
paper_title: Road surface inspection using laser scanners adapted for the high precision 3D measurements of large flat surfaces
paper_content:
In this paper an optical configuration based on autosynchronized laser scanning is proposed for the 3D measurement of road surfaces. The advantages of this technique over classical triangulation methods are exposed. The road inspection system developed at the National Optics Institute (NOI) using this type of laser telemetry is also presented. This system uses two autosynchronized laser scanners in order to obtain transverse 3D and intensity profiles of road surfaces. Also described are simple algorithms which detect and measure rutting and cracking conditions. Results include rut measurements on both real and simulated 3D road profiles and a crack map of an actual pavement section.
---
paper_title: A New Rutting Measurement Method Using Emerging 3D Line-Laser-Imaging System
paper_content:
Rut depth is one of the important pavement performance measures. Rut depth has traditionally been measured using a manual rutting measurement, which is time-consuming, labor-intensive, and dangerous. More recently, point-based bar systems (e.g., 3, 5 points) have been used by some agencies. However, studies have shown these systems might not be able to accurately measure rut depth because of the limited number of sample points. There is a need to improve the accuracy and reliability of rutting measurement. With advances in sensing technology, the emerging 3D line-laser-imaging system is now capable of acquiring high-resolution transverse profiles of more than 4,000 points. This provides a great opportunity for developing a reliable and accurate rut measurement method. However, there is no framework to handle this overwhelming amount of 3D range-based pavement data. A framework is proposed in this paper to acquire, process, analyze, and visualize the high-resolution 3D pavement data collected using the emerging 3D line-laser-imaging system. The proposed framework includes 1) data acquisition using the sensing system, 2) data processing, 3) data segmentation, 4) data statistical analysis, 5) data visualization, and 6) decision support. A case study carried out on Interstate Highway 95 (I-95) near Savannah, Georgia, at highway speed is used to demonstrate the applicability of the proposed framework.
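As a rough illustration of how rut depth can be read from one of the dense transverse profiles mentioned above, the sketch below applies a simplified "taut wire" rule (the largest gap between the profile and its upper convex hull). This is not the paper's framework; the synthetic lane profile and all dimensions are invented:

```python
import numpy as np

def rut_depth(x_mm: np.ndarray, z_mm: np.ndarray) -> float:
    """Largest gap between the profile and a taut wire stretched across it.

    Points are assumed ordered by x; the wire is the upper convex hull.
    """
    hull = []                                    # monotone-chain upper hull
    for p in zip(x_mm, z_mm):
        while len(hull) >= 2:
            (x1, z1), (x2, z2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - z1) - (z2 - z1) * (p[0] - x1) >= 0:
                hull.pop()                       # keep only right turns
            else:
                break
        hull.append(p)
    hx = np.array([q[0] for q in hull])
    hz = np.array([q[1] for q in hull])
    wire = np.interp(x_mm, hx, hz)               # wire height above each sample
    return float(np.max(wire - z_mm))

# ~4,000-point synthetic transverse profile of a 3.6 m lane with two ruts
x = np.linspace(0, 3600, 4000)
z = -8.0 * (np.exp(-((x - 900) / 250) ** 2) + np.exp(-((x - 2700) / 250) ** 2))
print("rut depth:", round(rut_depth(x, z), 2), "mm")
```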
---
paper_title: Critical Assessment of Measuring Concrete Joint Faulting Using 3D Continuous Pavement Profile Data
paper_content:
Abstract: Faulting has traditionally been collected by using manual methods, which are labor intensive, time-consuming, and hazardous to workers and drivers. Therefore, alternative methods for effectively and safely collecting faulting data are needed. With emerging laser technology originally designed for crack detection, high-resolution, full lane-width coverage, three-dimensional (3D) continuous pavement profile data can now be acquired. This paper critically assesses the feasibility of using this 3D continuous pavement profile data for measuring faulting with a special focus on accuracy and repeatability. Controlled field tests were conducted to evaluate the accuracy for faulting in different ranges. Field tests were conducted at highway speeds on I-16 in Georgia to evaluate the repeatability and feasibility of the proposed method. Results show the proposed method can estimate faulting with an average error of less than 1 mm compared with those measured using the Georgia fault meter, and it can achieve ...
---
paper_title: Automated pavement distress inspection based on 2D and 3D information
paper_content:
During the last few decades, many efforts have been made to produce automatic inspection systems to meet the specific requirements in assessing distress on road surfaces using video cameras and image processing algorithms. However, due to the noisy images from pavement surfaces, only limited success has been achieved. One major issue with purely video-based systems is their inability to discriminate dark areas not caused by pavement distress, such as tire marks, oil spills, shadows, and recent fillings. To overcome the limitation of conventional imaging-based methods, a probabilistic relaxation technique based on 3-dimensional (3D) information is proposed in this paper. The primary goal of this technique is to integrate conventional image processing techniques with stereovision technology to obtain an accurate topological structure of the road defects. Simulation results show the proposed system is effective and robust on a variety of pavement surfaces.
---
paper_title: Using 3D laser profiling sensors for the automated measurement of road surface conditions
paper_content:
In order to maximize road maintenance funds and optimize the condition of road networks, pavement management systems need detailed and reliable data on the status of the road network. To date, reliable crack and raveling data has proven difficult and expensive to obtain. To solve this problem, over the last 10 years Pavemetrics inc. in collaboration with INO (National Optics Institute of Canada) and the MTQ (Ministere des Transports du Quebec) have been developing and testing a new 3D technology called the LCMS (Laser Crack Measurement System).
---
paper_title: 3D reconstruction of road surfaces using an integrated multi-sensory
paper_content:
In this paper, we present our experience in building a mobile imaging system that incorporates multi-modality sensors for road surface mapping and inspection applications. Our proposed system leverages 3D laser-range sensors, video cameras, global positioning systems (GPS) and inertial measurement units (IMU) towards the generation of photo-realistic, geometrically accurate, geo-referenced 3D models of road surfaces. Based on our summary of the state-of-the-art systems for a road distress survey, we identify several challenges in the real-time deployment, integration and visualization of the multi-sensor data. Then, we present our data acquisition and processing algorithms as a novel two-stage automation procedure that can meet the accuracy requirements with real-time performance. We provide algorithms for 3D surface reconstruction to process the raw data and deliver detail preserving 3D models that possess accurate depth information for characterization and visualization of cracks as a significant improvement over contemporary commercial video-based vision systems.
---
paper_title: A real-time 3D scanning system for pavement rutting and pothole detections
paper_content:
Rutting and potholes are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and safe traffic. This paper introduces a real-time, automated inspection system devoted to detecting these distress features using high-speed transverse scanning. The detection principle is based on the dynamic generation and characterization of 3D pavement profiles obtained from structured light measurements. The system implementation mainly involves three tasks: multi-view coplanar calibration, sub-pixel laser stripe location, and pavement distress recognition. The multi-view coplanar scheme was employed in the calibration procedure to increase the feature points and to make the points distributed across the field of view of the camera, which greatly improves the calibration precision. The laser stripe locating method was implemented in four steps: median filtering, coarse edge detection, fine edge adjusting, and stripe curve mending and interpolation by cubic splines. The pavement distress recognition algorithms include line segment approximation of the profile, searching for the feature points, and parameter calculations. The parameter data of a curve segment between two feature points, such as width, depth and length, were used to differentiate rutting and potholes under different constraints. Preliminary experimental results show that the system is capable of locating these pavement distresses, and meets the needs for real-time and accurate pavement inspection.
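The sub-pixel stripe-location step lends itself to a short sketch. The version below is a simplification (median filtering followed by an intensity-weighted centroid per column); the system's actual four-step procedure with edge adjustment and spline mending is more elaborate, and the threshold and synthetic frame are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def locate_stripe(image: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Sub-pixel row of the laser stripe in every column (NaN where absent)."""
    img = median_filter(image.astype(float), size=3)       # suppress speckle noise
    rows = np.arange(img.shape[0])[:, None]
    weights = np.where(img >= threshold, img, 0.0)         # coarse stripe mask
    col_sum = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        centroid = (weights * rows).sum(axis=0) / col_sum  # weighted centroid
    centroid[col_sum == 0] = np.nan
    return centroid

# Synthetic frame: a Gaussian stripe near row 240 with a gentle slope
r = np.arange(480)[:, None]
c = np.arange(640)[None, :]
frame = 200.0 * np.exp(-((r - (240 + 0.02 * c)) ** 2) / (2 * 2.0 ** 2))
print(locate_stripe(frame)[:5])   # ~[240.0, 240.02, 240.04, ...]
```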
---
paper_title: Laser Scan System to Establish 3-D Surface Texture and Predict Friction of Pavement
paper_content:
Pavement friction is a topic of major concern to drivers and engineers, and surface texture is widely regarded as a key factor influencing it. The objective of this study is to establish the relationship between 3D texture and friction. The mixtures studied include Dense Graded Asphalt Concrete (DGAC), Stone Mastic Asphalt (SMA), and Porous Asphalt (PA). A High Definition Scan Texture Machine (HDSTM) with a 2D laser CCD was adopted to measure the 2D texture of asphalt concrete specimens, and the British Portable Tester (BPN) was also used to evaluate the friction of various mixture specimen surfaces. The study further attempted to create an initial 3D model from the HDSTM data and a computer simulation program. The correlation coefficient between the ratio of surface area per unit area (SA/A) and friction was up to 0.8. SA/A can be regarded as the most feasible factor for estimating the skid resistance of a mixture surface. Based on the above results, the 3D texture parameter is valuable for the evaluation of friction and is worthy of further study.
---
paper_title: Pavement Distress Analysis Using Image Processing Techniques
paper_content:
This paper demonstrates the feasibility of applying image processing techniques to the analysis of pavement distress due to cracking. Pavement image samples were obtained using a custom-designed data acquisition system called the Automatic Crack Monitor (ACM). The image samples containing pavement cracks are analyzed, and quantitative measures, called crack parameters, are extracted using techniques described in this paper. The crack parameters are necessary measures used in calculations of the Pavement Serviceability Index (PSI), which is used by highway maintenance engineers to decide whether a certain pavement section needs to be repaired. Experimental results are shown and the potential hardware implementation of the developed techniques is also discussed.
---
paper_title: Three-View Stereo Analysis
paper_content:
This correspondence describes a new stereo analysis method using three views, in which correspondence is established among three images taken from triangularly configured viewpoints. Each match-point candidate obtained by the initial matching between the first and second images is easily examined independently of the other candidates using the third image. The correspondence determination is simple, fast, and reliable. Additionally, this analysis method allows occlusion to be dealt with explicitly. The effectiveness of the three-view stereo-analysis method is demonstrated by simulation and real object experiments. Ambiguous matches are sufficiently avoided for polyhedra. Position errors are less than 2.5 mm (about 0.4 percent) with camera-object spacing of 630 mm.
---
paper_title: Potential of Low-Cost, Close-Range Photogrammetry Toward Unified Automatic Pavement Distress Surveying
paper_content:
Automatic pavement distress detection and data collection is important for pavement management systems. It is estimated that pavement defects cause damage costing $10 billion/year in the US alone. Despite the importance of image-based distress detection systems, they are still semi-automatic to a great extent. They rely internally on one or more threshold values during processing or may need a pre-processing stage, and the quality is affected by shadows and low or excessive illumination, among other factors. After years of research, processing still typically relies heavily on global or in-context pixel-content analysis. Such systems lack robust sensor modeling and, hence, robust detection and modeling, which cannot be achieved directly through 2D image-space analysis. The exploitation of arrays of laser profilers for 3D data acquisition is an expensive approach and has limitations for enhancing or replacing image-based output. Alternatively, 3D surfaces can be generated using stereo vision techniques. This research has investigated close-range photogrammetry as a robust approach to overcome the above disadvantages. The experimental work is carried out using a non-metric DSLR camera with its built-in flash and natural daylight as sources of illumination. Initial investigations show significant potential for 3D distress detection and modeling with higher spatial precision and a higher level of automation, while retaining 2D color and shading information for data fusion. The output of automatic photogrammetric processing can be further exploited directly in existing automated and semi-automated systems for updating the content, analysis and visualization of pavement management systems (PMS) and geographic information systems (GIS).
---
paper_title: Stereo-vision applications to reconstruct the 3D texture of pavement surface
paper_content:
Characterisation of pavement surface texture has significant effects on ride comfort and road safety. Pavement texture is typically reported as a single attribute, such as mean profile depth, root mean square roughness or hydraulic radius, which limits the usefulness of information extracted from texture measurements. Therefore, advanced methods that characterise pavement texture in three dimensions are needed. This paper reviews recent advances in the development of two imaging-based texture evaluation methods. The main objective of these methods is to recover the 3D heights of the pavement surface. Also, the validation of the proposed image-based texture indicators is examined. Results show that image-based techniques can be successfully applied to recover the 3D heights of pavement surface textures and provide substantial information on the friction and noise characteristics of the surface.
---
paper_title: 3D surface profile equipment for the characterization of the pavement texture – TexScan
paper_content:
Vehicle loads alter the functional and structural characteristics of road pavements, which directly affects pavement resistance and the users’ comfort and safety. Those alterations require constant, high-precision observation and analysis of extensive areas of road surface. For this purpose, a new scanning prototype machine was developed, capable of acquiring 3D road surface data and characterizing the road texture through two algorithms that calculate the Estimated Texture Depth (ETD) and Texture Profile Level (L) indicators. The experimental results obtained from nine road samples validate the developed algorithms for texture analysis and show good agreement between the scanning prototype equipment and the traditional Sand Patch Method.
---
paper_title: Variable baseline/resolution stereo
paper_content:
We present a novel multi-baseline, multi-resolution stereo method, which varies the baseline and resolution proportionally to depth to obtain a reconstruction in which the depth error is constant. This is in contrast to traditional stereo, in which the error grows quadratically with depth, which means that the accuracy in the near range far exceeds that of the far range. This accuracy in the near range is unnecessarily high and comes at significant computational cost. It is, however, non-trivial to reduce this without also reducing the accuracy in the far range. Many datasets, such as video captured from a moving camera, allow the baseline to be selected with significant flexibility. By selecting an appropriate baseline and resolution (realized using an image pyramid), our algorithm computes a depthmap which has these properties: 1) the depth accuracy is constant over the reconstructed volume, 2) the computational effort is spread evenly over the volume, 3) the angle of triangulation is held constant w.r.t. depth. Our approach achieves a given target accuracy with minimal computational effort, and is orders of magnitude faster than traditional stereo.
---
paper_title: Discovering and exploiting 3D symmetries in structure from motion
paper_content:
Many architectural scenes contain symmetric or repeated structures, which can generate erroneous image correspondences during structure from motion (Sfm) computation. Prior work has shown that the detection and removal of these incorrect matches is crucial for accurate and robust recovery of scene structure. In this paper, we point out that these incorrect matches, in fact, provide strong cues to the existence of symmetries and structural regularities in the unknown 3D structure. We make two key contributions. First, we propose a method to recover various symmetry relations in the structure using geometric and appearance cues. A set of structural constraints derived from the symmetries are imposed within a new constrained bundle adjustment formulation, where symmetry priors are also incorporated. Second, we show that the recovered symmetries enable us to choose a natural coordinate system for the 3D structure where gauge freedom in rotation is held fixed. Furthermore, based on the symmetries, 3D structure completion is also performed. Our approach significantly reduces drift through ”structural” loop closures and improves the accuracy of reconstructions in urban scenes.
---
paper_title: A multiple-baseline stereo
paper_content:
A stereo matching method that uses multiple stereo pairs with various baselines generated by a lateral displacement of a camera to obtain precise distance estimates without suffering from ambiguity is presented. Matching is performed simply by computing the sum of squared-difference (SSD) values. The SSD functions for individual stereo pairs are represented with respect to the inverse distance and are then added to produce the sum of SSDs. This resulting function is called the SSSD-in-inverse-distance. It is shown that the SSSD-in-inverse-distance function exhibits a unique and clear minimum at the correct matching position, even when the underlying intensity patterns of the scene include ambiguities or repetitive patterns. The authors first define a stereo algorithm based on the SSSD-in-inverse-distance and present a mathematical analysis to show how the algorithm can remove ambiguity and increase precision. Experimental results with real stereo images are presented to demonstrate the effectiveness of the algorithm.
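A compact sketch of the SSSD-in-inverse-distance idea for a single reference pixel is given below; it is an illustration rather than the authors' implementation, and the function and parameter names are invented:

```python
import numpy as np

def sssd_in_inverse_distance(ref, images, baselines, focal_px, x, y,
                             inv_depths, half_win=3):
    """Sum of SSDs over all baselines, evaluated on a common 1/Z axis.

    For baseline B_i the disparity at inverse depth 1/Z is d_i = f * B_i / Z,
    so the SSD curves of the individual stereo pairs can simply be added.
    The pixel (x, y) must be far enough from the border for every disparity.
    """
    ref = ref.astype(float)
    patch = ref[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    sssd = np.zeros(len(inv_depths))
    for i, inv_z in enumerate(inv_depths):
        for img, b in zip(images, baselines):
            d = int(round(focal_px * b * inv_z))          # disparity of this pair
            cand = img.astype(float)[y - half_win:y + half_win + 1,
                                     x - d - half_win:x - d + half_win + 1]
            sssd[i] += np.sum((patch - cand) ** 2)
    best_inv_z = inv_depths[int(np.argmin(sssd))]
    return 1.0 / best_inv_z, sssd
```

Because every stereo pair is evaluated at the same candidate 1/Z, the summed curve tends to exhibit a single clear minimum even when an individual pair is ambiguous, which is the paper's central observation.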
---
paper_title: A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms
paper_content:
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
---
paper_title: Automated Pavement Distress Survey: A Review and A New Direction
paper_content:
Pavement condition survey normally includes surface distresses, such as cracking, rutting, and other surface defects. Broadly, pavement roughness is also included as a condition survey item in some literature. This paper reviews past research efforts in this area conducted at several institutions, including the automated survey system of pavement surface cracking at the University of Arkansas. The paper also proposes a new direction of technology development through the use of stereovision technology for the comprehensive survey of pavement condition in its broad definition. The goal is to develop a working system that is able to establish three-dimensional (3D) surface model of pavements for the entire pavement lane-width at 1 to 2-millimeter resolution so that comprehensive condition information can be extracted from the 3D model.
---
paper_title: Single lens 3D-camera with extended depth-of-field
paper_content:
Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.
---
paper_title: Light Field Photography with a Hand-held Plenoptic Camera
paper_content:
This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.
---
paper_title: Autofocusing algorithm selection in computer microscopy
paper_content:
Autofocusing is a fundamental technology for automated biological and biomedical analyses and is indispensable for routine use of microscopes on a large scale. This paper presents a comprehensive comparison study of 18 focus algorithms in which a total of 139,000 microscope images are analyzed. Six samples were used with three observation methods (bright field, phase contrast, and differential interference contrast (DIC)) under two magnifications (100× and 400×). A ranking methodology is proposed, based on which the 18 focus algorithms are ranked. Image pre-processing is also conducted to extensively reveal the performance and robustness of the focus algorithms. The presented guidelines allow for the selection of the optimal focus algorithm for different microscopy applications.
---
paper_title: Depth from Focusing and Defocusing
paper_content:
The problem of obtaining depth information from focusing and defocusing is studied. In depth from focusing, instead of the Fibonacci search, which is often trapped in local maxima, the combination of Fibonacci search and curve fitting is proposed. This combination leads to an unprecedentedly accurate result. A model of the blurring effect that takes geometric blurring as well as imaging blurring into consideration in the calibration of the blurring model is proposed. In spectrogram-based depth from defocusing, a maximal resemblance estimation method is proposed to decrease or eliminate the window effect.
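The focus-search idea can be sketched with a simple sharpness measure and a parabolic refinement around the best coarse sample; this is only a didactic stand-in for the Fibonacci-search-plus-curve-fitting procedure and blur-model calibration described in the paper:

```python
import numpy as np

def focus_measure(image: np.ndarray) -> float:
    """Sharpness as the variance of a simple Laplacian response."""
    img = image.astype(float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def refine_peak(positions, scores):
    """Parabolic interpolation around the best coarse sample (uniform steps)."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return positions[i]
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return positions[i] + offset * (positions[i + 1] - positions[i])

# positions = lens settings swept by the camera, images = frames captured there
# scores = [focus_measure(img) for img in images]
# best_focus = refine_peak(positions, scores)
```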
---
paper_title: Monocular 3D Scene Reconstruction at Absolute Scales by Combination of Geometric and Real-aperture Methods
paper_content:
We propose a method for combining geometric and real-aperture methods for monocular 3D reconstruction of static scenes at absolute scales. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from Bundle Adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about the structure of the scene. Evaluating the algorithm on real-world data we demonstrate that it yields typical relative errors between 2 and 3 percent. Possible applications of our approach are self-localisation and mapping for mobile robotic systems and pose estimation in industrial machine vision.
---
paper_title: Triangulation-Based Approaches to Three-Dimensional Scene Reconstruction
paper_content:
Triangulation-based approaches to three-dimensional scene reconstruction are primarily based on the concept of bundle adjustment, which allows the determination of the three-dimensional point coordinates in the world and the camera parameters based on the minimisation of the reprojection error in the image plane. A framework based on projective geometry has been developed in the field of computer vision, where the nonlinear optimisation problem of bundle adjustment can to some extent be replaced by linear algebra techniques. Both approaches are related to each other in this chapter. Furthermore, an introduction to the field of camera calibration is given, and an overview of the variety of existing methods for establishing point correspondences is provided, including classical and also new feature-based, correlation-based, dense, and spatiotemporal approaches.
---
paper_title: Numerical Shape from Shading and Occluding Boundaries
paper_content:
An iterative method for computing shape from shading using occluding boundary information is proposed. Some applications of this method are shown. We employ the stereographic plane to express the orientations of surface patches, rather than the more commonly used gradient space. Use of the stereographic plane makes it possible to incorporate occluding boundary information, but forces us to employ a smoothness constraint different from the one previously proposed. The new constraint follows directly from a particular definition of surface smoothness. We solve the set of equations arising from the smoothness constraints and the image-irradiance equation iteratively, using occluding boundary information to supply boundary conditions. Good initial values are found at certain points to help reduce the number of iterations required to reach a reasonable solution. Numerical experiments show that the method is effective and robust. Finally, we analyze scanning electron microscope (SEM) pictures using this method. Other applications are also proposed.
---
paper_title: Surface Reflection: Physical and Geometrical Perspectives
paper_content:
Reflectance models based on physical optics and geometrical optics are studied. Specifically, the authors consider the Beckmann-Spizzichino (physical optics) model and the Torrance-Sparrow (geometrical optics) model. These two models were chosen because they have been reported to fit experimental data well. Each model is described in detail, and the conditions that determine the validity of the model are clearly stated. By studying reflectance curves predicted by the two models, the authors propose a reflectance framework comprising three components: the diffuse lobe, the specular lobe, and the specular spike. The effects of surface roughness on the three primary components are analyzed in detail.
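The three-component picture (diffuse lobe, specular lobe, specular spike) can be illustrated with a deliberately simplified model: a Lambertian diffuse term plus a Gaussian specular lobe around the mirror direction. This is a didactic approximation only, not the Beckmann-Spizzichino or Torrance-Sparrow formulas themselves:

```python
import numpy as np

def reflectance(normal, light, view, kd=0.7, ks=0.3, roughness=0.15):
    """Diffuse lobe plus a specular lobe that is Gaussian in the off-specular angle.

    A rough surface spreads the specular lobe; as roughness -> 0 the lobe
    collapses toward a mirror-like specular spike.
    """
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    diffuse = kd * max(float(np.dot(n, l)), 0.0)
    h = (l + v) / np.linalg.norm(l + v)                  # halfway vector
    alpha = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))  # angle off the specular peak
    return diffuse + ks * np.exp(-(alpha / roughness) ** 2)

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.3, 0.0, 1.0])
print(reflectance(n, l, view=np.array([-0.3, 0.0, 1.0])))  # near-specular viewing
print(reflectance(n, l, view=np.array([0.0, 0.8, 1.0])))   # off the specular lobe
```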
---
paper_title: Photometric method for determining surface orientation from multiple images
paper_content:
A novel technique called photometric stereo is introduced. The idea of photometric stereo is to vary the direction of incident illumination between successive images, while holding the viewing direction constant. It is shown that this provides sufficient information to determine surface orientation at each image point. Since the imaging geometry is not changed, the correspondence between image points is known a priori. The technique is photometric because it uses the radiance values recorded at a single image location, in successive views, rather than the relative positions of displaced features. Photometric stereo is used in computer-based image understanding. It can be applied in two ways. First, it is a general technique for determining surface orientation at each image point. Second, it is a technique for determining object points that have a particular surface orientation. These applications are illustrated using synthesized examples.
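Classic photometric stereo reduces to a per-pixel least-squares problem once at least three light directions are known. A minimal sketch, assuming a Lambertian surface (the variable names and the synthetic check are illustrative):

```python
import numpy as np

def photometric_stereo(images: np.ndarray, light_dirs: np.ndarray):
    """Recover per-pixel normals and albedo from k >= 3 images.

    images:     (k, h, w) intensities, one image per light direction
    light_dirs: (k, 3) unit vectors toward the light sources
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # solve L @ G = I
    G = G.reshape(3, h, w)                               # albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals, albedo

# Synthetic check: one Lambertian normal observed under three light directions
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.2, -0.1, 0.97])
imgs = (L @ n_true).reshape(3, 1, 1).clip(min=0)
n_est, rho = photometric_stereo(imgs, L)
print(np.round(n_est[:, 0, 0], 3), round(float(rho[0, 0]), 3))  # ~n_true/|n_true|, ~0.995
```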
---
paper_title: The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows
paper_content:
We present an algorithm for separating the local gradient information and Lambertian color by using 4-source color photometric stereo in the presence of highlights and shadows. We assume that the surface reflectance can be approximated by the sum of a Lambertian and a specular component. The conventional photometric method is generalized for color images. Shadows and highlights in the input images are detected using either spectral or directional cues and excluded from the recovery process, thus giving more reliable estimates of local surface parameters.
---
paper_title: Continuous Road Damage Detection Using Regular Service Vehicles
paper_content:
This paper outlines an affordable system that continuously monitors the road network for surface damage like potholes and cracks. The system consists of a structured light sensor and a camera mounted on vehicles that travel the roads on a regular basis. It makes use of sensors and equipment already present on the vehicle, like GPS on transit buses. The data is collected from many vehicles, aggregated and analyzed at a central location and the assessment results are displayed interactively to facilitate road maintenance operations. The key sensor, the data it collects and the algorithm to detect cracks and potholes are described in detail.
---
paper_title: Optical Coherence Tomography: An Emerging Technology for Biomedical Imaging and Optical Biopsy
paper_content:
Optical coherence tomography (OCT) is an emerging technology for performing high-resolution cross-sectional imaging. OCT is analogous to ultrasound imaging, except that it uses light instead of sound. OCT can provide cross-sectional images of tissue structure on the micron scale in situ and in real time. Using OCT in combination with catheters and endoscopes enables high-resolution intraluminal imaging of organ systems. OCT can function as a type of optical biopsy and is a powerful imaging technology for medical diagnostics because unlike conventional histopathology which requires removal of a tissue specimen and processing for microscopic examination, OCT can provide images of tissue in situ and in real time. OCT can be used where standard excisional biopsy is hazardous or impossible, to reduce sampling errors associated with excisional biopsy, and to guide interventional procedures. In this paper, we review OCT technology and describe its potential biomedical and clinical applications.
---
paper_title: High-Accuracy Stereo Depth Maps Using Structured Light
paper_content:
Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http://www.middlebury.edu/stereo/.
---
paper_title: Pattern Codification Strategies in Structured Light Systems
paper_content:
Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
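One classic time-multiplexed strategy from this taxonomy is binary Gray-code stripes, where the bit sequence observed at a camera pixel decodes directly to a projector column. A minimal sketch (the pattern width and bit count are arbitrary example values):

```python
import numpy as np

def gray_code_patterns(width: int, n_bits: int) -> np.ndarray:
    """One row per pattern; an entry is 1 where that projector column is lit."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                          # binary-reflected Gray code
    return np.array([(gray >> (n_bits - 1 - b)) & 1 for b in range(n_bits)])

def decode_gray(bits: np.ndarray) -> np.ndarray:
    """Per-pixel bit stack of shape (n_bits, ...) -> projector column indices."""
    gray = np.zeros(bits.shape[1:], dtype=int)
    for b in range(bits.shape[0]):                     # rebuild the Gray value, MSB first
        gray = (gray << 1) | bits[b]
    binary, shift = gray.copy(), gray >> 1
    while shift.any():                                 # Gray -> binary conversion
        binary ^= shift
        shift >>= 1
    return binary

patterns = gray_code_patterns(width=1024, n_bits=10)
observed = patterns[:, 300]                            # bits seen by one camera pixel
print(decode_gray(observed[:, None])[0])               # -> 300
```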
---
paper_title: An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger)
paper_content:
A new miniaturized camera system that is capable of 3-dimensional imaging in real-time is presented. The compact imaging device is able to entirely capture its environment in all three spatial dimensions. It reliably and simultaneously delivers intensity data as well as range information on the objects and persons in the scene. The depth measurement is based on the time-of-flight (TOF) principle. A custom solid-state image sensor allows the parallel measurement of the phase, offset and amplitude of a radio frequency (RF) modulated light field that is emitted by the system and reflected back by the camera surroundings without requiring any mechanical scanning parts. In this paper, the theoretical background of the implemented TOF principle is presented, together with the technological requirements and detailed practical implementation issues of such a distance measuring system. Furthermore, the schematic overview of the complete 3D-camera system is provided. The experimental test results are presented and discussed. The present camera system can achieve sub-centimeter depth resolution for a wide range of operating conditions. A miniaturized version of such a 3D-solid-state camera, the SwissRanger 2, is presented as an example, illustrating the possibility of manufacturing compact, robust and cost effective ranging camera products for 3D imaging in real-time.
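A small numeric sketch of the continuous-wave time-of-flight principle described above: the phase shift of the modulated light, recovered from four samples per modulation period, gives the range d = c·φ/(4π·f_mod). The four-sample demodulation form and the 20 MHz example are illustrative assumptions, not the SwissRanger implementation:

```python
import numpy as np

C = 299_792_458.0                      # speed of light, m/s

def tof_range(a0, a1, a2, a3, f_mod):
    """Range, amplitude and offset from four phase samples of a CW-ToF pixel.

    Samples are assumed to follow a_k = B + A*cos(phi + k*pi/2), so that
    phi = atan2(a3 - a1, a0 - a2) and d = c*phi / (4*pi*f_mod).
    """
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    distance = C * phase / (4 * np.pi * f_mod)
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    offset = 0.25 * (a0 + a1 + a2 + a3)
    return distance, amplitude, offset

# Example: 20 MHz modulation (7.5 m unambiguous range), target at 3 m
f_mod, true_d = 20e6, 3.0
phi = 4 * np.pi * f_mod * true_d / C
a = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(tof_range(*a, f_mod))            # distance ~3.0 m, amplitude ~50, offset ~100
```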
---
paper_title: Unsupervised Approach for Autonomous Pavement-Defect Detection and Quantification Using an Inexpensive Depth Sensor
paper_content:
Abstract: Current pavement condition–assessment procedures are highly time-consuming and laborious; in addition, these approaches pose safety threats to the personnel involved in the process. In this study, an RGB-D sensor is used to detect and quantify defects in pavements. This sensor system consists of an RGB color camera, and an infrared projector and camera that act as a depth sensor. An approach, which does not need any training, is proposed to interpret the data sensed by this inexpensive sensor. This system has the potential to be used for autonomous cost-effective assessment of road-surface conditions. Various road conditions including patching, cracks, and potholes are autonomously detected and, most importantly, quantified, using the proposed approach. Several field experiments have been carried out to evaluate the capabilities, as well as the limitations of the proposed system. The global positioning system information is incorporated with the proposed system to localize the detected defects...
---
paper_title: Outdoor RGB-D SLAM Performance in Slow Mine Detection
paper_content:
The introduction of Kinect-style RGB-D cameras has dramatically revolutionized robotics research in a very short span. However, despite their low cost and excellent indoor performance in comparison with other competing technologies, they face some basic limitations which prevent their use in outdoor environments. Perhaps for the first time, we report an outdoor application of Kinect-style cameras in a robotics application. We report the use of benchmark RGB-D SLAM algorithm in a land-mine detection application. We argue that due to the slow nature of mine-detection tasks, basic limitations of RGB-D outdoor performance are overcome or gracefully negotiated. We report results from extensive experimentation in a variety of terrains & lighting conditions for three different types of robots.
---
paper_title: Pothole tagging system
paper_content:
4th Robotics and Mechatronics Conference of South Africa (RobMech 2011), CSIR International Conference Centre, Pretoria, 23-25 November 2011
---
| Title: A Review of Three-Dimensional Imaging Technologies for Pavement Distress Detection and Measurements
Section 1: INTRODUCTION
Description 1: This section introduces the topic, mentions the motivation behind the study, and highlights the significance of using 3D imaging technologies for pavement distress detection and measurements.
Section 2: IMAGING PAVEMENTS IN 3D: STATE OF THE ART
Description 2: This section provides a summary of the current level of technology in 3D imaging of pavements for condition monitoring. It mainly focuses on the practical application of 3D imaging and compares accuracies obtained from different systems.
Section 3: A TAXONOMY OF 3D IMAGING TECHNOLOGIES
Description 3: This section provides a comprehensive review of various 3D imaging methods from the perspective of pavement imaging, including explanations and practical considerations for each technology.
Section 4: DISCUSSION
Description 4: This section compares different 3D imaging technologies considering the geometric shapes of pavement defects and their implications. It evaluates how suitable each technology is for imaging specific types of pavement distress.
Section 5: CONCLUSION
Description 5: This section summarizes the findings of the review, highlighting the predominant use of laser scanning and suggesting other potential techniques. It provides a selection procedure for different imaging methods based on their inherent properties and the dimensional details of the distresses. |
A survey of advanced ethernet forwarding approaches | 11 | ---
paper_title: Issues and approaches on extending Ethernet beyond LANs
paper_content:
Currently, LAN technology is predominantly Ethernet-based and offers packet-optimized switched technology. With more than 90 percent of Internet traffic originating from Ethernet-based LANs, efforts are underway to extend Ethernet beyond LANs into MANs and further into WANs. However, native Ethernet protocols need extensions or support from other technologies in order to succeed as MAN technology in terms of scalability, QoS, resiliency, OAM, and so on. The two emerging trends to carry Ethernet traffic across the MAN can be classified into native Ethernet (IEEE) protocol extensions, and encapsulation by another transportation technology such as MPLS networks. The goal is to offer new and challenging services such as virtual private LAN service, also known as transparent LAN service (TLS). This article presents a comprehensive overview of the required extensions/support of the Ethernet with an emphasis on the emerging provider bridge technology.
---
paper_title: Multiprotocol Label Switching Architecture
paper_content:
This document specifies the architecture for Multiprotocol Label Switching (MPLS). [STANDARDS-TRACK]
---
paper_title: Internet Group Management Protocol, Version 3
paper_content:
This document specifies Version 3 of the Internet Group Management Protocol, IGMPv3. IGMP is the protocol used by IPv4 systems to report their IP multicast group memberships to neighboring multicast routers. Version 3 of IGMP adds support for "source filtering", that is, the ability for a system to report interest in receiving packets *only* from specific source addresses, or from *all but* specific source addresses, sent to a particular multicast address. That information may be used by multicast routing protocols to avoid delivering multicast packets from specific sources to networks where there are no interested receivers.
---
paper_title: Virtual Private LAN Service
paper_content:
Hydrogen and oxygen are produced from water in a process involving the photodissociation of molecular bromine with radiant energy at wavelengths within the visible light region and a subsequent electrolytic dissociation of hydrogen halides.
---
paper_title: Hierarchical MAC address space in public Ethernet networks
paper_content:
Service providers are showing strong interest in building all-Ethernet public metropolitan networks that would compete with (and eventually replace) the existing network infrastructures based on SONET, Frame Relay, ATM and similar technologies. The main driving factor is the cost, as Ethernet-based technology is typically cheaper than any others available on the market. Despite successful initial laboratory and field deployment tests, there are still many unresolved issues related to metropolitan-scale Ethernet, such as the appropriateness of a spanning tree algorithm, broadcast flooding and MAC address table explosion in core switches. This paper focuses on the problem of MAC address table explosion by introducing a hierarchy into the address space, through Ethernet-inside-Ethernet packet encapsulation. The encapsulation allows core switches to be standard Ethernet switches, while the edge switches implement concepts presented in this paper. The proposed concept is thus transparent to the existing infrastructure and thereby allows building the network using readily available low-cost layer-2 switches.
---
paper_title: On Count-to-Infinity Induced Forwarding Loops Ethernet Networks
paper_content:
Ethernet's high performance, low cost and ubiquity have made it the dominant networking technology for many application domains. Unfortunately, its distributed forwarding topology computation protocol - the Rapid Spanning Tree Proto- col (RSTP) - can suffer from a classic "count-to-infinity" problem that may lead to a forwarding loop under certain network failures. The consequences are serious. During the period of "count-to-infinity", which can last tens of seconds even in a small network, the network can become highly congested by packets that persist in cycles in the network, even packet forwarding can fail as the forwarding tables are polluted. In this paper, we explain the origin of this problem in detail and study its behavior. We find that simply tuning RSTP's parameter settings cannot adequately address the fundamental problem with "count-to- infinity". We propose a simple and effective solution called RSTP with Epochs. This approach uses epochs of sequence numbers in protocol messages to eliminate stale protocol information in the network and allows the forwarding topology to recover in merely one round-trip time across the network.
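The epoch mechanism can be illustrated with a toy receiver that discards protocol information from older epochs, so stale distance information cannot keep circulating after a failure; this is an illustrative sketch, not the authors' implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bpdu:
    root_id: int
    root_path_cost: int
    epoch: int                 # sequence number bumped when the topology changes

class Port:
    """Remembers the best BPDU heard on a port, ignoring stale epochs."""

    def __init__(self) -> None:
        self.best: Optional[Bpdu] = None

    def receive(self, bpdu: Bpdu) -> bool:
        cur = self.best
        if cur is not None:
            if bpdu.epoch < cur.epoch:
                return False                  # stale epoch: drop, no count-to-infinity
            if (bpdu.epoch == cur.epoch and
                    (bpdu.root_id, bpdu.root_path_cost) >=
                    (cur.root_id, cur.root_path_cost)):
                return False                  # same epoch but not a better vector
        self.best = bpdu                      # newer epoch, or better within epoch
        return True

p = Port()
print(p.receive(Bpdu(root_id=1, root_path_cost=10, epoch=5)))   # True
print(p.receive(Bpdu(root_id=1, root_path_cost=30, epoch=4)))   # False (stale)
print(p.receive(Bpdu(root_id=1, root_path_cost=8,  epoch=5)))   # True (better cost)
```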
---
paper_title: Optimizing QoS aware Ethernet spanning trees
paper_content:
Ethernet is gaining in importance in both access and metro networks. As a layer 2 technology, Ethernet gives a basic framework for routing, QoS and traffic engineering (TE), as well as a protocol for building up trees. IEEE 802.1 standards define default configuration parameters considering the topology only. We propose methods for resource management in Ethernet networks through spanning tree optimization for both STP (spanning-tree protocol, IEEE 802.1D) and MSTP (multiple spanning-tree protocol, IEEE 802.1s). As a result of optimization, we assign costs to the bridge ports in the network to build trees based on these costs via STP and MSTP. These trees yield optimized routing, TE and support for different QoS classes. We show on typical metro-access networks that, through optimization, the total network throughput can be significantly increased, whether fairness is enforced or starvation of some demands is allowed. This gain can be realized by simultaneously assigning demands to trees and routing these trees.
---
paper_title: Metro Ethernet traffic engineering based on optimal multiple spanning trees
paper_content:
The flexibility, scalability, simplicity and low cost of Ethernet technology makes it an ideal networking technology for Metro networks. However, the new Ethernet-based solutions must be able to support the growing network needs of the enterprise including the various QoS requirements. Our focus in this paper is traffic engineering which is one of the integral components of QoS provisioning. We propose a scheme based on the generation and management of multiple spanning trees for near optimal traffic distribution.
---
paper_title: Traffic engineering in enterprise ethernet with multiple spanning tree regions
paper_content:
IEEE 802.1s multiple spanning tree protocol (MSTP) is part of a family of standards for local and metropolitan area networks based on Ethernet protocol. MSTP allows a set of regions to be defined whose logical union spans the entire network, which in turn defines the association between VLANs and Spanning Tree Instances. In this paper we propose an algorithmic approach for constructing multiple spanning tree regions in the enterprise network domain which will provide better convergence time, reusability of VLAN tags, protection from failures, and optimal broadcast domain size.
---
paper_title: Global open ethernet (GOE) system and its performance evaluation
paper_content:
This paper presents an overview of the global open Ethernet (GOE) architecture as a cost-effective Ethernet-based virtual private network (VPN) solution, and discusses a hardware and software implementation of a prototype system. Three main approaches have been proposed for a VPN solution on metro-area networks: resilient packet ring, Ethernet over multiprotocol label switching (EoMPLS), and virtual bridged local area network tag stacking (Q-in-Q). None of these schemes can satisfy the following requirements at the same time: network topology flexibility, affordable network functionalities, low equipment cost, and low operational cost. The proposed GOE system is designed to solve the VPN management problems of these approaches, providing MPLS VPN functionality at the low cost of an Ethernet-based solution. The key components of GOE are: 1) a novel GOE tag for high-speed switching and 2) a novel routing and protection module via per-destination multiple rapid spanning tree protocol (PD-MRSTP). Via an analytical performance evaluation of EoMPLS, Q-in-Q, and GOE, we show that the memory cost of GOE is two to three times smaller, and its network utilization 22% higher, than those of the other approaches. We have also developed a GOE prototype system and obtained the following remarkable hardware and software performance results. The GOE core switch delivered 100% of the theoretical maximum throughput (10 G) with zero packet loss even on a field programmable gate array platform, and its 10-G port density is 1.5 times denser than the best currently available products. The GOE switch using PD-MRSTP also delivered a fast protection switching time (1.975 ms), significantly faster than that of legacy Ethernet switches. These performance evaluation results prove that the proposed GOE system can be used as a cost-effective, high-performance Ethernet-based VPN solution.
---
paper_title: Alternative multiple spanning tree protocol (AMSTP) for optical Ethernet backbones
paper_content:
The availability and affordable cost of Gigabit and 10 Gigabit Ethernet switches has impacted the deployment of metropolitan area networks (MAN) and campus networks. This paper presents a new protocol, the alternative multiple spanning tree protocol (AMSTP), that uses multiple source based spanning trees for backbones using Ethernet switches. It provides minimum paths and more efficient usage of optical backbone infrastructure than currently proposed protocols such as resilient packet ring and rapid spanning tree. The protocol exhibits features similar to MAC routing protocols like Link State Over MAC (LSOM) such as optimum path and effective infrastructure usage, without requiring MAC routing due to the use of the spanning tree protocol paradigm. AMSTP is not restricted to specific topologies such as ring or tree, but performs efficiently in arbitrary topologies. Among the application areas are optical backbones of campus and MANs.
---
paper_title: OSI IS-IS Intra-domain Routing Protocol
paper_content:
This RFC is a republication of ISO DP 10589 as a service to the Internet community. This is not an Internet standard.
---
paper_title: Viking: a multi-spanning-tree Ethernet architecture for metropolitan area and cluster networks
paper_content:
Simplicity, cost effectiveness, scalability, and the economies of scale make Ethernet a popular choice for local area networks, as well as for storage area networks and increasingly metropolitan-area networks. These applications of Ethernet elevate it from a LAN technology to a ubiquitous networking technology, thus prompting a rethinking of some of its architectural features. One weakness of existing Ethernet architecture is its use of single spanning tree, which, while useful at avoiding routing loops, leads to low link utilization and long failure recovery time. To apply Ethernet to cluster networks and MANs, these problems need to be addressed. We propose a multi-spanning-tree Ethernet architecture, called Viking, that improves both aggregate throughput and fault tolerance by exploiting standard virtual LAN technology in a novel way. By supporting multiple spanning trees through VLAN, Viking makes the most of the inherent redundancies in most mesh-like networks and delivers a multi-fold throughput gain over single-spanning-tree Ethernet with the same physical network topology. It also provides much faster failure recovery, reducing the down-time to a sub-second range from that of multiple seconds in single-spanning-tree Ethernet architecture. Finally, based only on standard mechanisms, Viking is readily implementable on commodity Ethernet switches without any firmware modifications.
---
paper_title: Simple Network Management Protocol (SNMP)
paper_content:
This RFC is a re-release of RFC 1098, with a changed "Status of this Memo" section plus a few minor typographical corrections. This memo defines a simple protocol by which management information for a network element may be inspected or altered by logically remote users. [STANDARDS-TRACK]
---
paper_title: SmartBridge: a scalable bridge architecture
paper_content:
As the number of hosts attached to a network increases beyond what can be connected by a single local area network (LAN), forwarding packets between hosts on different LANs becomes an issue. Two common solutions to the forwarding problem are IP routing and spanning tree bridging. IP routing scales well, but imposes the administrative burden of managing subnets and assigning addresses. Spanning tree bridging, in contrast, requires no administration, but often does not perform well in a large network, because too much traffic must detour toward the root of the spanning tree, wasting link bandwidth.
---
paper_title: Termination Detection for Diffusing Computations
paper_content:
This invention relates to an electronically rotatable antenna which includes several radially arranged Yagi antennae having a common drive element. Reflector and director elements of each Yagi antenna are sequentially rendered operative by biasing suitable diodes short-circuiting them to a ground-plate. The radiation pattern is step-by-step rotated. Directivity is increased by short-circuiting other elements belonging to other arrays than the main one, those elements defining generatrices of a parabola having the driver element as a focus and the reflector element as an apex.
---
paper_title: STAR: a transparent spanning tree bridge protocol with alternate routing
paper_content:
With increasing demand for multimedia applications, local area network (LAN) technologies are rapidly being upgraded to provide support for quality of service (QoS). In a network that consists of an interconnection of multiple LANs via bridges, the QoS of a flow depends on the length of an end-to-end forwarding path. In the IEEE 802.1D standard for bridges, a spanning tree is built among the bridges for loop-free frame forwarding. Albeit simple, this approach does not support all-pair shortest paths. In this paper, we present a novel bridge protocol, the Spanning Tree Alternate Routing (STAR) Bridge Protocol, that attempts to find and forward frames over alternate paths that are provably shorter than their corresponding tree paths. Being backward compatible to IEEE 802.1D, our bridge protocol allows cost-effective performance enhancement of an existing extended LAN by incrementally replacing a few bridges in the extended LAN by the new STAR bridges. We develop a strategy to ascertain bridge locations for maximum performance gain. Our study shows that we can significantly improve the end-to-end performance when deploying our bridge protocol.
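The gain STAR targets can be illustrated by comparing the path a single 802.1D tree forces against the true shortest path; the toy ring topology below is hypothetical and the BFS tree merely stands in for a spanning tree rooted at the elected root.
# Toy comparison of the path a single spanning tree forces versus the network
# shortest path, i.e. the gap STAR bridges try to close.  Not the protocol itself.
import networkx as nx

ring = nx.cycle_graph(6)                 # switches 0..5 connected in a ring
root = 0
tree = nx.bfs_tree(ring, root)           # stand-in for the 802.1D tree

src, dst = 2, 4
tree_hops = nx.shortest_path_length(tree.to_undirected(), src, dst)
best_hops = nx.shortest_path_length(ring, src, dst)
print(tree_hops, best_hops)              # 4 hops over the tree vs 2 on the alternate path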
---
paper_title: Rbridges: transparent routing
paper_content:
This work describes a method of interconnecting links that combines the advantages of bridging and routing. The basic design is a replacement for a transparent bridge and makes no assumption about higher-layer protocols. It involves creating an infrastructure of switches (which we call Rbridges, for "routing bridges") in which packets are routed, although, as with bridges, layer 2 endnode location is learned through receipt of data packets. It avoids the disadvantages of bridges, since packets within the infrastructure need not be confined to a spanning tree, and packets are protected with a hop count and not proliferated while in transit, so there is no need for any artificial startup delay on ports to avoid temporary loops. This allows IP nodes to move within a multi-link campus without changing IP addresses. The paper introduces further optimizations for IP, such as avoiding flooding ARP messages through the infrastructure and, for IP nodes, allowing Rbridges to avoid learning on data packets.
---
paper_title: Achieving sub-second IGP convergence in large IP networks
paper_content:
We describe and analyse in detail the various factors that influence the convergence time of intradomain link-state routing protocols. This convergence time reflects the time required by a network to react to the failure of a link or a router. To characterise the convergence process, we first use detailed measurements to determine the time required to perform the various operations of a link-state protocol on currently deployed routers. We then build a simulation model based on those measurements and use it to study the convergence time in large networks. Our measurements and simulations indicate that sub-second link-state IGP convergence can be easily met on an ISP network without any compromise on stability.
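A back-of-the-envelope version of the decomposition the paper performs; the per-component timings below are illustrative placeholders, not measurements from the study.
# Summing the convergence components the paper analyses.  Values are placeholders.
components_ms = {
    "failure_detection": 20,     # e.g. loss of signal or fast hellos
    "lsp_generation": 10,        # origination of the new link-state packet
    "flooding_per_hop": 5,       # propagation, counted per hop here
    "spf_computation": 30,       # shortest-path-first run on each router
    "fib_update": 100,           # rewriting forwarding entries in hardware
}
hops = 3
total_ms = (components_ms["failure_detection"]
            + components_ms["lsp_generation"]
            + hops * components_ms["flooding_per_hop"]
            + components_ms["spf_computation"]
            + components_ms["fib_update"])
print(f"worst-path convergence ~ {total_ms} ms")   # 175 ms, i.e. sub-second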
---
paper_title: Generalized Multi-Protocol Label Switching (GMPLS) Signaling Functional Description
paper_content:
This document describes extensions to Multi-Protocol Label Switching (MPLS) signaling required to support Generalized MPLS. Generalized MPLS extends the MPLS control plane to encompass time-division (e.g., Synchronous Optical Network and Synchronous Digital Hierarchy, SONET/SDH), wavelength (optical lambdas) and spatial switching (e.g., incoming port or fiber to outgoing port or fiber). This document presents a functional description of the extensions. Protocol specific formats and mechanisms, and technology specific details are specified in separate documents.
---
paper_title: Carrier-grade Ethernet for packet core networks
paper_content:
Ethernet is a permanent success story, extending its reach from LAN and metro areas now also into core networks. 100 Gbit/s Ethernet will be the key enabler for a new generation of true end-to-end carrier grade Ethernet networks. This paper first focuses on functionality and standards required to enable carrier-grade Ethernet-based core networks and possible Ethernet backbone network architectures will be discussed. The second part then evaluates the CAPEX and OPEX performance of Ethernet core networks and competitive network architectures. The results propose that Ethernet will not only soon be mature enough for deployment in backbone networks but also provide huge cost advantages to providers. A novel complete, cost-effective and service-oriented infrastructure layer in the area of core networks will arise. The industry-wide efforts to cover remaining challenges also confirm this outlook.
---
| Title: A Survey of Advanced Ethernet Forwarding Approaches
Section 1: INTRODUCTION
Description 1: This section introduces the recent advances in Ethernet technology and its deployment in large-scale networks. It discusses the limitations of traditional Ethernet in terms of resilience, scalability, and integrated control features, and outlines the scope and organization of the survey.
Section 2: ETHERNET IN THE MAN CORE: METRO ETHERNET NOTIONS AND SERVICES
Description 2: This section provides an overview of Metro Ethernet (ME) notions and services, including a comparison with Asynchronous Transfer Mode (ATM) and a description of the different regions within a Metropolitan Area Network (MAN).
Section 3: Advantages Compared to ATM
Description 3: This section outlines the main advantages of deploying Ethernet in the MAN core compared to ATM, such as better quality/cost trade-off, higher flexibility, less overhead, BRAS decentralization, and true multipoint-to-multipoint connectivity.
Section 4: Challenges
Description 4: This section discusses the challenges Ethernet faces when applied to the MAN core, including issues with reliability, scalability, and the necessity for traffic segregation and control.
Section 5: Services
Description 5: This section covers the conceptual guidelines and standardization approaches for Ethernet services, defining categories such as E-LINE, E-LAN, E-TREE, and discussing the services defined by the IETF for Ethernet transport over Packet Switched Networks (PSNs).
Section 6: Achieving Scalability: Traffic Segregation and Control
Description 6: This section explains the methods used to achieve Ethernet scalability in the MAN, focusing on traffic segregation through tagging and encapsulation schemes, and techniques to control multicast traffic.
Section 7: IEEE SPANNING-TREE APPROACHES
Description 7: This section provides an overview of current IEEE spanning-tree standards, including STP, RSTP, and MSTP, and highlights their differences and evolution.
Section 8: NOVEL SPANNING-TREE BASED APPROACHES
Description 8: This section describes advanced Ethernet approaches that build upon spanning-tree protocols, such as Global Open Ethernet (GOE), Alternative Multiple Spanning Tree Protocol (AMSTP), and other novel solutions to improve convergence and resilience.
Section 9: ALTERNATIVE ETHERNET FORWARDING APPROACHES
Description 9: This section examines approaches that enhance Ethernet forwarding by departing from traditional spanning-tree protocols, focusing on methods that support shortest-path and multiple-path routing while maintaining the connectionless nature of Ethernet.
Section 10: CONNECTION-ORIENTED ETHERNET APPROACHES
Description 10: This section explores approaches that build carrier-grade Ethernet services by establishing connection-oriented tunneling and traffic engineering solutions, detailing methods such as PBB-TE, VLAN Cross-Connect (VXC), and T-MPLS.
Section 11: SUMMARY AND CONCLUSIONS
Description 11: This section summarizes the findings of the survey, discussing the potential and limitations of both connectionless and connection-oriented Ethernet approaches for large-scale network deployments, and concludes with insights on the future direction of Ethernet forwarding technologies. |
An Overview of Recent Application of Medical Infrared Thermography in Sports Medicine in Austria | 14 | ---
paper_title: The Role of Thermography in the Management of Equine Lameness
paper_content:
Equine thermography has increased in popularity recently because of improvements in thermal cameras and advances in image-processing software. The basic principle of thermography involves the transformation of surface heat from an object into a pictorial representation. The colour gradients generated reflect differences in the emitted heat. Variations from normal can be used to detect lameness or regions of inflammation in horses. Units can be so sensitive that flexor tendon injuries can be detected before the horse develops clinical lameness. Thermography has been used to evaluate several different clinical syndromes not only in the diagnosis of inflammation but also to monitor the progression of healing. Thermography has important applications in research for the detection of illegal performance-enhancing procedures at athletic events.
---
paper_title: Diagnosis of Raynaud's phenomenon by thermography
paper_content:
Background/aims: The aim was to clarify whether cold fingers before a moderate cold stress test can predict a prolonged delay (more than 20 min) in rewarming, as diagnostic for Raynaud's phenomenon. Methods: A retrospective study was conducted on 71 patients suspected of suffering from Raynaud's phenomenon. The thermal gradient from the metacarpophalangeal joints to the finger tips was calculated for each finger, and cold fingers were defined by a temperature difference of more than −0.5 °C. Results: Combining the frequencies of cold fingers with a diagnosis of Raynaud's phenomenon resulted in a sensitivity of 78.4%, a specificity of 72.4% and a diagnostic accuracy of 74.0%. Conclusion: Based on a positive predictive value of 58.5%, it was concluded that a prolonged delay of rewarming after a cold stress test cannot be predicted sufficiently by the presence of cold fingers alone, and that a cold stress test is necessary to confirm the diagnosis objectively.
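For reference, the reported figures of merit are derived from a 2x2 confusion table as sketched below; the counts are hypothetical and only the formulas correspond to the study.
# Diagnostic figures of merit from a 2x2 table.  Counts are made up for illustration.
tp, fn = 40, 10     # cold-finger sign present / absent among Raynaud cases
fp, tn = 12, 30     # cold-finger sign present / absent among non-cases

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + fn + fp + tn)
ppv         = tp / (tp + fp)
print(f"sens={sensitivity:.1%} spec={specificity:.1%} acc={accuracy:.1%} ppv={ppv:.1%}")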
---
paper_title: The effect of perineural anesthesia on infrared thermographic images of the forelimb digits of normal horses
paper_content:
Infrared thermography is an imaging modality gaining popularity as a diagnostic aid in the evaluation of equine lameness. Anecdotal reports of skin hyperthermia induced by local anesthesia, detected by thermography, have been made; however, no controlled studies have been reported. The purpose of this study was to examine the effects of perineural anesthesia on infrared thermographic images of the forelimb digits in normal horses. After environmental acclimation, infrared thermographs were made at intervals of 0, 5, 10, 15, 30, and 45 min from administration of mepivacaine hydrochloride or phosphate buffered saline in 6 adult horses with no clinical evidence of abnormality of the forelimb digits. The mean limb surface temperatures were compared by 2-factor ANOVA. Results indicated no significant difference between treatments, time after injection, or an interaction of time and treatment. Infrared thermographic imaging apparently can be performed within 45 min of perineural mepivacaine hydrochloride anesthesia without risk of artifactual changes in limb surface temperature.
---
paper_title: Infrared thermography: applications in heart surgery
paper_content:
Infrared thermography has become a way to monitor thermal abnormalities present in a number of diseases and physical injuries. It is used as an aid to diagnosis, prognosis and therapy. Results obtained using the latest generation of equipment (computer-assisted thermographic systems, detectors without a liquid-nitrogen cooling system) and new techniques such as dynamic thermography with an independent source of driving radiation show that it is a reliable tool for medical assessment and diagnosis. Most importantly, infrared thermography is a non-invasive measurement technique that imposes no stress on patients. This paper describes intraoperative thermoangiography during coronary bypass surgery.
---
paper_title: Alpine skiing injuries. A nine-year study.
paper_content:
Injury patterns in alpine skiing have changed over time as ski, boot, binding, and slope-grooming technologies have evolved. We retrospectively examined injury patterns in alpine skiers over a 9-year period at the Mammoth and June mountains (California) ski area. A total of 24,340 injuries were reported for the 9 seasons studied, and total lift tickets sold numbered 9,201,486. The overall injury rate was 2.6 injuries per 1,000 skier days and increased slowly over the period studied. The knee was the most frequently injured area at 35% of all injuries. Increasing trends (P < .05) were noted for the rates of lower extremity injuries (37%) and knee injuries (45%). A decreasing trend was noted for the rate of lacerations (31% decrease). Slight increases were noted in upper extremity and axial injury rates. Skiing injuries continue to be a worrisome recreational problem despite improvements in ski equipment and slope-grooming techniques. The increasing trend in lower extremity, particularly knee, injury rates highlights the need for continued skier education and equipment innovation.
---
paper_title: Breast Thermography Is a Noninvasive Prognostic Procedure That Predicts Tumor Growth Rate in Breast Cancer Patients
paper_content:
Our recent retrospective analysis of the clinical records of patients who had breast thermography demonstrated that an abnormal thermogram was associated with an increased risk of breast cancer and a poorer prognosis for the breast cancer patient. This study included 100 normal patients, 100 living cancer patients, and 126 deceased cancer patients. Abnormal thermograms included asymmetric focal hot spots, areolar and periareolar heat, diffuse global heat, vessel discrepancy, or thermographic edge sign. Incidence and prognosis were directly related to thermographic results: only 28% of the noncancer patients had an abnormal thermogram, compared to 65% of living cancer patients and 88% of deceased cancer patients. Further studies were undertaken to determine if thermography is an independent prognostic indicator. Comparison to the components of the TNM classification system showed that only clinical size was significantly larger (p = 0.006) in patients with abnormal thermograms. Age, menopausal status, and location of tumor (left or right breast) were not related to thermographic results. Progesterone and estrogen receptor status was determined by both the cytosol-DCC and immunocytochemical methods, and neither receptor status showed any clear relationship to the thermographic results. Prognostic indicators that are known to be related to tumor growth rate were then compared to thermographic results. The concentration of ferritin in the tumor was significantly higher (p = 0.021) in tumors from patients with abnormal thermograms (1512 +/- 2027, n = 50) compared to tumors from patients with normal thermograms (762 +/- 620, n = 21). Both the proportion of cells in DNA synthesis (S-phase) and proliferating (S-phase plus G2M-phase, proliferative index) were significantly higher in patients with abnormal thermograms. The expression of the proliferation-associated tumor antigen Ki-67 was also associated with an abnormal thermogram. The strong relationships of thermographic results with these three growth rate-related prognostic indicators suggest that breast cancer patients with abnormal thermograms have faster-growing tumors that are more likely to have metastasized and to recur with a shorter disease-free interval.
---
paper_title: Thermography in the diagnosis of inflammatory processes in the horse.
paper_content:
To evaluate the use of thermography in equine medicine, a three-phase study was conducted. In the first phase, six horses were examined thermographically, before and after exercise, to determine a normal thermal pattern. In the second phase, nine horses with acute and chronic inflammatory processes were examined thermographically. In the third phase, thermography was used to evaluate the effectiveness of anti-inflammatory drugs on chemically induced inflammatory reactions. All normal horses tested had similar infrared emission patterns. There was a high degree of symmetry between right and left and between front (dorsal) to rear (palmar, plantar) in the legs distal to the carpus and the tarsus. The warmer areas of the thermogram tended to follow major vascular structures. The coronary band was the warmest area of the leg. Heat increase due to exercise did not substantially alter the normal thermographic pattern. Use of thermography in clinical cases successfully detected a subluxation of the third lumbar vertebra, a subsolar abscess, alveolar periostitis and abscess, laminitis, serous arthritis of the femoropatellar joint, and tendonitis. Thermography was effective in quantitative and qualitative evaluation of anti-inflammatory compounds in the treatment of chemically induced inflammation.
---
paper_title: Infrared thermography for examination of skin temperature in the dorsal hand of office workers
paper_content:
Reduced blood flow may contribute to the pathophysiology of upper extremity musculoskeletal disorders (UEMSD), such as tendinitis and carpal tunnel syndrome. The study objective was to characterize potential differences in cutaneous temperature, among three groups of office workers assessed by dynamic thermography following a 9-min typing challenge: those with UEMSD, with ( n=6) or without ( n=10) cold hands exacerbated by keyboard use, and control subjects ( n=12). Temperature images of the metacarpal region of the dorsal hand were obtained 1 min before typing, and during three 2-min sample periods [0-2 min (early), 3-5 min (middle), and 8-10 min (late)] after typing. Mean temperature increased from baseline levels immediately after typing by a similar magnitude, 0.7 (0.3) degrees C in controls and 0.6 (0.2) degrees C in UEMSD cases without cold hands, but only by 0.1 (0.3) degrees C in those with cold hands. Using paired t-tests for within group comparisons of mean dorsal temperature between successive imaging periods, three patterns of temperature change were apparent during 10 min following typing. Controls further increased mean temperature by 0.1 degrees C ( t-test, P=0.001) at 3-5 min post-typing before a late temperature decline of -0.3 degrees C ( t-test, P=0.04), while cases without cold hands showed no change from initial post-typing mean temperature rise during middle or late periods. In contrast, subjects with keyboard-induced cold hands had no change from initial post-typing temperature until a decrease at the late period of -0.3 degrees C ( t-test, P=0.06). Infrared thermography appears to distinguish between the three groups of subjects, with keyboard-induced cold hand symptoms presumably due, at least partially, to reduced blood flow.
---
paper_title: Human thermal models for evaluating infrared images
paper_content:
Discusses comparing infrared images under various thermal environmental conditions through normalization of skin surface temperature. To evaluate IR images obtained under various thermal environmental conditions, we proposed a human thermal model with which IR images obtained under certain thermal environmental conditions can be converted into images under other conditions. The model was based on a numerical calculation of the bio-heat transfer equations that express heat transfer phenomena within the human body. A 16-cylinder-segment model was used as the geometry of the human body. Comparisons of IR images with their converted images indicate that this method is effective in eliminating the influence of the thermal environmental conditions. However, the difference between the converted images and the original ones varies among segments. In future work, we will use this method to investigate the IR images of several subjects under various thermal environments.
---
paper_title: A perspective on medical infrared imaging.
paper_content:
Since the early days of thermography in the 1950s, image processing techniques, sensitivity of thermal sensors and spatial resolution have progressed greatly, holding out fresh promise for infrared (IR) imaging techniques. Applications in civil, industrial and healthcare fields are thus reaching a high level of technical performance. The relationship between body temperature and disease has been documented since 400 BC. In many diseases there are variations in blood flow, and these in turn affect the skin temperature. IR imaging offers a useful and non-invasive approach to the diagnosis and treatment (as therapeutic aids) of many disorders, in particular in the areas of rheumatology, dermatology, orthopaedics and circulatory abnormalities. This paper reviews many uses (and hence the limitations) of thermography in biomedical fields.
---
paper_title: Spectral emissivity of skin and pericardium.
paper_content:
A monochromator was modified to measure the emissivity, e(λ), of living tissue in the infrared region between 1 and 14 μm. The infrared radiation from the tissue was compared with blackbody radiation and in this way e(λ) was determined for white skin, black skin, burnt skin and pericardium. A compensating skin thermometer was constructed to measure the temperature of the surface of the tissue. The temperature difference before and after contact between a gold ring and the surface was made as small as possible (0.05 K). A reference radiator with the same spectral radiance (experimentally determined) was used in compensating for the environment. It appeared that e(λ) for skin is independent of the wavelength and equal to 0.98 ± 0.01. These results contradict those of Elam, Goodwin and Lloyd Williams, but are in good agreement with those of Hardy and Watmough and Oliver. In addition there was no difference between e(λ) for normal skin and burnt skin. Epicardium values were found to lie between 0.83 (fresh heart) and 0.90 (after 7 h and after 9 d).
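A minimal illustration of what an emissivity of 0.98 implies for the total radiated power, using the Stefan-Boltzmann law with an assumed skin temperature; this is not the monochromator procedure of the paper.
# Total radiated power for a grey body (Stefan-Boltzmann law); temperatures are illustrative.
SIGMA = 5.670374419e-8          # W m^-2 K^-4

def radiated_power(emissivity, temp_c):
    t = temp_c + 273.15
    return emissivity * SIGMA * t**4          # W per m^2 of surface

skin = radiated_power(0.98, 33.0)             # assumed skin surface temperature
blackbody = radiated_power(1.00, 33.0)
print(f"skin: {skin:.1f} W/m^2, blackbody: {blackbody:.1f} W/m^2, ratio: {skin / blackbody:.2f}")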
---
paper_title: Convective Heat Transfer and Infrared Thermography (IRTh)
paper_content:
The paper deals with the application of the infrared thermography to the determination of the convective heat transfer coefficient in complex flow configurations. The fundamental principles upon which the IRTh relies are reviewed. The different methods developed to evaluate the heat exchange are described and illustrated through applications to the aerospace and aeronautical field as well as to the industrial processes.
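A minimal sketch of how a convective heat transfer coefficient is typically recovered once IR thermography provides the surface temperatures (Newton's law of cooling); the flux and temperature values are placeholders, not results from the paper.
# Newton's law of cooling: q = h * (T_surface - T_fluid).  IR thermography supplies the
# surface-temperature map; h follows once the wall heat flux is known.  Values are placeholders.
import numpy as np

q_wall = 450.0                               # imposed heat flux, W/m^2
t_surface = np.array([41.2, 39.8, 38.5])     # IR-measured wall temperatures, deg C
t_fluid = 25.0                               # free-stream temperature, deg C

h = q_wall / (t_surface - t_fluid)           # local heat transfer coefficient, W m^-2 K^-1
print(h)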
---
paper_title: An Accurate and Reliable Method of Thermal Data Analysis in Thermal Imaging of the Anterior Knee for Use in Cryotherapy Research
paper_content:
Objective: To develop an anatomic marker system (AMS) as an accurate, reliable method of thermal imaging data analysis, for use in cryotherapy research. Design: Investigation of the accuracy of a new thermal imaging technique. Setting: Hospital orthopedic outpatient department in England. Participants: Consecutive sample of 9 patients referred to an anterior knee pain clinic. Interventions: Not applicable. Main Outcome Measures: Thermally inert markers were placed at specific anatomic locations, defining an area over the anterior knee of patients with anterior knee pain. A baseline thermal image was taken. Patients underwent a 3-minute thermal washout of the affected knee. Thermal images were collected at a rate of 1 image per minute for a 20-minute re-warming period. A Matlab (version 7.0) program was written to digitize the marker positions and subsequently calculate the mean of the area over the anterior knee. Virtual markers were then defined as 15% distal from the proximal marker, 30% proximal from the distal markers, 15% lateral from the medial marker, and 15% medial from the lateral marker. The virtual markers formed an ellipse, which defined an area representative of the patella shape. Within the ellipse, the mean value of the full pixels determined the mean temperature of this region. Ten raters were recruited to use the program and interrater reliability was investigated. Results: The intraclass correlation coefficient produced coefficients within acceptable bounds, ranging from .82 to .97, indicating adequate interrater reliability. Conclusions: The AMS provides an accurate, reliable method for thermal imaging data analysis and is a reliable tool with which to advance cryotherapy research.
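A possible Python transcription of the described marker analysis, assuming image-axis-aligned markers and a synthetic thermal image; the percentage offsets follow the abstract, everything else is illustrative.
# Virtual markers offset inside the four physical markers define an ellipse whose
# mean pixel value is reported.  Marker coordinates and the thermal image are synthetic.
import numpy as np

image = np.random.normal(30.0, 0.5, size=(240, 320))   # fake temperature map, deg C
proximal, distal = np.array([60.0, 160.0]), np.array([200.0, 160.0])   # (row, col)
medial, lateral = np.array([130.0, 100.0]), np.array([130.0, 220.0])

# Virtual markers: 15% in from proximal/medial/lateral, 30% in from distal.
v_prox = proximal + 0.15 * (distal - proximal)
v_dist = distal + 0.30 * (proximal - distal)
v_med = medial + 0.15 * (lateral - medial)
v_lat = lateral + 0.15 * (medial - lateral)

centre = (v_prox + v_dist) / 2.0
a = np.linalg.norm(v_dist - v_prox) / 2.0     # semi-axis along the limb (rows)
b = np.linalg.norm(v_lat - v_med) / 2.0       # semi-axis across the patella (columns)

rows, cols = np.indices(image.shape)
inside = (((rows - centre[0]) / a) ** 2 + ((cols - centre[1]) / b) ** 2) <= 1.0
print(f"mean ROI temperature: {image[inside].mean():.2f} deg C")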
---
paper_title: Reliability of Fingertip Skin-surface Temperature and its Related Thermal Measures as Indices of Peripheral Perfusion in the Clinical Setting of the Operating Theatre
paper_content:
During the perioperative period, evaluation of digital blood flow would be useful in early detection of decreased circulating volume, thermoregulatory responses or anaphylactoid reactions, and assessment of the effects of vasoactive agents. This study was designed to assess the reliability of fingertip temperature, core-fingertip temperature gradients and fingertip-forearm temperature gradients as indices of fingertip blood flow in the clinical setting of the operating theatre. In 22 adult patients undergoing abdominal surgery with general anaesthesia, fingertip skin-surface temperature, forearm skin-surface temperature, and nasopharyngeal temperature were measured every five minutes during the surgery. Fingertip skin-surface blood flow was simultaneously estimated using laser Doppler flowmetry. These measurements were made in the same upper limb with an IV catheter (+IV group, n = 11) or without an IV catheter (-IV group, n=11). Fingertip blood flow, transformed to a logarithmic scale, significantly correlated with any of the three thermal measures in both the groups. Their rank order as an index of fingertip blood flow in the -IV group was forearm-fingertip temperature gradient (r=-0.86) > fingertip temperature (r=0.83) > nasopharyngeal-fingertip temperature gradient (r=-0.82), while that in the +IV group was nasopharyngeal-fingertip temperature gradient (r=-0.77) > fingertip temperature (r=0.71) > forearm-fingertip temperature gradient (r=-0.66). The relation of fingertip blood flow to each thermal measure in the -IV group was stronger (P<0.05) than that in the +IV group. In the clinical setting of the operating theatre, using the upper limb without IV catheters, fingertip skin-surface temperature, nasopharyngeal-fingertip temperature gradients, and forearm-fingertip temperature gradients are almost equally reliable measures of fingertip skin-surface blood flow.
---
paper_title: Reproducibility of infrared thermography measurements in healthy individuals.
paper_content:
The aim of this study was to investigate the reproducibility of skin surface infrared thermography (IRT) measurements and determine the factors influencing the variability of the measured values. While IRT has been widely utilized in different clinical conditions, there are few available data on the values of the skin temperature patterns of healthy subjects and their reproducibility. We recorded the whole body skin temperatures of sixteen healthy young men with two observers on two consecutive days. The results were compared using intra-class correlations analyses (ICC). The inter-examiner reproducibility of the IRT measurements was high: mean ICC 0.88 (0.73-0.99). The day-to-day stability of thermal patterns varied depending on the measured area: it was high in the core and poor in distal areas. The reproducibility of the side-to-side temperature differences (deltaT) was moderately good between the two observers (mean ICC 0.68) but it was reduced with time, especially in the extremities, mean ICC 0.4 (-0.01-0.83). The results suggest that the IRT technique may represent an objective quantifiable indicator of autonomic disturbances although there are considerable temporal variations in the measured values which are due to both technical factors such as equipment accuracy, measurement environment and technique, and physiological variability of the blood flow, and these factors should be taken into account.
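For orientation, a one-way random-effects ICC of the kind used to quantify inter-examiner agreement can be computed as below; the two-examiner temperature table is synthetic.
# One-way random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k-1) * MSW).
# Rows are subjects, columns are the two examiners; data are synthetic.
import numpy as np

ratings = np.array([
    [32.1, 32.3],
    [31.4, 31.6],
    [33.0, 32.8],
    [30.9, 31.1],
    [32.5, 32.4],
])
n, k = ratings.shape
grand = ratings.mean()
ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")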
---
paper_title: Activity-related knee injuries and pain in athletic adolescents
paper_content:
By collecting data from 45 students at a ski high school, we found that a total of 73% of the students reported activity-related pain/injuries of the knee. Sixty-one percent had overuse injuries, 27% malalignment, and 12% had indistinct knee pain. Females suffered more knee pain/injuries (88%) than males (57%). Significantly higher Q-angles were recorded for females (16°) than for males (10°). "Jumper's knee" was found in all competitive students with a KT manual maximum difference (MMD) of 3 mm or more (mean 4 mm), with a hard endpoint, whereas this was less common among the other competitive students (P<0.05). The students were given counselling about training and physiotherapy. In the follow-up study 1 year later, a significant reduction of knee pain/overuse injuries, from 73% to 35%, was recorded. This may be related to better equipment, the development of techniques, and training of the muscles. A high volume of training and knee instability, with MMD of 3 mm or more, seemed to be correlated with an increased risk for "jumper's knee" and, possibly, for skiing injuries. By identifying those at increased risk, pre-season recommendations can be made and ski injuries may be prevented.
---
paper_title: A 7-year study on risks and costs of knee injuries in male and female youth participants in 12 sports
paper_content:
Knee injuries are common and account in various sports for 15-50% of all sports injuries. The cost of knee injuries is therefore a large part of the cost for medical care of sports injuries. Furthermore, the risk of acquiring a knee injury during sports is considered higher for females than for males. The nationwide organization "Youth and Sports" represents the major source of organized sports and recreation for Swiss youth and engages annually around 370000 participants in the age group of 14 to 20 years. The purpose of this study was to combine data on knee injuries from two sources, the first being data on the exposure to risk found in the activity registration in "Youth and Sports" and the second injuries with their associated costs resulting from the activities and filed at the Swiss Military Insurance. This allowed calculation of knee injury incidences, to compare risks between males and females and to estimate the costs of medical treatment. The study comprises 3864 knee injuries from 12 sports during 7 years. Females were significantly more at risk in six sports: alpinism, downhill skiing, gymnastics, volleyball, basketball and team handball. The incidences of knee injuries and of cruciate ligament injuries in particular, together with the costs per hour of participation, all displayed the same sports as the top five for both females and males: ice hockey, team handball, soccer, downhill skiing and basketball. Female alpinism and gymnastics had also high rankings. Knee injuries comprised 10% of all injuries in males and 13% in females, but their proportional contribution to the costs per hour of participation was 27% and 33%, respectively. From this study it can be concluded that females were significantly more at risk for knee injuries than males in six sports and that knee injuries accounted for a high proportion of the costs of medical treatment.
---
paper_title: Dynamic Thermography: Analysis of Hand Temperature During Exercise
paper_content:
Exercise has a noted effect on skin blood flow and temperature. We aimed to characterize the normal skin temperature response to exercise by thermographic imaging. A study was conducted on ten healthy and active subjects (age = 25.8 ± 0.7 years) who were exposed to graded exercise for determination of maximal oxygen consumption (VO2 max), and subsequently to constant loads corresponding to 50%, 70%, and 90% of VO2 max. The skin temperature response during 20 min of constant load exercise is characterized by an initial descending limb, an ascending limb and a quasi-steady-state period. For 50% VO2 max the temperature decrease rate was −0.0075 ± 0.001 °C/s during a time interval of 390 ± 47 s and the temperature increase rate was 0.0055 ± 0.0031 °C/s during a time interval of 484 ± 99 s. The level of load did not influence the temperature decrease and increase rates. In contrast, during graded load exercise, a continuous temperature decrease of −0.0049 ± 0.0032 °C/s was observed throughout the test. In summary, the thermographic skin response to exercise is characterized by a specific pattern which reflects the dynamic balance between hemodynamic and thermoregulatory processes.
---
paper_title: Mathematical modeling of temperature mapping over skin surface and its implementation in thermal disease diagnostics
paper_content:
In non-invasive thermal diagnostics, accurate correlations between the thermal image on the skin surface and interior human pathophysiology are often desired, which require general solutions for the bioheat equation. In this study, the Monte Carlo method was implemented to solve the transient three-dimensional bio-heat transfer problem with non-linear boundary conditions (simultaneously with convection, radiation and evaporation) and space-dependent thermal physiological parameters. Detailed computations indicated that the thermal states of biological bodies, reflecting physiological conditions, could be correlated to the temperature or heat flux mapping recorded at the skin surface. The effect of the skin emissivity and humidity, the convective heat transfer coefficient, the relative humidity and temperature of the surrounding air, the metabolic rate and blood perfusion rate in the tumor, and the tumor size and number on the sensitivity of thermography is comprehensively investigated. Moreover, several thermal criteria for disease diagnosis were proposed based on statistical principles. Implications of this study for clinical thermal diagnostics are discussed.
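For orientation, a one-dimensional explicit finite-difference sketch of the Pennes bioheat equation underlying such models is given below; the tissue constants are rough textbook-style values and this is not the paper's Monte Carlo solver.
# 1-D explicit finite-difference sketch of the Pennes bioheat equation
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m
# Constants are rough illustrative values, not the paper's parameters.
import numpy as np

k, rho, c = 0.5, 1050.0, 3600.0            # tissue conductivity, density, heat capacity
rho_b, c_b, w_b = 1060.0, 3770.0, 0.0005   # blood properties and perfusion rate (1/s)
q_m, t_art = 420.0, 37.0                   # metabolic heat (W/m^3), arterial temperature

n_pts, dx, dt = 50, 1e-3, 0.05             # ~50 mm of tissue, 1 mm grid, 50 ms step
T = np.full(n_pts, 37.0)
T[-1] = 33.0                               # skin surface held at 33 deg C
for _ in range(2000):                      # ~100 s of simulated time
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    dT = (k * lap + w_b * rho_b * c_b * (t_art - T) + q_m) / (rho * c)
    T[1:-1] += dt * dT[1:-1]               # both boundaries kept fixed
print(T[::10])                             # temperature profile from core to skin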
---
paper_title: Exercise-Associated Thermographic Changes in Young and Elderly Subjects
paper_content:
This study aimed at evaluating the thermographic changes associated with localized exercise in young and elderly subjects. An exercise protocol using a 1 kg load was applied for 3 min to the knee flexors of 14 elderly (67 ± 5 years) and 15 young (23 ± 2 years) healthy subjects. The skin temperature of the posterior thigh of the exercised limb and the contralateral limb was measured by infrared thermography pre-exercise, immediately post-exercise, and during the 10-min period post-exercise. A difference (p < 0.01) between elderly and young subjects was observed in pre-exercise temperature. Although differences were not observed between pre-exercise and immediately post-exercise temperature in the exercised limb, the thermographic profile displayed heat concentration in exercised areas for both groups. A temperature reduction was only observed for the young group at 10 min post-exercise (p < 0.05) in the exercised limb (30.7 ± 1.7 to 30.3 ± 1.5 °C). In contrast, there was a temperature reduction post-exercise (p < 0.01) in the contralateral limb for both groups. These results provide new evidence that elderly and young subjects display a similar capacity for heat production; however, the elderly subjects presented a lower resting temperature and slower heat dissipation. This work contributes to a better understanding of temperature changes in elderly subjects and may have implications for sports and rehabilitation programs.
---
| Title: An Overview of Recent Application of Medical Infrared Thermography in Sports Medicine in Austria
Section 1: Introduction
Description 1: Introduce medical infrared thermography, its principles, and its relevance in sports medicine, particularly focusing on Austria.
Section 2: International Status of Medical Infrared Imaging
Description 2: Discuss the recognition of MIT by medical associations worldwide and its application in sports medicine.
Section 3: Electromagnetic Spectrum
Description 3: Explain the basics of the electromagnetic spectrum and its significance in medical imaging, particularly MIT.
Section 4: Infrared Radiation
Description 4: Describe the principles of infrared radiation, its measurement, and the thermal imaging process in medical applications.
Section 5: The 21st Century Technique
Description 5: Highlight the advancements in MIT technology and their impact on diagnostic capabilities in sports medicine.
Section 6: Recommended Requirements for Human Medicine
Description 6: Outline the technical and quality requirements for infrared cameras used in human medical assessments.
Section 7: Reliability Study
Description 7: Discuss the reliability of MIT measurements, focusing on the studies conducted and their significance in sports medicine.
Section 8: Methods of Reliability Study
Description 8: Provide details of the methodology used in reliability studies of MIT, including subject examination and data analysis.
Section 9: Results
Description 9: Present the findings of the reliability studies, including variability and reproducibility of thermal measurements.
Section 10: Clinical Application in Alpine Skiing
Description 10: Explore the use of MIT in diagnosing and preventing injuries in alpine skiing, including case studies and practical applications.
Section 11: Overuse Injuries
Description 11: Discuss the detection and management of overuse injuries using MIT, with specific examples from alpine skiing.
Section 12: Traumatic Injuries
Description 12: Describe the role of MIT in diagnosing and monitoring the recovery of traumatic injuries, with case study illustrations.
Section 13: Limitations and Advantages of Infrared Imaging
Description 13: Address the benefits and limitations of MIT, emphasizing its role as a supplementary diagnostic tool in sports medicine.
Section 14: Conclusions
Description 14: Summarize the findings and future potential of MIT in sports medicine, stressing the need for further research and the creation of sports-specific databases. |
A Survey of Unstructured Text Summarization Techniques | 5 | ---
paper_title: Generic text summarization using relevance measure and latent semantic analysis
paper_content:
In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.
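A compact sketch of the LSA-based variant (SVD of the term-by-sentence matrix, one representative sentence per leading latent topic) using scikit-learn; the sentences are placeholders and the real method additionally enforces diversity between selected sentences.
# LSA-style sentence selection: SVD of the term-by-sentence matrix, then pick,
# for each leading latent topic, the sentence with the largest projection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

sentences = [
    "The storm closed every road into the valley.",
    "Rescue crews reopened the main road after two days.",
    "Local schools stayed shut during the flooding.",
    "Officials estimate the flood damage at two million dollars.",
]
X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
svd = TruncatedSVD(n_components=2, random_state=0)
topic_weights = svd.fit_transform(X)          # sentences x latent topics

summary_idx = {int(abs(topic_weights[:, t]).argmax()) for t in range(2)}
for i in sorted(summary_idx):
    print(sentences[i])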
---
paper_title: Comparative Recall and Precision of Simple and Expert Searches in Google Scholar and Eight Other Databases
paper_content:
This study evaluates the effectiveness of simple and expert searches in Google Scholar (GS), EconLit, GEOBASE, PAIS, POPLINE, PubMed, Social Sciences Citation Index, Social Sciences Full Text, and Sociological Abstracts. It assesses the recall and precision of 32 searches in the field of later-life migration: nine simple keyword searches and 23 expert searches constructed by demography librarians at three top universities. For simple searches, Google Scholar’s recall and precision are well above average. For expert searches, the relative effectiveness of GS depends on the number of results users are willing to examine. Although Google Scholar’s expert-search performance is just average within the first fifty search results, GS is one of the few databases that retrieves relevant results with reasonably high precision after the fiftieth hit. The results also show that simple searches in GS, GEOBASE, PubMed, and Sociological Abstracts have consistently higher recall and precision than expert searches. This can be attributed not to differences in expert-search effectiveness, but to the unusually strong performance of simple searches in those four databases.
---
paper_title: Multi-topic based query-oriented summarization
paper_content:
Query-oriented summarization aims at extracting an informative summary from a document collection for a given query. It is very useful in helping users grasp the main information related to a query. Existing work can be mainly classified into two categories: supervised methods and unsupervised methods. The former require training examples, which limits them to predefined domains, while the latter usually utilize clustering algorithms to find ‘centered’ sentences as the summary. However, such methods do not consider the query information, so the summary is about the document collection in general. Moreover, most existing work assumes that the documents related to the query talk about only one topic. Unfortunately, statistics show that a large portion of summarization tasks involve multiple topics. In this paper, we try to break the limitations of existing methods and study a new setup of the problem of multi-topic-based query-oriented summarization. We propose using a probabilistic approach to solve this problem. More specifically, we propose two strategies to incorporate the query information into a probabilistic model. Experimental results on two different genres of data show that our proposed approach can effectively extract a multi-topic summary from a document collection and that the summarization performance is better than baseline methods. The approach is quite general and can be applied to many other mining tasks, for example product opinion analysis and question answering.
---
paper_title: Multi-document Summarization Based on Cluster Using Non-negative Matrix Factorization
paper_content:
In this paper, a new summarization method, which uses non-negative matrix factorization (NMF) and K-means clustering, is introduced to extract meaningful sentences from multiple documents. The proposed method can improve the quality of document summaries because the inherent semantics of the documents are well reflected by using the semantic features calculated by NMF, and the sentences most relevant to the given topic are extracted efficiently by using the semantic variables derived by NMF. Besides, it uses K-means clustering to remove noise, so that the biased inherent semantics of the documents are not reflected in the summaries. We perform detailed experiments with the well-known DUC test dataset. The experimental results demonstrate that the proposed method performs better than other methods using LSA, K-means, and NMF.
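A rough scikit-learn sketch of the described pipeline (NMF semantic features plus K-means grouping); the sentence scoring here is simplified to the dominant NMF weight within each cluster and the corpus is a placeholder.
# NMF-based selection: factorise the term-by-sentence matrix, cluster sentences with
# K-means, then take the sentence with the strongest semantic-variable weight per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
import numpy as np

sentences = [
    "The committee approved the new budget on Monday.",
    "Spending on public transport will rise next year.",
    "Opposition members criticised the size of the deficit.",
    "The deficit is projected to shrink within three years.",
]
X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
W = NMF(n_components=2, random_state=0, max_iter=500).fit_transform(X)   # sentence weights
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(W)

for cluster in np.unique(labels):
    members = np.where(labels == cluster)[0]
    best = members[W[members].max(axis=1).argmax()]
    print(sentences[best])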
---
paper_title: Bayesian Query-Focused Summarization
paper_content:
We present BAYESUM (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BAYESUM leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BAYESUM is not afflicted by the paucity of information in short queries. We show that approximate inference in BAYESUM is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BAYESUM can be understood as a justified query expansion technique in the language modeling for IR framework.
---
paper_title: Summarization beyond sentence extraction: A probabilistic approach to sentence compression
paper_content:
When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text/abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline.
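A toy illustration of the noisy-channel objective (source model times channel model) with stand-in unigram and per-word deletion probabilities; real systems use trained, syntax-aware models, so this only shows the shape of the search.
# Toy noisy-channel compression:  argmax_short  log P(short) + log P(long | short).
# The "source" model is a flat unigram table and the "channel" model is a per-word
# deletion probability; both are invented for illustration.
import itertools, math

long_sentence = "the committee finally approved a controversial budget".split()
content = {"committee", "approved", "budget"}
unigram_p = {w: (0.10 if w in content else 0.05) for w in long_sentence}
drop_p = {w: (0.05 if w in content else 0.60) for w in long_sentence}

def score(mask):
    s = 0.0
    for w, keep in zip(long_sentence, mask):
        if keep:
            s += math.log(unigram_p[w]) + math.log(1.0 - drop_p[w])   # source + channel
        else:
            s += math.log(drop_p[w])                                  # channel only
    return s

masks = [m for m in itertools.product([False, True], repeat=len(long_sentence)) if any(m)]
best = max(masks, key=score)
print(" ".join(w for w, keep in zip(long_sentence, best) if keep))    # "committee approved budget"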
---
paper_title: ROUGE: A Package For Automatic Evaluation Of Summaries
paper_content:
ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.
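A minimal ROUGE-N (n-gram recall) implementation for a single reference; the released package additionally handles multiple references, stemming, stopword removal and the ROUGE-L/W/S variants.
# ROUGE-N: clipped n-gram overlap divided by the number of reference n-grams.
from collections import Counter

def rouge_n(candidate, reference, n=2):
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = Counter(zip(*[cand[i:] for i in range(n)]))
    ref_ngrams = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum(min(cnt, cand_ngrams[g]) for g, cnt in ref_ngrams.items())
    return overlap / max(sum(ref_ngrams.values()), 1)

print(rouge_n("police killed the gunman", "the gunman was shot by police", n=1))   # 0.5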
---
paper_title: A zipf-like distant supervision approach for multi-document summarization using wikinews articles
paper_content:
This work presents a sentence ranking strategy based on distant supervision for the multi-document summarization problem. Due to the difficulty of obtaining large training datasets formed by document clusters and their respective human-made summaries, we propose building a training and a testing corpus from Wikinews. Wikinews articles are modeled as "distant" summaries of their cited sources, considering that first sentences of Wikinews articles tend to summarize the event covered in the news story. Sentences from cited sources are represented as tuples of numerical features and labeled according to a relationship with the given distant summary that is based on the Zipf law. Ranking functions are trained using linear regressions and ranking SVMs, which are also combined using Borda count. Top ranked sentences are concatenated and used to build summaries, which are compared with the first sentences of the distant summary using ROUGE evaluation measures. Experimental results obtained show the effectiveness of the proposed method and that the combination of different ranking techniques outperforms the quality of the generated summary.
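A small sketch of the Borda-count fusion step used to combine the regression and ranking-SVM orderings; the sentence identifiers are illustrative.
# Borda count: each ranking awards (list length - position) points to every item;
# items are then re-ranked by their total score.
def borda(*rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - position)
    return sorted(scores, key=scores.get, reverse=True)

rank_regression = ["s3", "s1", "s4", "s2"]
rank_svm        = ["s1", "s3", "s2", "s4"]
print(borda(rank_regression, rank_svm))   # fused order, best first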
---
paper_title: Latent Dirichlet Allocation
paper_content:
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
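A minimal scikit-learn sketch of fitting an LDA topic model; in a summarizer, the inferred per-document (or per-sentence) topic mixtures would then feed sentence scoring. The corpus is a placeholder.
# Fit a small LDA model and inspect the per-document topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the striker scored twice in the final match",
    "the goalkeeper saved a late penalty",
    "the central bank raised interest rates again",
    "markets fell after the bank announced the rate decision",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)          # documents x topic mixture
print(doc_topics.round(2))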
---
paper_title: Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering
paper_content:
A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs. Spectral graph clustering algorithms are used for partitioning sentences of the documents into topical groups with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents.
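A bare-bones sketch of the mutual reinforcement iteration within one topical group: term saliency u and sentence saliency v reinforce each other through a term-by-sentence weight matrix W, converging to its leading singular vectors. The matrix construction and the fixed iteration count are simplifying assumptions.

```python
import numpy as np

def mutual_reinforcement(W, iters=50):
    """u: term saliency, v: sentence saliency, coupled through W (terms x sentences)."""
    u = np.ones(W.shape[0])
    v = np.ones(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return u, v
```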
---
paper_title: The use of MMR, diversity-based reranking for reordering documents and producing summaries
paper_content:
This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.
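The MMR criterion is commonly written as the argmax over unselected passages d of lam * sim(d, query) - (1 - lam) * max over already selected s of sim(d, s). A minimal greedy selection sketch follows; the similarity callables and the lambda value are placeholders to be supplied by the caller.

```python
def mmr_select(candidates, query_sim, pair_sim, k=3, lam=0.7):
    """Greedy Maximal Marginal Relevance selection over a list of passages."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def mmr(d):
            redundancy = max((pair_sim(d, s) for s in selected), default=0.0)
            return lam * query_sim(d) - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```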
---
paper_title: Generic text summarization using relevance measure and latent semantic analysis
paper_content:
In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.
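A rough sketch of the latent-semantic step: build a term-by-sentence matrix, take its SVD, and pick, for each leading right singular vector, the sentence with the largest weight. The tf-idf weighting and the number of latent topics k are assumptions; the paper itself compares several weighting schemes.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def lsa_pick_sentences(sentences, k=2):
    A = TfidfVectorizer().fit_transform(sentences).T.toarray()  # terms x sentences
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    chosen = []
    for i in range(min(k, Vt.shape[0])):            # one sentence per latent "topic"
        j = int(np.argmax(np.abs(Vt[i])))
        if j not in chosen:
            chosen.append(j)
    return [sentences[j] for j in sorted(chosen)]
```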
---
paper_title: Information-content based sentence extraction for text summarization
paper_content:
We propose the FULL-COVERAGE summarizer: an efficient, information retrieval oriented method to extract nonredundant sentences from text for summarization purposes. Our method leverages existing information retrieval technology by extracting key-sentences on the premise that the relevance of a sentence is proportional to its similarity to the whole document. We show that our method can produce sentence-based summaries that are up to 78% smaller than the original text with only 3% loss in retrieval performance.
---
paper_title: Centroid-Based Summarization Of Multiple Documents: Sentence Extraction, Utility-Based Evaluation, And User Studies
paper_content:
We present a multi-document summarizer, called MEAD, which generates summaries using cluster centroids produced by a topic detection and tracking system. We also describe two new techniques, based on sentence utility and subsumption, which we have applied to the evaluation of both single and multiple document summaries. Finally, we describe two user studies that test our models of multi-document summarization.
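A toy version of centroid scoring in the spirit of MEAD: sentences closest to the cluster centroid in tf-idf space score highest. MEAD itself also combines positional and first-sentence-overlap features, which are omitted here, so this is only an illustrative simplification.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def centroid_scores(sentences):
    X = TfidfVectorizer().fit_transform(sentences).toarray()  # sentences x terms
    centroid = X.mean(axis=0)                                  # cluster centroid
    return X @ centroid

sents = ["Fires swept the region on Monday.",
         "Officials said the fires destroyed dozens of homes.",
         "The local team won its game."]
print(sents[int(np.argmax(centroid_scores(sents)))])
```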
---
paper_title: A new approach to unsupervised text summarization
paper_content:
The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature. A diversity-based approach here is a principled generalization of the Maximal Marginal Relevance criterion by Carbonell and Goldstein (1998). We propose, in addition, an information-centric approach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such as document retrieval and text categorization. To find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test collection developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity-based approach over a non-diversity-based approach.
---
paper_title: Introduction To The Special Issue On Summarization
paper_content:
generation based on rhetorical structure extraction. In Proceedings of the International Conference on Computational Linguistics, Kyoto, Japan, pages 344–348. Otterbacher, Jahna, Dragomir R. Radev, and Airong Luo. 2002. Revisions that improve cohesion in multi-document summaries: A preliminary study. In ACL Workshop on Text Summarization, Philadelphia. Papineni, K., S. Roukos, T. Ward, and W-J. Zhu. 2001. BLEU: A method for automatic evaluation of machine translation. Research Report RC22176, IBM. Radev, Dragomir, Simone Teufel, Horacio Saggion, Wai Lam, John Blitzer, Arda Celebi, Hong Qi, Elliott Drabek, and Danyu Liu. 2002. Evaluation of text summarization in a cross-lingual information retrieval framework. Technical Report, Center for Language and Speech Processing, Johns Hopkins University, Baltimore, June. Radev, Dragomir R., Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: Sentence extraction, utility-based evaluation, and user studies. In ANLP/NAACL Workshop on Summarization, Seattle, April. Radev, Dragomir R. and Kathleen R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):469–500. Rau, Lisa and Paul Jacobs. 1991. Creating segmented databases from free text for text retrieval. In Proceedings of the 14th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, New York, pages 337–346. Saggion, Horacio and Guy Lapalme. 2002. Generating indicative-informative summaries with SumUM. Computational Linguistics, 28(4), 497–526. Salton, G., A. Singhal, M. Mitra, and C. Buckley. 1997. Automatic text structuring and summarization. Information Processing & Management, 33(2):193–207. Silber, H. Gregory and Kathleen McCoy. 2002. Efficiently computed lexical chains as an intermediate representation for automatic text summarization. Computational Linguistics, 28(4), 487–496. Sparck Jones, Karen. 1999. Automatic summarizing: Factors and directions. In I. Mani and M. T. Maybury, editors, Advances in Automatic Text Summarization. MIT Press, Cambridge, pages 1–13. Strzalkowski, Tomek, Gees Stein, J. Wang, and Bowden Wise. 1999. A robust practical text summarizer. In I. Mani and M. T. Maybury, editors, Advances in Automatic Text Summarization. MIT Press, Cambridge, pages 137–154. Teufel, Simone and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. Computational Linguistics, 28(4), 409–445. White, Michael and Claire Cardie. 2002. Selecting sentences for multidocument summaries using randomized local search. In Proceedings of the Workshop on Automatic Summarization (including DUC 2002), Philadelphia, July. Association for Computational Linguistics, New Brunswick, NJ, pages 9–18. Witbrock, Michael and Vibhu Mittal. 1999. Ultra-summarization: A statistical approach to generating highly condensed non-extractive summaries. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Berkeley, pages 315–316. Zechner, Klaus. 2002. Automatic summarization of open-domain multiparty dialogues in diverse genres. Computational Linguistics, 28(4), 447–485.
---
paper_title: Assessing Agreement On Classification Tasks: The Kappa Statistic
paper_content:
Currently, computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics, none of which are easily interpretable or comparable to each other. Meanwhile, researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic. We discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science, and argue that we would be better off as a field adopting techniques from content analysis.
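For reference, kappa contrasts observed agreement P(A) with the agreement expected by chance P(E): kappa = (P(A) - P(E)) / (1 - P(E)). A small two-annotator sketch follows; the labels are invented for illustration.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n           # observed agreement
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))     # chance agreement
    return (p_a - p_e) / (1 - p_e)

print(cohen_kappa(["rel", "rel", "irr", "rel"], ["rel", "irr", "irr", "rel"]))  # 0.5
```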
---
paper_title: Summarization evaluation using relative utility
paper_content:
We present a series of experiments to demonstrate the validity of Relative Utility (RU) as a measure for evaluating extractive summarizers. RU is applicable in both single-document and multi-document summarization, is extendable to arbitrary compression rates with no extra annotation effort, and takes into account both random system performance and interjudge agreement. Our results using the JHU summary corpus indicate that RU is a reasonable and often superior alternative to several common evaluation metrics.
---
paper_title: Bleu: A Method For Automatic Evaluation Of Machine Translation
paper_content:
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
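As a tooling note (an assumption about available libraries, not part of the paper), NLTK provides a sentence-level BLEU that combines modified n-gram precisions with a brevity penalty:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "is", "on", "the", "mat"]]   # one or more reference token lists
candidate = ["the", "cat", "sat", "on", "the", "mat"]
score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```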
---
| Title: A Survey of Unstructured Text Summarization Techniques
Section 1: INTRODUCTION
Description 1: This section introduces the motivation for text summarization, mentioning the problem of information overload and the need for efficient summarization techniques.
Section 2: TEXT SUMMARIZATION BY CLASSIFICATION
Description 2: This section presents the classification of text summarization techniques into various categories and discusses their respective advantages and disadvantages.
Section 3: UNSUPERVISED GENERIC TEXT SUMMARIZATION
Description 3: This section explores generic text summarization using unsupervised approaches, detailing several prominent algorithms used in the literature.
Section 4: TEXT SUMMARIZATION EVALUATION TECHNIQUES
Description 4: This section discusses the methods used to evaluate the quality of text summarization systems, including various metrics and their relevance.
Section 5: CONCLUSION
Description 5: This section concludes the survey by summarizing the main findings and highlighting future research directions for improving text summarization techniques. |
Toward an Ontology of Workarounds: A Literature Review on Existing Concepts | 7 | ---
paper_title: Workarounds to Barcode Medication Administration Systems: Their Occurrences, Causes, and Threats to Patient Safety
paper_content:
The authors develop a typology of clinicians' workarounds when using barcoded medication administration (BCMA) systems. Authors then identify the causes and possible consequences of each workaround. The BCMAs usually consist of handheld devices for scanning machine-readable barcodes on patients and medications. They also interface with electronic medication administration records. Ideally, BCMAs help confirm the five “rights” of medication administration: right patient, drug, dose, route, and time. While BCMAs are reported to reduce medication administration errors—the least likely medication error to be intercepted— these claims have not been clearly demonstrated. The authors studied BCMA use at five hospitals by: (1) observing and shadowing nurses using BCMAs at two hospitals, (2) interviewing staff and hospital leaders at five hospitals, (3) participating in BCMA staff meetings, (4) participating in one hospital's failure-mode-and-effects analyses, (5) analyzing BCMA override log data. The authors identified 15 types of workarounds, including, for example, affixing patient identification barcodes to computer carts, scanners, doorjambs, or nurses' belt rings; carrying several patients' prescanned medications on carts. The authors identified 31 types of causes of workarounds, such as unreadable medication barcodes (crinkled, smudged, torn, missing, covered by another label); malfunctioning scanners; unreadable or missing patient identification wristbands (chewed, soaked, missing); nonbarcoded medications; failing batteries; uncertain wireless connectivity; emergencies. The authors found nurses overrode BCMA alerts for 4.2% of patients charted and for 10.3% of medications charted. Possible consequences of the workarounds include wrong administration of medications, wrong doses, wrong times, and wrong formulations. Shortcomings in BCMAs' design, implementation, and workflow integration encourage workarounds. Integrating BCMAs within real-world clinical workflows requires attention to in situ use to ensure safety features' correct use.
---
paper_title: Enacting computer workaround practices within a medication dispensing system
paper_content:
Computer workarounds in health information systems (HIS) threaten the potential for gains in efficiency through computerization aimed at reducing process variability. Eliminating such workarounds is desirable, but information system (IS) researchers tend to treat computer workarounds as black-boxes, whereas HIS researchers are primarily concerned with descriptive or prescriptive remedies. We propose to open the black-box of computer workarounds and study them as situated practices that consist of adjustments to existing computer-based procedures, which are enabled by the negotiated order of a hospital. This negotiative property of a hospital's organizational environment allows for interpretive flexibility, in which physicians stretch certain rules in practice, while inducing others to cooperate. We illustrate this conceptual framework with a non-participant observer case study of a medication dispensing system used in a teaching hospital to support a prior-approval policy for anti-microbial drugs. Within these enacted workaround practices, we found significant variety in roles, timing and interactions, which boil down to a pattern of four practices revolving around one function of an HIS. Our research extends the literature on computer workarounds in IS and HIS by proposing a theoretical understanding of workaround practices based on a contextual healthcare study.
---
paper_title: Enacting Integrated Information Technology: A Human Agency Perspective
paper_content:
Recent perspectives on organizational change have emphasized human agency, more than technology or structure, to explain empirical outcomes resulting from the use of information technologies in organizations. Yet, newer technologies such as enterprise resource planning (ERP) systems continue to be associated with the agenda of organizational transformation, largely because they are assumed to constrain human action. We report an interpretive case study of an ERP system after its implementation in a large government agency. Despite the transformation agenda accompanying the new system, users initially chose to avoid using it as much as possible (inertia) and later to work around system constraints in unintended ways (reinvention). We explain the change in enactments with the concept of improvised learning, which was motivated by social influence from project leaders, "power users," and peers. Our results are consistent with arguments regarding the enactment of information technology in organizations and with temporal views of human agency. We conclude that an integrated technology like ERP, which potentially represents a "hard" constraint on human agency, can be resisted and reinvented in use.
---
paper_title: Guerrilla Employees: Should Managers Nurture, Tolerate, or Terminate Them?
paper_content:
“Guerrilla government” is Rosemary O'Leary's term for the actions of career public servants who work against the wishes—either implicitly or explicitly communicated—of their superiors. This form of dissent is usually carried out by those who are dissatisfied with the actions of public organizations, programs, or people, but typically, for strategic reasons, choose not to go public with their concerns in whole or in part. Rather than acting openly, guerrillas often move clandestinely behind the scenes, salmon swimming against the current of power. Guerrillas run the spectrum from anti-establishment liberals to fundamentalist conservatives, from constructive contributors to deviant destroyers. ::: ::: Three public managers with significant experience comment on O'Leary's thesis that guerrilla government is about the power of career bureaucrats; the tensions between career bureaucrats and political appointees; organization culture; and what it means to act responsibly, ethically, and with integrity as a public servant. Karl Sleight, former director of the New York State Ethics Commission; David Warm, executive director of the Mid-America Regional Council of Greater Kansas City; and Ralph R, Bauer, former deputy regional administrator of the U.S. Environmental Protection Agency in the Seattle and Chicago regions, present unique perspectives on the “guerrilla” influence on policy and management, as well as the challenges posed by this ever-present public management phenomenon. ::: ::: Guerrilla: One who engages in irregular warfare especially as a member of an independent unit. ::: ::: —Webster's New College Dictionary, 2008
---
paper_title: Analyzing the past to prepare for the future: Writing a literature review
paper_content:
A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed.
---
paper_title: Organizing knowledge syntheses: A taxonomy of literature reviews
paper_content:
A taxonomy of literature reviews in education and psychology is presented. The taxonomy categorizes reviews according to: (a) focus; (b) goal; (c) perspective; (d) coverage; (e) organization; and (f) audience. The seven winners of the American Educational Research Association’s Research Review Award are used to illustrate the taxonomy’s categories. Data on the reliability of taxonomy codings when applied by readers is presented. Results of a survey of review authors provides baseline data on how frequently different types of reviews appear in the education and psychology literature. How the taxonomy might help in judging the quality of literature reviews is discussed, along with more general standards for evaluating reviews.
---
paper_title: Against the Rules: Synthesizing Types and Processes of Bureaucratic Rule-Breaking
paper_content:
Organizational scandals have become all too commonplace; from investment firms' financial improprieties to sexual abuse cover-ups, rule-breaking has become a “normal” feature of organizational life. Although there is considerable scholarly work on rule-breaking, efforts to explain it remain theoretically fragmented. Here we identify two fundamental dimensions of bureaucratic rule-breaking and develop a coherent theoretical conception of it as a structurally patterned and interactionally mediated sociological fact. First, rule-breaking may be permitted or contested by those charged with rule enforcement. Manifestations of rule-breaking take on a routine character only where it is unofficially allowed; where it is not, conflict ensues. Second, the hierarchical structure of bureaucracy is mirrored by an organizational hierarchy of rule-breaking. Rule-breaking can be undertaken by individuals acting alone, it can be coordinated by workgroups, or it can be organized by top management as a matter of unofficial ...
---
paper_title: IMPACTS OF IT ACCEPTANCE AND RESISTANCE BEHAVIORS: A NOVEL FRAMEWORK
paper_content:
Despite the progress that has been made in understanding acceptance and resistance, there remains a need to further clarify into what behaviors they translate and what their impacts are. On the basis of our review, acceptance and resistance are associated with a range of behaviors, which in turn are related to various individual and organizational impacts. We suggest that taking these behaviors at face value is misleading and that a better understanding of their impacts lies in taking organizational intent into account. We develop propositions to provide a theoretical explanation of the impacts of IT-related behaviors associated with acceptance and resistance in light of their conformity with IT terms of use. Generally, acceptance and conformity with terms of use result in positive impacts but may occasionally have adverse consequences. Similarly, resistance and non-conformity typically have negative consequences but may sometimes benefit the organization.
---
paper_title: IT Consumerization - A Theory and Practice Review
paper_content:
Consumerization of IT refers to privately-owned IT resources, such as devices or software that are used for business purposes. The effects of consumerization are considered to be a major driver that redefines the relationship between employees (in terms of consumers of enterprise IT) and the IT organization. While there has been extensive debate on these matters in practice, IS research has not developed a clear theoretical understanding of the phenomenon. We present a theory and practice review, where the existing literature on consumerization is reviewed and a clear definition of the concept is developed. This study contributes to a theoretical understanding of IT consumerization in relation to fundamental aspects of IS. Our analysis shows, first, which distinct aspects of IS are affected by consumerization. Secondly, we provide an overview over major advantages and disadvantages for employees and organizations by conducting a systematic analysis of current literature available on the topic.
---
paper_title: ERP GLOBAL TEMPLATE AND ORGANIZATIONAL INFORMAL STRUCTURES A PRACTICE-BASED STUDY
paper_content:
Based on an interpretive case study and Activity Theory as a theoretical framework, this contribution shows how local users of an enterprise resource planning system in a Chinese joint-venture of a French multinational corporation have developed an informal organizational structure which eventually led to workaround work practices. The development and quest for alternative practices was justified as indispensable and appropriate to respond to local needs. In that perspective, the formal structure was not sufficient to cover the problems met at the Chinese subsidiary. Our research tackles an original aspect of post ERP implementation since it suggests that informal work practices might eventually lead to the adoption of global IS.
---
paper_title: Stealing Fire: Creative Deviance in the Evolution of New Ideas
paper_content:
What happens when an employee generates a new idea and wants to further explore it but is instructed by a manager to stop working on it? Among the various possibilities, the employee could choose to violate the manager's order and pursue the new idea illegitimately. I describe this action as creative deviance and, drawing on the creativity literature and deviance literature, propose a theory about its organizational conditions and implications.
---
paper_title: Employee Theft as a Reaction to Underpayment Inequity: The Hidden Cost of Pay Cuts
paper_content:
Employee theft rates were measured in manufacturing plants during a period in which pay was temporarily reduced by 15%. Compared with pre- or postreduction pay periods (or with control groups whose pay was unchanged), groups whose pay was reduced had significantly higher theft rates. When the basis for the pay cuts was thoroughly and sensitively explained to employees, feelings of inequity were lessened, and the theft rate was reduced as well. The data support equity theory's predictions regarding likely responses to underpayment and extend recently accumulated evidence demonstrating the mitigating effects of adequate explanations on feelings of inequity.
---
paper_title: Formal Ontology in Information Systems
paper_content:
Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term "information systems", in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.
---
paper_title: Techniques of neutralization: A theory of delinquency.
paper_content:
In attempting to uncover the roots of juvenile delinquency, the social scientist has long since ceased to search for devils in the mind or stigma of the body. It is now largely agreed that delinquent behavior, like most social behavior, is learned and that it is learned in the process of social interaction. The classic statement of this position is found in Sutherland's theory of differential association, which asserts that criminal or delinquent behavior involves the learning of (a) techniques of committing crimes and (b) motives, drives, rationalizations, and attitudes favorable to the violation of law. Unfortunately, the specific content of what is learned, as opposed to the process by which it is learned, has received relatively little attention in either theory or research. Perhaps the single strongest school of thought on the nature of this content has centered on the idea of a delinquent subculture. The basic characteristic of the delinquent sub-culture, it is argued, is a system of values that represents an inversion of the values held by respectable, law-abiding society. The world of the delinquent is the world of the law-abiding turned upside down and its norms constitute a countervailing force directed against the conforming social order. Cohen sees the process of developing a delinquent sub-culture as a matter of building, maintaining, and reinforcing a code for behavior which exists by opposition, which stands in point by point contradiction to dominant values, particularly those of the middle class. Cohen's portrayal of delinquency is executed with a good deal of sophistication, and he carefully avoids overly simple explanations such as those based on the principle of "follow the leader" or easy generalizations about "emotional disturbances." Furthermore, he does not accept the delinquent sub-culture as something given, but instead systematically examines the function of delinquent values as a viable solution to the lower-class, male child's problems in the area of social status. Yet in spite of its virtues, this image of juvenile delinquency as a form of behavior based on competing or countervailing values and norms appears to suffer from a number of serious defects. It is the nature of these defects and a possible alternative or modified explanation for a large portion of juvenile delinquency with which this paper is concerned. The difficulties in viewing delinquent behavior as springing from a set of deviant values and norms, as arising, that is to say, from a situation in which the delinquent defines his delinquency as "right", are both empirical and theoretical. In the first place, if there existed in fact a delinquent subculture such that the delinquent viewed his illegal behavior as morally correct, we could reasonably suppose that he would exhibit no feelings of guilt or shame at detection or confinement. Instead, the major reaction would tend in the direction of indignation or a sense of martyrdom. It is true that some delinquents do react in the latter fashion, although the sense of martyrdom often seems to be based on the fact that others "get away with it" and indignation appears to be directed against the chance events or lack of skill that led to apprehension. More important, however, is the fact that there is a good deal of evidence suggesting that many delinquents do experience a sense of guilt or shame.
---
paper_title: The IT way of loafing on the job: cyberloafing, neutralizing and organizational justice
paper_content:
Much attention has been devoted to how technological advancements have created a brave new workplace, revolutionizing the ways in which work is being carried out, and how employees can improve their productivity and efficiency. However, the advent of technology has also opened up new avenues and opportunities for individuals to misbehave. This study focused on cyberloafing—the act of employees using their companies' internet access for personal purposes during work hours. Cyberloafing, thus, represents a form of production deviance. Using the theoretical frameworks offered by social exchange, organizational justice and neutralization, we examined the often-neglected dark side of the internet and the role that neutralization techniques play in facilitating this misbehavior at the workplace. Specifically, we developed a model which suggested that when individuals perceived their organizations to be distributively, procedurally and interactionally unjust, they were likely to invoke the metaphor of the ledger as a neutralization technique to legitimize their subsequent engagement in the act of cyberloafing. Data were collected with the use of an electronic questionnaire and focus group interviews from 188 working adults with access to the internet at the workplace. Results of structural equation modelling provided empirical support for all of our hypotheses. Implications of our findings for organizational internet policies are discussed.
---
paper_title: When flexible routines meet flexible technologies: affordance, constraint, and the imbrication of human and material agencies
paper_content:
Employees in many contemporary organizations work with flexible routines and flexible technologies. When those employees find that they are unable to achieve their goals in the current environment, how do they decide whether they should change the composition of their routines or the materiality of the technologies with which they work? The perspective advanced in this paper suggests that the answer to this question depends on how human and material agencies, the basic building blocks common to both routines and technologies, are imbricated. Imbrication of human and material agencies creates infrastructure in the form of routines and technologies that people use to carry out their work. Routine or technological infrastructure used at any given moment is the result of previous imbrications of human and material agencies. People draw on this infrastructure to construct a perception that a technology either constrains their ability to achieve their goals, or that the technology affords the possibility of achieving new goals. The case of a computer simulation technology for automotive design, used to illustrate this framework, suggests that perceptions of constraint lead people to change their technologies while perceptions of affordance lead people to change their routines. This imbrication metaphor is used to suggest how a human agency approach to technology can usefully incorporate notions of material agency into its explanations of organizational change.
---
paper_title: Ontology Development 101: A Guide to Creating Your First Ontology
paper_content:
Why develop an ontology? In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of Artificial Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology?
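To ground the idea of machine-interpretable definitions of concepts and relations, here is a minimal sketch using the rdflib library; the workaround-related class and property names, and the namespace URL, are invented for illustration and are not taken from the guide or from the survey outline below.

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

WA = Namespace("http://example.org/workaround#")   # hypothetical namespace
g = Graph()
g.bind("wa", WA)

# Two illustrative classes and one relation between them
g.add((WA.Workaround, RDF.type, OWL.Class))
g.add((WA.Obstacle, RDF.type, OWL.Class))
g.add((WA.circumvents, RDF.type, OWL.ObjectProperty))
g.add((WA.circumvents, RDFS.domain, WA.Workaround))
g.add((WA.circumvents, RDFS.range, WA.Obstacle))

print(g.serialize(format="turtle"))
```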
---
paper_title: Workaround Aware Business Process Modeling
paper_content:
Workarounds are an omnipresent part of organizational settings where formal rules and regulations describe standardized processes. Still, only a few studies have focused on incorporating workarounds in designing information systems (IS) or as a part of management decisions. Therefore, this study provides an extension to the Business Process Modeling Notation (BPMN) by conducting a metamodel transformation, which includes workarounds. As a result, the Workaround Process Modeling Notation (WPMN) (1) leads organizations in designing workaround-aware systems, (2) supports managers in deciding how to deal with workarounds, and (3) provides auditors with visualizations of non-compliance. We exemplify how this technique can be used to model a workaround in the process of accessing patient-identifying data in a hospital. We evaluated the model and found it particularly suitable as an empirically grounded BPMN extension.
---
paper_title: A Situational Perspective on Workarounds in IT-Enabled Business Processes: A Multiple Case Study
paper_content:
Workarounds are still one of the most puzzling phenomena in business process management research and practice. From a compliance perspective, workarounds are studied as control failure and the cause for inferior process quality. From a process reengineering perspective, however, workarounds are studied as an important source of process improvement. In this paper, we advance recent theory on the emergence of workarounds to resolve this puzzle by analyzing empirical evidence from a multiple case study. Our analysis reveals that employees utilize workarounds based on a risk-benefit analysis of the situational context. If the realized benefits (efficiency gains) outweigh the situational risks (exposure of process violations), workarounds will be perceived as process improvement. Erroneous risk-benefit analysis, however, leads to exposure of the same workaround as control failure. Quite unexpectedly, we found that information systems serve as critical cues for the situational balance of benefits and risks. Our result suggests that process-instance-level workarounds are treated as options that are engaged if the situation permits, in contrast to process-level workarounds that manifest as unofficial routines. We also contribute the notion of situational risk-benefits analysis to the theory on workarounds.
---
| Title: Toward an Ontology of Workarounds: A Literature Review on Existing Concepts
Section 1: Introduction
Description 1: This section introduces the topic of workaround behavior in information systems, discussing its prevalence, varying interpretations, and the identified research gaps.
Section 2: Theoretical Background
Description 2: This section defines workarounds, traces their origins in organizational psychology, and reviews how they have been classified and examined in existing literature.
Section 3: Research Method
Description 3: This section describes the methodology used for the literature review, including the scope definition, search strategy, selection criteria, and the classification process.
Section 4: Results
Description 4: This section presents the findings from the literature review, including the identified types of workarounds, their definitions, and a summary of the relevant studies.
Section 5: Ontology
Description 5: This section outlines the development of an ontology for workarounds, describing the methodological approach and presenting a visual representation of the derived ontology.
Section 6: Discussion
Description 6: This section discusses the implications of the findings, acknowledges the limitations of the study, and explores the potential for future research in the field of workarounds.
Section 7: Conclusion
Description 7: This section summarizes the study's contributions to the understanding of workarounds, highlights key insights, and suggests directions for future research in the domain. |
Routing Protocols in Vehicular Ad hoc Networks: Survey and Research Challenges | 16 | ---
paper_title: Dynamic source routing in ad hoc wireless networks
paper_content:
An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.
---
paper_title: Ad-hoc on-demand distance vector routing
paper_content:
An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. We present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each mobile host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic self starting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm.
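A highly simplified sketch of the on-demand route discovery idea behind this family of protocols: a route request is flooded hop by hop, and the first path that reaches the destination plays the role of the route confirmed by the route reply. Sequence numbers, timers, and route maintenance, which are central to real AODV, are deliberately omitted, so this is a didactic abstraction rather than the protocol itself.

```python
from collections import deque

def discover_route(topology, source, destination):
    """Breadth-first 'RREQ flood'; returns the hop sequence a 'RREP' would confirm."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for neighbor in topology.get(node, []):      # rebroadcast to all neighbors
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    return None                                       # request dies out: no route

links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(discover_route(links, "A", "D"))                # ['A', 'B', 'C', 'D']
```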
---
paper_title: Performance assessment of a geographic routing protocol for vehicular delay-tolerant networks
paper_content:
This paper analyses the performance of a new routing protocol for vehicular delay-tolerant networks, called GeoSpray. This geographic routing protocol performs store-carry-and-forward routing, combining replication and forwarding/routing decisions based on location information, with explicit delivery acknowledgments to improve the utilization of network resources. The performance of the proposed routing protocol is evaluated through simulation. The results have shown that GeoSpray achieves higher delivery ratios and lower delivery delays with a considerably low communication overhead, compared to six well-known routing protocols for delay-tolerant networks.
---
paper_title: Routing protocols for inter-vehicular networks: A comparative study in high-mobility and large obstacles environments
paper_content:
An ad hoc network is composed of mobile nodes without the presence of a fixed infrastructure. Communications among nodes are accomplished by forwarding data packets for each other, on a hop-by-hop basis along the current connection to the destination node. In particular, vehicle-to-vehicle communications have been studied, in recent years, to improve driver safety. As more of such applications of high-mobility ad hoc networks emerge, it is critical that the routing protocol employed is capable of efficiently coping with the high frequency of broken links (i.e., robust with respect to high mobility). This paper presents a comprehensive comparative study in a city environment of eight representative routing protocols for wireless mobile ad hoc networks and inter-vehicular networks developed in recent years. In a city environment, communication protocols need to adapt to fast-moving nodes (e.g., vehicles on streets) and large obstacles (e.g., office buildings). In this paper, we elaborate upon extensive simulation results based on various network scenarios, and discuss the strengths and weaknesses of these techniques with regard to their support for highly mobile nodes.
---
paper_title: Routing Protocols for Vehicular Ad Hoc Networks That Ensure Quality of Service
paper_content:
Vehicular ad hoc networks (VANETs) allow vehicles to form a self-organized network without the need for permanent infrastructure. Even though VANETs are mobile ad hoc networks (MANETs), because of the intrinsic characteristics of VANETs, routing protocols designed for MANETs cannot be directly applied for VANETs. In this paper, we present a timeline of the development of the existing routing protocols for VANETs that try to support quality of service (QoS). Moreover, we classify and characterize the existing QoS routing protocols for VANETs and also provide a qualitative comparison of them. This classification and characterization helps in understanding the strengths and weaknesses of the existing QoS protocols and also throws light on open issues that remain to be addressed.
---
paper_title: Adaptive routing protocols for vehicular ad hoc networks
paper_content:
Vehicular ad hoc networks based on IEEE 802.11 (WLAN) technology are emerging technologies for future Intelligent Transportation Systems (ITS). We investigate the behavior of routing protocols in vehicular networks by using microscopic mobility information based on real road maps. The simulation results show the remarkable effect of real mobility models on the performance of routing protocols. We observe a significant reduction in packet delivery ratio in traditional and even position-based routing protocols. To address these performance limitations, we propose several improvements that take into account special characteristics of vehicular networks. The major improvement is the use of link expiration time (LET) as a node mobility and density indicator to provide transmission range adaptation. We propose two adaptive routing protocols for vehicular ad hoc networks and evaluate them through simulation.
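The abstract does not spell out how the link expiration time (LET) is computed; a commonly used closed-form estimate, assuming both vehicles keep their current speed and heading, can be sketched in Python as follows (the function name and the handling of out-of-range cases are our own illustrative choices):

    import math

    def link_expiration_time(xi, yi, vi, thi, xj, yj, vj, thj, r):
        # Predict how long nodes i and j stay within radio range r, assuming each
        # keeps its current speed (vi, vj) and heading (thi, thj, in radians).
        a = vi * math.cos(thi) - vj * math.cos(thj)   # relative velocity, x component
        b = xi - xj                                    # relative position, x component
        c = vi * math.sin(thi) - vj * math.sin(thj)   # relative velocity, y component
        d = yi - yj                                    # relative position, y component
        if a == 0 and c == 0:
            return float("inf")                        # identical velocity: link never expires
        disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
        if disc < 0:
            return 0.0                                 # nodes are not within range r at all
        return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)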
---
paper_title: Routing Protocols in Vehicular Ad Hoc Networks: A Survey and Future Perspectives
paper_content:
Vehicular Ad hoc Network (VANET), a subclass of mobile ad hoc networks (MANETs), is a promising approach for the intelligent transportation system (ITS). The design of routing protocols in VANETs is an important and necessary issue for supporting the smart ITS. The key difference between VANET and MANET is the special mobility pattern and rapidly changing topology, so the existing routing protocols of MANETs cannot be effectively applied to VANETs. In this investigation, we mainly survey new routing results in VANETs, introducing unicast, multicast, geocast, mobicast, and broadcast protocols. It is observed that carry-and-forward is the new and key consideration for designing all routing protocols in VANETs. With the consideration of multi-hop forwarding and carry-and-forward techniques, min-delay and delay-bounded routing protocols for VANETs are discussed. Besides, the temporary network fragmentation problem and the broadcast storm problem are further considered for designing routing protocols in VANETs. The temporary network fragmentation problem, caused by the rapidly changing topology, degrades the performance of data transmissions, while the broadcast storm problem seriously affects the success rate of message delivery. The key challenge is to overcome these problems to provide routing protocols with low communication delay, low communication overhead, and low time complexity. The challenges and perspectives of routing protocols for VANETs are finally discussed.
---
paper_title: Simulation-Based Study of Common Issues in VANET Routing Protocols
paper_content:
Vehicular communications have been one of the hottest research topics for the last few years. Many routing protocols have been proposed for such kinds of networks. Most of them try to exploit the information which may be available at the vehicle by the time that a routing decision must be made. In addition, some solutions are designed taking into account the particular, highly partitioned, network connectivity in vehicular settings. To do so, they embrace the store-carry-forward paradigm of delay-tolerant networks. Despite the great variety of approaches which have been proposed, we found that there is a set of issues which are common to many vehicular ad hoc routing protocols in the literature. In this paper, we perform a simulation-based analysis of five of those protocols, which are representative of the various categories of vehicular routing. We describe in detail every problem and show simulation results which support our reasoning. Moreover, solutions to solve every presented problem are outlined. The paper is concluded with some guidelines which may be helpful to prospective VANET routing protocol designers.
---
paper_title: Improvement of vehicular communications by using 3G capabilities to disseminate control information
paper_content:
Cellular networks have gained a lot of popularity in the context of vehicular communication within the last few years. Existing reference architectures such as CALM already consider them to provide enhanced connectivity to vehicles for data communication. Their capabilities, especially 3G and next-generation, translate into great potential in the vehicular environment, far beyond the provision of data connectivity. In this context we present a solution that uses 3G cellular networks not only as a backup for data communication among vehicles, but also and especially as an efficient mechanism for the dissemination of relevant control information for multiple applications, services, and protocols. Our simulation results demonstrate that by using our 3G-based solution to disseminate connectivity information, vehicular ad hoc routing protocols improve their route selections, which results in a higher packet delivery ratio in urban scenarios.
---
paper_title: Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
paper_content:
An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
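The sequence-number rule that keeps DSDV loop-free can be condensed into a small update function. The sketch below is a minimal Python illustration under assumed table and field names, not the paper's specification:

    def dsdv_update(table, dest, adv_seq, adv_metric, next_hop):
        # table maps destination -> (seq, metric, next_hop). A received advertisement
        # replaces the stored route only if it carries a fresher (higher) destination
        # sequence number, or the same sequence number with a lower metric.
        entry = table.get(dest)
        if entry is None:
            table[dest] = (adv_seq, adv_metric, next_hop)
            return True
        cur_seq, cur_metric, _ = entry
        if adv_seq > cur_seq or (adv_seq == cur_seq and adv_metric < cur_metric):
            table[dest] = (adv_seq, adv_metric, next_hop)
            return True
        return False  # stale or worse advertisement: keep the existing route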
---
paper_title: The broadcast storm problem in a mobile ad hoc network
paper_content:
Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.
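One of the suppression ideas explored in this line of work is counter-based rebroadcasting: delay the rebroadcast briefly and cancel it if the same packet is overheard too often. A minimal Python sketch follows; the threshold of 3 and the 10 ms assessment window are assumptions for illustration, not values from the paper:

    import random

    class CounterBasedFlooding:
        # Suppress redundant rebroadcasts: cancel if the packet is overheard
        # more than `threshold` times during a small random assessment delay.
        def __init__(self, threshold=3):
            self.threshold = threshold
            self.counts = {}          # packet_id -> number of times overheard

        def on_receive(self, packet_id):
            first_time = packet_id not in self.counts
            self.counts[packet_id] = self.counts.get(packet_id, 0) + 1
            if first_time:
                return random.uniform(0.0, 0.01)   # schedule rebroadcast after a random delay (s)
            return None                             # duplicate: just count it

        def should_rebroadcast(self, packet_id):
            # Called when the scheduled delay expires.
            return self.counts.get(packet_id, 0) < self.threshold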
---
paper_title: A routing strategy for vehicular ad hoc networks in city environments
paper_content:
Routing of data in a vehicular ad hoc network is a challenging task due to the high dynamics of such a network. Recently, it was shown for the case of highway traffic that position-based routing approaches can very well deal with the high mobility of network nodes. However, baseline position-based routing has difficulties to handle two-dimensional scenarios with obstacles (buildings) and voids as it is the case for city scenarios. In this paper we analyze a position-based routing approach that makes use of the navigational systems of vehicles. By means of simulation we compare this approach with non-position-based ad hoc routing strategies (dynamic source routing and ad-hoc on-demand distance vector routing). The simulation makes use of highly realistic vehicle movement patterns derived from Daimler-Chrysler's Videlio traffic simulator. While DSR's performance is limited due to problems with scalability and handling mobility, both AODV and the position-based approach show good performances with the position-based approach outperforming AODV.
---
paper_title: Self-organizing wide-area network caches
paper_content:
A substantial fraction of all network traffic today comes from applications in which clients retrieve objects from servers. The caching of objects in locations "close" to clients is an important technique for reducing both network traffic and response time for such applications. In this paper we consider the benefits of associating caches with switching nodes throughout the network, rather than in a few locations. We also consider the use of various self-organizing or active cache management strategies for organizing cache content. We evaluate caching techniques using both simulation and a general analytic model for network caching. Our results indicate that in-network caching can make effective use of cache space, and in many cases self-organizing caching schemes yield better average round-trip latencies than traditional approaches, using much smaller per-node caches.
---
paper_title: Routing in a delay tolerant network
paper_content:
We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs.
---
paper_title: A scalable location service for geographic ad hoc routing
paper_content:
GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node. Experiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.
---
paper_title: Vehicular grid communications: the role of the internet infrastructure
paper_content:
Vehicle communications are becoming a reality, driven by navigation safety requirements and by the investments of car manufacturers and Public Transport Authorities. As a consequence many of the essential vehicle grid components (radios, Access Points, spectrum, standards, etc.) will soon be in place (and paid for) paving the way to unlimited opportunities for other car-to-car applications beyond safe navigation, for example, from news to entertainment, mobile network games and civic defense. In this study, we take a visionary look at these future applications, the emerging "Vehicular Grid" that will support them and the interplay between the grid and the communications infrastructure. In essence, the Vehicular Grid is a large scale ad hoc network. However, an important feature of the Vehicular Grid, which sets it apart from most instantly-deployed ad hoc networks, is the ubiquitous presence of the infrastructure (and the opportunity to use it). While the Vehicular Grid must be entirely self-supporting for emergency operations (natural disaster, terrorist attack, etc), it should exploit the infrastructure (when present) during normal operations. In this paper we address the interaction between vehicles and Internet servers through Virtual Grid and Internet Infrastructure. This includes transparent geo-route provisioning across the Internet, mobile resource monitoring, and mobility management (using back up services in case of infrastructure failure). We then focus on routing and show the importance of Infrastructure cooperation and feedback for efficient, congestion free routing.
---
paper_title: Drive and share: efficient provisioning of social networks in vehicular scenarios
paper_content:
Social Networks are one of the latest revolutions in networking, allowing users with common interests to stay connected and exchange information. They have enjoyed great success not only for traditional Internet users, but also for mobile users. Recent efforts are also being made to make social networks available within vehicles. However, to exploit social networks at their full potential in a vehicular context, a number of technical challenges and design issues need to be faced. In this article, we analyze those challenges and present an innovative solution for providing Social services on the vehicle based on IP Multimedia Subsystem and Machine to Machine capabilities. To demonstrate the viability of the proposed scheme we present a social network service called Drive and Share which offers relevant information to vehicles using our proposed architecture.
---
paper_title: IGF: A State-Free Robust Communication Protocol for Wireless Sensor Networks
paper_content:
Wireless Sensor Networks (WSNs) are being designed to solve a gamut of interesting real-world problems. Limitations on available energy and bandwidth, message loss, high rates of node failure, and communication restrictions pose challenging requirements for these systems. Beyond these inherent limitations, both the possibility of node mobility and energy conserving protocols that power down nodes introduce additional complexity to routing protocols that depend on up to date routing or neighborhood tables. Such state-based protocols suffer excessive delay or message loss, as system dynamics require expensive upkeep of these tables. Utilizing characteristics of high node density and location awareness, we introduce IGF, a location-aware routing protocol that is robust and works without knowledge of the existence of neighboring nodes (state-free). We compare our work against established routing protocols to demonstrate the efficacy of our solution when nodes are mobile or periodically sleep to conserve energy. We show that IGF far outperforms these protocols, in some cases delivering close to 100% of the packets transmitted while alternate solutions fail to even find a path between a source and destination. Specifically, we show that our protocol demonstrates a vast improvement over prior work using metrics of delivery ratio, control overhead, and end-to-end delay.
---
paper_title: PROMPT: A cross-layer position-based communication protocol for delay-aware vehicular access networks
paper_content:
Vehicular communication systems facilitate communication devices for exchange of information among vehicles and between vehicles and roadside equipment. These systems are used to provide a myriad of services ranging from traffic safety application to convenience applications for drivers and passengers. In this paper, we focus on the design of communication protocols for vehicular access networks where vehicles access a wired backbone network by means of a multi-hop data delivery service. Key challenges in designing protocols for vehicular access networks include quick adaptability to frequent changes in the network topology due to vehicular mobility and delay awareness in data delivery. To address these challenges, we propose a cross-layer position-based delay-aware communication protocol called PROMPT. It adopts a source routing mechanism that relies on positions independent of vehicle movement rather than on specific vehicle addresses. Vehicles monitor information exchange in their reception range to obtain data flow statistics, which are then used in estimating the delay and selecting best available paths. Through a detailed simulation study using ns-2, we empirically show that PROMPT outperforms existing routing protocols proposed for vehicular networks in terms of end-to-end packet delay, packet loss rate, and fairness of service.
---
paper_title: A vehicle-to-vehicle communication protocol for cooperative collision warning
paper_content:
This paper proposes a vehicle-to-vehicle communication protocol for cooperative collision warning. Emerging wireless technologies for vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications such as DSRC are promising to dramatically reduce the number of fatal roadway accidents by providing early warnings. One major technical challenge addressed in this paper is to achieve low-latency in delivering emergency warnings in various road situations. Based on a careful analysis of application requirements, we design an effective protocol, comprising congestion control policies, service differentiation mechanisms and methods for emergency warning dissemination. Simulation results demonstrate that the proposed protocol achieves low latency in delivering emergency warnings and efficient bandwidth usage in stressful road scenarios.
---
paper_title: GeOpps: Geographical Opportunistic Routing for Vehicular Networks
paper_content:
Vehicular networks can be seen as an example of hybrid delay tolerant network where a mixture of infostations and vehicles can be used to geographically route the information messages to the right location. In this paper we present a forwarding protocol which exploits both the opportunistic nature and the inherent characteristics of the vehicular network in terms of mobility patterns and encounters, and the geographical information present in navigator systems of vehicles. We also report about our evaluation of the protocol over a simulator using realistic vehicular traces and in comparison with other geographical routing protocols.
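GeOpps ranks candidate carriers by how close (and how soon) their planned navigation route brings a packet to the destination. The Python sketch below approximates that "minimum estimated time of delivery" style utility under our own simplifying assumptions (piecewise-linear routes, a single average speed, and a straight-line final gap); the paper itself relies on the navigation system's estimates:

    import math

    def nearest_point_to_dest(route_points, dest):
        # Return (min_distance, index) of the planned-route point closest to dest.
        dists = [math.hypot(x - dest[0], y - dest[1]) for x, y in route_points]
        i = min(range(len(dists)), key=dists.__getitem__)
        return dists[i], i

    def geopps_utility(route_points, dest, avg_speed):
        # Time to drive to the route's nearest point to the destination, plus time
        # to cover the remaining straight-line gap at the same average speed.
        gap, idx = nearest_point_to_dest(route_points, dest)
        driven = sum(math.hypot(route_points[k + 1][0] - route_points[k][0],
                                route_points[k + 1][1] - route_points[k][1])
                     for k in range(idx))
        return (driven + gap) / avg_speed

    def choose_carrier(candidates, dest, avg_speed):
        # candidates: list of (vehicle_id, planned_route_points). Lower utility wins.
        return min(candidates,
                   key=lambda c: geopps_utility(c[1], dest, avg_speed))[0]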
---
paper_title: Vehicle-to-vehicle safety messaging in DSRC
paper_content:
This paper studies the design of layer-2 protocols for a vehicle to send safety messages to other vehicles. The target is to send vehicle safety messages with high reliability and low delay. The communication is one-to-many, local, and geo-significant. The vehicular communication network is ad-hoc, highly mobile, and with large numbers of contending nodes. The messages are very short, have a brief useful lifetime, but must be received with high probability. For this environment, this paper explores the efficacy of rapid repetition of broadcast messages. This paper proposes several random access protocols for medium access control. The protocols are compatible with the Dedicated Short Range Communications (DSRC) multi-channel architecture. Analytical bounds on performance of the proposed protocols are derived. Simulations are conducted to assess the reception reliability and channel usage of the protocols. The sensitivity of the protocol performance is evaluated under various offered traffic and vehicular traffic flows. The results show our approach is feasible for vehicle safety messages in DSRC.
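A back-of-the-envelope calculation shows why rapid repetition helps reliability. The independence assumption below is ours for illustration and ignores the correlated collisions that the paper's analytical bounds account for:

    def reception_probability(p_single, repetitions):
        # Probability that at least one of `repetitions` copies of a safety message
        # is received, given per-copy success probability p_single.
        # Assumes independent losses, which ignores correlated collisions.
        return 1.0 - (1.0 - p_single) ** repetitions

    # Example: a 60%-reliable single broadcast repeated 4 times within the
    # message's useful lifetime reaches about 97.4% reception probability.
    print(reception_probability(0.6, 4))   # 0.9744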
---
paper_title: Beacon-less geographic routing made practical: challenges, design guidelines, and protocols
paper_content:
Geographic routing has emerged as one of the most efficient and scalable routing solutions for wireless sensor networks. In traditional geographic routing protocols, each node exchanges periodic one-hop beacons to determine the position of its neighbors. Recent studies proved that these beacons can create severe problems in real deployments due to the highly dynamic and error-prone nature of wireless links. To avoid these problems, new variants of geographic routing protocols that do not require beacons are being proposed. In this article we review some of the latest proposals in the field of beacon-less geographic routing and introduce the main design challenges and alternatives. In addition, we perform an empirical study to assess the performance of beacon-based and beacon-less routing protocols using a real WSN deployment.
---
paper_title: Contention-Based Forwarding for Street Scenarios
paper_content:
In this paper, we propose to apply Contention-Based Forwarding (CBF) to Vehicular Ad Hoc Networks (VANETs). CBF is a greedy position-based forwarding algorithm that does not require proactive transmission of beacon messages. CBF performance is analyzed using realistic movement patterns of vehicles on a highway. We show by means of simulation that CBF as well as traditional position-based routing (PBR) achieve a delivery rate of almost 100% given that connectivity exists. However, CBF has a much lower forwarding overhead than PBR since PBR can achieve high delivery ratios only by implicitly using a trial-and-error next-hop selection strategy. With CBF, a better total throughput can be achieved. We further discuss several optimizations of CBF for its use in VANETs, in particular a new position-encoding scheme that naturally allows for communication paradigms such as "street geocast" and "street flooding". The discussions show that CBF can be viewed as a concept for convergence of intelligent flooding, geocast, and multi-hop forwarding in the area of inter-vehicle communication.
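The biased timers behind CBF-style contention can be sketched in a few lines: each receiving neighbor sets a timer inversely proportional to the progress it offers, so the best-placed node replies first and suppresses the others. In the Python sketch below, the maximum timeout t_max is an assumed parameter:

    import math

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def cbf_timeout(sender, receiver, destination, radio_range, t_max=0.05):
        # Progress = how much closer the receiver is to the destination than the
        # sender was; normalizing by the radio range maps it to [0, 1]. Larger
        # progress -> shorter timer -> higher priority to become the relay.
        progress = distance(sender, destination) - distance(receiver, destination)
        progress = max(0.0, min(progress, radio_range))
        return t_max * (1.0 - progress / radio_range)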
---
paper_title: Priority-based receiver-side relay election in wireless ad hoc sensor networks
paper_content:
Receiver-side relay election has been recently proposed as an alternative to transmitter-side relay selection in wireless ad hoc networks. In this paper we study different prioritization schemes among potential relay nodes to achieve a better delay and contention resolution performance. We consider a priority criterion based on the least remaining distance to the destination and propose a generalized mapping function to introduce relative priority among the eligible relay nodes. We show that a suitable mapping can be found to achieve an optimum relay election performance, which also outperforms the random forwarding approach. Our intuition is guided by an analytic framework and verified by network simulation.
---
paper_title: Beacon-Less Geographic Routing in Real Wireless Sensor Networks
paper_content:
Geographic Routing (GR) algorithms require nodes to periodically transmit HELLO messages to allow neighbors to know their positions (beaconing mechanism). Beacon-less routing algorithms have recently been proposed to reduce the control overheads due to these messages. However, existing beacon-less algorithms have not considered realistic physical layers. Therefore, those algorithms cannot work properly in realistic scenarios. In this paper we present a new beaconless routing protocol called BOSS. Its design is based on the conclusions of our open-field experiments using Tmote-sky sensors. BOSS is adapted to error-prone networks and incorporates a new mechanism to reduce collisions and duplicate messages produced during the selection of the next forwarder node. We compare BOSS with Beacon-Less Routing (BLR) and Contention-Based Forwarding (CBF) algorithms through extensive simulations. The results show that our scheme is able to achieve almost perfect packet delivery ratio (like BLR) while having a low bandwidth consumption (even lower than CBF). Additionally, we carried out an empirical evaluation in a real testbed that shows the correctness of our simulation results.
---
paper_title: Beaconless Position-based Routing with Guaranteed Delivery for Wireless Ad hoc and Sensor Networks
paper_content:
Existing position-based routing algorithms, where packets are forwarded in the geographic direction of the destination, normally require that the forwarding node should know the positions of all neighbors in its transmission range. This information on direct neighbors is gained by observing beacon messages that each node sends out periodically. Several beaconless greedy routing schemes have been proposed recently. However, none of the existing beaconless schemes guarantee the delivery of packets. Moreover, they incur communication overhead by sending excessive control messages or by broadcasting data packets. In this paper, we describe how existing localized position based routing schemes that guarantee delivery can be made beaconless, while preserving the same routes. In our guaranteed delivery beaconless routing scheme, the next hop is selected through the use of control RTS/CTS messages and biased timeouts. In greedy mode, the neighbor closest to destination responds first. In recovery mode, nodes closer to the source will select shorter timeouts, so that other neighbors, overhearing CTS packets, can eliminate their own CTS packets if they realize that their link to the source is not part of Gabriel graph. Nodes also cancel their packets after receiving data message sent by source to the selected neighbor. We analyze the behavior of our scheme on our simulation environment assuming ideal MAC, following GOAFR+ and GFG routing schemes. Our results demonstrate low communication overhead in addition to guaranteed delivery.
---
paper_title: Trajectory based forwarding and its applications
paper_content:
Trajectory based forwarding (TBF) is a novel method to forward packets in a dense ad hoc network that makes it possible to route a packet along a predefined curve. It is a hybrid between source based routing and Cartesian forwarding in that the trajectory is set by the source, but the forwarding decision is based on the relationship to the trajectory rather than names of intermediate nodes. The fundamental aspects of TBF are: it decouples path naming from the actual path; it provides cheap path diversity; it trades off communication for computation. These aspects address the double scalability issue with respect to mobility rate and network size. In addition, TBF provides a common framework for many services such as: broadcasting, discovery, unicast, multicast and multipath routing in ad hoc networks. TBF requires that nodes know their position relative to a coordinate system. While a global coordinate system afforded by a system such as GPS would be ideal, approximate positioning methods provided by other algorithms are also usable.
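The core forwarding decision of TBF, picking the neighbor that lies closest to the source-specified curve, can be sketched as follows (Python; the trajectory is approximated as a polyline and the advancement-along-the-curve component is omitted for brevity):

    import math

    def point_segment_distance(p, a, b):
        # Clamped perpendicular distance from point p to segment a-b; all 2D tuples.
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def tbf_next_hop(neighbors, trajectory):
        # Pick the neighbor lying closest to the trajectory polyline.
        best, best_dev = None, float("inf")
        for n in neighbors:
            dev = min(point_segment_distance(n, trajectory[i], trajectory[i + 1])
                      for i in range(len(trajectory) - 1))
            if dev < best_dev:
                best, best_dev = n, dev
        return best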
---
paper_title: Knowledge-based opportunistic forwarding in vehicular wireless ad hoc networks
paper_content:
When highly mobile nodes are interconnected via wireless links, the resulting network can be used as a transit network to connect other disjoint ad-hoc networks. In this paper, we compare five different opportunistic forwarding schemes, which vary in their overhead, their success rate, and the amount of knowledge about neighboring nodes that they require. In particular, we present the MoVe algorithm, which uses velocity information to make intelligent opportunistic forwarding decisions. Using auxiliary information to make forwarding decisions provides a reasonable trade-off between resource overhead and performance.
---
paper_title: GPSR: greedy perimeter stateless routing for wireless networks
paper_content:
We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.
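GPSR's greedy mode reduces to a one-line neighbor selection; the Python sketch below omits the perimeter-mode recovery and uses illustrative function names:

    import math

    def greedy_next_hop(self_pos, neighbors, dest):
        # Return the neighbor geographically closest to the destination, or None if
        # no neighbor improves on our own distance (a local maximum, where GPSR
        # would switch to perimeter mode).
        def dist(p):
            return math.hypot(p[0] - dest[0], p[1] - dest[1])
        best = min(neighbors, key=dist, default=None)
        if best is None or dist(best) >= dist(self_pos):
            return None
        return best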
---
paper_title: Connectivity-Aware Routing (CAR) in Vehicular Ad-hoc Networks
paper_content:
Vehicular ad hoc networks using WLAN technology have recently received considerable attention. We present a position-based routing scheme called Connectivity-Aware Routing (CAR) designed specifically for inter-vehicle communication in a city and/or highway environment. A distinguishing property of CAR is the ability to not only locate positions of destinations but also to find connected paths between source and destination pairs. These paths are auto-adjusted on the fly, without a new discovery process. "Guards" help to track the current position of a destination, even if it traveled a substantial distance from its initially known location. For the evaluation of the CAR protocol we use realistic mobility traces obtained from a microscopic vehicular traffic simulator that is based on a model of driver behavior and the real road maps of Switzerland.
---
paper_title: Enhanced Perimeter Routing for Geographic Forwarding Protocols in Urban Vehicular Scenarios
paper_content:
Geographic stateless routing schemes such as GPSR have been widely adopted for routing in vehicular ad hoc networks (VANET). However, due to the particular urban topology and the non-uniform distribution of cars, the greedy routing mode often fails and needs a recovery strategy such as GPSR's perimeter mode to deliver data successfully to the destination. It has been shown that the cost of planarization, the non-uniform distribution of cars, and radio obstacles make GPSR's perimeter mode inefficient in urban configurations. Some enhancements have been proposed such as GPCR, which uses the concept of junction nodes to control the next road segments that packets should follow. However, the concept of junction nodes itself is problematic and hard to maintain in a dynamic urban environment. In this paper, we describe GpsrJ+, a solution that further improves the packet delivery ratio of GPCR with minimal modification by predicting the road segment onto which its neighboring junction node will forward packets. GpsrJ+ differs from GPCR in that decisions about which road segment to turn onto do not need to be made by junction nodes. Moreover, GpsrJ+ does not need an expensive planarization strategy since it uses the natural planar feature of urban maps. Consequently, GpsrJ+ reduces the hop count used in the perimeter mode by as much as 200% compared to GPSR. It therefore allows geographic routing schemes to return to the greedy mode faster.
---
paper_title: A static-node assisted adaptive routing protocol in vehicular networks
paper_content:
Vehicular networks have attracted great interest in the research community recently, and multi-hop routing becomes an important issue. To improve data delivery performance, we propose SADV, which utilizes some static nodes at road intersections in a completely mobile vehicular network to help relay data. With the assistance of static nodes at intersections, a packet can be stored in the node for a while and wait until there are vehicles within communication range along the best delivery path to further forward the packet, which reduces the overall data delivery delay. In addition, we let adjacent nodes measure the delay of forwarding data between each other in real time, so that the routing decision can adapt to changing vehicle densities. Our simulation results show that SADV outperforms other multi-hop data dissemination protocols, especially under median or low vehicle density where the network is frequently partitioned.
---
paper_title: A-STAR: A Mobile Ad Hoc Routing Strategy for Metropolis Vehicular Communications
paper_content:
One of the major issues that affect the performance of Mobile Ad hoc NETworks (MANET) is routing. Recently, position-based routing for MANET is found to be a very promising routing strategy for inter-vehicular communication systems (IVCS). However, position-based routing for IVCS in a built-up city environment faces greater challenges because of potentially more uneven distribution of vehicular nodes, constrained mobility, and difficult signal reception due to radio obstacles such as high-rise buildings. This paper proposes a new position-based routing scheme called Anchor-based Street and Traffic Aware Routing (A-STAR), designed specifically for IVCS in a city environment. Unique to A-STAR is the usage of information on city bus routes to identify an anchor path with high connectivity for packet delivery. Along with a new recovery strategy for packets routed to a local maximum, the proposed protocol shows significant performance improvement in a comparative simulation study with other similar routing approaches.
---
paper_title: LOUVRE: Landmark Overlays for Urban Vehicular Routing Environments
paper_content:
In this paper, we introduce a routing solution called "landmark overlays for urban vehicular routing environments" (LOUVRE), an approach that efficiently builds a landmark overlay network on top of an urban topology. We define urban junctions as overlay nodes and create an overlay link if and only if the traffic density of the underlying network guarantees the multi-hop vehicular routing between the two overlay nodes. LOUVRE contains a distributed traffic density estimation scheme which is used to evaluate the existence of an overlay link. Then, efficient routing is performed on the overlay network, guaranteeing a correct delivery of each packet. We evaluate LOUVRE against the benchmark routing protocols of GPSR and GPCR and show that LOUVRE achieves a higher packet delivery ratio and a lower hop count.
---
paper_title: Location-aided routing (LAR) in mobile ad hoc networks
paper_content:
A mobile ad hoc network consists of wireless hosts that may move often. Movement of hosts results in a change in routes, requiring some mechanism for determining new routes. Several routing protocols have already been proposed for ad hoc networks. This report suggests an approach to utilize location information (for instance, obtained using the global positioning system) to improve performance of routing protocols for ad hoc networks.
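LAR's first scheme restricts route-request flooding to a rectangular request zone derived from the destination's last known position and an expected-zone radius. A hedged Python sketch of that membership test (function and parameter names are ours):

    def in_request_zone(node, source, dest_last_known, expected_radius):
        # LAR scheme-1 style check: the request zone is the smallest axis-aligned
        # rectangle containing the source and the destination's expected zone
        # (a circle of radius expected_radius around its last known position).
        # Only nodes inside this rectangle re-forward the route request.
        (sx, sy), (dx, dy) = source, dest_last_known
        x_min = min(sx, dx - expected_radius)
        x_max = max(sx, dx + expected_radius)
        y_min = min(sy, dy - expected_radius)
        y_max = max(sy, dy + expected_radius)
        x, y = node
        return x_min <= x <= x_max and y_min <= y <= y_max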
---
paper_title: Contention-Based Forwarding for Mobile Ad Hoc Networks
paper_content:
Existing position-based unicast routing algorithms which forward packets in the geographic direction of the destination require that the forwarding node knows the positions of all neighbors in its transmission range. This information on direct neighbors is gained by observing beacon messages each node sends out periodically. Due to mobility, the information that a node receives about its neighbors becomes outdated, leading either to a significant decrease in the packet delivery rate or to a steep increase in load on the wireless channel as node mobility increases. In this paper, we propose a mechanism to perform position-based unicast forwarding without the help of beacons. In our contention-based forwarding scheme (CBF) the next hop is selected through a distributed contention process based on the actual positions of all current neighbors. For the contention process, CBF makes use of biased timers. To avoid packet duplication, the first node that is selected suppresses the selection of further nodes. We propose three suppression strategies which vary with respect to forwarding efficiency and suppression characteristics. We analyze the behavior of CBF with all three suppression strategies and compare it to an existing greedy position-based routing approach by means of simulation with ns-2. Our results show that CBF significantly reduces the load on the wireless channel required to achieve a specific delivery rate compared to the load a beacon-based greedy forwarding strategy generates.
---
paper_title: VANET Routing on City Roads Using Real-Time Vehicular Traffic Information
paper_content:
This paper presents a class of routing protocols called road-based using vehicular traffic (RBVT) routing, which outperforms existing routing protocols in city-based vehicular ad hoc networks (VANETs). RBVT protocols leverage real-time vehicular traffic information to create road-based paths consisting of successions of road intersections that have, with high probability, network connectivity among them. Geographical forwarding is used to transfer packets between intersections on the path, reducing the path's sensitivity to individual node movements. For dense networks with high contention, we optimize the forwarding using a distributed receiver-based election of next hops based on a multicriterion prioritization function that takes nonuniform radio propagation into account. We designed and implemented a reactive protocol RBVT-R and a proactive protocol RBVT-P and compared them with protocols representative of mobile ad hoc networks and VANETs. Simulation results in urban settings show that RBVT-R performs best in terms of average delivery rate, with up to a 40% increase compared with some existing protocols. In terms of average delay, RBVT-P performs best, with as much as an 85% decrease compared with the other protocols.
---
paper_title: TO-GO: TOpology-assist geo-opportunistic routing in urban vehicular grids
paper_content:
Road topology information has recently been used to assist geo-routing, thereby improving the overall performance. However, the unreliable wireless channel nature in urban vehicular grids (due to motion, obstructions, etc) still creates problems with the basic greedy forwarding. In this paper, we propose TO-GO (TOpology-assisted Geo-Opportunistic Routing), a geo-routing protocol that exploits topology knowledge acquired via 2-hop beaconing to select the best target forwarder and incorporates opportunistic forwarding with the best chance to reach it. The forwarder selection takes into account of wireless channel quality, thus significantly improving performance in error and interference situations. Extensive simulations confirm TO-GO superior robustness to errors/losses as compared to conventional topology-assisted geographic routing.
---
paper_title: VanetMobiSim: generating realistic mobility patterns for VANETs
paper_content:
In this paper, we present and describe VanetMobiSim, a generator of realistic vehicular movement traces for telecommunication networks simulators. VanetMobiSim mobility description is validated by illustrating how the interaction between featured macro- and micro-mobility is able to reproduce typical phenomena of vehicular traffic.
---
paper_title: Towards Efficient Geographic Routing in Urban Vehicular Networks
paper_content:
Vehicular ad hoc networks (VANETs) have received considerable attention in recent times. Multihop data delivery between vehicles is an important aspect for the support of VANET-based applications. Although data dissemination and routing have extensively been addressed, many unique characteristics of VANETs, together with the diversity in promising applications, offer newer research challenges. This paper introduces the improved greedy traffic-aware routing protocol (GyTAR), which is an intersection-based geographical routing protocol that is capable of finding robust and optimal routes within urban environments. The main principle behind GyTAR is the dynamic and in-sequence selection of intersections through which data packets are forwarded to the destinations. The intersections are chosen considering parameters such as the remaining distance to the destination and the variation in vehicular traffic. Data forwarding between intersections in GyTAR adopts an improved greedy carry-and-forward mechanism. Evaluation of the proposed routing protocol shows significant performance improvement in comparison with other existing routing approaches. With the aid of extensive simulations, we also validate the optimality and sensitivity of significant GyTAR parameters.
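GyTAR scores candidate neighboring intersections by combining the remaining distance to the destination with the vehicular traffic on the connecting road segment. The abstract gives no exact weighting, so the Python sketch below uses hypothetical weights alpha and beta purely for illustration:

    import math

    def score_intersection(candidate, current, destination, traffic_density,
                           alpha=0.5, beta=0.5):
        # Illustrative GyTAR-style score for a candidate neighboring intersection.
        # 'closeness' rewards candidates that shrink the remaining distance to the
        # destination; traffic_density is a normalized [0, 1] estimate of vehicles
        # on the connecting road segment. alpha and beta are assumed weights.
        d_cur = math.hypot(current[0] - destination[0], current[1] - destination[1])
        d_cand = math.hypot(candidate[0] - destination[0], candidate[1] - destination[1])
        closeness = max(0.0, (d_cur - d_cand) / d_cur) if d_cur > 0 else 1.0
        return alpha * closeness + beta * traffic_density

    def select_next_intersection(candidates, current, destination, densities):
        # candidates: list of intersection coordinates; densities: dict keyed by them.
        return max(candidates,
                   key=lambda c: score_intersection(c, current, destination, densities[c]))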
---
paper_title: DIR: diagonal-intersection-based routing protocol for vehicular ad hoc networks
paper_content:
In this paper, we present a diagonal-intersection-based routing (DIR) protocol for vehicular ad hoc networks. The DIR protocol constructs a series of diagonal intersections between the source and destination vehicles. DIR is a geographic routing protocol: the source vehicle geographically forwards data packets toward the first diagonal intersection, the second diagonal intersection, and so on, until the last diagonal intersection, and finally the packets geographically reach the destination vehicle. For a given pair of neighboring diagonal intersections, two or more disjoint sub-paths exist between them. The novel property of the DIR protocol is its auto-adjustability: the sub-path with low data packet delay between two neighboring diagonal intersections is dynamically selected to forward data packets. To reduce the data packet delay, the route is automatically re-routed onto the selected sub-path with the lowest delay. The proposed DIR protocol allows mobile source and destination vehicles in urban VANETs. Experimental results show that the DIR protocol outperforms existing solutions in terms of packet delivery ratio, data packet delay, and throughput.
---
paper_title: A Novel Delay- and Reliability- Aware Inter-Vehicle Routing Protocol
paper_content:
Intelligent transportation systems could improve transportation safety, driving assistance and traffic management system. Vehicular Ad hoc Network (VANET) is an emerging field of technology, embedding wireless communication networks into vehicles to achieve intelligent transportation systems. The development of such systems pose many unique challenges like designing routing protocols that not only forward packets with good end to end delay but also take into consideration the reliability and progress in data packets forwarding. In this article, we begin by presenting a review of recent unicast, geocast and broadcast routing protocols for message transmission. We then outline a novel Delay and Reliability aware Routing (DR 2 ) protocol that addresses these challenges (forwarding packets with low latency, high reliability and fast progress toward destination). Furthermore, our DR 2 protocol uses cross layer communication between MAC (Medium Access Control) and network layer. That is, the MAC layer observes the Signal to Noise (SNR), delay and velocity vector difference metrics for all paths of neighboring nodes, network layer then could select the best preferable path based on fuzzy inference system. We also used H∞ technique to optimize the membership functions and then tune it with rapid changing topology of VANET. To achieve a fair comparison with other routing protocols, we have implemented the proposed DR 2 protocol in Network Simulator 2 (NS 2). The preliminary results show that the proposed DR 2 protocol is able to improve end- to-end delay in sparse traffic conditions and packet delivery ratio in error prone urban vehicular scenarios.
---
paper_title: Fuzzy-assisted social-based routing for urban vehicular environments
paper_content:
In the autonomous environment of a Vehicular Ad hoc NETwork (VANET), vehicles move randomly at high speed and rely on each other for successful data transmission. Routing can be difficult or impossible to predict under such intermittent vehicle connectivity and highly dynamic topology. Existing routing solutions do not exploit the knowledge that behaviour patterns exist in real-time urban vehicular networks. In this article, we propose a fuzzy-assisted social-based routing (FAST) protocol that takes advantage of the social behaviour of humans on the road to make optimal and secure routing decisions. FAST uses prior global knowledge of real-time vehicular traffic for packet routing from the source to the destination. In FAST, a fuzzy inference system leverages a friendship mechanism to make critical decisions at intersections, based on prior global knowledge of real-time vehicular traffic information. Simulation results in urban vehicular environments, with and without obstacles, show that FAST performs best in terms of packet delivery ratio with up to a 32% increase, an 80% decrease in average delay, and a 50% decrease in hop count compared to state-of-the-art VANET routing solutions.
---
paper_title: A Load Balancing and Congestion-Avoidance Routing Mechanism for Real-Time Traffic over Vehicular Networks
paper_content:
With the growth of the internet in mobile commerce, researchers have produced various mobile applications that vary from entertainment and commercial services to diagnostic and safety tools. Resource management for real-time traffic has widely been recognized as one of the most challenging problems for seamless access to vehicular networks. In this paper, a novel load balancing and congestion-avoidance routing mechanism over short communication ranges is proposed to satisfy the stringent QoS requirements of real-time traffic in vehicular ad hoc networks. Fuzzy logic systems are used to select the intermediate nodes on the routing path via inter-vehicle communications, and an H-infinity technique is used to adjust the membership functions employed in the fuzzy logic systems to adapt to the volatile characteristics of vehicular networks. Notably, a prediction of the remaining connection time between each vehicle and its neighbors is derived to assist in the determination of the intermediate nodes on the routing path. The experimental results verify the effectiveness and feasibility of the proposed schemes in terms of several performance metrics such as packet delivery ratio, end-to-end delay, control overhead, throughput, call blocking probability, and call dropping probability.
---
paper_title: Multi-metric Routing Decisions in VANET
paper_content:
The rapidly changing topology of a Vehicular Ad Hoc Network (VANET) requires that the routing protocol be able to find comparatively more stable routes. However, existing routing protocols do not take the distinguishing features of vehicles and roadways into consideration, which challenges their applicability in VANETs. In this paper, we take the characteristics of VANETs into account and use fuzzy logic and fuzzy control to make routing decisions under multiple selection criteria, developing a new routing protocol based on the classical AODV, named Fcar (Fuzzy control based AODV routing). Simulations comparing AODV and Fcar indicate that Fcar is superior to AODV in several aspects, demonstrating its adaptability to VANETs.
---
paper_title: Fuzzy logic-assisted geographical routing over vehicular ad hoc networks
paper_content:
Vehicular Ad Hoc Networks (VANETs) are a type of ad hoc network that allows vehicles to communicate with each other in the absence of fixed infrastructure. Inter-vehicle geographic routing has been proven to perform well in high speed vehicular environments. In connected and reliable vehicular scenarios, greedy based geographical routing protocols could forward data packets efficiently and quickly towards the destination. However, extremely dynamic vehicular environments and uneven distribution of vehicles could create unreliable wireless channels between vehicles and disconnected vehicular partitions. On the one hand, in connected vehicular networks, an intelligent multi-metric routing protocol must be exploited in consideration of the unreliable nature of wireless channels between vehicles and vehicular mobility characteristics. On the other hand, a mechanism must be utilized to create a virtual bridge between vehicles in disconnected vehicular scenarios. To this end, we firstly propose a novel Stability and Reliability aware Routing (SRR) protocol that forwards packets with a high degree of reliability and stability towards the destination. That is, the SRR protocol incorporates fuzzy logic with geographical routing when making packet forwarding decisions. Routing metrics, such as direction and distance, are considered as inputs of the fuzzy decision making system so that the best preferable neighbour around a smart vehicle is selected. We then utilize a mechanism to cache data packets once the network is disconnected and then switch back to SRR in a connected vehicular scenario. Traffic density is considered as an input when estimating network dis-connectivity. After developing an analytical model of our protocol, we implemented it and compared it with standard protocols. In a realistic highway vehicular scenario, the results show that the proposed protocol performs better than Greedy Perimeter Coordinator Routing (GPCR) with increases of up to 21.12%, 29.14% and 11.98% in packet delivery ratio in high lossy channel, sparse, and dense traffic conditions respectively. In terms of average packet delay, SRR performs better with performance increases of up to 21.92% in dense traffic conditions. But GPCR performs better in sparse traffic conditions by up to 16.10%. Finally, SRR has less control overhead than the state of the art protocols.
---
paper_title: GeoDTN+Nav: Geographic DTN Routing with Navigator Prediction for Urban Vehicular Environments
paper_content:
Position-based routing has proven to be well suited for highly dynamic environment such as Vehicular Ad Hoc Networks (VANET) due to its simplicity. Greedy Perimeter Stateless Routing (GPSR) and Greedy Perimeter Coordinator Routing (GPCR) both use greedy algorithms to forward packets by selecting relays with the best progress towards the destination or use a recovery mode in case such solutions fail. These protocols could forward packets efficiently given that the underlying network is fully connected. However, the dynamic nature of vehicular network, such as vehicle density, traffic pattern, and radio obstacles could create unconnected networks partitions. To this end, we propose GeoDTN+Nav, a hybrid geographic routing solution enhancing the standard greedy and recovery modes exploiting the vehicular mobility and on-board vehicular navigation systems to efficiently deliver packets even in partitioned networks. GeoDTN+Nav outperforms standard geographic routing protocols such as GPSR and GPCR because it is able to estimate network partitions and then improves partitions reachability by using a store-carry-forward procedure when necessary. We propose a virtual navigation interface (VNI) to provide generalized route information to optimize such forwarding procedure. We finally evaluate the benefit of our approach first analytically and then with simulations. By using delay tolerant forwarding in sparse networks, GeoDTN+Nav greatly increases the packet delivery ratio of geographic routing protocols and provides comparable routing delay to benchmark DTN algorithms.
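To illustrate the hybrid idea described above (greedy geographic forwarding with a fall-back to store-carry-forward when the network appears partitioned), here is a minimal decision sketch. The partition estimate used here is just a neighbor-count threshold, a crude stand-in for GeoDTN+Nav's virtual navigation interface; the threshold and data layout are assumptions.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def forward_decision(me, neighbors, dest, sparse_threshold=2):
    """Return ('forward', neighbor) or ('carry', None).

    me, dest  -- (x, y) positions; neighbors -- dict name -> (x, y).
    Greedy mode: pick the neighbor with most progress toward dest.
    If no neighbor makes progress and the neighborhood looks sparse
    (a crude stand-in for a partition estimate), buffer the packet and
    carry it (store-carry-forward).
    """
    my_d = dist(me, dest)
    progress = {n: my_d - dist(p, dest) for n, p in neighbors.items()}
    best = max(progress, key=progress.get) if progress else None
    if best is not None and progress[best] > 0:
        return "forward", best                      # greedy geographic forwarding
    if len(neighbors) <= sparse_threshold:
        return "carry", None                        # likely partitioned: switch to DTN mode
    return "forward", best                          # dense but stuck: hand over for recovery mode

print(forward_decision((0, 0), {"a": (10, 5), "b": (-20, 0)}, (100, 0)))   # ('forward', 'a')
print(forward_decision((0, 0), {"b": (-20, 0)}, (100, 0)))                 # ('carry', None)
```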
---
paper_title: An evaluation of inter-vehicle ad hoc networks based on realistic vehicular traces
paper_content:
Vehicular ad hoc networks (VANETs) using WLAN technology have recently received considerable attention. The evaluation of VANET routing protocols often involves simulators since management and operation of a large number of real vehicular nodes is expensive. We study the behavior of routing protocols in VANETs by using mobility information obtained from a microscopic vehicular traffic simulator that is based on the real road maps of Switzerland. The performance of AODV and GPSR is significantly influenced by the choice of mobility model, and we observe a significantly reduced packet delivery ratio when employing the realistic traffic simulator to control the mobility of nodes. To address the performance limitations of communication protocols in VANETs, we investigate two improvements that increase the packet delivery ratio and reduce the delay until the first packet arrives. The traces used in this study are available for public download.
---
paper_title: A survey of mobility models for ad hoc network research
paper_content:
In the performance evaluation of a protocol for an ad hoc network, the protocol should be tested under realistic conditions including, but not limited to, a sensible transmission range, limited buffer space for the storage of messages, representative data traffic models, and realistic movements of the mobile users (i.e., a mobility model). This paper is a survey of mobility models that are used in the simulations of ad hoc networks. We describe several mobility models that represent mobile nodes whose movements are independent of each other (i.e., entity mobility models) and several mobility models that represent mobile nodes whose movements are dependent on each other (i.e., group mobility models). The goal of this paper is to present a number of mobility models in order to offer researchers more informed choices when they are deciding upon a mobility model to use in their performance evaluations. Lastly, we present simulation results that illustrate the importance of choosing a mobility model in the simulation of an ad hoc network protocol. Specifically, we illustrate how the performance results of an ad hoc network protocol drastically change as a result of changing the mobility model simulated.
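As a concrete example of one of the entity mobility models discussed in such surveys, the following is a compact random waypoint generator. The area size, speed range, and pause range are arbitrary example values.

```python
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 20.0),
                    pause=(0.0, 5.0), dt=1.0, seed=0):
    """Generate a random-waypoint trace: pick a destination uniformly in the area,
    move toward it at a uniformly chosen speed, pause, and repeat."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    trace, pause_left, target, v = [], 0.0, None, 0.0
    for _ in range(steps):
        if pause_left > 0:
            pause_left -= dt
        else:
            if target is None:
                target = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
                v = rng.uniform(*speed)
            dx, dy = target[0] - x, target[1] - y
            d = (dx * dx + dy * dy) ** 0.5
            if d <= v * dt:                       # reached the waypoint
                x, y = target
                target, pause_left = None, rng.uniform(*pause)
            else:
                x, y = x + v * dt * dx / d, y + v * dt * dy / d
        trace.append((round(x, 1), round(y, 1)))
    return trace

print(random_waypoint(5))
```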
---
paper_title: Mobility modeling in wireless networks: categorization, smooth movement, and border effects
paper_content:
The movement pattern of mobile users plays an important role in performance analysis of wireless computer and communication networks. In this paper, we first give an overview and classification of mobility models used for simulation-based studies. Then, we present an enhanced random mobility model, which makes the movement trace of mobile stations more realistic than common approaches for random mobility. Our movement concept is based on random processes for speed and direction control in which the new values are correlated to previous ones. Upon a speed change event, a new target speed is chosen, and an acceleration is set to achieve this target speed. The principles for direction changes are similar. Finally, we discuss strategies for the stations' border behavior (i.e., what happens when nodes move out of the simulation area) and show the effects of certain border behaviors and mobility models on the spatial user distribution.
---
paper_title: The networking shape of vehicular mobility
paper_content:
Mobility is the distinguishing feature of vehicular networks, affecting the evolution of network connectivity over space and time in a unique way. Connectivity dynamics, in turn, determine the performance of networking protocols, when they are employed in vehicle-based, large-scale communication systems. Thus, a key question in vehicular networking is: which effects does nodes mobility generate on the topology of a network built over vehicles? Surprisingly, such a question has been quite overlooked by the networking research community. In this paper, we present an in-depth analysis of the topological properties of a vehicular network, unveiling the physical reasons behind the peculiar connectivity dynamics generated by a number of mobility models. Results make one think about the validity of studies conducted under unrealistic car mobility and stimulate interesting considerations on how network protocols could take advantage of vehicular mobility to improve their performance.
---
paper_title: Ad-hoc on-demand distance vector routing
paper_content:
An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. We present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each mobile host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic self starting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm.
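The sketch below illustrates the on-demand route discovery idea at the heart of AODV: a route request is flooded hop by hop, each node remembers where it first heard the request, and the route is read back from those reverse pointers. Sequence numbers, RREP unicasting, timers, and route maintenance are deliberately omitted, so this is an illustration of the concept rather than the full protocol.

```python
from collections import deque

def aodv_route_discovery(adjacency, src, dst):
    """Flood a route request (RREQ) hop by hop and build reverse pointers,
    the way AODV discovers routes on demand.  Sequence numbers, RREP
    unicasting and error handling are omitted for brevity."""
    prev = {src: None}                      # reverse-path entries created by the RREQ flood
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nbr in adjacency.get(node, []):
            if nbr not in prev:             # first copy of the RREQ wins
                prev[nbr] = node
                queue.append(nbr)
    if dst not in prev:
        return None                         # no route: the RREQ never reached dst
    route, n = [], dst
    while n is not None:                    # walk the reverse pointers back to the source
        route.append(n)
        n = prev[n]
    return list(reversed(route))

topology = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(aodv_route_discovery(topology, "S", "D"))   # e.g. ['S', 'A', 'C', 'D']
```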
---
paper_title: An integrated mobility and traffic model for vehicular wireless networks
paper_content:
Ad-hoc wireless communication among highly dynamic, mobile nodes in an urban network is a critical capability for a wide range of important applications including automated vehicles, real-time traffic monitoring and vehicular safety applications. When evaluating application performance in simulation, a realistic mobility model for vehicular ad-hoc networks (VANETs) is critical for accurate results. This paper analyzes ad-hoc wireless network performance in a vehicular network in which nodes move according to a simplified vehicular traffic model on roads defined by real map data. We show that when nodes move according to our street mobility model, STRAW, network performance is significantly different from that of the commonly used random waypoint model. We also demonstrate that protocol performance varies with the type of urban environment. Finally, we use these results to argue for the development of integrated vehicular and network traffic simulators to evaluate vehicular ad-hoc network applications, particularly when the information passed through the network affects node mobility.
---
paper_title: Contention-Based Forwarding for Street Scenarios
paper_content:
In this paper, we propose to apply Contention-Based Forwarding (CBF) to Vehicular Ad Hoc Networks (VANETs). CBF is a greedy position-based forwarding algorithm that does not require proactive transmission of beacon messages. CBF performance is analyzed using realistic movement patterns of vehicles on a highway. We show by means of simulation that CBF as well as traditional position-based routing (PBR) achieve a delivery rate of almost 100% given that connectivity exists. However, CBF has a much lower forwarding overhead than PBR since PBR can achieve high delivery ratios only by implicitly using a trial-and-error next-hop selection strategy. With CBF, a better total throughput can be achieved. We further discuss several optimizations of CBF for its use in VANETs, in particular a new position-encoding scheme that naturally allows for communication paradigms such as `street geocast` and `street flooding`. The discussions show that CBF can be viewed as a concept for convergence of intelligent flooding, geocast, and multi-hop forwarding in the area of inter-vehicle communication.
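A minimal sketch of the contention idea described above: every receiver starts a timer that shrinks with its progress toward the destination, so the best-placed node answers first and suppresses the others. The timer formula and constants below are common textbook choices, not necessarily those used in the paper.

```python
import math

def cbf_timer(sender, receiver, dest, radio_range=250.0, t_max=0.1):
    """Contention timer for a receiver of a CBF packet: more progress toward
    the destination -> shorter timer -> the node rebroadcasts first and
    suppresses the others.  Nodes with no progress back off for t_max."""
    d_sender = math.hypot(dest[0] - sender[0], dest[1] - sender[1])
    d_recv = math.hypot(dest[0] - receiver[0], dest[1] - receiver[1])
    progress = max(0.0, min(d_sender - d_recv, radio_range))
    return t_max * (1.0 - progress / radio_range)

sender, dest = (0.0, 0.0), (1000.0, 0.0)
for r in [(240.0, 10.0), (120.0, 50.0), (-50.0, 0.0)]:
    print(r, round(cbf_timer(sender, r, dest), 4))
```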
---
paper_title: Fuzzy logic-assisted geographical routing over vehicular ad hoc networks
paper_content:
Vehicular Ad Hoc Networks (VANETs) are a type of ad hoc network that allows vehicles to communicate with each other in the absence of fixed infrastructure. Inter-vehicle geographic routing has been proven to perform well in high speed vehicular environments. In connected and reliable vehicular scenarios, greedy based geographical routing protocols could forward data packets efficiently and quickly towards the destination. However, extremely dynamic vehicular environments and uneven distribution of vehicles could create unreliable wireless channels between vehicles and disconnected vehicular partitions. On the one hand, in connected vehicular networks, an intelligent multi-metric routing protocol must be exploited in consideration of the unreliable nature of wireless channels between vehicles and vehicular mobility characteristics. On the other hand, a mechanism must be utilized to create a virtual bridge between vehicles in disconnected vehicular scenarios. To this end, we firstly propose a novel Stability and Reliability aware Routing (SRR) protocol that forwards packets with a high degree of reliability and stability towards the destination. That is, the SRR protocol incorporates fuzzy logic with geographical routing when making packet forwarding decisions. Routing metrics, such as direction and distance, are considered as inputs of the fuzzy decision making system so that the best preferable neighbour around a smart vehicle is selected. We then utilize a mechanism to cache data packets once the network is disconnected and then switch back to SRR in a connected vehicular scenario. Traffic density is considered as an input when estimating network dis-connectivity. After developing an analytical model of our protocol, we implemented it and compared it with standard protocols. In a realistic highway vehicular scenario, the results show that the proposed protocol performs better than Greedy Perimeter Coordinator Routing (GPCR) with increases of up to 21.12%, 29.14% and 11.98% in packet delivery ratio in high lossy channel, sparse, and dense traffic conditions respectively. In terms of average packet delay, SRR performs better with performance increases of up to 21.92% in dense traffic conditions. But GPCR performs better in sparse traffic conditions by up to 16.10%. Finally, SRR has less control overhead than the state of the art protocols.
---
paper_title: AN EFFICIENT, UNIFYING APPROACH TO SIMULATION USING VIRTUAL MACHINES
paper_content:
Due to their popularity and widespread utility, discrete event simulators have been the subject of much research. Systems researchers have built many types of simulation kernels and libraries, while the languages community has designed numerous languages specifically for simulation. In this dissertation, I propose a new approach for constructing simulators that leverages virtual machines and thus combines the advantages of both the traditional systems-based and language-based approaches to simulator construction. I present JiST, a Java-based simulation engine that exemplifies virtual machine-based simulation. JiST executes discrete event simulations by embedding simulation time semantics directly into the Java execution model. The system provides all the standard benefits that the modern Java runtime affords. In addition, JiST is efficient, out-performing existing highly optimized simulation runtimes, and inherently flexible, capable of transparently performing cross-cutting program transformations and optimizations at the bytecode level. I illustrate the practicality of the JiST approach through the construction of SWANS, a scalable wireless ad hoc network simulator that can simulate million node wireless networks, which is more than an order of magnitude in scale over what existing simulators can achieve on equivalent hardware and at the same level of detail.
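JiST's contribution is embedding simulation-time semantics into the Java virtual machine itself; the generic discrete-event kernel below (in Python, for brevity) only illustrates what an event queue and simulation clock are, not JiST's bytecode-level mechanism.

```python
import heapq

class Simulator:
    """Tiny discrete-event kernel: events are (time, seq, callback, args) entries in a heap."""
    def __init__(self):
        self.now, self._seq, self._queue = 0.0, 0, []

    def schedule(self, delay, callback, *args):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, args))
        self._seq += 1                       # tie-breaker keeps FIFO order at equal times

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, cb, args = heapq.heappop(self._queue)
            cb(*args)

sim = Simulator()

def send_beacon(node):
    print(f"t={sim.now:.1f}s node {node} sends a beacon")
    sim.schedule(1.0, send_beacon, node)     # periodic beaconing

sim.schedule(0.0, send_beacon, "v1")
sim.schedule(0.5, send_beacon, "v2")
sim.run(until=2.0)
```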
---
paper_title: GeoCross : A geographic routing protocol in the presence of loops in urban scenarios
paper_content:
In this paper, we propose GeoCross, a simple, yet novel, event-driven geographic routing protocol that removes cross-links dynamically to avoid routing loops in urban Vehicular Ad Hoc Networks (VANETs). GeoCross exploits the natural planar feature of urban maps without resorting to cumbersome planarization. Its feature of dynamic loop detection makes GeoCross suitable for highly mobile VANETs. We have shown that in pathologic cases, GeoCross's packet delivery ratio (PDR) is consistently higher than Greedy Perimeter Stateless Routing's (GPSR's) and Greedy Perimeter Coordinator Routing's (GPCR's). We have also shown that caching (GeoCross+Cache) provides the same high PDR but uses fewer hops.
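The geometric core of detecting a "cross-link" is testing whether two links, viewed as segments between node positions, properly intersect. A standard orientation test for that is sketched below; when GeoCross triggers the test and which link it removes are details of the paper that this sketch does not reproduce.

```python
def orient(p, q, r):
    """>0 if p->q->r turns counter-clockwise, <0 if clockwise, 0 if collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def links_cross(a1, a2, b1, b2):
    """True if segment a1-a2 properly intersects segment b1-b2
    (the 'cross-link' situation that causes routing loops)."""
    d1, d2 = orient(b1, b2, a1), orient(b1, b2, a2)
    d3, d4 = orient(a1, a2, b1), orient(a1, a2, b2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

# Two wireless links between four vehicles near an intersection:
print(links_cross((0, 0), (10, 10), (0, 10), (10, 0)))   # True  -> one link would be removed
print(links_cross((0, 0), (10, 0), (0, 5), (10, 5)))     # False -> both links kept
```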
---
paper_title: Multi-metric Routing Decisions in VANET
paper_content:
Due to the rapidly changing topology of Vehicular Ad Hoc Networks (VANETs), routing protocols must be able to find comparatively stable routes. However, existing routing protocols do not take the distinguishing features of vehicles and roadways into consideration, which limits their applicability in VANETs. In this paper, we take the characteristics of VANETs into account and use fuzzy logic and fuzzy control to make routing decisions under multiple selection criteria; we then develop a new routing protocol based on the classical AODV, named Fcar (Fuzzy control based AODV routing). Simulations comparing AODV and Fcar indicate that Fcar is superior to AODV in several aspects, demonstrating its adaptability to VANETs.
---
paper_title: VANET Routing on City Roads Using Real-Time Vehicular Traffic Information
paper_content:
This paper presents a class of routing protocols called road-based using vehicular traffic (RBVT) routing, which outperforms existing routing protocols in city-based vehicular ad hoc networks (VANETs). RBVT protocols leverage real-time vehicular traffic information to create road-based paths consisting of successions of road intersections that have, with high probability, network connectivity among them. Geographical forwarding is used to transfer packets between intersections on the path, reducing the path's sensitivity to individual node movements. For dense networks with high contention, we optimize the forwarding using a distributed receiver-based election of next hops based on a multicriterion prioritization function that takes nonuniform radio propagation into account. We designed and implemented a reactive protocol RBVT-R and a proactive protocol RBVT-P and compared them with protocols representative of mobile ad hoc networks and VANETs. Simulation results in urban settings show that RBVT-R performs best in terms of average delivery rate, with up to a 40% increase compared with some existing protocols. In terms of average delay, RBVT-P performs best, with as much as an 85% decrease compared with the other protocols.
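A minimal sketch of the road-based path idea: run a shortest-path search over a graph of road intersections in which edge costs penalize roads with little traffic, so the chosen succession of intersections is likely to be connected. The cost model, graph, and numbers below are illustrative assumptions, not RBVT's actual connectivity metric.

```python
import heapq

def road_based_path(road_graph, src, dst):
    """Dijkstra over an intersection graph.  Each edge carries (length_m, density)
    and its cost grows when vehicle density -- hence the chance of multi-hop
    connectivity along that road -- is low.  The cost model is illustrative only."""
    def cost(length, density):
        return length / max(density, 0.05)      # sparse roads become expensive

    best, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        c, node, path = heapq.heappop(heap)
        if node == dst:
            return path, c
        for nbr, (length, density) in road_graph.get(node, {}).items():
            nc = c + cost(length, density)
            if nc < best.get(nbr, float("inf")):
                best[nbr] = nc
                heapq.heappush(heap, (nc, nbr, path + [nbr]))
    return None, float("inf")

# Intersections I1..I4; values are (road length in m, normalized vehicle density).
roads = {
    "I1": {"I2": (500, 0.8), "I3": (400, 0.1)},
    "I2": {"I4": (500, 0.7)},
    "I3": {"I4": (400, 0.1)},
}
print(road_based_path(roads, "I1", "I4"))   # prefers the longer but well-populated I1-I2-I4 path
```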
---
| Title: Routing Protocols in Vehicular Ad hoc Networks: Survey and Research Challenges
Section 1: Introduction
Description 1: Write about the need for improvements in transportation systems due to the growth in the number of vehicles, and introduce the concept of VANET and its significance.
Section 2: Previous Surveys of VANET Routing
Description 2: Provide an overview of previous surveys conducted on VANET routing, highlighting the limitations and gaps in those studies.
Section 3: VANET Routing Protocols
Description 3: Discuss the potential and requirements for VANET routing protocols and the challenges specific to vehicular environments.
Section 4: Geographical and Topology-based Routing Protocols
Description 4: Describe the purpose of routing protocols in VANET and provide a general classification into geographical and topology-based routing protocols.
Section 5: Topology-based Routing Protocols
Description 5: Explain the general concept of topology-based routing, focusing on reactive and proactive routing protocols.
Section 6: Proactive Routing Protocols
Description 6: Detail the working mechanism of proactive routing protocols and discuss representative examples like Destination Sequenced Distance-Vector (DSDV).
Section 7: Reactive Routing Protocols
Description 7: Describe the operating principles of reactive routing protocols with specific examples like AODV and DSR.
Section 8: Geographical Routing Protocols
Description 8: Elaborate on the principles of geographical routing protocols, including neighbor discovery, and real-time identification of destination positions.
Section 9: Packet Buffering based Geographical Routing Protocols
Description 9: Discuss delay-tolerant routing protocols that use packet buffering mechanisms like Geographical Opportunistic Routing and Vehicle-Assisted Data Delivery.
Section 10: Connectionless Geographical Routing Protocols
Description 10: Describe connectionless routing protocols that do not maintain neighbor information, such as Contention Based Forwarding and GDBF.
Section 11: Connection-Oriented Geographical Routing Protocols
Description 11: Explain connection-oriented routing protocols that maintain local information, and the distinction between trajectory-based and source-and-map-based routing, with examples.
Section 12: Hybrid (Packet and Non-Packet Buffering) Geographical Routing Protocols
Description 12: Cover hybrid routing protocols that combine packet buffering and non-buffering techniques, exemplified by protocols like GeoDTN+Nav and SRR.
Section 13: Influence of Mobility Model
Description 13: Analyze the impact of different mobility models on the performance of VANET routing protocols.
Section 14: Performance Evaluation
Description 14: Present the results of performance evaluations of geographical and topology-based routing protocols under various conditions in urban environments.
Section 15: Research Directions and Open Issues
Description 15: Discuss the open research issues in VANET routing and propose potential future research directions.
Section 16: Conclusions
Description 16: Summarize the main findings of the survey and suggest the importance of continuing research in developing efficient and optimal VANET routing solutions. |
Survey over VANET Routing Protocols for Vehicle to Vehicle Communication | 8 | ---
paper_title: CarTALK 2000: safe and comfortable driving based upon inter-vehicle-communication
paper_content:
CarTALK 2000 is a European Project focussing on new driver assistance systems which are based upon inter-vehicle communication. The main objectives are the development of cooperative driver assistance systems on the one hand and the development of a self-organising ad-hoc radio network as a communication basis with the aim of preparing a future standard. As for the assistance system, the main issues are: a) assessment of today's and future applications for co-operative driver assistance systems, b) development of software structures and algorithms, i.e. new fusion techniques, c) testing and demonstrating assistance functions in probe vehicles in real or reconstructed traffic scenarios. To achieve a suitable communication system, algorithms for radio ad-hoc networks with extremely high dynamic network topologies are developed and prototypes tested in the vehicles. Apart from technological goals, CarTALK 2000 actively addresses market introduction strategies including cost/benefit analyses and legal aspects, and aims at the standardisation to bring these systems to the European market. CarTALK 2000 started in August 2001 as a three-years project which is funded within the IST Cluster of the 5th Framework Program of the European Commission.
---
paper_title: A highly adaptive distributed routing algorithm for mobile wireless networks
paper_content:
We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term "link reversal" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized "single pass" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a "physical or logical clock" to establish the "temporal order" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA).
---
paper_title: GPSR: greedy perimeter stateless routing for wireless networks
paper_content:
We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.
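The greedy step of GPSR can be stated in a few lines: forward to the neighbor geographically closest to the destination, and only if that neighbor is strictly closer than the current node; otherwise the packet would enter perimeter mode, which this sketch does not implement.

```python
import math

def gpsr_greedy_next_hop(me, neighbors, dest):
    """Greedy step of GPSR: pick the neighbor closest to the destination,
    but only if it is strictly closer than the current node; otherwise
    return None, the point where real GPSR would switch to perimeter mode."""
    def d(p):
        return math.hypot(dest[0] - p[0], dest[1] - p[1])
    if not neighbors:
        return None
    best = min(neighbors, key=lambda n: d(neighbors[n]))
    return best if d(neighbors[best]) < d(me) else None

neighbors = {"n1": (120.0, 30.0), "n2": (80.0, -40.0)}
print(gpsr_greedy_next_hop((0.0, 0.0), neighbors, (500.0, 0.0)))   # 'n1'
```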
---
paper_title: Geographic routing in city scenarios
paper_content:
Position-based routing, as it is used by protocols like Greedy Perimeter Stateless Routing (GPSR) [5], is very well suited for highly dynamic environments such as inter-vehicle communication on highways. However, it has been discussed that radio obstacles [4], as they are found in urban areas, have a significant negative impact on the performance of position-based routing. In prior work [6] we presented a position-based approach which alleviates this problem and is able to find robust routes within city environments. It is related to the idea of position-based source routing as proposed in [1] for terminode routing. The algorithm needs global knowledge of the city topology as it is provided by a static street map. Given this information the sender determines the junctions that have to be traversed by the packet using the Dijkstra shortest path algorithm. Forwarding between junctions is then done in a position-based fashion. In this short paper we show how position-based routing can be applied to a city scenario without assuming that nodes have access to a static street map and without using source routing.
---
paper_title: Connectivity-Aware Routing (CAR) in Vehicular Ad-hoc Networks
paper_content:
Vehicular ad hoc networks using WLAN technology have recently received considerable attention. We present a position-based routing scheme called Connectivity-Aware Routing (CAR) designed specifically for inter-vehicle communication in a city and/or highway environment. A distinguishing property of CAR is the ability to not only locate positions of destinations but also to find connected paths between source and destination pairs. These paths are auto-adjusted on the fly, without a new discovery process. "Guards" help to track the current position of a destination, even if it traveled a substantial distance from its initially known location. For the evaluation of the CAR protocol we use realistic mobility traces obtained from a microscopic vehicular traffic simulator that is based on a model of driver behavior and the real road maps of Switzerland.
---
paper_title: A routing strategy for vehicular ad hoc networks in city environments
paper_content:
Routing of data in a vehicular ad hoc network is a challenging task due to the high dynamics of such a network. Recently, it was shown for the case of highway traffic that position-based routing approaches can very well deal with the high mobility of network nodes. However, baseline position-based routing has difficulty handling two-dimensional scenarios with obstacles (buildings) and voids, as is the case for city scenarios. In this paper we analyze a position-based routing approach that makes use of the navigational systems of vehicles. By means of simulation we compare this approach with non-position-based ad hoc routing strategies (dynamic source routing and ad-hoc on-demand distance vector routing). The simulation makes use of highly realistic vehicle movement patterns derived from Daimler-Chrysler's Videlio traffic simulator. While DSR's performance is limited due to problems with scalability and handling mobility, both AODV and the position-based approach show good performance, with the position-based approach outperforming AODV.
---
paper_title: A-STAR: A Mobile Ad Hoc Routing Strategy for Metropolis Vehicular Communications
paper_content:
One of the major issues that affect the performance of Mobile Ad hoc NETworks (MANET) is routing. Recently, position-based routing for MANET is found to be a very promising routing strategy for inter-vehicular communication systems (IVCS). However, position-based routing for IVCS in a built-up city environment faces greater challenges because of potentially more uneven distribution of vehicular nodes, constrained mobility, and difficult signal reception due to radio obstacles such as high-rise buildings. This paper proposes a new position-based routing scheme called Anchor-based Street and Traffic Aware Routing (A-STAR), designed specifically for IVCS in a city environment. Unique to A-STAR is the usage of information on city bus routes to identify an anchor path with high connectivity for packet delivery. Along with a new recovery strategy for packets routed to a local maximum, the proposed protocol shows significant performance improvement in a comparative simulation study with other similar routing approaches.
---
paper_title: A cluster-based directional routing protocol in VANET
paper_content:
Vehicular Ad-hoc Network (VANET) is a new application of Mobile Ad-hoc Network (MANET) technology in the field of inter-vehicle communication. Due to the high mobility of vehicles, some traditional MANET routing protocols may not fit VANETs. In this paper, we propose a cluster-based directional routing protocol (CBDRP) for highway scenarios, in which the header of a cluster selects another header according to the moving direction of vehicles to forward packets. Simulation results show that CBDRP can solve the problem of link stability in VANETs, realizing reliable and rapid data transmission.
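A minimal sketch of direction-aware cluster-head election in the spirit of the abstract: vehicles are grouped by coarse heading and, per group, the vehicle closest to the group centroid is elected head. The grouping rule, bin width, and tie-breaking are assumptions for illustration; CBDRP's actual election uses more state.

```python
import math

def elect_cluster_heads(vehicles, heading_bin_deg=45):
    """Group vehicles by coarse moving direction and elect one head per group.

    vehicles -- dict id -> (x, y, heading_deg).  The head is the vehicle closest
    to its group's centroid; a real protocol would also weigh speed and link quality."""
    groups = {}
    for vid, (x, y, h) in vehicles.items():
        groups.setdefault(int(h % 360) // heading_bin_deg, []).append(vid)
    heads = {}
    for g, members in groups.items():
        cx = sum(vehicles[m][0] for m in members) / len(members)
        cy = sum(vehicles[m][1] for m in members) / len(members)
        heads[g] = min(members, key=lambda m: math.hypot(vehicles[m][0] - cx, vehicles[m][1] - cy))
    return heads

cars = {"a": (0, 0, 10), "b": (30, 5, 20), "c": (60, -5, 15), "d": (10, 200, 180)}
print(elect_cluster_heads(cars))   # one head for the eastbound group, one for the westbound car
```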
---
paper_title: A New Cluster Based Routing Protocol for VANET
paper_content:
With the development of vehicles and mobile ad hoc network technology, the Vehicular Ad hoc Network (VANET) has become an emerging field of study. Searching for and maintaining an effective route for transporting data is a challenging problem. In this paper the authors design a new routing protocol for VANETs based on earlier results, called CBR (Cluster Based Routing). Compared with other routing protocols, the new one shows a clear improvement in average routing overhead and a small average end-to-end delay jitter as the number of vehicles increases. Real-time traffic applications require data transmission delay to be relatively stable; the small average end-to-end delay jitter with an increasing number of vehicles meets these real-time application needs.
---
paper_title: Emergency Broadcast Protocol for Inter-Vehicle Communications
paper_content:
The most important goal in transportation systems is to reduce the dramatically high number of accidents and their fatal consequences. One of the most important factors that would make it possible to reach this goal is the design of effective broadcast protocols. In this paper we present an emergency broadcast protocol designed for sensor-based inter-vehicle communications and based on geographical routing. Sensors installed in cars continuously gather important information, and any emergency detection raises the need for immediate broadcast. The highway is divided into virtual cells, which move as the vehicles move. The cell members choose a cell reflector that behaves for a certain time interval as a base station that handles the emergency messages coming from members of the same cell, or close members from neighboring cells. In addition, the cell reflector serves as an intermediate node in the routing of emergency messages coming from its neighboring cell reflectors and prioritizes all messages in order to decide which is forwarded first. After this the message is forwarded through the other cell reflectors. Finally the destination cell reflector sends the message to the destination node. Our simulation results show that our proposed protocol is more effective compared to existing inter-vehicle protocols.
---
paper_title: Broadcasting in VANET
paper_content:
In this paper, we report the first complete version of a multi-hop broadcast protocol for vehicular ad hoc networks (VANET). Our results clearly show that broadcasting in VANET is very different from routing in mobile ad hoc networks (MANET) due to several reasons such as network topology, mobility patterns, demographics, traffic patterns at different times of the day, etc. These differences imply that conventional ad hoc routing protocols such as DSR and AODV will not be appropriate in VANETs for most vehicular broadcast applications. We identify three very different regimes that a vehicular broadcast protocol needs to work in: i) dense traffic regime; ii) sparse traffic regime; and iii) regular traffic regime. We build upon our previously proposed routing solutions for each regime and we show that the broadcast message can be disseminated efficiently. The proposed design of the distributed vehicular broadcast (DV-CAST) protocol integrates the use of various routing solutions we have previously proposed.
---
paper_title: Parameterless Broadcasting in Static to Highly Mobile Wireless Ad Hoc, Sensor and Actuator Networks
paper_content:
In a broadcasting task, a source node wants to send the same message to all the other nodes in the network. Existing solutions range from connected dominating set (CDS) based approaches for static networks, to blind flooding for moderate mobility, to hyperflooding for highly mobile and frequently partitioned networks. The only existing protocol for all scenarios is based on some threshold parameters (which may be expensive to gather) to locally select between these three solution approaches. Here we propose a new protocol, which adjusts itself to any mobility scenario without using any parameter. Unlike existing methods for highly mobile scenarios, in the proposed method two nodes do not transmit every time they discover each other as new neighbors. Each node maintains a list of two-hop neighbors by periodically exchanging 'hello' messages, and decides whether or not it is in the CDS. Upon receipt of the first copy of a message intended for broadcasting, it selects a waiting timeout and constructs two lists of neighbors: neighbors that received the same message and neighbors that did not receive it. Nodes not in the CDS select longer timeouts than nodes in the CDS. These lists are updated upon receipt of further copies of the same packet. When the timeout expires, the node retransmits if the list of neighbors in need of the message is nonempty. 'Hello' messages received while waiting, or after timeout expiration, may revise all lists (and CDS status) and consequently the need to retransmit. This provides a seamless transition of protocol behavior from static to highly mobile scenarios. Our protocol is compared to existing solutions. It was shown to be superior to all of them in number of retransmissions and reliability.
---
paper_title: An evaluation of inter-vehicle ad hoc networks based on realistic vehicular traces
paper_content:
Vehicular ad hoc networks (VANETs) using WLAN technology have recently received considerable attention. The evaluation of VANET routing protocols often involves simulators since management and operation of a large number of real vehicular nodes is expensive. We study the behavior of routing protocols in VANETs by using mobility information obtained from a microscopic vehicular traffic simulator that is based on the real road maps of Switzerland. The performance of AODV and GPSR is significantly influenced by the choice of mobility model, and we observe a significantly reduced packet delivery ratio when employing the realistic traffic simulator to control the mobility of nodes. To address the performance limitations of communication protocols in VANETs, we investigate two improvements that increase the packet delivery ratio and reduce the delay until the first packet arrives. The traces used in this study are available for public download.
---
paper_title: DECA: Density-aware reliable broadcasting in vehicular ad hoc networks
paper_content:
Reliable broadcasting in vehicular ad hoc networks is challenging due to its unique characteristics including intermittent connectivity and various vehicular scenarios (car traffic is possibly very dense in urban areas or very sparse on highways). In this paper, we propose a new reliable broadcast protocol which is suitable for such characteristics. To address the issue of various vehicular scenarios, our protocol performs periodic beaconing to gather local density information of 1-hop neighbors and uses such information to adapt its broadcast decision dynamically. Specifically, before broadcasting each message, a node selects a neighbor with the highest density. Upon the reception of the broadcast message, each node checks if it is the selected neighbor. If so, it is responsible for rebroadcasting the message immediately. Otherwise, it stores the message for possible rebroadcasting. To address the issue of intermittent connectivity, message information in beacons is used to discover new neighbors, which have not yet received the message. Simulation results show that, in both highway and urban scenarios, our proposed protocol outperforms other competing solutions in terms of reliability, overhead and speed of data dissemination.
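The core decision described above (designate the highest-density 1-hop neighbor as the next rebroadcaster, while other receivers store the message) can be sketched as follows; the data layout and names are illustrative.

```python
def deca_forwarder(my_id, neighbor_density):
    """Pick the neighbor with the highest known 1-hop density (from beacons)
    as the designated rebroadcaster, in the spirit of DECA's broadcast decision."""
    if not neighbor_density:
        return None
    return max(neighbor_density, key=neighbor_density.get)

def on_receive(my_id, designated, message_id, store):
    """Receiver side: rebroadcast immediately only if designated; otherwise
    keep the message so later beacons can reveal neighbors that still need it."""
    if my_id == designated:
        return "rebroadcast"
    store.add(message_id)
    return "store"

density = {"v2": 7, "v3": 3, "v4": 9}          # densities learned from periodic beacons
chosen = deca_forwarder("v1", density)          # -> 'v4'
print(chosen, on_receive("v4", chosen, "msg-1", set()), on_receive("v2", chosen, "msg-1", set()))
```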
---
paper_title: Reliable geographical multicast routing in vehicular ad-hoc networks
paper_content:
Vehicular ad-hoc networks (VANETs) offer a large number of new potential applications without relying on significant infrastructure. Many of these applications benefit from multi-hop relaying of information, thus requiring a routing protocol. Characteristics unique to VANETs (such as high mobility and the need for geographical addressing) make many conventional ad hoc routing protocols unsuitable. Also, some envisioned applications have end-to-end QoS requirements. In this paper we propose a new multicast routing protocol specifically designed for VANETs. Its purpose is to provide a routing service for a future reliable transport protocol. We evaluate its performance using realistic network and traffic models. It is shown that it is possible to implement a reliable multicast routing protocol for VANETs.
---
paper_title: DTSG: Dynamic time-stable geocast routing in vehicular ad hoc networks
paper_content:
Vehicular ad hoc networks (VANETs) have emerged as an area of interest for both industry and researchers because they have become an essential part of intelligent transportation systems (ITSs). Many applications in VANETs require sending a message to certain or all vehicles within a region, called geocast. Sometimes geocast requires that the message be kept alive within the region for a period of time. This time-stable geocast has a vital role in some ITS applications, particularly commercial applications. This study presents a novel time-stable geocast protocol that works well even in very sparse networks. Moreover, since commercial applications sometimes make it necessary to change the duration of the stable message within the region, the dynamic nature of a geocast protocol should allow this time to be extended, reduced, or canceled without any additional cost. Therefore, we call it a dynamic time-stable geocast (DTSG) protocol. It works in two phases (the pre-stable period and the stable period), and the simulation results show that it works well in its performance metrics (delivery ratio and network cost). In addition, these results validate the protocol's predictions of its performance metrics. Moreover, with an informed time of zero, all the intended vehicles will be informed as soon as they enter the region. The fact that the protocol is independent of the network's density, the vehicles' speed, and the vehicles' broadcasting range makes it more robust than others that fail in sparse networks or with high-speed nodes.
---
| ```
Title: Survey over VANET Routing Protocols for Vehicle to Vehicle Communication
Section 1: Introduction
Description 1: Write about the emergence of VANET technology, its applications, and the motivations behind studying VANET routing protocols.
Section 2: Overview Of Routing Protocols
Description 2: Explain the challenges due to highly dynamic topology in VANET, and classify the routing protocols into different categories: Topology based, Position based, Cluster based, Geocast based, and Broadcast based.
Section 3: Topology Based Routing Protocol
Description 3: Describe the working mechanism of topology-based routing protocols, including proactive (table-driven) and reactive (on-demand) routing, along with their pros and cons.
Section 4: Position Based Routing Protocol
Description 4: Explore how position-based routing utilizes geographic positioning for routing decisions. Discuss various protocols such as GPSR, GPCR, CAR, GSR, A-STAR, and STBR, including their strengths and weaknesses.
Section 5: Cluster Based Routing Protocol
Description 5: Discuss cluster-based routing where nodes are grouped into clusters for scalable communication. Describe protocols like HCB, CBDRP, CBLR, CBR, and LORA-CBF, and their respective advantages and disadvantages.
Section 6: Broadcast Based Routing Protocol
Description 6: Explain broadcast routing protocols which use flooding techniques for message dissemination. Detail protocols such as BROADCOMM, EAEP, DV-CAST, SRB, PBSM, PGB, UMB, V-TRADE, and DECA, with their benefits and drawbacks.
Section 7: Geocast Based Routing Protocol
Description 7: Outline Geocast routing protocols aimed at targeted communication within a geographic area. Highlight protocols like IVG, ROVER, and DTSG, along with their practical implications.
Section 8: Conclusion
Description 8: Summarize the comparative analysis of various VANET routing protocols and emphasize the necessity of further research to develop more robust and efficient protocols for vehicular communication.
``` |
Artificial Intelligence in Mineral Processing Plants: An Overview | 16 | ---
paper_title: Methods for automatic control, observation, and optimization in mineral processing plants
paper_content:
Abstract For controlling strongly disturbed, poorly modeled, and difficult to measure processes, such as those involved in the mineral processing industry, the peripheral tools of the control loop (fault detection and isolation system, data reconciliation procedure, observers, soft sensors, optimizers, model parameter tuners) are as important as the controller itself. The paper briefly describes each element of this generalized control loop, while putting emphasis on mineral processing specific cases.
---
paper_title: State of the art and challenges in mineral processing control
paper_content:
Abstract The objective of process control in the mineral industry is to optimise the recovery of the valuable minerals, while maintaining the quality of the concentrates delivered to the metal extraction plants. The paper presents a survey of the control approaches for ore size reduction and mineral separation processes. The present limitations of the measurement instrumentation are discussed, as well as the methods to upgrade the information delivered by the sensors. In practice, the overall economic optimisation goal must be hierarchically decomposed into simpler control problems. Model-based and AI methods are reviewed, mainly for grinding and flotation processes, and classified as mature, active or emerging.
---
paper_title: Methods for automatic control, observation, and optimization in mineral processing plants
paper_content:
Abstract For controlling strongly disturbed, poorly modeled, and difficult to measure processes, such as those involved in the mineral processing industry, the peripheral tools of the control loop (fault detection and isolation system, data reconciliation procedure, observers, soft sensors, optimizers, model parameter tuners) are as important as the controller itself. The paper briefly describes each element of this generalized control loop, while putting emphasis on mineral processing specific cases.
---
paper_title: The long way toward multivariate predictive control of flotation processes
paper_content:
Abstract Flotation processes are very complex, and after more than one hundred years of history, there are few reports on applications of novel techniques in monitoring and control of flotation units, circuits and global plants. On the other hand, the successful application of multivariate predictive control to other processes is well known. In this paper, an analysis is presented of how the characteristics of flotation processes, the quality of measurements of key variables, and the general lack of realistic dynamic models are delaying the appropriate use of predictive control. In this context, the applications of multivariate statistics, such as PCA, to model the relationship between operating data for on-line diagnosis and fault detection and to build causal models are discussed. The use of PLS models to predict target variables for control purposes is also presented. Results, obtained at pilot and industrial scales, are discussed, introducing new ideas on how to obtain more valuable information from the usual available operating data of the plant, and particularly from froth images.
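To make the PCA-based monitoring idea mentioned above concrete, here is a minimal NumPy sketch: fit a PCA model on data from normal operation, then flag new samples whose Hotelling's T^2 or squared prediction error exceed limits. For simplicity the limits are set from training quantiles rather than the usual F and chi-square approximations, and the data, variable names, and thresholds are synthetic assumptions rather than anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal operation" data: a few correlated flotation variables (air rate, froth depth, grade, ...).
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5)) * 0.5 + rng.normal(size=(500, 5)) * 0.1

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                   # retained principal components
P, lam = Vt[:k].T, (S[:k] ** 2) / (len(Z) - 1)

def t2_spe(x):
    """Hotelling's T^2 and squared prediction error for one new sample."""
    z = (x - mu) / sd
    t = z @ P                           # scores in the PCA subspace
    t2 = np.sum(t**2 / lam)
    spe = np.sum((z - t @ P.T) ** 2)    # residual outside the model
    return t2, spe

# Control limits from the 99th percentile of the training statistics (pragmatic choice).
train_stats = np.array([t2_spe(x) for x in X])
t2_lim, spe_lim = np.percentile(train_stats, 99, axis=0)

sample = X[0] + np.array([0, 0, 4 * sd[2], 0, 0])   # simulate a sensor/process fault
t2, spe = t2_spe(sample)
print(f"T2={t2:.1f} (limit {t2_lim:.1f}), SPE={spe:.1f} (limit {spe_lim:.1f}), fault={t2 > t2_lim or spe > spe_lim}")
```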
---
paper_title: Automatic flotation control- a review of 20 years of effort
paper_content:
Abstract Some twenty years ago, the first on-line devices for measuring the metal content of flotation slurries became available. As a result, the first studies in the automatic or computer control of industrial flotation circuits commenced. The development of robust and lasting automatic control systems for flotation circuits has proved difficult. Reasons for this include the inherent complexity and unpredictability of the response of most flotation circuits to upset conditions, unclear expectations of what a control system can achieve, unrealistic objectives for control systems and excessive complexity of the actual control strategies. However, the interest in developing control systems has persisted because the benefits to be gained in terms of improved metallurgical performance are substantial. A pattern of development has emerged for flotation control systems. Most of the early systems were concerned with some form of stabilizing control, although a few systems were aimed directly at optimization. It is now generally accepted that stabilizing control must precede optimization, and the focus has shifted to a range of increasingly sophisticated approaches to achieve stabilization by the use of various model based control strategies. A recent development is the application of expert systems as the crucial role and knowledge of operators are being appreciated.
---
paper_title: Methods for automatic control, observation, and optimization in mineral processing plants
paper_content:
Abstract For controlling strongly disturbed, poorly modeled, and difficult to measure processes, such as those involved in the mineral processing industry, the peripheral tools of the control loop (fault detection and isolation system, data reconciliation procedure, observers, soft sensors, optimizers, model parameter tuners) are as important as the controller itself. The paper briefly describes each element of this generalized control loop, while putting emphasis on mineral processing specific cases.
---
paper_title: Supervisory control at salvador flotation columns
paper_content:
Abstract The experience of developing, implementing and evaluating a hierarchical control strategy in a flotation column circuit at Salvador is discussed. The supervisory control system was installed in two columns in the copper cleaning circuit. The Salvador concentrator produces over 200,000 tons per year of 30% copper concentrate. The column control is organised at two different levels: regulatory and supervisory control. The supervisor, SINCO-PRO, was developed to consider mainly three aspects: process data validation, metallurgical objectives control and operating problems detection. The system was installed in August 1997, and fine-tuning was completed in October 1997. Evaluation of the newly implemented system was performed in November and December 1997. The main results were an average increment in the concentrate grade of 1.2% without loss in process recovery, and a decrement in the standard deviation of the concentrate grade from 0.9 to 0.7%. The global operation of the cleaning circuit was stabilised, increasing the average feed grade from 8 to 10% and dramatically reducing its standard deviation from 1.8 to 0.4%. This stabilisation of the whole circuit allowed a tighter pH control with a significant reduction in chemical reagent consumption. The project was paid back during the first two months of the evaluation period.
---
paper_title: The long way toward multivariate predictive control of flotation processes
paper_content:
Abstract Flotation processes are very complex, and after more than one hundred years of history, there are few reports on applications of novel techniques in monitoring and control of flotation units, circuits and global plants. On the other hand, the successful application of multivariate predictive control to other processes is well known. In this paper, an analysis is presented of how the characteristics of flotation processes, the quality of measurements of key variables, and the general lack of realistic dynamic models are delaying the appropriate use of predictive control. In this context, the applications of multivariate statistics, such as PCA, to model the relationship between operating data for on-line diagnosis and fault detection and to build causal models are discussed. The use of PLS models to predict target variables for control purposes is also presented. Results, obtained at pilot and industrial scales, are discussed, introducing new ideas on how to obtain more valuable information from the usual available operating data of the plant, and particularly from froth images.
---
paper_title: Soft computing-based modeling of flotation processes – A review
paper_content:
Abstract The modern modeling of flotation processes has been burdened for years with limitations of classical mathematics and modeling. The emergence of soft computing methods has also been looked upon in many of the industry’s branches as promising, and up to a certain point, those expectations have been met. Today, these soft computing methods are used regularly in research, and in some cases, within different industrial practices. This paper gives a review regarding the most common soft computing methods and their use in flotation processes modeling. Artificial neural networks have received the widest application in this area, and are followed by fuzzy logic, genetic algorithms, support vector machines and learning decision trees. Over the last five years, the number of reported studies within this field, has steadily increased. And although several classes of flotation problems are being successfully modeled with soft computing methods, there still remain a number of unresolved issues and obstacles. This paper thus attempts to provide an explanation for the current state and use of soft computing methods, as well as to present some ideas on future initiatives and potential developments within the area.
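Since artificial neural networks are reported as the most widely applied soft-computing method in this area, the following tiny NumPy multilayer perceptron, trained on synthetic data to map a few operating variables to a recovery-like response, illustrates the kind of black-box model such reviews discuss. The architecture, data, and hyperparameters are arbitrary choices for the example, not drawn from any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "plant" data: inputs (collector dosage, air rate, pH), output = recovery-like response.
X = rng.uniform(0, 1, size=(400, 3))
y = (0.6 * np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] - 0.2 * (X[:, 2] - 0.5) ** 2
     + 0.05 * rng.normal(size=400))[:, None]

W1, b1 = rng.normal(0, 0.5, (3, 16)), np.zeros(16)     # one hidden layer of 16 tanh units
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(2000):                               # plain batch gradient descent on MSE
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    gW2, gb2 = H.T @ err / len(X), err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("training MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```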
---
paper_title: Online monitoring and control of froth flotation systems with machine vision: A review
paper_content:
Abstract Research and development into the application of machine vision in froth flotation systems has continued since its introduction in the late 1980s. Machine vision is able to accurately and rapidly extract froth characteristics, both physical (e.g. bubble size) and dynamic (froth velocity) in nature, from digital images and present these results to operators and/or use the results as inputs to process control systems. Currently, machine vision has been implemented on several industrial sites worldwide and the technology continues to benefit from advances in computer technology. Effort continues to be directed into linking concentrate grade with measurable attributes of the froth phase, although this is proving difficult. As a result other extracted variables, such as froth velocity, have to be used to infer process performance. However, despite more than 20 years of development, a long-term, fully automated control system using machine vision is yet to materialise. In this review, the various methods of data extraction from images are investigated and the associated challenges facing each method discussed. This is followed by a look at how machine vision has been implemented into process control structures and a review of some of the commercial froth imaging systems currently available. Lastly, the review assesses future trends and draws several conclusions on the current status of machine vision technology.
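Froth velocity is one of the dynamic features such machine-vision systems extract. A minimal block-matching estimate of frame-to-frame froth displacement is sketched below on synthetic frames; the window size, search range, and similarity measure are illustrative choices, not those of any particular commercial system.

```python
import numpy as np

def froth_velocity(frame_a, frame_b, max_shift=10):
    """Estimate the (dy, dx) pixel displacement between two grayscale froth frames
    by exhaustive block matching (sum of squared differences) over a small search window."""
    h, w = frame_a.shape
    core = frame_a[max_shift:h - max_shift, max_shift:w - max_shift]
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = frame_b[max_shift + dy:h - max_shift + dy, max_shift + dx:w - max_shift + dx]
            err = np.mean((core - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic test: frame_b is frame_a shifted by (3, -2) pixels, mimicking moving froth.
rng = np.random.default_rng(2)
frame_a = rng.random((80, 80))
frame_b = np.roll(np.roll(frame_a, 3, axis=0), -2, axis=1)
print(froth_velocity(frame_a, frame_b))   # -> (3, -2)
```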
---
paper_title: Soft computing-based modeling of flotation processes – A review
paper_content:
Abstract The modern modeling of flotation processes has been burdened for years with limitations of classical mathematics and modeling. The emergence of soft computing methods has also been looked upon in many of the industry’s branches as promising, and up to a certain point, those expectations have been met. Today, these soft computing methods are used regularly in research, and in some cases, within different industrial practices. This paper gives a review regarding the most common soft computing methods and their use in flotation processes modeling. Artificial neural networks have received the widest application in this area, and are followed by fuzzy logic, genetic algorithms, support vector machines and learning decision trees. Over the last five years, the number of reported studies within this field, has steadily increased. And although several classes of flotation problems are being successfully modeled with soft computing methods, there still remain a number of unresolved issues and obstacles. This paper thus attempts to provide an explanation for the current state and use of soft computing methods, as well as to present some ideas on future initiatives and potential developments within the area.
---
paper_title: Supervisory control at Salvador flotation columns
paper_content:
Abstract The experience of developing, implementing and evaluating a hierarchical control strategy in a flotation column circuit at Salvador is discussed. The supervisory control system was installed in two columns in the copper cleaning circuit. Salvador concentrator produces over 200,000 tons per year of 30% copper concentrate. The column control is organised at two different levels: regulatory and supervisory control. The supervisor, SINCO-PRO, was developed to consider mainly three aspects: process data validation, metallurgical objectives control and operating problems detection. The system was installed in August 1997, and fine-tuning was completed in October 1997. Evaluation of the newly implemented system was performed in November and December 1997. The main results were an average increment in the concentrate grade of 1.2% without loss in process recovery, and a decrement in the standard deviation of the concentrate grade from 0.9 to 0.7%. The global operation of the cleaning circuit was stabilised, increasing the average feed grade from 8 to 10% and dramatically reducing its standard deviation from 1.8 to 0.4%. This stabilisation of the whole circuit allowed a tighter pH control with a significant reduction in chemical reagent consumption. The project was paid back during the first two months of the evaluation period.
---
paper_title: The long way toward multivariate predictive control of flotation processes
paper_content:
Abstract Flotation processes are very complex, and after more than one hundred years of history, there are few reports on applications of novel techniques in monitoring and control of flotation units, circuits and global plants. On the other hand, the successful application of multivariate predictive control on other processes is well known. In this paper, an analysis is presented of how the characteristics of flotation processes, the quality of measurements of key variables, and the general lack of realistic dynamic models are delaying the appropriate use of predictive control. In this context, the applications of multivariate statistics, such as PCA, to model the relationship between operating data for on-line diagnosis and fault detection and to build causal models are discussed. Also, the use of PLS models to predict target variables for control purposes is presented. Results obtained at pilot and industrial scales are discussed, introducing new ideas on how to obtain more valuable information from the usually available operating data of the plant, and particularly from froth images.
---
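The abstract above mentions PLS models built from routine operating data and froth-image features to predict target variables for control. As a purely illustrative sketch (not taken from the cited study), the snippet below fits a PLS soft sensor on synthetic data; the feature names, the data and the number of latent variables are assumptions made for the demo.

```python
# Illustrative PLS "soft sensor": predict a target such as concentrate grade
# from froth-image / operating features. All data here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Assumed regressors: froth velocity, bubble size, air rate, froth depth
X = rng.normal(size=(n, 4))
# Synthetic "grade" with a linear relation plus noise (demo assumption)
y = 25 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=2)          # two latent variables for the demo
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
print("held-out R^2:", round(r2_score(y_te, y_hat), 3))
```

In practice the regressors would be the image-derived and operating variables actually logged by the plant, and the number of latent variables would be chosen by cross-validation rather than fixed.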
paper_title: Fuzzy supervisory control of flotation columns
paper_content:
Abstract The application of fuzzy logic to supervise a distributed basic control in a flotation column is discussed. The control strategies were studied and tested in a dynamic simulator of the process. Two control strategies were developed and tested to manage three basic distributed controllers: an expert supervisor, mainly based on rules following a binary logic, and a fuzzy supervisor. The objective function to be optimized was to keep the concentrate grade in a high band, subject to maintaining the process recovery over a minimum value. The supervisor takes into account the present state of the gas flowrate, the froth depth and the wash water flowrate to make a decision. Simulated results are discussed.
---
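The fuzzy supervisory idea in the abstract above can be illustrated with a few lines of code. The rule base, membership ranges and manipulated variable below are invented for the example and are not the rules used in the cited work; they only show the mechanics of triangular memberships and weighted-average defuzzification.

```python
# Toy fuzzy-supervisory sketch (illustrative rules only): given the current
# concentrate grade and recovery, infer a correction to the froth depth
# set point using triangular memberships and a weighted-average defuzzifier.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def supervise(grade, recovery):
    # Fuzzy sets (assumed ranges for the demo)
    grade_low  = tri(grade, 24.0, 26.0, 28.0)
    grade_high = tri(grade, 28.0, 31.0, 34.0)
    rec_low    = tri(recovery, 80.0, 85.0, 90.0)
    rec_ok     = tri(recovery, 88.0, 93.0, 98.0)

    # Rules: IF grade low AND recovery ok   THEN deepen froth (+2 cm)
    #        IF grade high AND recovery low THEN shallower froth (-2 cm)
    r1 = min(grade_low, rec_ok)
    r2 = min(grade_high, rec_low)
    if r1 + r2 == 0.0:
        return 0.0
    return (r1 * (+2.0) + r2 * (-2.0)) / (r1 + r2)

print(supervise(grade=25.5, recovery=94.0))  # suggests increasing froth depth
```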
| Title: Artificial Intelligence in Mineral Processing Plants: An Overview
Section 1: INTRODUCTION
Description 1: This section should provide an introduction to mineral processing plants and the flotation process, highlighting the importance and challenges associated with it, and setting the stage for discussing the application of artificial intelligence.
Section 2: Flotation Process
Description 2: This section should detail the froth flotation process, discussing its historical context, current inefficiencies, and the necessary parameters for optimization and control.
Section 3: Modeling and Control Difficulties
Description 3: This section should discuss the stochastic nature of the processes and the modeling and control challenges faced, including process variability, difficulties in measurement, and the inadequacy of current control systems.
Section 4: ARTIFICIAL INTELLIGENT APPLICATIONS
Description 4: This section should cover the various soft computing techniques used in mineral processing, describing how they contribute to process optimization and control.
Section 5: Application of Artificial Neural Networks
Description 5: This section should explore the use of artificial neural networks (ANN) in flotation processes, highlighting their advantages, applications, and challenges.
Section 6: Application of Fuzzy Logic
Description 6: This section should examine the role of fuzzy logic in modeling and controlling flotation processes, detailing specific applications and benefits.
Section 7: Application of Genetic Algorithms
Description 7: This section should look into genetic algorithms for flotation process optimization, discussing their capabilities and practical limitations.
Section 8: Application of Support Vector Machine
Description 8: This section should describe the application of support vector machines (SVM) in flotation process modeling, highlighting their use in classification and regression analysis.
Section 9: Application of Decision Trees
Description 9: This section should discuss the use of decision trees in predictive modeling of flotation systems and their comparative advantages.
Section 10: Other Soft Computing Methods
Description 10: This section should introduce other soft computing methods like particle swarm optimization, glowworm swarm optimization, and others, and their applications in flotation process optimization.
Section 11: The Hybrid (Combined) Approach
Description 11: This section should discuss hybrid approaches combining various artificial intelligence methods and traditional mathematical modeling for improved process performance.
Section 12: DISCUSSION
Description 12: This section should provide a comprehensive discussion of the constraints imposed by the flotation process and artificial intelligence methods, including a summary of criticisms and future expectations.
Section 13: On the Process Constraints
Description 13: This section should delve deeper into the specific constraints imposed by the complexity of the flotation process and its impact on modeling and optimization.
Section 14: On the AI Constraints
Description 14: This section should address the limitations and challenges of applying artificial intelligence methods to flotation systems, including the need for hybrid models.
Section 15: Today Criticism
Description 15: This section should highlight criticisms of current AI applications in flotation processes, including issues with generalization, industrial applicability, and robustness.
Section 16: ACKNOWLEDGMENT
Description 16: This section should acknowledge financial and institutional support for the research. |
A Literature Survey and Comprehensive Study of Intrusion Detection | 9 | ---
paper_title: An Intrusion-Detection Model
paper_content:
A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system.
---
paper_title: An Application of Pattern Matching in Intrusion Detection
paper_content:
This report examines and classifies the characteristics of signatures used in misuse intrusion detection. Efficient algorithms to match patterns in some of these classes are described. A generalized model for matching intrusion signatures based on Colored Petri Nets is presented, and some of its properties are derived.
---
paper_title: SVM approach with a genetic algorithm for network intrusion detection
paper_content:
Due to the increase in unauthorized access and stealing of internet resources, internet security has become a very significant issue. Network anomalies in particular can cause many potential problems, but it is difficult to discern these from normal traffic. In this paper, we focus on a Support Vector Machine (SVM) and a genetic algorithm to detect network anomalous attacks. We first use a genetic algorithm (GA) for choosing proper fields of traffic packets for analysis. Only the selected fields are used, and a time delay processing is applied to SVM for considering temporal relationships among packets. In order to verify our approach, we tested our proposal with the datasets of MIT Lincoln Lab, and then analyzed its performance. Our SVM approach with selected fields showed excellent performance.
---
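The abstract above pairs a genetic algorithm for selecting packet fields with an SVM classifier. A minimal, self-contained sketch of that pattern is given below; the data is synthetic (make_classification), and the GA parameters, fitness definition and population size are illustrative assumptions rather than the authors' settings. The temporal (time-delay) processing described in the paper is omitted.

```python
# Minimal sketch of GA-driven feature selection feeding an SVM classifier.
# Synthetic data stands in for traffic records; all GA settings are demo values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    # Fitness = cross-validated SVM accuracy on the selected "packet fields"
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)   # random feature masks
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:6]]                                    # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])              # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05                    # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, np.array(children)])

best = max(pop, key=fitness)
print("selected fields:", np.flatnonzero(best), "cv accuracy:", round(fitness(best), 3))
```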
paper_title: Fuzzy classification by evolutionary algorithms
paper_content:
Fuzzy sets and fuzzy logic can be used for efficient data classification by fuzzy rules and fuzzy classifiers. This paper presents an application of genetic programming to the evolution of fuzzy classifiers based on extended Boolean queries. Extended Boolean queries are well known concept in the area of fuzzy information retrieval. An extended Boolean query represents a complex soft search expression that defines a fuzzy set on the collection of searched documents. We interpret the data mining task as a fuzzy information retrieval problem and we apply a proven method for query induction from data to find useful fuzzy classifiers. The ability of the genetic programming to evolve useful fuzzy classifiers is demonstrated on two use cases in which we detect faulty products in a product processing plant and discover intrusions in a computer network.
---
paper_title: A data mining framework for building intrusion detection models
paper_content:
There is often the need to update an installed intrusion detection system (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert knowledge, changes to IDSs are expensive and slow. We describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. New detection models are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report on the results of applying these programs to the extensively gathered network audit data for the 1998 DARPA Intrusion Detection Evaluation Program.
---
paper_title: The architecture of a network level intrusion detection system
paper_content:
This paper presents the preliminary architecture of a network level intrusion detection system. The proposed system will monitor base level information in network packets (source, destination, packet size, and time), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
---
paper_title: Artificial Neural Networks for Misuse Detection
paper_content:
Misuse detection is the process of attempting to identify instances of network attacks by comparing current activity against the expected actions of an intruder. Most current approaches to misuse detection involve the use of rule-based expert systems to identify indications of known attacks. However, these techniques are less successful in identifying attacks which vary from expected patterns. Artificial neural networks provide the potential to identify and classify network activity based on limited, incomplete, and nonlinear data sources. We present an approach to the process of misuse detection that utilizes the analytical strengths of neural networks, and we provide the results from our preliminary analysis of this approach.
---
paper_title: Detecting attack signatures in the real network traffic with ANNIDA
paper_content:
In this paper, an improved version of ANNIDA for detecting attack signatures in the payload of network packets is presented. The Hamming Net artificial neural network methodology was used with good results. A review of the application's development is followed by a summary of the modifications made in the application in order to classify real data. Application improvements are reported, solving the problems of time delays in writing/reading data in the files and data collision effects when generating numeric keys used to model data for the neural network. Test results highlight the increased accuracy and efficiency of the new application when submitted to real data from HTTP network traffic containing actual traces of attacks and legitimate data. Finally, an evaluation of the application to detect signatures in real network traffic data is presented.
---
paper_title: An Improved Intrusion Detection Technique based on two Strategies Using Decision Tree and Neural Network
paper_content:
In this paper we enhance the notion of anomaly detection and use both a neural network (NN) and a decision tree (DT) for intrusion detection. While DTs are highly successful in detecting known attacks, NNs are more interesting for detecting new attacks. In our method we propose a new approach to designing the system using both a DT and a combination of unsupervised and supervised NNs for an Intrusion Detection System (IDS). By applying the DT, known attacks would be recognized with a quick execution time. Unknown attacks would be detected by applying the unsupervised NN, based on a hybrid of the Self Organizing Map (SOM) for clustering attacks into smaller categories, and the supervised NN, based on backpropagation, for detailed clustering.
---
paper_title: Ensemble-based DDoS detection and mitigation model
paper_content:
This work-in-progress paper presents an ensemble-based model for detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, and its partial implementation. The model utilises network traffic analysis and MIB (Management Information Base) server load analysis features for detecting a wide range of network and application layer DDoS attacks and distinguishing them from Flash Events. The proposed model will be evaluated against realistic synthetic network traffic generated using a software-based traffic generator that we have developed as part of this research. In this paper, we summarise our previous work, highlight the current work being undertaken along with preliminary results obtained and outline the future directions of our work.
---
paper_title: Application of bagging, boosting and stacking to intrusion detection
paper_content:
This paper investigates the possibility of using ensemble algorithms to improve the performance of network intrusion detection systems. We use an ensemble of three different methods, bagging, boosting and stacking, in order to improve the accuracy and reduce the false positive rate. We use four different data mining algorithms, naive Bayes, J48 (decision tree), JRip (rule induction) and IBk (nearest neighbour), as base classifiers for those ensemble methods. Our experiment shows that the prototype which implements four base classifiers and three ensemble algorithms achieves an accuracy of more than 99% in detecting known intrusions, but fails to detect novel intrusions, with accuracy rates of around just 60%. The use of bagging, boosting and stacking is unable to significantly improve the accuracy. Stacking is the only method that was able to reduce the false positive rate by a significantly high amount (46.84%); unfortunately, this method has the longest execution time and so is inefficient to implement in the intrusion detection field.
---
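As an illustration of the three ensemble strategies discussed above, the snippet below trains bagging, boosting and stacking ensembles with scikit-learn on synthetic connection records. The base-learner mix only approximates the Weka classifiers named in the abstract (GaussianNB and DecisionTreeClassifier stand in for naive Bayes and J48), and all parameters are illustrative.

```python
# Bagging, boosting and stacking over simple base learners, evaluated on
# synthetic "connection records"; choices of learners and parameters are demo values.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    # Bagging and AdaBoost use their default tree-based base estimators here
    "bagging":  BaggingClassifier(n_estimators=25, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "stacking": StackingClassifier(
        estimators=[("nb", GaussianNB()),
                    ("tree", DecisionTreeClassifier(random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```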
paper_title: Unsupervised Clustering Approach for Network Anomaly Detection
paper_content:
This paper describes the advantages of using the anomaly detection approach over the misuse detection technique in detecting unknown network intrusions or attacks. It also investigates the performance of various clustering algorithms when applied to anomaly detection. Five different clustering algorithms are used: k-Means, improved k-Means, k-Medoids, EM clustering and distance-based outlier detection. Our experiment shows that misuse detection techniques, which implemented four different classifiers (naive Bayes, rule induction, decision tree and nearest neighbour), failed to detect network traffic which contained a large number of unknown intrusions, where the highest accuracy was only 63.97% and the lowest false positive rate was 17.90%. On the other hand, the anomaly detection module showed promising results, where the distance-based outlier detection algorithm outperformed other algorithms with an accuracy of 80.15%. The accuracy for EM clustering was 78.06%, for k-Medoids it was 76.71%, for improved k-Means it was 65.40% and for k-Means it was 57.81%. Unfortunately, our anomaly detection module produces a high false positive rate (more than 20%) for all four clustering algorithms. Therefore, our future work will focus more on reducing the false positive rate and improving the accuracy using more advanced machine learning techniques.
---
paper_title: Packet and Flow Based Network Intrusion Dataset
paper_content:
With exponential growth in the number of computer applications and the size of networks, the potential damage that can be caused by attacks launched over the internet keeps increasing dramatically. A number of network intrusion detection methods have been developed with their respective strengths and weaknesses. The majority of research in the area of network intrusion detection is still based on the simulated datasets because of non-availability of real datasets. A simulated dataset cannot represent the real network intrusion scenario. It is important to generate real and timely datasets to ensure accurate and consistent evaluation of methods. We propose a new real dataset to ameliorate this crucial shortcoming. We have set up a testbed to launch network traffic of both attack as well as normal nature using attack tools. We capture the network traffic in packet and flow format. The captured traffic is filtered and preprocessed to generate a featured dataset. The dataset is made available for research purpose.
---
paper_title: Hybrid Intelligent Intrusion Detection Scheme
paper_content:
This paper introduces a hybrid scheme that combines the advantages of a deep belief network and a support vector machine. An application to intrusion detection has been chosen and the hybridization scheme has been applied to assess its ability and accuracy to classify the intrusion into two outcomes, normal or attack, where the attacks fall into four classes: R2L, DoS, U2R, and Probing. First, we utilize a deep belief network to reduce the dimensionality of the feature sets. This is followed by a support vector machine to classify the intrusion into five outcomes: Normal, R2L, DoS, U2R, and Probing. To evaluate the performance of our approach, we present tests on the NSL-KDD dataset and show that the overall accuracy offered by the employed approach is high.
---
paper_title: An autonomous labeling approach to support vector machines algorithms for network traffic anomaly detection
paper_content:
In past years, several support vector machine (SVM) novelty detection approaches have been applied to the network intrusion detection field. The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when the normal traffic vastly outnumbers the attacks present in the dataset, a situation which does not always hold. This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where the class distribution does not present the imbalance required for SVM algorithms. In this case, the autonomous labeling process is performed by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the use of the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.
---
paper_title: An Intelligent Intrusion Detection System for Mobile Ad-Hoc Networks Using Classification Techniques
paper_content:
This paper proposes an intelligent multi level classification technique for effective intrusion detection in Mobile Ad-hoc Networks. The algorithm uses a combination of a tree classifier which uses a labeled training data and an Enhanced Multiclass SVM algorithm. Moreover, an effective preprocessing technique has been proposed and implemented in this work in order to improve the detection accuracy and to reduce the processing time. From the experiments carried out in this work, it has been observed that significant improvement has been achieved in this model from the view point of both high detection rates as well as low false alarm rates.
---
paper_title: A new intrusion detection system using support vector machines and hierarchical clustering
paper_content:
Whenever an intrusion occurs, the security and value of a computer system is compromised. Network-based attacks make it difficult for legitimate users to access various network services by purposely occupying or sabotaging network resources and services. This can be done by sending large amounts of network traffic, exploiting well-known faults in networking services, and by overloading network hosts. Intrusion Detection attempts to detect computer attacks by examining various data records observed in processes on the network and it is split into two groups, anomaly detection systems and misuse detection systems. Anomaly detection is an attempt to search for malicious behavior that deviates from established normal patterns. Misuse detection is used to identify intrusions that match known attack scenarios. Our interest here is in anomaly detection and our proposed method is a scalable solution for detecting network-based anomalies. We use Support Vector Machines (SVM) for classification. The SVM is one of the most successful classification algorithms in the data mining area, but its long training time limits its use. This paper presents a study for enhancing the training time of SVM, specifically when dealing with large data sets, using hierarchical clustering analysis. We use the Dynamically Growing Self-Organizing Tree (DGSOT) algorithm for clustering because it has proved to overcome the drawbacks of traditional hierarchical clustering algorithms (e.g., hierarchical agglomerative clustering). Clustering analysis helps find the boundary points, which are the most qualified data points to train SVM, between two classes. We present a new approach of combination of SVM and DGSOT, which starts with an initial training set and expands it gradually using the clustering structure produced by the DGSOT algorithm. We compare our approach with the Rocchio Bundling technique and random selection in terms of accuracy loss and training time gain using a single benchmark real data set. We show that our proposed variations contribute significantly in improving the training process of SVM with high generalization accuracy and outperform the Rocchio Bundling technique.
---
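The key idea above, namely using clustering to pick out points near cluster boundaries and training the SVM only on those, can be sketched as follows. AgglomerativeClustering is used here as a crude stand-in for the DGSOT algorithm described in the paper, and the "closest 10% to another centroid" heuristic is an assumption for the demo, not the authors' selection rule.

```python
# Train an SVM on cluster-boundary candidates only and compare with a full-data SVM.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

labels = AgglomerativeClustering(n_clusters=8).fit_predict(X_tr)
centroids = np.vstack([X_tr[labels == k].mean(axis=0) for k in range(8)])

keep = []
for k in range(8):
    idx = np.flatnonzero(labels == k)
    other = np.delete(centroids, k, axis=0)
    # Distance of each point in cluster k to the nearest *other* centroid
    d = np.linalg.norm(X_tr[idx][:, None, :] - other[None, :, :], axis=2).min(axis=1)
    keep.extend(idx[np.argsort(d)[: max(5, len(idx) // 10)]])   # closest 10% = boundary candidates

keep = np.array(keep)
svm_small = SVC().fit(X_tr[keep], y_tr[keep])
svm_full = SVC().fit(X_tr, y_tr)
print("boundary-trained SVM:", round(svm_small.score(X_te, y_te), 3),
      "| full-data SVM:", round(svm_full.score(X_te, y_te), 3))
```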
paper_title: Optimized intrusion detection mechanism using soft computing techniques
paper_content:
Intrusion detection is an important technique in computer and network security. A variety of intrusion detection approaches exist to resolve this severe issue, but the main problem is performance. It is important to increase the detection rates and reduce false alarm rates in the area of intrusion detection. Therefore, in this research, an optimized intrusion detection mechanism using soft computing techniques is proposed to overcome performance issues. The KDD-cup dataset, a benchmark for evaluating security detection mechanisms, is used. Principal Component Analysis (PCA) is applied to transform the input samples into a new feature space. Selecting an appropriate number of principal components is a critical problem, so a Genetic Algorithm (GA) is used for the optimum selection of principal components instead of the traditional method. A Support Vector Machine (SVM) is used for classification. The performance of this approach is assessed. Further, a comparative analysis is made with existing approaches. Consequently, this method provides an optimal intrusion detection mechanism which is capable of minimizing the number of features and maximizing the detection rates.
---
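A simplified stand-in for the pipeline described above is shown below: PCA for feature transformation followed by an SVM classifier, with the GA-based choice of principal components replaced by a plain grid search over n_components. The data is synthetic rather than the KDD-cup set, and all parameter values are illustrative.

```python
# PCA -> SVM pipeline with the number of components chosen by grid search
# (a simplified substitute for the GA-based selection in the paper).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA()),
                 ("svm", SVC(kernel="rbf"))])
search = GridSearchCV(pipe,
                      {"pca__n_components": [5, 10, 15, 20],
                       "svm__C": [1, 10]},
                      cv=3)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_,
      "test accuracy:", round(search.score(X_te, y_te), 3))
```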
paper_title: Neural networks learning improvement using the K-means clustering algorithm to detect network intrusions
paper_content:
In the present work, we propose a new technique to enhance the learning capabilities and reduce the computation intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with a backpropagation learning mechanism. The K-means algorithm is first applied to the training dataset to reduce the number of samples to be presented to the neural network, by automatically selecting an optimal set of samples. The obtained results demonstrate that the proposed technique performs exceptionally well in terms of both accuracy and computation time when applied to the KDD99 dataset, compared to a standard learning scheme that uses the full dataset.
---
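The sample-reduction scheme described above can be sketched briefly: cluster the training set with k-means, label each centroid by majority vote, and train the neural network on the centroids only. The dataset, cluster count and network size below are arbitrary demo choices rather than the authors' settings.

```python
# Train an MLP on k-means prototypes instead of the full training set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X_tr)
# Label each centroid with the majority class of its cluster members
centroid_labels = np.array([np.bincount(y_tr[km.labels_ == k], minlength=2).argmax()
                            for k in range(km.n_clusters)])

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(km.cluster_centers_, centroid_labels)   # 200 prototypes instead of ~3750 samples
print("accuracy with reduced training set:", round(mlp.score(X_te, y_te), 3))
```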
paper_title: Y-means: a clustering method for intrusion detection
paper_content:
As the Internet spreads to each corner of the world, computers are exposed to miscellaneous intrusions from the World Wide Web. We need effective intrusion detection systems to protect our computers from these unauthorized or malicious actions. Traditional instance-based learning methods for intrusion detection can only detect known intrusions, since these methods classify instances based on what they have learned. They rarely detect intrusions that they have not learned before. In this paper, we present a clustering heuristic for intrusion detection, called Y-means. This proposed heuristic is based on the K-means algorithm and other related clustering algorithms. It overcomes two shortcomings of K-means: dependency on the number of clusters, and degeneracy. The result of simulations run on the KDD-99 data set shows that Y-means is an effective method for partitioning a large data space. A detection rate of 89.89% and a false alarm rate of 1.00% are achieved with Y-means.
---
paper_title: K-Means+ID3: A Novel Method for Supervised Anomaly Detection by Cascading K-Means Clustering and ID3 Decision Tree Learning Methods
paper_content:
In this paper, we present "k-means+ID3", a method to cascade k-means clustering and the ID3 decision tree learning methods for classifying anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomaly instances, we build an ID3 decision tree. The decision tree on each cluster refines the decision boundaries by learning the subgroups within the cluster. To obtain a final decision on classification, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains of computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive-rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED
---
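A condensed sketch of the cascade described above follows: k-means partitions the training data, one decision tree is fitted per cluster, and a test point is classified by the tree of its nearest centroid (a simplified version of the paper's nearest-neighbor combination rule). scikit-learn's CART-style DecisionTreeClassifier stands in for ID3, and the data is synthetic.

```python
# k-means + per-cluster decision trees, combined via the nearest centroid.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

k = 6
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
trees = []
for c in range(k):
    members = km.labels_ == c
    # One tree per density region found by k-means
    trees.append(DecisionTreeClassifier(random_state=0).fit(X_tr[members], y_tr[members]))

nearest = km.predict(X_te)                    # index of the closest centroid
y_pred = np.array([trees[c].predict(x.reshape(1, -1))[0]
                   for c, x in zip(nearest, X_te)])
print("cascade accuracy:", round((y_pred == y_te).mean(), 3))
```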
paper_title: Fusion of multiple classifiers for intrusion detection in computer networks
paper_content:
The security of computer networks plays a strategic role in modern computer systems. In order to enforce high protection levels against threats, a number of software tools have been currently developed. Intrusion Detection Systems aim at detecting intruders who elude "first line" protection. In this paper, a pattern recognition approach to network intrusion detection based on the fusion of multiple classifiers is proposed. Five decision fusion methods are assessed by experiments and their performances compared. The potentialities of classifier fusion for the development of effective intrusion detection systems are evaluated and discussed.
---
paper_title: Evaluation of an adaptive genetic-based signature extraction system for network intrusion detection
paper_content:
Machine learning techniques are frequently applied to intrusion detection problems in various ways such as to classify normal and intrusive activities or to mine interesting intrusion patterns. Self-learning rule-based systems can relieve domain experts from the difficult task of hand crafting signatures, in addition to providing intrusion classification capabilities. To this end, a genetic-based signature learning system has been developed that can adaptively and dynamically learn signatures of both normal and intrusive activities from the network traffic. In this paper, we extend the evaluation of our systems to real time network traffic which is captured from a university departmental server. A methodology is developed to build fully labelled intrusion detection data set by mixing real background traffic with attacks simulated in a controlled environment. Tools are developed to pre-process the raw network data into feature vector format suitable for a supervised learning classifier system and other related machine learning systems. The signature extraction system is then applied to this data set and the results are discussed. We show that even simple feature sets can help detecting payload-based attacks.
---
paper_title: Hierarchical Kohonenen net for anomaly detection in network security
paper_content:
A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods such as nearest neighbor approach, K-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks in terms of detection rate as well as false positive rate.
---
paper_title: Anomaly Network Intrusion Detection Based on Improved Self Adaptive Bayesian Algorithm
paper_content:
Recently, research on intrusion detection in computer systems has received much attention from the computational intelligence society. Many intelligent learning algorithms have been applied to huge volumes of complex and dynamic data for the construction of efficient intrusion detection systems (IDSs). Despite the many advances that have been achieved in existing IDSs, there are still some difficulties, such as correct classification of large intrusion detection datasets, unbalanced detection accuracy on high speed network traffic, and reducing false positives. This paper presents a new approach to alert classification to reduce false positives in intrusion detection using an improved self adaptive Bayesian algorithm (ISABA). The proposed approach is applied to the security domain of anomaly based network intrusion detection, and correctly classifies different types of attacks in the KDD99 benchmark dataset with high classification rates in short response time, reducing false positives using limited computational resources.
---
paper_title: Anomaly Intrusion Detection System using Hamming Network Approach
paper_content:
Intrusion detection is an interesting approach that could be used to improve the security of network systems. An IDS detects suspected patterns of network traffic on the remaining open parts through monitoring user activities. The major problems of existing models are the recognition of new attacks, low accuracy, detection time and system adaptability. In this paper, an evolving anomaly intrusion detection system is constructed using Hamming and MAXNET neural networks to recognize the attack class in the network traffic. The result is encouraging; the detection rate is 95%, which is relatively high. We describe another approach based on a Multilayer Perceptron (MLP) network and compare the results of the two approaches to evaluate the system. The experimental results demonstrate that the designed models are promising in terms of accuracy and computational time for real world intrusion detection. Training and testing data are obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation datasets.
---
paper_title: Anomaly Detection in Ethernet Networks Using Self Organizing Maps
paper_content:
Anomaly detection attempts to recognize abnormal behavior to detect intrusions. We have concentrated on designing a prototype UNIX Anomaly Detection System. Neural networks are tolerant of imprecise data and uncertain information. A tool has been devised for detecting such intrusions into the network. The tool uses machine learning approaches and clustering techniques like the Self Organizing Map and compares it with the K-means approach. Our system is described for applying a hierarchical unsupervised neural network to an intrusion detection system.
---
paper_title: Layered Approach Using Conditional Random Fields for Intrusion Detection
paper_content:
Intrusion detection faces a number of challenges; an intrusion detection system must reliably detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic. In this paper, we address these two issues of Accuracy and Efficiency using Conditional Random Fields and Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD '99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as the decision trees and the naive Bayes. The improvement in attack detection accuracy is very high, particularly, for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical Tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle noisy data without compromising performance.
---
paper_title: Bayesian based intrusion detection system
paper_content:
In this paper intrusion detection using Bayesian probability is discussed. The systems designed are trained a priori using a subset of the KDD dataset. The trained classifier is then tested using a larger subset of the KDD dataset. Initially, a system was developed using a naive Bayesian classifier to identify possible intrusions. This classifier was able to detect intrusions with an acceptable detection rate. The classifier was then extended to a multi-layer Bayesian based intrusion detection system. Finally, we introduce the concept that the best possible intrusion detection system is a layered approach using different techniques in each layer.
---
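A minimal sketch of the first stage described above (a naive Bayesian classifier trained a priori on a small labelled subset and tested on a larger one) is shown below; synthetic, imbalanced data stands in for the KDD subsets, and the multi-layer extension is not reproduced.

```python
# Naive-Bayes intrusion classifier: train on a small subset, test on the larger remainder.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)   # imbalanced: mostly "normal", some "attack"
# Small a-priori training subset, larger test subset, as the abstract describes
X_small, X_rest, y_small, y_rest = train_test_split(X, y, train_size=0.2, random_state=0)

nb = GaussianNB().fit(X_small, y_small)
print(classification_report(y_rest, nb.predict(X_rest), target_names=["normal", "attack"]))
```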
paper_title: Autonomous rule creation for intrusion detection
paper_content:
Many computational intelligence techniques for anomaly based network intrusion detection can be found in literature. Translating a newly discovered intrusion recognition criteria into a distributable rule can be a human intensive effort. This paper explores a multi-modal genetic algorithm solution for autonomous rule creation. This algorithm focuses on the process of creating rules once an intrusion has been identified, rather than the evolution of rules to provide a solution for intrusion detection. The algorithm was demonstrated on anomalous ICMP network packets (input) and Snort rules (output of the algorithm). Output rules were sorted according to a fitness value and any duplicates were removed. The experimental results on ten test cases demonstrated a 100 percent rule alert rate. Out of 33,804 test packets 3 produced false positives. Each test case produced a minimum of three rule variations that could be used as candidates for a production system.
---
paper_title: An active learning based TCM-KNN algorithm for supervised network intrusion detection
paper_content:
As network attacks have increased in number and severity over the past few years, intrusion detection is increasingly becoming a critical component of secure information systems and supervised network intrusion detection has been an active and difficult research topic in the field of intrusion detection for many years. However, it hasn't been widely applied in practice due to some inherent issues. The most important reason is the difficulties in obtaining adequate attack data for the supervised classifiers to model the attack patterns, and the data acquisition task is always time-consuming and greatly relies on the domain experts. In this paper, we propose a novel supervised network intrusion detection method based on TCM-KNN (Transductive Confidence Machines for K-Nearest Neighbors) machine learning algorithm and active learning based training data selection method. It can effectively detect anomalies with high detection rate, low false positives under the circumstance of using much fewer selected data as well as selected features for training in comparison with the traditional supervised intrusion detection methods. A series of experimental results on the well-known KDD Cup 1999 data set demonstrate that the proposed method is more robust and effective than the state-of-the-art intrusion detection methods, as well as can be further optimized as discussed in this paper for real applications.
---
| Title: A Literature Survey and Comprehensive Study of Intrusion Detection
Section 1: INTRODUCTION
Description 1: This section provides an overview of intrusion detection, its importance in network security, and various techniques used to detect suspicious activities in computer networks.
Section 2: SURVEY OF INTRUSION DETECTION BASED ON DIFFERENT TECHNIQUES
Description 2: This section presents an extensive study of various intrusion detection classifier techniques and other methods categorized into neural network, support vector machine, k-means classifier, hybrid technique, and other detection techniques.
Section 3: Neural network based intrusion detection
Description 3: This section provides a brief review of neural network-based intrusion detection techniques and their applications.
Section 4: Support vector machine based intrusion detection
Description 4: This section discusses various papers related to support vector machine-based intrusion detection techniques.
Section 5: K-means algorithm based intrusion detection
Description 5: This section explores different papers utilizing the k-means algorithm for intrusion detection and their outcomes.
Section 6: Hybrid technique based intrusion detection
Description 6: This section reviews the hybrid approaches combining multiple techniques for effective intrusion detection.
Section 7: Other classifier based intrusion detection
Description 7: This section discusses various other classifier-based techniques for intrusion detection presented in different research papers.
Section 8: COMPREHENSIVE ANALYSIS AND DISCUSSIONS
Description 8: This section presents a comprehensive analysis of the various intrusion detection methods with respect to detection rate, time, and false alarm rate.
Section 9: CONCLUSION
Description 9: This section concludes the survey by summarizing the various techniques available for intrusion detection and highlights the importance of Network Intrusion Detection Systems in network security. |
A SURVEY OF CURRENT RESEARCH ON CAPTCHA | 14 | ---
paper_title: Is it human or computer? Defending e-commerce with Captchas
paper_content:
A Captcha - a completely automatic public Turing test to tell computers and humans apart - is a test that humans can pass but computer programs cannot; such tests are becoming key to defending e-commerce systems. By using a Captcha, for example, IT systems can permit only real people-rather than a spammer's script-to create a free e-mail account. This article explains the various types of Captchas and discusses their strengths and weaknesses as a security measure. It also lists sources for more information on the formal research into Captchas.
---
paper_title: A new architecture for the generation of picture based CAPTCHA
paper_content:
Automated network attack such as denial-of-service (DoS) leads to significant wastage of resources, which is a common threat to network security. To prevent these automated network attacks CAPTCHA based security mechanism is to be adopted so that it will differentiate humans from machines. Optical Character Recognition (OCR) based CAPTCHAs are more vulnerable to automated attacks due to the existence of correlation algorithms and direct distortion estimation techniques. The illegibility of the text CAPTCHA makes the user difficult to read it and thus they feel uncomfortable. In order to overcome these difficulties a new type of CAPTCHA, that is, picture based CAPTCHAs came into existence, which are more efficient and secure than the existing text based CAPTCHAs. We propose a new architecture for the generation of picture based CAPTCHA, which is resistant to segmentation through edge detection and thresholding, shape matching and random guessing. Our security analysis shows that the proposed architecture is showing better results in comparison with other picture based CAPTCHAs.
---
paper_title: A 3-layer Dynamic CAPTCHA Implementation
paper_content:
In order to prevent attacks from malicious programs, the server system introduces a CAPTCHA mechanism to tell humans and computers apart. With the continuous development of pattern recognition and artificial intelligence technology, the traditional 2D static CAPTCHA reveals more and more security vulnerabilities, making it possible for some malicious network programs to initiate attacks by cracking the CAPTCHA. This paper presents a safe and practical 3-layer dynamic CAPTCHA, creatively combining the single-frame zero-knowledge theory and the biological vision theory, thus making it not only difficult to identify a single frame, but also easy for humans to recognize. This design takes advantage of the weakness of computers in identifying multiple moving objects in a complex context, ensuring it is still extremely hard for computer programs to crack even when using multiple frames. The hierarchical structure makes the CAPTCHA design much clearer, with strong scalability and large space for optimization.
---
paper_title: Multi-Modal CAPTCHA: A User Verification Scheme
paper_content:
CAPTCHA is an automated test that humans can pass but current computer programs cannot; any program that has high success against a CAPTCHA can be used to solve an unsolved Artificial Intelligence (AI) problem. The most widely used CAPTCHAs rely on sophisticated distortion of text images, rendering them unrecognizable to state-of-the-art pattern recognition techniques, and these text-based schemes have found widespread application in commercial websites such as free email providers, social networking sites, and online auction sites. The increase in bots breaking CAPTCHAs shows the ineffectiveness of the text-based CAPTCHAs used on most websites and webmail services today: bots can read the distorted letters and words using optical character recognition (OCR) or break the CAPTCHA using a dictionary attack. The weaknesses of each CAPTCHA scheme are summarized, and accordingly we make an approach to building our CAPTCHA method. Considering the case-study results, and including other points that may pose difficulty for OCR systems, we propose a new technique to build a CAPTCHA that is multi-modal (picture and text based). An image is rendered on the screen with many text labels drawn over it; to pass the human verification test, a user has to identify the correct name of the underlying image among the set of text labels scattered over it. We also propose to use cursive text instead of plain text labels.
---
paper_title: Protection through multimedia CAPTCHAs
paper_content:
CAPTCHAs, well known as completely automated public Turing tests to tell computers and humans apart, are a modern implementation of the Turing test, which asks a series of questions of two players: a human and a computer. Both players pretend to be human, and on the basis of the answers the judge has to decide which one is human and which one is the computer; here, however, the judge itself is a computer. In this article, we review current CAPTCHAs. After analyzing the current CAPTCHAs, we propose a new 3-D AI CAPTCHA that combines the strengths of existing CAPTCHAs to provide a better security alternative for e-commerce.
---
paper_title: Making CAPTCHAs clickable
paper_content:
We show how to convert regular keyboard-entry CAPTCHAs into clickable CAPTCHAs. The goal of this conversion is to simplify and speed-up the entry of the CAPTCHA solution, to minimize user frustration and permit the use of CAPTCHAs on devices where they would otherwise be unsuitable. We propose a technique for producing secure clickable CAPTCHAs that are well suited for use on cell phones and other mobile devices. We support the practical viability of our approach by results from a user study, and an analysis of its security guarantees.
---
paper_title: CAPTCHA Using Strangeness in Machine Translation
paper_content:
CAPTCHA is a technique that is used to prevent automatic programs from being able to acquire free e-mail or online service accounts. However, as many researchers have already reported, conventional CAPTCHA could be overcome by state-of-the-art malware since the capabilities of computers are approaching those of humans. Therefore, CAPTCHA should be based on even more advanced human-cognitive-processing abilities. We propose using the human ability of recognizing “strangeness” to achieve a new CAPTCHA. This paper focuses on strangeness in machine-translated sentences as an example, and proposes CAPTCHA using Strangeness in Sentences (SS-CAPTCHA), which detects malware by checking if users can distinguish natural sentences created by humans from machine-translated sentences. We discuss possible threats to SS-CAPTCHA and countermeasures against these threats. We also carried out basic experiments to confirm its usability by human users.
---
paper_title: 3D CAPTCHA: A Next Generation of the CAPTCHA
paper_content:
Nowadays, the Internet is becoming a part of our everyday lives. Many services, including email, search engines, and web boards, are provided free of charge, which unintentionally turns them into vulnerable services. Many software robots, or bots for short, are developed with the purpose of using such services illegally and automatically. Thus, web sites employ a human authentication mechanism called the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to counter this attack. Unfortunately, many CAPTCHAs have already been broken by bots, and some CAPTCHAs are difficult for humans to read. In this paper, a new CAPTCHA method called 3D CAPTCHA is proposed to provide enhanced protection against bots. The method is based on the assumption that humans can recognize 3D character images better than Optical Character Recognition (OCR) based software bots.
---
paper_title: Question-Based CAPTCHA
paper_content:
Today many Internet sites should accept entries only from human users, but unfortunately computer programs called bots are designed by hackers to enter these sites and use their resources through false registration. As a result, systems named CAPTCHAs have been introduced to tell human users and computer software apart. This paper introduces a new CAPTCHA method. In this method, a simple mathematical problem is generated according to a predefined pattern, but instead of the objects' names, their images are used. The whole problem is then saved and shown to the user in the form of an image to be answered. Since answering this problem requires four abilities (understanding the question text, detecting the question images, understanding the problem, and solving the problem), only a human user can answer it, and present computer programs are unable to solve it. This project has been implemented in PHP.
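A minimal Python sketch of the question-generation step described above; the object names, image file names, and rendering step are illustrative assumptions (the paper's own implementation was in PHP).

```python
import random

# Hypothetical object inventory; in the real system each name maps to an image file.
OBJECTS = {"apple": "apple.png", "ball": "ball.png", "car": "car.png", "cat": "cat.png"}

def make_question():
    """Generate a simple counting problem whose operands are shown as object images."""
    (name_a, img_a), (name_b, img_b) = random.sample(list(OBJECTS.items()), 2)
    count_a, count_b = random.randint(1, 4), random.randint(1, 4)
    # The rendered challenge would draw img_a count_a times and img_b count_b times,
    # then rasterize the whole question as one image; here only the template is built.
    question = f"How many {name_a}s and {name_b}s are shown in total?"
    layout = [(img_a, count_a), (img_b, count_b)]
    answer = count_a + count_b
    return question, layout, answer

if __name__ == "__main__":
    q, layout, ans = make_question()
    print(q, layout, "expected answer:", ans)
```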
---
paper_title: Multiple SEIMCHA: Multiple semantic image CAPTCHA
paper_content:
This paper presents a new semantic image CAPTCHA, called Multiple SEIMCHA. The Multiple SEIMCHA system warps images using geometric transformations, and a 2D view of the warped image is shown to the user. Users should click on the upright orientation of the warped image as a semantic cue. Multiple SEIMCHA evaluates users based on the idea of an almost-right response instead of a completely right response. This idea uses a hardness-rate concept, which is defined based on users' response rates to images. The proposed system achieves good response time and success rate as usability metrics and is secure against bots.
---
paper_title: CAPTCHA for children
paper_content:
On some websites it is necessary to distinguish between human users and computer programs, which is done with CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart). CAPTCHA methods are mainly based on the weak points of OCR systems, and using them is undesirable for human users. In this paper, a method is presented for distinguishing between human users and computer programs on the basis of choosing an object shown on the screen. In this method, some objects are chosen randomly and pictures of these objects are downloaded from the Internet. After applying some effects such as rotation, all of the pictures are shown on the screen, and the user is asked to choose a specific object. Instead of writing the question, it is spoken using a TTS (Text-To-Speech) system; therefore children who may be illiterate and cannot read can easily pass the test. If the user chooses the right object, we can assume that the user is a human being and not a computer program. This method is specially developed for children's websites. Its main advantage is its simplicity, because the user does not have to read or type anything.
---
paper_title: Advanced Collage CAPTCHA
paper_content:
Some Internet Web sites ask their customers to register before providing services. Unfortunately, hackers write programs that make automatic false enrollments, which waste the Web site's resources. To solve this problem, systems known as CAPTCHAs (completely automated public Turing tests to tell computers and humans apart) have been introduced to distinguish between human users and computer programs. One of the CAPTCHA methods is the collage CAPTCHA, in which some shapes are shown with distortion and the user is asked to choose a specific object. In this paper we increase the resistance of this method to attacks. For this purpose, we also show some other objects on the right of the screen; some of these objects are the same as the previous objects but have different shapes. We then ask the user to choose a specific object and also its matching object on the right of the screen. The user passes the test if he chooses the two matching objects correctly. Because a computer program would also have to recognize the matching object, the probability of a computer passing the test is further reduced.
---
paper_title: Motion CAPTCHA
paper_content:
On some Websites it is necessary to distinguish between human users and computer programs, which is done with CAPTCHAs (completely automated public Turing tests to tell computers and humans apart). CAPTCHA methods are mainly based on the weaknesses of OCR systems, while using them is undesirable for human users. In this paper a new CAPTCHA method is introduced, based on showing a movie of a person's action. The user is asked to describe the movement of that person by selecting, from a list of sentences, the sentence that describes the motion. If the user chooses the right sentence we can assume that the user is a human and not a computer program. The main advantages of this method are its simplicity and the difficulty of mounting computer attacks against it. This project has been implemented in the PHP scripting language.
---
paper_title: A Novel Image Based CAPTCHA Using Jigsaw Puzzle
paper_content:
The most commonly used CAPTCHAs are text-based CAPTCHAs, which rely on the distortion of text in a background image. With the development of automated computer vision techniques designed to remove noise and segment distorted strings to make characters readable for OCR, traditional text-based CAPTCHAs are no longer considered safe for authentication. A novel image-based CAPTCHA that involves solving a jigsaw puzzle is presented in this paper. An image is divided into n×n (n = 3, 4, or 5, depending on the security level) pieces to construct the jigsaw puzzle CAPTCHA. Only two of the pieces are misplaced from their original positions; users are required to find the two pieces and swap them. Because previous work has been devoted to solving jigsaw puzzles using edge-matching techniques, the edges of all pieces are processed with a glitch treatment to prevent automatic solving by computers. Experiments and security analysis show that human users can complete the CAPTCHA verification quickly and accurately, but computers rarely can. It is a promising substitute for current text-based CAPTCHAs.
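A minimal sketch of the tile-swap construction described above, assuming NumPy and a square source image; the glitch treatment of tile edges and the click-based verification are omitted.

```python
import random
import numpy as np

def make_jigsaw_captcha(image: np.ndarray, n: int = 4):
    """Split a square image into an n x n grid and swap two randomly chosen tiles.

    Returns the puzzle image and the indices of the two swapped tiles,
    which serve as the expected answer.
    """
    h, w = image.shape[:2]
    th, tw = h // n, w // n
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw].copy()
             for r in range(n) for c in range(n)]
    i, j = random.sample(range(n * n), 2)
    tiles[i], tiles[j] = tiles[j], tiles[i]
    puzzle = image.copy()
    for k, tile in enumerate(tiles):
        r, c = divmod(k, n)
        puzzle[r*th:(r+1)*th, c*tw:(c+1)*tw] = tile
    return puzzle, (i, j)

if __name__ == "__main__":
    img = np.random.randint(0, 255, (400, 400, 3), dtype=np.uint8)  # stand-in image
    puzzle, swapped = make_jigsaw_captcha(img, n=4)
    print("swapped tiles:", swapped)
```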
---
| Title: A SURVEY OF CURRENT RESEARCH ON CAPTCHA
Section 1: Definition of CAPTCHA
Description 1: This section defines CAPTCHA, explaining its purpose, functioning, and importance in distinguishing between humans and bots.
Section 2: Application of CAPTCHA
Description 2: This section describes various applications of CAPTCHA, including protecting website registration, shielding email addresses from scrapers, securing online polls, preventing dictionary attacks, and controlling search engine bots.
Section 3: History of CAPTCHAs
Description 3: This section provides a chronological overview of the development and milestones in the history of CAPTCHAs.
Section 4: Classification of CAPTCHAs
Description 4: This section categorizes CAPTCHAs into different types based on what is distorted, such as text, image, audio, video, and puzzle.
Section 5: CAPTCHA Based on Text
Description 5: This section explores various methodologies under text-based CAPTCHAs, including Gimpy CAPTCHA, EZ-Gimpy, PessimalPrint, Baffle Text, Scatter Type Method, Strangeness in Sentences CAPTCHA, and 3D CAPTCHA.
Section 6: CAPTCHA Based on Image
Description 6: This section discusses different image-based CAPTCHAs, such as Bongo, Pix CAPTCHA, Hand-Written CAPTCHA Method, Implicit CAPTCHA Method, Drawing CAPTCHA Method, CAPTCHA Systems for Nintendo, CAPTCHAs for Children, Collage CAPTCHA, Advanced Collage CAPTCHA, and Online Collage CAPTCHA.
Section 7: CAPTCHA Based on Audio
Description 7: This section covers audio-based CAPTCHAs, detailing methods like Text-To-Speech conversion and spoken CAPTCHAs for blind users.
Section 8: CAPTCHA Based on Video
Description 8: This section examines video-based CAPTCHAs, specifically the Motion CAPTCHAs method.
Section 9: CAPTCHA Based on Puzzle
Description 9: This section describes puzzle-based CAPTCHAs and the different methods used, such as Multiple SEIMCHA.
Section 10: Strength and Weakness of CAPTCHA Based on Image
Description 10: This section discusses the strengths and weaknesses of image-based CAPTCHAs, highlighting issues like user accessibility and bot susceptibility.
Section 11: Strength and Weakness of CAPTCHA Based on Audio
Description 11: This section evaluates the strengths and weaknesses of audio-based CAPTCHAs, considering aspects like accessibility for visually impaired users and potential user confusion.
Section 12: Strength and Weakness of CAPTCHA Based on Video
Description 12: This section reviews the strengths and weaknesses of video-based CAPTCHAs, addressing concerns such as download issues and language barriers.
Section 13: Strength and Weakness of CAPTCHA Based on Puzzle
Description 13: This section analyzes the strengths and weaknesses of puzzle-based CAPTCHAs, including user experience and accessibility challenges.
Section 14: Conclusions
Description 14: This section summarizes the significance of CAPTCHAs in web security, reviews the discussed classification methods, and suggests developing multi-linguistic CAPTCHAs. |
Intelligent Evacuation Management Systems: A Review | 18 | ---
paper_title: Collecting Data on Crowds and Rallies: A New Method of Stationary Sampling
paper_content:
This paper proposes a field procedure for collecting data at stationary mass rallies. Noting the research gap in adequate data about ongoing gatherings, we present a set of techniques summarized under the name zone-sector strategies. We emphasize area sampling, dividing the crowd into zones and sectors, and collecting data during stationary phases of assemblages. We also employ two-member interview teams, in an effort to collect reliable information about attitudinal and nonvisible characteristics of participants during the demonstration itself. Methods of testing reliability and validity of information are presented. Data from four political rallies exemplify the techniques and basically support the validity of the zone-sector approach. We conclude by stating possible uses and advantages of the methods. Social researchers have shown interest in the difficult problem of collecting data at sporadic demonstrations and other gatherings (e.g., Evans; Milgram and Toch). Some have noted the general lack of good data on crowd behavior (Berk: a, b). Others have pointed out the difficulties inherent in research on crowds and the pitfalls of the techniques most commonly used, such as sideline observation and retrospective questioning (e.g., Couch; Fisher; McPhail, a). Recent attempts to gather data have stressed primarily the stationary phase of crowds or assemblages. They have paid attention to improving observation. Fisher suggested a dramaturgical framework for such observation (see Ponting). Others (McPhail, b; McPhail and Pickens) have been developing a photographic method for observing crowd processes, with emphasis on at least momentary participant alignment. Jacobs has figured out how to calculate the size of a relatively stationary crowd, again using photographic observation. While these observational techniques and recent general guidelines for research (e.g., Lofland; Quarantelli and Dynes; Schatzman and Strauss) have advanced the methods of gathering data as collective episodes occur, new techniques could still be profitable. The lack of data about crowds themselves, mentioned by Berk (b, 15), persists.
---
paper_title: Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments
paper_content:
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set-up in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the rest of the items in the robot's surroundings. All objects around the robot, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.
---
paper_title: Urban Computing: Concepts, Methodologies, and Applications
paper_content:
Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer science meets conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer science. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social applications, the economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies needed in urban computing into four categories, covering urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somewhat missing in the community.
---
paper_title: A Novel Wireless Sensor Network Architecture for Crowd Disaster Mitigation
paper_content:
Disasters arising from the dynamic movement of large, uncontrollable crowds are ever increasing. The inherent real-time dynamics of a crowd need to be tightly monitored and alerted on to avoid such disasters. Most existing crowd monitoring systems are difficult to deploy and maintain, and depend on single points of failure. This work proposes a novel network architecture based on the key technologies of wireless sensor networks and mobile computing for effectively predicting the causes of crowd disasters, particularly stampedes, and thereby alerting the crowd-control station to take appropriate action in time. In the currently implemented version of the proposed architecture, smartphones act as wireless sensor nodes that estimate the probability of a stampede by fusing and analyzing data from embedded sensors such as tri-axial accelerometers, gyroscopes, GPS, and light sensors. Implementing the proposed architecture on smartphones provides lightweight, easy-to-deploy, context-aware wireless services for effective crowd disaster mitigation.
---
paper_title: Capturing crowd dynamics at large scale events using participatory GPS-localization
paper_content:
Large-scale festivals with a multitude of stages, food stands, and attractions require a complex perimeter design and program planning in order to manage the mobility of crowds as a controlled process. Errors in the planning phase can cause unexpected crowd dynamics and lead to stampedes with lethal consequences. We deployed an official app for Zuri Fascht 2013, the largest Swiss event, over a period of three days. The app offered information about the festival and featured background localization, allowing us to continuously collect visitor positions. With 56,000 app downloads and 28,000 users contributing 25M location updates in total, we obtained a large-scale dataset. By aggregating location points, complex crowd dynamics can be captured over the entire festival. In this paper we present the data collection for Zuri Fascht 2013 and best practices for acquiring as many contributing users as possible at such an event. Furthermore, we show the potential of aggregated location data and visualize relevant parameters that can serve as a tool for the analysis and planning of program and perimeter design.
---
paper_title: The use of Bluetooth for analysing spatiotemporal dynamics of human movement at mass events: a case study of the Ghent Festivities.
paper_content:
Abstract In this paper, proximity-based Bluetooth tracking is postulated as an efficient and effective methodology for analysing the complex spatiotemporal dynamics of visitor movements at mass events. A case study of the Ghent Festivities event (1.5 million visitors over 10 days) is described in detail and preliminary results are shown to give an indication of the added value of the methodology for stakeholders of the event. By covering 22 locations in the study area with Bluetooth scanners, we were able to extract 152,487 trajectories generated by 80,828 detected visitors. Apart from generating clear statistics such as visitor counts, the share of returning visitors, and visitor flow maps, the analyses also reveal the complex nature of this event by hinting at the existence of several mutually different visitor profiles. We conclude by arguing why Bluetooth tracking offers significant advantages for tracking mass event visitors with respect to other and more prominent technologies, and outline some of its remaining deficiencies.
---
paper_title: Mobile Mapping of Sporting Event Spectators Using Bluetooth Sensors: Tour of Flanders 2011
paper_content:
Accurate spatiotemporal information on crowds is a necessity for a better management in general and for the mitigation of potential security risks. The large numbers of individuals involved and their mobility, however, make generation of this information non-trivial. This paper proposes a novel methodology to estimate and map crowd sizes using mobile Bluetooth sensors and examines to what extent this methodology represents a valuable alternative to existing traditional crowd density estimation methods. The proposed methodology is applied in a unique case study that uses Bluetooth technology for the mobile mapping of spectators of the Tour of Flanders 2011 road cycling race. The locations of nearly 16,000 cell phones of spectators along the race course were registered and detailed views of the spatiotemporal distribution of the crowd were generated. Comparison with visual head counts from camera footage delivered a detection ratio of 13.0 ± 2.3%, making it possible to estimate the crowd size. To our knowledge, this is the first study that uses mobile Bluetooth sensors to count and map a crowd over space and time.
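A small illustration of how the reported detection ratio (13.0 ± 2.3%) turns Bluetooth device counts into crowd-size estimates; the device count below is a made-up number.

```python
def estimate_crowd(detected_devices: int, ratio: float = 0.130, ratio_err: float = 0.023):
    """Scale the number of detected Bluetooth devices by the measured detection ratio."""
    estimate = detected_devices / ratio
    low, high = detected_devices / (ratio + ratio_err), detected_devices / (ratio - ratio_err)
    return estimate, (low, high)

# Hypothetical example: 1,560 unique devices registered by one roadside scanner.
est, (lo, hi) = estimate_crowd(1560)
print(f"~{est:.0f} spectators (plausible range {lo:.0f}-{hi:.0f})")
```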
---
paper_title: Bluetooth based collaborative crowd density estimation with mobile phones
paper_content:
We present a technique for estimating crowd density by using a mobile phone to scan the environment for Bluetooth devices. The paper builds on previous work directed to use Bluetooth scans to analyze social context and extends it with more advanced features, leveraging collaboration between close by devices, and the use of relative features that do not directly depend on the absolute number of devices in the environment. The method is evaluated on a data set from an experiment at the public viewing event in Kaiserslautern during the European soccer championship showing over 75% recognition accuracy on seven discrete classes.
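A minimal sketch of the relative-feature idea described above: the current scan's device count is normalized by a running baseline so the feature does not depend directly on the absolute number of devices; the class boundaries are illustrative assumptions, not the paper's trained classifier.

```python
from collections import deque

class RelativeDensityEstimator:
    """Classify crowd density from Bluetooth scan counts using a relative feature."""

    def __init__(self, window=20):
        self.history = deque(maxlen=window)   # recent per-scan device counts

    def update(self, device_count):
        self.history.append(device_count)
        baseline = sum(self.history) / len(self.history)
        relative = device_count / baseline if baseline else 1.0
        # Illustrative mapping to discrete classes (the paper distinguished seven).
        if relative < 0.8:
            return "sparser than usual"
        if relative < 1.2:
            return "typical"
        return "denser than usual"

estimator = RelativeDensityEstimator()
for count in [12, 14, 13, 25, 40]:            # hypothetical scan results
    print(count, "->", estimator.update(count))
```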
---
paper_title: An Efficient Sequential Approach to Tracking Multiple Objects Through Crowds for Real-Time Intelligent CCTV Systems
paper_content:
Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
---
paper_title: Floor Fields for Tracking in High Density Crowd Scenes
paper_content:
This paper presents an algorithm for tracking individual targets in high density crowd scenes containing hundreds of people. Tracking in such a scene is extremely challenging due to the small number of pixels on the target, appearance ambiguity resulting from the dense packing, and severe inter-object occlusions. The novel tracking algorithm, which is outlined in this paper, will overcome these challenges using a scene structure based force model. In this force model an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in the scene. The key ingredients of the force model are three floor fields, which are inspired by the research in the field of evacuation dynamics, namely Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of move from one location to another by converting the long-range forces into local ones. The SFF specifies regions of the scene which are attractive in nature (e.g. an exit location). The DFF specifies the immediate behavior of the crowd in the vicinity of the individual being tracked. The BFF specifies influences exhibited by the barriers in the scene (e.g. walls, no-go areas). By combining cues from all three fields with the available appearance information, we track individual targets in high density crowds.
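A minimal sketch of how the three floor fields can be combined into move probabilities over a target's neighbourhood, following the usual floor-field formulation p ∝ exp(k_S·S + k_D·D)·B; the coupling constants and example field values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def move_probabilities(S, D, B, k_S=2.0, k_D=1.0):
    """Combine static (S), dynamic (D) and boundary (B) floor fields over a target's
    neighbourhood into a normalized map of move probabilities.

    S, D, B are arrays of the same shape (e.g. a 3x3 neighbourhood); B is 0 for
    forbidden cells (walls, no-go areas) and 1 otherwise.
    """
    weights = np.exp(k_S * np.asarray(S) + k_D * np.asarray(D)) * np.asarray(B)
    return weights / weights.sum()

# Illustrative 3x3 neighbourhood: exit attraction increases to the right, wall on the left.
S = np.array([[0.0, 0.2, 0.4], [0.0, 0.2, 0.6], [0.0, 0.2, 0.4]])  # static field
D = np.zeros((3, 3))                                               # no recent traffic
B = np.array([[0, 1, 1], [0, 1, 1], [0, 1, 1]], dtype=float)       # left column blocked
print(move_probabilities(S, D, B))
```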
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
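A minimal sketch of a HOG-plus-linear-SVM pedestrian detector using OpenCV's built-in people model; it illustrates the descriptor/classifier pipeline described above rather than reproducing the paper's own training setup, and the input path is an assumption.

```python
import cv2

# HOG descriptor paired with OpenCV's pre-trained linear SVM for pedestrians.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("frame.jpg")  # hypothetical input frame
if image is None:
    raise SystemExit("could not read the input image")

# Sliding-window detection over an image pyramid; returns boxes and SVM scores.
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, weights):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", image)
```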
---
paper_title: Smart Objects Identification System for Robotic Surveillance
paper_content:
Video surveillance is an active research topic in computer vision. In this paper, a human and car identification technique suitable for real-time video surveillance systems is presented. The proposed technique includes background subtraction, foreground segmentation, shadow removal, feature extraction, and classification. Feature extraction of the segmented foreground objects is performed via a new set of statistics-based affine moment invariants, which are used to identify humans or cars. When partial occlusion occurs, although features of the full body cannot be extracted, our technique extracts features of the head-shoulder region; it can identify a human from the head-shoulder region under up to 60%-70% occlusion. Thus, it classifies better and mitigates the problems that arise when humans are easily occluded in practical applications. The whole system works at approximately 16-29 fps and is thus suitable for real-time applications. The accuracy of the proposed technique in identifying humans is very good at 98.33%, while for car identification the accuracy is also good at 94.41%. The overall accuracy for identifying humans and cars is 98.04%. The experimental results show that this method is effective and has strong robustness.
---
paper_title: Human identification system based on moment invariant features
paper_content:
Video surveillance is an active research topic in computer vision. Recent research in video surveillance systems has shown an increasing focus on creating reliable systems that use computationally inexpensive techniques for detecting and observing humans' appearance, movements, and activities. In this paper, we present a human identification technique suitable for video surveillance. The proposed technique includes background subtraction, foreground segmentation, feature extraction, and classification. First, all foreground objects are extracted from the background; then a morphological reconstruction algorithm is applied to recover distorted foreground objects. Feature extraction uses affine moment invariants of the full body and of the head-shoulder region of the extracted foreground objects, and these features are used to identify humans. When partial occlusion occurs, although full-body features cannot be extracted, the head-shoulder features still can be; this gives better classification and mitigates the problems that arise when humans are easily occluded in practical applications. The experimental results show that this method is effective and has strong robustness.
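A minimal sketch of extracting moment-invariant shape features from a binary foreground silhouette; it uses OpenCV's Hu moments as a stand-in for the affine moment invariants used in these papers, and the mask and the downstream classifier are illustrative.

```python
import cv2
import numpy as np

def silhouette_features(mask):
    """Log-scaled Hu moment invariants of a binary foreground mask."""
    moments = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log scaling keeps the invariants in a comparable numeric range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Hypothetical binary mask of an extracted foreground object (an upright ellipse).
mask = np.zeros((120, 60), dtype=np.uint8)
cv2.ellipse(mask, (30, 60), (20, 50), 0, 0, 360, 255, -1)
print(silhouette_features(mask))
```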
---
paper_title: Dynamics of crowd disasters: An empirical study
paper_content:
Many observations of the dynamics of pedestrian crowds, including various self-organization phenomena, have been successfully described by simple many-particle models. For ethical reasons, however, there is a serious lack of experimental data regarding crowd panic. Therefore, we have analyzed video recordings of the crowd disaster in Mina/Makkah during the Hajj in 1426H on 12 January 2006. They reveal two subsequent, sudden transitions from laminar to stop-and-go and "turbulent" flows, which question many previous simulation models. While the transition from laminar to stop-and-go flows supports a recent model of bottleneck flows [D. Helbing, Phys. Rev. Lett. 97, 168001 (2006)], the subsequent transition to turbulent flow is not yet well understood. It is responsible for sudden eruptions of pressure release comparable to earthquakes, which cause sudden displacements and the falling and trampling of people. The insights of this study into the reasons for critical crowd conditions are important for the organization of safer mass events. In particular, they allow one to understand where and when crowd accidents tend to occur. They have also led to organizational changes, which have ensured a safe Hajj in 1427H.
---
paper_title: Hajj crowd management and navigation system: People tracking and location based services via integrated mobile and RFID systems
paper_content:
Yearly there is an influx of over three million Muslims to Makkah, Saudi Arabia, to perform Hajj. As this large group of pilgrims moves between the different religious sites, safety and security become issues of main concern. This research looks into the integration of different mobile technologies to serve the purposes of crowd management, people tracking, and location-based services. It explores a solution to track the movement of pilgrims via RFID technology, with a location-aware mobile solution integrated into it. This will be made available to pilgrims with smartphones to enhance the accuracy and tracking time of the pilgrims and provide them with location-based services for Hajj.
---
paper_title: Pilgrims Tracking and Identification Using RFID Technology
paper_content:
Nowadays there are many problems regarding crowd control and security in holy areas. Especially during pilgrimages, the pilgrimage authorities face many problems regarding crowd control, security, identification of pilgrims, and tracking of pilgrims. India is a multi-religious country with many holy areas, and every year the respective authorities face such problems but are unable to provide adequate facilities. Although solving these problems completely is impossible, they can be reduced to some extent; several technologies have been implemented in Saudi Arabia during the Hajj pilgrimage to reduce these types of problems. Here we propose an architecture that uses RFID technology, dealing with the case of Tirupati pilgrims. Details about every pilgrim are taken at the entrance, and each pilgrim is given an active RFID tag, worn as a wristband, with a specific UID number. Readers read the UID numbers of the tags and pass them to the host server, which updates the location of the particular pilgrim. A LAN is used between the readers and the host servers, an intranet between the PML servers, and the Internet between the gateway PML servers. As an additional feature, a mobile sensor node is placed in every segmented area.
---
paper_title: Tracking human mobility at mass gathering events using WISP
paper_content:
Mass gathering events, where thousands of people meet in a confined area for a defined period of time, pose a severe strain on the safety and security of the gathering crowds. The process of managing and controlling crowd movements has been a challenge. Lack of good management and planning of high crowd density with restricted points of access may end up with tragic disasters and mortalities. With the significant advancement of wireless networks, mobile technologies, and communication protocols, many host organizers are embracing the use of modern technologies to assist in managing, controlling, and tracking crowds to ensure their security and safety. In this paper, we propose a novel framework based on Wireless Identification and Sensing Platform (WISP) for tracking, monitoring, and managing human movement at mass gathering events. Individuals are offered WISP tags that store their basic identity and contact information. WISP readers and writers are positioned at various locations during a mass gathering event to monitor and control crowd movement. A central management system utilizes the collected data to provide assistance to crowd managers and trigger potential alerts.
---
paper_title: Crowd Management with RFID and Wireless Technologies
paper_content:
The recent spread of communicable diseases like swine flu, disasters like stampedes, and ongoing security issues have made the management of large crowded events more critical than ever before. Managing large crowds is a very complex, challenging and costly exercise. Many of the problems encountered in crowd management can be minimized by the use of RFID and other wireless technologies. These technologies are already being used in managing and administering many activities of daily life. However, their effectiveness is yet to be tested for managing dense crowds, which poses a challenge to the industry. The aim of this paper is to provide a management framework for large and dense crowds. The technological framework is analyzed with the help of Hajj and Kumbh case studies. The research is industrial in nature, and we hope that it will help the organizers of crowded events and law and order enforcement agencies.
---
paper_title: RFID Technology and Crowded Event Management
paper_content:
Efficient management of large crowded events is always a challenge, and successful management of such events largely depends on the use of technology. There are many business cases where the use of the latest technology can vastly improve management. In recent times, many types of identification and sensor devices, including RFID tags, have been developed. Such technologies, combined with appropriate backend database systems, can be used to improve crowd and event management. Hajj, an annual pilgrimage to Mecca, is a very large and unique gathering, which attracts millions of pilgrims for two or more weeks. Despite the tremendous advancement and availability of technology, Hajj continues to be managed manually. There are many aspects of Hajj which are worth researching. The aim of this paper is to identify appropriate technologies which can be used to improve the management of large gatherings such as the Hajj and the Kumbh gatherings in India.
---
paper_title: CROWD CONTROL SYSTEM USING IR TRANSMITTER AND RECEIVER
paper_content:
An efficient crowd control system is needed for the safety of lives, property, time and the economy. This paper presents the design and implementation of a low-cost, low-power, reliable, infrared-based intelligent crowd control system. The system contains infrared transmitters and receivers. The basic concept of IR (infrared) obstacle detection is to transmit an IR signal (radiation) in a given direction; the signal is received at the IR receiver when the radiation bounces back from the surface of an object. The system can respond rapidly to violations of the crowd limit. The proposed system provides highly accurate crowd control using infrared communication and achieves high accuracy and efficiency at four-way terminals. In every direction, the road has an IR transmitter-receiver pair placed at a certain distance. When the crowd becomes heavy in one particular direction during an emergency, the system notifies the administrator by sending a message, so the heavy crowd can be routed to another route, preventing a stampede.
---
paper_title: Inferring Crowd Conditions from Pedestrians' Location Traces for Real-Time Crowd Monitoring during City-Scale Mass Gatherings
paper_content:
There is a need for event organizers and emergency response personnel to detect emerging, potentially critical crowd situations at an early stage during city-wide mass gatherings. In this work, we introduce and describe mathematical methods based on pedestrian-behavior models to infer and visualize crowd conditions from pedestrians' GPS location traces. We tested our approach during the 2011 Lord Mayor's Show in London by deploying a system able to infer and visualize crowd density, crowd turbulence, crowd velocity and crowd pressure in real time. To collect location updates from festival visitors, we distributed a mobile phone app that supplies the user with event-related information and periodically logs the device's location. We collected around four million location updates from over 800 visitors. The City of London Police consulted the crowd condition visualization to monitor the event. As an evaluation of the usefulness of our approach, we learned through interviews with police officers that it helps to assess occurring crowd conditions and to spot critical situations faster compared with traditional video-based methods. With that, appropriate measures can be deployed quickly, helping to resolve a critical situation at an early stage.
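A minimal sketch of turning position/velocity samples into two of the crowd indicators named above, using commonly adopted definitions (local density from a Gaussian kernel, crowd "pressure" as local density times local velocity variance); the bandwidth and the simulated samples are illustrative assumptions, not the deployed system's parameters.

```python
import numpy as np

def crowd_indicators(positions, velocities, bandwidth=5.0):
    """Per-sample local density and crowd pressure from location/velocity samples.

    positions: (N, 2) metric coordinates; velocities: (N, 2) in m/s.
    Density uses a Gaussian kernel of the given bandwidth (metres);
    pressure = local density * local velocity variance.
    """
    positions = np.asarray(positions, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))                  # kernel weights, (N, N)
    density = w.sum(1) / (2.0 * np.pi * bandwidth ** 2)       # approx. persons per m^2
    mean_v = (w[..., None] * velocities[None]).sum(1) / w.sum(1, keepdims=True)
    var_v = (w * ((velocities[None] - mean_v[:, None]) ** 2).sum(-1)).sum(1) / w.sum(1)
    return density, density * var_v                           # (density, pressure)

pos = np.random.rand(200, 2) * 50.0      # hypothetical positions in a 50 m x 50 m area
vel = 0.5 * np.random.randn(200, 2)      # hypothetical velocities
density, pressure = crowd_indicators(pos, vel)
print("max density:", density.max(), "max pressure:", pressure.max())
```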
---
paper_title: Probing crowd density through smartphones in city-scale mass gatherings
paper_content:
City-scale mass gatherings attract hundreds of thousands of pedestrians. These pedestrians need to be monitored constantly to detect critical crowd situations at an early stage and to mitigate the risk that situations evolve towards dangerous incidents. Hereby, the crowd density is an important characteristic to assess the criticality of crowd situations.
---
paper_title: Exploring Trampling and Crushing in a Crowd
paper_content:
Using published field data on accidents in crowds involving either the trampling or crushing of pedestrians, a theory has been developed to model similar dangerous situations. Whether a particular situation is fatal is statistical, but general rules may be established. In cases involving trampling, which occurs when pedestrians are moving, it was found that the density of pedestrians is an important isotropic quantity that appears to determine the probability of a fatal accident. Previous studies have also identified the density of a crowd as determining the probability of crushing within a stationary crowd, although different densities are involved in the initiation of these two types of accidents. In a moving crowd, trampling will occur before crushing, but trampling does not occur in a static crowd. The cross-sectional size of pedestrians within the crowd is important in determining whether a particular density is dangerous in both types of accidents. Consequently, comparisons of accidents involving different groups of pedestrians, possibly in different countries, need to be standardized for the characteristics of the group.
---
paper_title: Density, velocity and flow relationships for closely packed crowds
paper_content:
Abstract Work undertaken to quantify the relationships between crowd velocities, flow rates and densities for uni-directional motion is reviewed. Most of the available data has been generated for underground stations in the UK; similar work for commuter stations in Japan is introduced and developed. Maximum observed flow rates from this work are compared with those suggested in the ‘Green Guide’ for the evacuation of sports grounds. The ‘Green Guide’ figures are higher than the maximum values obtained from the work reviewed.
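A small worked example of the density-velocity-flow relationship underlying the comparison above: flow per metre of effective width is the product of crowd density and walking speed; the numbers below are illustrative, not the reviewed measurements.

```python
def flow_rate(density_p_per_m2: float, speed_m_per_s: float) -> float:
    """Uni-directional pedestrian flow per metre of effective width, in persons/min."""
    return density_p_per_m2 * speed_m_per_s * 60.0

# Illustrative points on a speed-density curve: walking speed drops as density rises.
for density, speed in [(1.0, 1.2), (2.0, 0.8), (4.0, 0.4)]:
    print(f"{density:.1f} p/m^2 at {speed:.1f} m/s -> {flow_rate(density, speed):.0f} p/m/min")
```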
---
paper_title: Crowd disasters as systemic failures: analysis of the Love Parade disaster
paper_content:
Each year, crowd disasters happen in different areas of the world. How and why do such disasters happen? Are the fatalities caused by relentless behavior of people or a psychological state of panic that makes the crowd ‘go mad’? Or are they a tragic consequence of a breakdown of coordination? These and other questions are addressed, based on a qualitative analysis of publicly available videos and materials, which document the planning and organization of the Love Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our analysis reveals a number of misunderstandings that have widely spread. We also provide a new perspective on concepts such as ‘intentional pushing’, ‘mass panic’, ‘stampede’, and ‘crowd crushes’. The focus of our analysis is on the contributing causal factors and their mutual interdependencies, not on legal issues or the judgment of personal or institutional responsibilities. Video recordings show that people stumbled and piled up due to a ‘domino effect’, resulting from a phenomenon called ‘crowd turbulence’ or ‘crowd quake’. Crowd quakes are a typical reason for crowd disasters, to be distinguished from crowd disasters resulting from ‘mass panic’ or ‘crowd crushes’. In Duisburg, crowd turbulence was the consequence of amplifying feedback and cascading effects, which are typical for systemic instabilities. Accordingly, things can go terribly wrong in spite of no bad intentions from anyone. Comparing the incident in Duisburg with others, we give recommendations to help prevent future crowd disasters. In particular, we introduce a new scale to assess the criticality of conditions in the crowd. This may allow preventative measures to be taken earlier on. Furthermore, we discuss the merits and limitations of citizen science for public investigation, considering that today, almost every event is recorded and reflected in the World Wide Web.
---
paper_title: The investigation of the Hillsborough disaster by the Health and Safety Executive
paper_content:
This paper describes the work carried out by the Health and Safety Executive in investigating the disaster at the Hillsborough Stadium on 15 April 1989. The individual aspects investigated include examination and testing of crush barriers, collapse load calculations, development of a model to predict crowd pressures and estimation of the number of spectators entering the West Terrace of the ground.
---
paper_title: Crowd Security Detection based on Entropy Model
paper_content:
Identifying terror attacks, illegal public gatherings, and other mass-event risks through cameras is an important concern both in crowd security and in pattern recognition research. This paper provides a physical entropy model to measure the crowd security level. The entropy model is built from individuals' moving velocities and the related probabilities; the individuals are represented by Harris corners in the videos, thus avoiding the time-consuming human recognition task. Simulation and video detection experiments were conducted, verifying that in the disordered state the entropy is higher, while in the ordered state the entropy is much lower, and when crowd security changes suddenly the entropy changes as well. This verified that entropy is an applicable indicator of crowd security. By recognizing entropy mutations, it is possible to automatically detect abnormal crowd behavior and raise a warning alarm.
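A minimal sketch of the entropy indicator described above: velocities of tracked feature points are binned by speed and direction, and the Shannon entropy of the resulting distribution is computed; the corner velocities here are simulated and the binning is an illustrative choice.

```python
import numpy as np

def crowd_entropy(velocities, bins=12):
    """Shannon entropy of the (speed, direction) distribution of tracked points."""
    velocities = np.asarray(velocities, dtype=float)
    speed = np.linalg.norm(velocities, axis=1)
    angle = np.arctan2(velocities[:, 1], velocities[:, 0])
    hist, _, _ = np.histogram2d(speed, angle, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

ordered = np.tile([1.0, 0.0], (300, 1)) + 0.05 * np.random.randn(300, 2)  # coherent flow
disordered = np.random.randn(300, 2)                                      # chaotic motion
print("ordered crowd entropy:   ", crowd_entropy(ordered))
print("disordered crowd entropy:", crowd_entropy(disordered))
```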
---
paper_title: Data-Driven Modeling of Pedestrian Crowds
paper_content:
Numerous crowd disasters occur each year at large gatherings around the world. Unfortunately, the information about the (spatio-temporal) development of these events tend to be qualitative rather than quantitative. Video recordings from the crowd disaster in Mina, Kingdom of Saudi Arabia, on the 12th of January 2006, where hundreds of pilgrims lost their lives during the annual Muslim pilgrimage to Makkah, gave the possibility to scientifically evaluate the dynamics of the crowd. Based on the insights from the analysis of the crowd disaster, new tools and measures to detect and avoid critical crowd conditions have been proposed, and some of them have been implemented in order to reduce the likelihood of similar disasters in the future. In order to enable the revision of previous works and the analysis of the crowd disaster mentioned above, algorithms used for video-tracking have been introduced. The novelty of this work concerns not only the algorithms themselves, but also the uniqueness and quantity of data on which the algorithms have been validated and calibrated.
---
paper_title: Social identification moderates the effect of crowd density on safety at the Hajj
paper_content:
Crowd safety is a major concern for those attending and managing mass gatherings, such as the annual Hajj or pilgrimage to Mecca (also called Makkah). One threat to crowd safety at such events is crowd density. However, recent research also suggests that psychological membership of crowds can have positive benefits. We tested the hypothesis that the effect of density on safety might vary depending on whether there is shared social identification in the crowd. We surveyed 1,194 pilgrims at the Holy Mosque, Mecca, during the 2012 Hajj. Analysis of the data showed that the negative effect of crowd density on reported safety was moderated by social identification with the crowd. Whereas low identifiers reported reduced safety with greater crowd density, high identifiers reported increased safety with greater crowd density. Mediation analysis suggested that a reason for these moderation effects was the perception that other crowd members were supportive. Differences in reported safety across national groups (Arab countries and Iran compared with the rest) were also explicable in terms of crowd identification and perceived support. These findings support a social identity account of crowd behavior and offer a novel perspective on crowd safety management.
---
paper_title: Abnormal crowd behavior detection using social force model
paper_content:
In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow.
---
paper_title: Loveparade 2010: Automatic video analysis of a crowd disaster
paper_content:
On July 24, 2010, 21 people died and more than 500 were injured in a stampede at the Loveparade, a music festival, in Duisburg, Germany. Although this tragic incident is but one among many terrible crowd disasters that occur during pilgrimage, sports events, or other mass gatherings, it stands out for it has been well documented: there were a total of seven security cameras monitoring the Loveparade and the chain of events that led to disaster was meticulously reconstructed. In this paper, we present an automatic, video-based analysis of the events in Duisburg. While physical models and simulations of human crowd behavior have been reported before, to the best of our knowledge, automatic vision systems that detect congestions and dangerous crowd turbulences in real world settings were not reported yet. Derived from lessons learned from the video footage of the Loveparade, our system is able to detect motion patterns that characterize crowd behavior in stampedes. Based on our analysis, we propose methods for the detection and early warning of dangerous situations during mass events. Since our approach mainly relies on optical flow computations, it runs in real-time and preserves privacy of the people being monitored.
---
paper_title: Crowd Density Estimation Using Texture Analysis and Learning
paper_content:
This paper presents an automatic method to detect abnormal crowd density by using texture analysis and learning, which is very important for the intelligent surveillance system in public places. By using the perspective projection model, a series of multi-resolution image cells are generated to make better density estimation in the crowded scene. The cell size is normalized to obtain a uniform representation of texture features. In order to diminish the instability of texture feature measurements, a technique of searching the extrema in the Harris-Laplacian space is also applied. The texture feature vectors are extracted from each input image cell and the support vector machine (SVM) method is utilized to solve the regression problem of calculating the crowd density. Finally, based on the estimated density vectors, the SVM method is used again to solve the classification problem of detecting abnormal density distribution. Experiments on real crowd videos show the effectiveness of the proposed system.
---
paper_title: Inferring Crowd Conditions from Pedestrians' Location Traces for Real-Time Crowd Monitoring during City-Scale Mass Gatherings
paper_content:
There is a need for event organizers and emergency response personnel to detect emerging, potentially critical crowd situations at an early stage during city-wide mass gatherings. In this work, we introduce and describe mathematical methods based on pedestrian-behavior models to infer and visualize crowd conditions from pedestrians' GPS location traces. We tested our approach during the 2011 Lord Mayor's Show in London by deploying a system able to infer and visualize in real time crowd density, crowd turbulence, crowd velocity and crowd pressure. To collect location updates from festival visitors, a mobile phone app that supplies the user with event-related information and periodically logs the device's location was distributed. We collected around four million location updates from over 800 visitors. The City of London Police consulted the crowd condition visualization to monitor the event. As an evaluation of the usefulness of our approach, we learned through interviews with police officers that our approach helps to assess occurring crowd conditions and to spot critical situations faster compared to traditional video-based methods. With that, appropriate measures can be deployed quickly, helping to resolve a critical situation at an early stage.
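One of the indicators mentioned above, "crowd pressure", is defined in related empirical work as local density multiplied by the local variance of velocity. The sketch below computes it on a simple square grid from position and velocity samples; the grid size and the assumption of GPS-derived velocities are illustrative choices, not the exact model used by the authors.

```python
import numpy as np

def crowd_pressure(positions, velocities, cell=5.0):
    """Crowd pressure per grid cell: local density times local velocity variance.

    positions:  (N, 2) array of x, y coordinates in metres
    velocities: (N, 2) array of instantaneous velocities in m/s
    cell:       grid cell edge length in metres (illustrative choice)
    """
    ij = np.floor(positions / cell).astype(int)
    pressure = {}
    for key in set(map(tuple, ij)):
        mask = np.all(ij == key, axis=1)
        density = mask.sum() / cell ** 2               # persons per square metre
        variance = velocities[mask].var(axis=0).sum()  # total velocity variance
        pressure[key] = density * variance
    return pressure

positions = np.random.uniform(0, 20, (200, 2))
velocities = np.random.normal(0, 0.5, (200, 2))
print(max(crowd_pressure(positions, velocities).values()))
```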
---
paper_title: Continuum modeling of crowd turbulence
paper_content:
With the growth in world population, the density of crowds in public places has been increasing steadily, leading to a higher incidence of crowd disasters at high densities. Recent research suggests that emergent chaotic behavior at high densities, known collectively as crowd turbulence, is to blame. Thus, a deeper understanding of crowd turbulence is needed to facilitate efforts to prevent and plan for chaotic conditions in high-density crowds. However, it has been noted that existing algorithms modeling collision avoidance cannot faithfully simulate crowd turbulence. We hypothesize that simulation of crowd turbulence requires modeling of both collision avoidance and frictional forces arising from pedestrian interactions. Accordingly, we propose a model for turbulent crowd simulation, which incorporates a model for interpersonal stress and acceleration constraints similar to real-world pedestrians. Our simulated results demonstrate a close correspondence with observed metrics for crowd turbulence as measured in known crowd disasters.
---
paper_title: A Novel Wireless Sensor Network Architecture for Crowd Disaster Mitigation
paper_content:
Disasters aroused due to dynamic movement of large, uncontrollable crowds are ever increasing. The inherent real-time dynamics of crowd need to be tightly monitored and alerted to avoid such disasters. Most of the existing crowd monitoring systems is difficult to deploy, maintain, and dependent on single component failure. This research work proposes novel network architecture based on the key technologies of wireless sensor network and mobile computing for the effective prediction of causes of crowd disaster particularly stampedes in the crowd and thereby alerting the crowd controlling station to take appropriate actions in time. In the current implemented version of the proposed architecture, the smart phones act as wireless sensor nodes to estimate the probability of occurrence of stampede using data fusion and analysis of embedded sensors such as tri-axial accelerometers, gyroscopes, GPS, light sensors etc. The implementation of the proposed architecture in smart phones provides light weight, easy to deploy, context aware wireless services for effective crowd disaster mitigation.
---
paper_title: An Overview on Soft Computing Techniques
paper_content:
Soft computing is a term applied to a field within computer science which is characterized by the use of inexact solutions to computationally-hard tasks such as the solution of NP-complete problems, for which an exact solution cannot be derived in polynomial time. This paper briefly explains soft computing and its components, and also explains the need for, use of, and efficiency of these components. Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind. The guiding principle of soft computing is: exploit the tolerance for imprecision, uncertainty, partial truth, and approximation to achieve tractability, robustness and low solution cost.
---
paper_title: An aggregate approach to model evacuee behavior for no-notice evacuation operations
paper_content:
This study proposes an aggregate approach to model evacuee behavior in the context of no-notice evacuation operations. It develops aggregate behavior models for evacuation decision and evacuation route choice to support information-based control for the real-time stage-based routing of individuals in the affected areas. The models employ the mixed logit structure to account for the heterogeneity across the evacuees. In addition, due to the subjectivity involved in the perception and interpretation of the ambient situation and the information received, relevant fuzzy logic variables are incorporated within the mixed logit structure to capture these characteristics. Evacuation can entail emergent behavioral processes as the problem is characterized by a potential threat from the extreme event, time pressure, and herding mentality. Simulation experiments are conducted for a hypothetical terror attack to analyze the models’ ability to capture the evacuation-related behavior at an aggregate level. The results illustrate the value of using a mixed logit structure when heterogeneity is pronounced. They further highlight the benefits of incorporating fuzzy logic to enhance the prediction accuracy in the presence of subjective and linguistic elements in the problem.
---
paper_title: Simulation of agent behavior in a goal finding application
paper_content:
There has been increasing interest in simulation of agent behavior in the context of agent based modeling. This paper concentrates on the use of fuzzy logic in simulating agent based behavior. Our approach is decomposed into two levels. The higher level addresses the agent's goal finding behavior and lower level addresses collision detection and avoidance behavior. Our approach focuses on modeling individual behavior as well as group behavior. Individuals constantly adjust their behavior according to the dynamic factors in the environment. We hypothesize that people with similar characteristics such as race, age, and gender are more likely to collaborate with each other in order to reach a goal. This paper describes an agent based system implementation for crowd behavior. The simulation evaluates different evacuation and damage control decision making strategies beforehand, which allows the execution of the most effective evacuation scheme during real-time emergency scenario.
---
paper_title: Research on pedestrian evacuation simulation based on fuzzy logic
paper_content:
Pedestrian evacuation simulation has many important applications. For example, based on the pedestrian evacuation simulation results one can predict the evacuation time and evaluate the performances of pedestrian facilities. This paper derives a mathematical model for the pedestrian evacuation based on fuzzy logic. In this fuzzy model for the pedestrian evacuation, each pedestrian?s behavior is linguistically depicted by some simple qualitative physical and psychological laws. These qualitative laws are then transformed into a quantitative mathematical model using a fuzzy modeling approach. The evacuation simulation results based on the proposed fuzzy model are consistent with the observed phenomena for the pedestrian evacuation. Moreover, the simulations show that the parameters involved in the proposed fuzzy model have prominent influences on the evacuation time and dynamical features of pedestrian evacuation.
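To illustrate the general idea of turning qualitative rules into a quantitative model, the sketch below uses hand-rolled triangular membership functions that map perceived local density to a walking-speed factor; the rule base, membership shapes and defuzzification by weighted average are assumptions made purely for illustration, not the rules from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def speed_factor(density):
    """Fuzzy rules: low density -> fast, medium -> slower, high -> very slow.

    density is in persons per square metre; the output multiplies the free walking speed.
    """
    memberships = [tri(density, -1.0, 0.0, 2.0),   # "low"
                   tri(density, 1.0, 3.0, 5.0),    # "medium"
                   tri(density, 3.5, 6.0, 9.0)]    # "high"
    outputs = [1.0, 0.5, 0.1]                      # speed multiplier per rule
    total = sum(memberships)
    if total == 0:
        return 1.0
    return sum(m * o for m, o in zip(memberships, outputs)) / total

print(speed_factor(0.5), speed_factor(3.0), speed_factor(5.5))
```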
---
paper_title: Social force model for pedestrian dynamics
paper_content:
It is suggested that the motion of pedestrians can be described as if they would be subject to "social forces." These "forces" are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion are nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically.
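The force terms listed above can be written compactly. Below is a minimal sketch of the model's structure: a driving term relaxing the velocity toward the desired speed and direction, plus exponentially decaying repulsion from other pedestrians. The parameter values are illustrative defaults rather than the calibrated values from the paper, and attractive effects and walls are omitted for brevity.

```python
import numpy as np

def social_force(pos, vel, desired_dir, others, v0=1.34, tau=0.5, A=2.0, B=0.3):
    """Total force on one pedestrian in a simplified social force model.

    pos, vel:     position and velocity of the pedestrian (2-vectors)
    desired_dir:  unit vector pointing toward the pedestrian's goal
    others:       (N, 2) array with the positions of other pedestrians
    v0, tau:      desired speed and relaxation time
    A, B:         repulsion strength and range (illustrative values)
    """
    driving = (v0 * desired_dir - vel) / tau            # relax toward desired velocity
    repulsion = np.zeros(2)
    for q in others:
        d = pos - q
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            repulsion += A * np.exp(-dist / B) * d / dist   # push away from neighbour
    return driving + repulsion

print(social_force(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                   np.array([1.0, 0.0]), np.array([[0.5, 0.2]])))
```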
---
paper_title: An intelligence-based route choice model for pedestrian flow in a transportation station
paper_content:
We developed an artificial neural network (ANN) model to mimic route choice behaviour in crowds which achieved a prediction accuracy of 86%. We demonstrated the feasibility of applying the ANN approach to decision-making in pedestrian flows. Both safety and comfort level inside the stations are potentially improved. This model is useful for both station design and daily operation, as escalators are a critical transportation facility in transportation stations. This ANN approach provides a rapid method for engineers to estimate the loadings of escalators, even for new stations, so that they can optimise their utilisation to achieve maximum efficiency. This study proposes a method that uses an artificial neural network (ANN) to mimic human decision-making about route choice in a crowded transportation station. Although ANN models have been developed rapidly and widely adopted in various fields in the last three decades, their application to predict human decision-making in pedestrian flows is limited, because the video clip technology used to collect pedestrian movement data in crowded conditions is still primitive. Data collection must be carried out manually or semi-manually, which requires extensive resources and is time consuming. This study adopts a semi-manual approach to extract data from video clips to capture the route choice behaviour of travellers, and then applies an ANN to mimic such decision-making. A prediction accuracy of 86% (ANN model with ensemble approach) is achieved, which demonstrates the feasibility of applying the ANN approach to decision-making in pedestrian flows.
---
paper_title: Intelligent agents in a goal finding application for homeland security
paper_content:
Orderly and efficient evacuations are the key in saving lives and are of considerable interest to homeland security. There has been a considerable interest in simulation of intelligent agent behavior in the context of agent based modeling. This paper combines Genetic Algorithm (GA) with Neural Networks (NN) to explore how intelligent agents can look for exits during an evacuation. The agents have the capability to adapt their behavior in the environment and formulate their response by learning from the environment. Our approach focuses on modeling individual behavior as well as group behavior. Individuals constantly adjust their behavior according to the dynamic factors in the environment. We are developing crowd modeling and emergency behavior modeling capability in a goal finding application. This paper examines an intelligent agent-based evacuation that can help plan emergency evacuations, run numerous event-driven evacuation scenarios, support research in the areas of human behavior, and model the movement of responders and security personnel. The result of this simulation was very promising as we are able to observe the agents use GA and NN to learn how to find the various exits.
---
paper_title: An Artificial Neural-network Based Predictive Model for Pre-evacuation Human Response in Domestic Building Fire
paper_content:
The post-1993 WTC attack study (Proulx and Fahy, In: Proceedings of ASIAFLAM’95—An International Conference on Fire Science and Engineering, Hong Kong, 1995, pp 199–210) revealed that occupants took 1–3 h to leave the 110-storey buildings, and the pre-movement reactions could account for over two-thirds of the overall evacuation time. This indicates that a thorough understanding of the pre-evacuation behavioral response of people under fire situations is of prime importance to fire safety design in buildings, especially for complex and ultra high-rise buildings. In view of the stochastic (the positions of the occupants) and fuzzy (uncertainty) nature of human behavior (Fraser-Mitchell, Fire Mater 23:349–355, 1999), conventional linear and polynomial predictive methods may not satisfactorily predict the people’s response. An alternative approach, the Adaptive Network based Fuzzy Inference System (ANFIS), is proposed to predict the pre-evacuation behavior of people; it is an artificial neural network (ANN) based predictive model that integrates fuzzy logic (if-then rules) and neural networks (based on back propagation learning procedures). The ANFIS learning architecture can be trained with structured human behavioral data and different fuzzy human decision rules. Its applicability in simulating human behavior in fire is worth exploring.
---
paper_title: Intelligent Exit-Selection Behaviors during a Room Evacuation
paper_content:
A modified version of the existing cellular automata (CA) model is proposed to simulate an evacuation procedure in a classroom with and without obstacles. Based on the numerous literature on the implementation of CA in modeling evacuation motions, it is notable that most of the published studies do not take into account the pedestrian's ability to select the exit route in their models. To resolve these issues, we develop a CA model incorporating a probabilistic neural network for determining the decision-making ability of the pedestrians, and simulate an exit-selection phenomenon in the simulation. Intelligent exit-selection behavior is observed in our model. From the simulation results, it is observed that occupants tend to select the exit closest to them when the density is low, but if the density is high they will go to an alternative exit so as to avoid a long wait. This reflects the fact that occupants may not fully utilize multiple exits during evacuation. The improvement in our proposed model is valuable for further study and for upgrading the safety aspects of building designs.
---
paper_title: A Bayesian network model for evacuation time analysis during a ship fire
paper_content:
We present an evacuation model for ships while a fire happens onboard. The model is designed by utilizing Bayesian networks (BN) and then simulated in GeNIe software. In our proposed model, the most important factors that have significant influence on a rescue process and evacuation time are identified and analyzed. By applying the probability distribution of the considered factors collected from the literature including IMO, real empirical data and practical experiences, the trend of the rescue process and evacuation time can be evaluated and predicted using the proposed model. The results of this paper help understanding about possible consequences of influential factors on the security of the ship and help to avoid exceeding evacuation time during a ship fire.
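As a concrete, deliberately tiny illustration of the type of calculation such a Bayesian network supports, the sketch below marginalizes over two binary parent factors (fire severity and crew assistance) to obtain the probability that the evacuation exceeds the allowed time; the structure and all probabilities are invented for illustration and are not taken from the paper or from IMO data.

```python
from itertools import product

p_severe = 0.3                       # P(fire is severe) -- invented
p_assist = 0.7                       # P(crew assistance is effective) -- invented
p_exceed = {                         # P(evacuation exceeds limit | severe, assist)
    (True, True): 0.20, (True, False): 0.60,
    (False, True): 0.05, (False, False): 0.25,
}

def prob_exceed():
    """Marginal probability that evacuation exceeds the time limit."""
    total = 0.0
    for severe, assist in product([True, False], repeat=2):
        prior = (p_severe if severe else 1 - p_severe) * \
                (p_assist if assist else 1 - p_assist)
        total += prior * p_exceed[(severe, assist)]
    return total

print(round(prob_exceed(), 3))       # 0.173 with the numbers above
```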
---
paper_title: Dynamic decision making for dam-break emergency management – Part 1: Theoretical framework
paper_content:
Abstract. An evacuation decision for dam breaks is a very serious issue. A late decision may lead to loss of lives and properties, but a very early evacuation will incur unnecessary expenses. This paper presents a risk-based framework of dynamic decision making for dam-break emergency management (DYDEM). The dam-break emergency management in both time scale and space scale is introduced first to define the dynamic decision problem. The probability of dam failure is taken as a stochastic process and estimated using a time-series analysis method. The flood consequences are taken as functions of warning time and evaluated with a human risk analysis model (HURAM) based on Bayesian networks. A decision criterion is suggested to decide whether to evacuate the population at risk (PAR) or to delay the decision. The optimum time for evacuating the PAR is obtained by minimizing the expected total loss, which integrates the time-related probabilities and flood consequences. When a delayed decision is chosen, the decision making can be updated with available new information. A specific dam-break case study is presented in a companion paper to illustrate the application of this framework to complex dam-breaching problems.
---
paper_title: Modelling dwelling fire development and occupancy escape using Bayesian network
paper_content:
Abstract The concept of probabilistic modelling under uncertainty within the context of fire and rescue through the application of the Bayesian network (BN) technique is presented in this paper. BNs are capable of dealing with uncertainty in data, a common issue within fire incidents, and can be adapted to represent various fire scenarios. A BN model has been built to study fire development within generic dwellings up to an advanced fire situation. The model is presented in two parts: part I deals with “initial fire development” and part II “occupant response and further fire development”. Likelihoods are assessed for states of human reaction, fire growth, and occupant survival. Case studies demonstrate how the model functions and provide evidence that it could be used for safety assessment, planning and accident investigation. Discussion is undertaken on how the model could be further developed to investigate specific areas of interest affecting dwelling fire outcomes.
---
paper_title: Forecasting model for pedestrian distribution under emergency evacuation
paper_content:
Pedestrian distribution forecasting on the road network is developed to support evacuation decision-making. The numbers of evacuees distributed on each road link are stochastic, uncertain and multi-dependent. Therefore, a Gaussian Bayesian network (GBN) based forecasting model is presented, considering the pedestrian flow characteristics, optimization of the evacuation route and evacuation decision-making. In the forecasting model, the route choice probabilities obtained by minimizing evacuation time are applied to correct the regression coefficients of the GBN. Finally, an example is provided to illustrate the usefulness of this model. Research shows that this model not only reflects the complexity and dynamics of the evacuation process but also provides accurate forecasting of the time development of the pedestrians distributed in the evacuation space.
---
paper_title: Modeling panic in ship fire evacuation using dynamic Bayesian network
paper_content:
In this paper, we model passengers' panic during a ship fire by considering its most influential factors. The qualitative factors are quantified, allowing us to study passengers' panic in a probabilistic manner. Considering the time-varying nature of these factors, we update the state of the factors over time. We utilize a dynamic Bayesian network (DBN) to model passengers' panic, which allows us to represent probabilistic and dynamic elements. By defining several worst-case scenarios and running the simulations, we demonstrate how panic can dynamically vary from passenger to passenger with different physical (mental) conditions. Furthermore, we show how this panic can threaten passengers' health during the evacuation process. The impact of panic on the evacuation time is also investigated. The results in this paper are valuable inputs for rescue teams and marine organizations that aim to mitigate property damages and human fatalities.
---
paper_title: RETRACTED: Emergency Response Plans Optimization for Unexpected Environmental Pollution Incidents using an Open Space Emergency Evacuation Model
paper_content:
The objective of this research is to model the crowd evacuation process and provide dynamic spatial–temporal distribution information, in order to minimize the exposure risk (death or casualties) implied by a specific evacuation policy for a population exposed to adverse effects during accidents. An open space evacuation model based on a stochastic Markov process is introduced to estimate the spatial–temporal distribution of the evacuees during evacuation, covering the estimation of affected areas, space discretization, node and link creation, etc. Then, based on the solution of the Markov process, which gives the expected distribution of the evacuees over the nodes of the area as a function of time, and on the dose–response relationship, the health effects (e.g., death and several kinds of injuries) suffered during the evacuation process can be calculated, so that the accident's health consequences can be determined. Finally, different emergency response policies can be evaluated with their corresponding health consequences, so that the emergency policy can be optimized.
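A bare-bones sketch of the kind of Markov-chain computation described above: the expected number of evacuees per node is propagated forward in time with a transition matrix whose last state is the absorbing safe zone. The three-node layout and the transition probabilities are invented for illustration only.

```python
import numpy as np

# Per-minute transition probabilities for an illustrative 3-node space;
# node 2 is the absorbing safe zone.
P = np.array([[0.6, 0.3, 0.1],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

x = np.array([300.0, 200.0, 0.0])    # expected evacuees per node at t = 0
for t in range(1, 6):
    x = x @ P                         # expected spatial distribution after t minutes
    print(t, np.round(x, 1))
```

Combining such a time-dependent distribution with a dose–response relationship then yields the expected health consequences of a given evacuation policy.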
---
paper_title: Architectural space planning using Genetic Algorithms
paper_content:
This paper presents a system which can find out space planning for a single flat, arrangement of several flats on a single floor and extend the design for each floor and find out collective plan for a multi-storey apartment building. At each level it generates a plan which supports quick evacuation in case of adversity. Starting with design specifications in terms of constraints over spaces, use of Genetic Algorithm leads to a complete set of consistent conceptual design solutions named topological solutions. These topological solutions which do not presume any precise definitive dimension correspond to the sketching step that an architect carries out from the design specifications on a preliminary design phase in architecture. Further, door placement algorithm has been proposed with modifications in existing Dijkstra's algorithm and dimensions analysis is carried out for the designs selected by the user. If the user wishes to generate a plan for many floors, inputs are taken accordingly and plan is generated which is efficient in terms of evacuation.
---
paper_title: Method to determine the locations of tsunami vertical evacuation shelters
paper_content:
The 2004 Indian Ocean tsunami and the 2011 Great Tohoku Japan earthquake and tsunami focused a great deal of the world’s attention on the effect of tsunamis on buildings and infrastructure. When a tsunami impacts structures in a coastal community, the structures are often not strong enough to withstand the forces and may collapse. Therefore, to maximize the survival probability, people evacuate to higher ground or move outside the inundation zone. However, this is not always possible because of short warning times for near-field tsunamis. Thus, sheltering-in-place or “sheltering-near-place” using vertical evacuation should be considered as an alternative approach to lateral evacuation from a tsunami inundation zone. This paper presents the method and results of a study to develop and demonstrate a methodology that applied genetic optimization to determine optimal tsunami shelter locations with the goal of reducing evacuation time, thereby maximizing the probability of survival for the population in a coastal community. The City of Cannon Beach, Oregon, USA, was used as an illustrative example. Several cases were investigated ranging from a single shelter to multiple shelters with locations of high elevation already in place near the city. The method can provide decision-support for the determination of locations for tsunami vertical evacuation shelters. The optimum location of the shelter(s), which was found to vary depending on the number of shelters considered, can reduce the evacuation time significantly, thereby reducing the number of fatalities and increasing the safety of a community.
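A bare-bones sketch of the genetic-optimization loop such a study relies on, here selecting k shelter sites from candidate locations so as to minimize the population-weighted distance to the nearest shelter; the fitness definition, Manhattan distances, operators and parameters are simplifications assumed for illustration, not the formulation used for Cannon Beach.

```python
import random

def nearest_dist(cell, shelters):
    return min(abs(cell[0] - s[0]) + abs(cell[1] - s[1]) for s in shelters)

def fitness(shelters, cells):
    """Total population-weighted distance to the nearest shelter (lower is better)."""
    return sum(pop * nearest_dist((x, y), shelters) for x, y, pop in cells)

def ga_shelters(candidates, cells, k=2, pop_size=30, gens=100):
    population = [random.sample(candidates, k) for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=lambda ind: fitness(ind, cells))
        survivors = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = list(dict.fromkeys(a + b))[:k]          # crossover: merge parents
            if random.random() < 0.2:                        # mutation: replace one site
                child[random.randrange(k)] = random.choice(candidates)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: fitness(ind, cells))

random.seed(1)
candidates = [(x, y) for x in range(0, 60, 10) for y in range(0, 60, 10)]
cells = [(random.uniform(0, 50), random.uniform(0, 50), random.randint(10, 100))
         for _ in range(30)]
print(ga_shelters(candidates, cells, k=2))
```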
---
paper_title: Dynamic Differential Evolution for Emergency Evacuation Optimization
paper_content:
Emergency evacuation in public places has become the hot area of research in recent years. Emergency evacuation route assignment is one of the complex dynamic optimization problems in emergency evaluation. This paper proposed the modified dynamic differential evolution algorithm and studied the emergency evacuation, then applied the multi-strategy dynamic differential evolution for emergency evacuation route assignment in public places. We use the Wuhan Sport Center in Wuhan China as the experiment scenario to test the performance of the proposed algorithm. The results show that the proposed algorithm can effectively solve the complex emergency evacuation route assignment problem.
---
paper_title: GP Generation of Pedestrian Behavioral Rules in an Evacuation Model Based on SCA
paper_content:
This paper presents research in the context of pedestrian dynamics based on the Situated Cellular Agent (SCA) approach, a Multi-Agent Systems approach whose roots are in Cellular Automata (CA). The aim of this work is to apply the Genetic Programming (GP) approach, a well-known Machine Learning method belonging to the family of Evolutionary Algorithms, to generate suitable behavioral rules for pedestrians in an evacuation scenario. The main contribution of this work is the design of a test set of GP-generated behaviors to represent basic behavioral models of evacuees populating an only locally known environment, a typical scenario for CA-based models.
---
paper_title: Office layout plan evaluation system using evacuation simulation considering other agents' action
paper_content:
In this paper, we propose an office layout plan evaluation system using evacuation simulation considering other agents' action. The proposed system evaluates office layout plans for polygonal space generated by the office layout support system using genetic algorithm. In the proposed system, the office layout plan is given, and then the agents move under the conditions. Each agent decides the escape route based on the information about office layout, impassable spaces, crowded areas, other agents' action and so on, and goes to the entrance. Based on the behavior of agents, the evaluation on the maximum time for escape, average speed, the number of agents who could not reach entrance and so on are carried out. We carried out a series of computer experiments in order to demonstrate the effectiveness of the proposed system and confirmed that the proposed system can evaluate layout plans.
---
paper_title: Hierarchical multi-objective evacuation routing in stadium using ant colony optimization approach
paper_content:
Evacuation planning is a fundamental requirement to ensure that most people can be evacuated to a safe area when a natural accident or an intentional act happens in a stadium environment. The central challenge in evacuation planning is to determine the optimum evacuation routing to safe areas. We describe the evacuation network within a stadium as a hierarchical directed network. We propose a multi-objective optimization approach to solve the evacuation routing problem on the basis of this hierarchical directed network. This problem involves three objectives that need to be achieved simultaneously: minimization of total evacuation time, minimization of total evacuation distance and minimal cumulative congestion degrees in an evacuation process. To solve this problem, we designed a modified ant colony optimization (ACO) algorithm, implemented it in the MATLAB software environment, and tested it using a stadium at the Wuhan Sports Center in China. We demonstrate that the algorithm can solve the problem, and has a better evacuation performance in terms of organizing evacuees' space–time paths than the ACO algorithm, the kth shortest path algorithm, and the second generation of the non-dominated sorting genetic algorithm, which was used to improve the results from the kth shortest path algorithm.
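A stripped-down sketch of the ant colony mechanics underlying such routing approaches: ants build exit paths on a small corridor graph, and pheromone is reinforced on faster paths. The graph, parameters and single travel-time objective are illustrative simplifications of the hierarchical, multi-objective formulation in the paper.

```python
import random

# Illustrative corridor graph: node -> {neighbour: travel time}
graph = {'stand': {'c1': 3, 'c2': 4}, 'c1': {'exitA': 5, 'c2': 1},
         'c2': {'exitB': 3}, 'exitA': {}, 'exitB': {}}
exits = {'exitA', 'exitB'}
tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per edge

def build_path(alpha=1.0, beta=2.0):
    node, path, cost = 'stand', ['stand'], 0.0
    while node not in exits:
        nbrs = list(graph[node])
        weights = [tau[(node, n)] ** alpha * (1.0 / graph[node][n]) ** beta
                   for n in nbrs]
        nxt = random.choices(nbrs, weights=weights)[0]
        cost += graph[node][nxt]
        path.append(nxt)
        node = nxt
    return path, cost

def aco(iterations=200, ants=10, rho=0.1):
    best = (None, float('inf'))
    for _ in range(iterations):
        for edge in tau:                   # pheromone evaporation
            tau[edge] *= (1 - rho)
        for _ in range(ants):
            path, cost = build_path()
            if cost < best[1]:
                best = (path, cost)
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost  # reinforce faster paths
    return best

print(aco())
```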
---
paper_title: Multi-ant colony system for evacuation routing problem with mixed traffic flow
paper_content:
Evacuation routing problem with mixed traffic flow is complex due to the interaction among different types of evacuees. The positive feedback mechanism of single ant colony system may lead to congestion on some optimum routes. Like different ant colony systems in nature, different components of traffic flow compete and interact with each other during evacuation process. In this paper, an approach based on multi-ant colony system was proposed to tackle evacuation routing problem with mixed traffic flow. Total evacuation time is minimized and traffic load of the whole road network is balanced by this approach. The experimental results show that this approach based on multi-ant colony system can obtain better solutions than single ant colony system and solve mixed traffic flow evacuation problem with reasonable routing plans.
---
paper_title: A research on human emergency evacuation based on revised ACO-CA
paper_content:
In a sudden natural disaster, a large number of people may get detained within a hazardous space. In recent years, the number of such incidents has increased in China, which motivates research on artificial intelligence-based simulations of emergency rescue and evacuation. This paper proposes a methodological approach that combines ant colony optimization and cellular automata, integrating simulation and optimization, to study the complexity and randomness of human behaviors during emergency evacuation and to solve an optimal emergency evacuation planning problem in an emergency shelter. The path residual pheromone and heuristic factors of the ant colony algorithm in this integrated model are treated as personal behavior difference and aggregation, which can be regarded as herd behavior factors and shortest-path-first factors, reflecting the randomness and interaction in the process of population evacuation. By using the ant colony algorithm to calculate the transition probabilities of the interaction among neighboring cells, and updating the pheromone through a taboo list based on the locally optimal path of the ant colony, the cells can finish the simulation of safe evacuation under the principle of optimal transition rules. The method can effectively simulate the delayed population and reproduce the population distribution in the evacuation area when a natural disaster happens. It also offers a scientific reference for research on population emergency evacuation.
---
paper_title: An Adaptive Multi-Objective Artificial Bee Colony with crowding distance mechanism
paper_content:
In this work, we propose an Adaptive Multi-Objective Artificial Bee Colony (A-MOABC) Optimizer that uses a Pareto dominance procedure while taking advantage of a crowding distance and windowing mechanism. The employed bees use an adaptive windowing mechanism to select their own leaders and alter their positions. Onlookers update their positions by using food sources presented by employed bees. The Pareto dominance notion is used to indicate the quality of the food sources. Employed or onlooker bees which find poor-quality food sources turn into scout bees to search other areas. The suggested method uses the crowding distance technique in order to keep diversity in the archive, and adaptively adjusts the limits of objective function values in the archive iteration by iteration. The experimental results indicate that the proposed approach is not only thoroughly competitive with the other algorithms considered in this work but also finds results with greater precision.
---
paper_title: Behavior-Based Simulation of Real-Time Crowd Evacuation
paper_content:
Emergency evacuation has many applications in computer animation, virtual reality, architecture planning, safety science, etc. However, most current methods focus on agent-based modeling and simulation; these simulations cannot fully consider human behavior, and their reliability is questionable. This paper presents a new method to simulate large-scale crowds in real time and verify the evacuation data in complex environments. By analyzing the characteristics of human behavior in emergency conditions, a mixed geometry-based ant colony evacuation model is first proposed. Then, many human behaviors are considered to calculate the best evacuation path, including autonomous avoidance, warning time, and preferential path selection. The experimental results show that it is an effective method to simulate large-scale crowds in real time, because the verification makes the simulation more reliable as well as making human behavior logical and the virtual scene realistic.
---
paper_title: Population Classification in Fire Evacuation: A Multiobjective Particle Swarm Optimization Approach
paper_content:
In an emergency evacuation operation, accurate classification of the evacuee population can provide important information to support the responders in decision making; and therefore, makes a great contribution in protecting the population from potential harm. However, real-world data of fire evacuation is often noisy, incomplete, and inconsistent, and the response time of population classification is very limited. In this paper, we propose an effective multiobjective particle swarm optimization method for population classification in fire evacuation operations, which simultaneously optimizes the precision and recall measures of the classification rules. We design an effective approach for encoding classification rules, and use a comprehensive learning strategy for evolving particles and maintaining diversity of the swarm. Comparative experiments show that the proposed method performs better than some state-of-the-art methods for classification rule mining, especially on the real-world fire evacuation dataset. This paper also reports a successful application of our method in a real-world fire evacuation operation that recently occurred in China. The method can be easily extended to many other multiobjective rule mining problems.
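The core of simultaneously optimizing precision and recall is Pareto dominance between candidate rules. A tiny sketch of that comparison is shown below with invented precision/recall pairs; the swarm dynamics and the comprehensive learning strategy of the paper are not reproduced here.

```python
def dominates(a, b):
    """True if rule a is at least as good as b on every objective and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Invented (precision, recall) pairs for three candidate classification rules
rules = {'r1': (0.90, 0.40), 'r2': (0.70, 0.75), 'r3': (0.65, 0.60)}
pareto_front = [r for r in rules
                if not any(dominates(rules[o], rules[r]) for o in rules if o != r)]
print(pareto_front)   # r3 is dominated by r2; r1 and r2 remain on the front
```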
---
paper_title: Particle swarm and NSGA-II based evacuation simulation and multi-objective optimization
paper_content:
Because of their dense population and complex structure, large public buildings face a unique challenge in developing effective emergency evacuation plans. Due to the large scale and numbers of evacuees in real evacuations, real tests are impractical, so the simulation of evacuation becomes a good choice in planning. Particle Swarm is one of the multi-agent based simulation methods that can simulate complex behaviors of individuals. NSGA-II (Non-dominated Sorting Genetic Algorithm II) is an optimization method for multi-objective optimization problems. In this paper, we propose a novel multi-objective evolutionary algorithm (named PNMO, Particle swarm & NSGA-II based Multi-objective Optimization) which simulates the evacuation process as well as optimizing the generated evacuation plans. The experiments show that this method possesses superior performance in evacuation planning.
---
paper_title: Formative studies for dynamic wayfinding support with in-building situated displays and mobile devices
paper_content:
There is a significant disparity between wayfinding support services available in outdoor and in-building locations. Services such as Google Maps and in-car GPS allow users to examine unknown outdoor locations in advance as well as receive guidance en-route. In contrast, there is relatively little digital technology to support users in complex building architectures, e.g. institution buildings where users are generally limited to using traditional signage or asking for directions at the reception. However, recent advances in pervasive digital display technology are enabling a new range of possibilities and are making this topic increasingly subject to study. In this paper, we describe five formative studies involving 39 participants using situated digital displays, a Person Locator Kiosk, and personal mobile devices. We report our findings by gaining insights and feedback from users in order to develop wayfinding assistance for visitors in an in-building environment.
---
paper_title: Presenting evacuation instructions on mobile devices by means of location-aware 3D virtual environments
paper_content:
Natural and man-made disasters present the need to efficiently and effectively evacuate the people occupying the affected buildings. Providing appropriate evacuation instructions to the occupants of the building is a crucial aspect for the success of the evacuation. This paper presents an approach for giving evacuation instructions on mobile devices based on interactive location-aware 3D models of the building. The user's position in the building is determined using active short-range RFID technology. A preliminary user evaluation of the system has been carried out in the building of our Department.
---
paper_title: A RFID-Based Hybrid Building Fire Evacuation System on Mobile Phone
paper_content:
Building fire is a common disaster in daily life that causes casualties and deaths. Successfully escaping from fire depends on the design of the evacuation route and the available time, as most fire damage is caused by a lack of evacuation equipment or poor design of the emergency route. In this research work, we designed a hybrid building fire evacuation system (HBFES) on a mobile phone using Radio Frequency Identification (RFID) techniques. The system will be implemented at Tamkang University on the Lanyang campus, where a central fire alarm system has been installed. Location Based Service (LBS) and several existing computer or mobile phone applications, namely Viewpoint Calculator, Path planner, and MobiX3D viewer, will be used in the system to rapidly calculate reliable evacuation routes when a building fire takes place.
---
paper_title: Evacuation Time Analysis and Optimization for Distributed Emergency Guiding Based on Wireless Sensor Networks
paper_content:
This paper proposes a load-balancing framework for emergency guiding based on wireless sensor networks. We design a load-balancing guiding scheme and derive an analytical model in order to reduce the total evacuation time of indoor people. The guiding scheme can provide the fastest path to an exit for people based on the evacuation time estimated by the analytical model. This is the first distributed solution which takes the corridor capacity and length, exit capacity, concurrent move, and people distribution into consideration for estimating evacuation time and planning escape paths. Through the proposed framework, the congestion of certain corridors and exits can be released to significantly reduce the total evacuation time. Analytical and simulation results show that our approach outperforms existing works, which can prevent people from following the local optimal guiding direction with the longer evacuation time in total. We also implement a prototype, called Load-balancing Emergency Guiding System (LEGS), which can compare evacuation time and guiding directions of existing schemes and ours under different people distribution.
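A simple sketch of the kind of per-path evacuation-time estimate such a guiding scheme needs: walking time along the corridor compared against the time for the assigned crowd to drain through the exit. The max-of-bottlenecks formula and the numbers are generic assumptions for illustration, not the analytical model derived in the paper.

```python
def path_evacuation_time(length_m, walk_speed, people_on_path, exit_capacity):
    """Rough estimate of the time for the last person on this path to clear the exit.

    length_m:       corridor length to the exit in metres
    walk_speed:     average walking speed in m/s
    people_on_path: number of people currently assigned to this path
    exit_capacity:  exit throughput in persons per second
    """
    walk_time = length_m / walk_speed
    drain_time = people_on_path / exit_capacity
    return max(walk_time, drain_time)          # whichever bottleneck dominates

def pick_exit(paths):
    """Choose the path with the smallest estimated evacuation time.

    paths: dict mapping a name to (length_m, walk_speed, people_on_path, exit_capacity).
    """
    return min(paths, key=lambda p: path_evacuation_time(*paths[p]))

paths = {'east': (40, 1.2, 120, 1.5), 'west': (60, 1.2, 30, 1.0)}
print(pick_exit(paths))   # 'west': longer walk, but far less queueing at the exit
```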
---
paper_title: Handheld Augmented Reality Indoor Navigation with Activity-Based Instructions
paper_content:
We present a novel design of an augmented reality interface to support indoor navigation. We combine activity-based instructions with sparse 3D localisation at selected info points in the building. Based on localisation accuracy and the users' activities, such as walking or standing still, the interface adapts the visualisation by changing the density and quality of information shown. We refine and validate our design through user involvement in pilot studies. We finally present the results of a comparative study conducted to validate the effectiveness of our design and to explore how the presence of info points affects users' performance on indoor navigation tasks. The results of this study validate our design and show an improvement in task performance when info points are present, which act as confirmation points and provide an overview of the task.
---
paper_title: Indoor Navigation with Minimal Infrastructure
paper_content:
We describe an indoor navigation system based on dead reckoning localization, "augmented photos", and interactive methods that simplify the process of orientation. Our navigator does not require a fixed infrastructure for the recalibration of inertial data but it can improve its performance if a minimal localization infrastructure is available.
---
paper_title: New Framework of Intelligent Evacuation System of Buildings
paper_content:
On the basis of an analysis of traditional evacuation and lifesaving facilities, this paper adopts high-tech means (e.g. advanced intelligent information-monitoring techniques, artificial intelligence techniques, computer technology, etc.), integrates the evacuation functions of a building, and establishes an intelligent evacuation system. This system overcomes the disadvantages and defects of current intelligent evacuation systems, and realizes intelligent dynamic guidance of personnel evacuation in a real fire scene through the main control module of the intelligent evacuation system. It aims to adapt to the dynamic changes of the fire scene and to make personnel evacuation more scientific, rapid and safe.
---
paper_title: Soft Computing: Overview and Recent Developments in Fuzzy Optimization
paper_content:
Soft Computing (SC) represents a significant paradigm shift in the aims of computing, which reflects the fact that the human mind, unlike present day computers, possesses a remarkable ability to store and process information which is pervasively imprecise, uncertain and lacking in categoricity. At this juncture, the principal constituents of Soft Computing (SC) are: Fuzzy Systems (FS), including Fuzzy Logic (FL); Evolutionary Computation (EC), including Genetic Algorithms (GA); Neural Networks (NN), including Neural Computing (NC); Machine Learning (ML); and Probabilistic Reasoning (PR). In this work, we focus on fuzzy methodologies and fuzzy systems, as they bring basic ideas to other SC methodologies. The other constituents of SC are also briefly surveyed here, but for details we refer to the existing vast literature. In Part 1 we present an overview of developments in the individual parts of SC. For each constituent of SC we overview its background, main problems, methodologies and recent developments. We focus mainly on Fuzzy Systems, for which the main literature, main professional journals and other relevant information is also supplied. The other constituents of SC are reviewed briefly. In Part 2 we investigate some fuzzy optimization systems. First, we investigate fuzzy sets: we define fuzzy sets within classical set theory by nested families of sets, and discuss how this concept is related to the usual definition by membership functions. Further, we present some important applications of the theory based on generalizations of concave functions. We study a decision problem, i.e. the problem of finding a "best" decision in the set of feasible alternatives with respect to several (i.e. more than one) criteria functions. Within the framework of such a decision situation, we deal with the existence and mutual relationships of three kinds of "optimal decisions": Weak Pareto-Maximizers, Pareto-Maximizers and Strong Pareto-Maximizers, i.e. particular alternatives satisfying some natural and rational conditions. We also study compromise decisions maximizing some aggregation of the criteria. The criteria considered here are functions defined on the set of feasible alternatives with values in the unit interval. In fuzzy mathematical programming (FMP) problems, the values of the objective function describe the effects of choosing the alternatives. Among other results, we show that the class of all MP problems with (crisp) parameters can be naturally embedded into the class of FMP problems with fuzzy parameters. Finally, we deal with a class of fuzzy linear programming (FLP) problems. We show that the class of crisp (classical) LP problems can be embedded into the class of FLP ones. Moreover, for FLP problems we define the concept of duality and prove the weak and strong duality theorems. Further, we investigate special classes of FLP: interval LP problems, flexible LP problems, LP problems with interactive coefficients and LP problems with centered coefficients. We present here an original, mathematically oriented and unified approach.
---
| Title: Intelligent Evacuation Management Systems: A Review
Section 1: INTRODUCTION
Description 1: Write an introduction that provides an overview of the significance and scope of Intelligent Evacuation Management Systems (IEMS) in crowd management during large-scale events and everyday crowded spaces.
Section 2: CROWD MONITORING
Description 2: Describe technologies and methods used for crowd monitoring, including GPS-based tracking, Bluetooth-based tracking, computer vision-based tracking, RFID, and IR transmitters and receivers.
Section 3: GPS Based Tracking
Description 3: Explain how GPS technology is used for tracking crowds, including specific case studies and applications.
Section 4: Bluetooth Based Tracking
Description 4: Outline the use of Bluetooth technology for crowd tracking and its application in various events.
Section 5: Computer Vision Based Tracking
Description 5: Discuss computer vision techniques for tracking crowds and the challenges associated with these methods in congested scenarios.
Section 6: RFID
Description 6: Detail the application of RFID for crowd monitoring, including specific implementations and limitations.
Section 7: IR Transmitter and Receiver
Description 7: Describe recent developments in using IR transmitters and receivers for crowd monitoring.
Section 8: PREDICTION OF CROWD DISASTER
Description 8: Review methods for predicting crowd disasters, focusing on the critical conditions of crowd density, speed, and flow, and the distinction between video-based and nonvideo-based prediction techniques.
Section 9: Video Based Crowd Disaster Prediction
Description 9: Discuss video-based methods for predicting crowd disasters, including specific metrics and warning signs used in these systems.
Section 10: Nonvideo Based Prediction of Crowd Disaster
Description 10: Explain nonvideo-based techniques for predicting crowd disasters, especially those leveraging smartphone technologies and wireless sensors.
Section 11: EVACUATION MODELLING VIA SOFT COMPUTING METHODS
Description 11: Review the use of soft computing methods such as fuzzy logic, artificial neural networks, probabilistic graphical models, and evolutionary computing for evacuation modelling.
Section 12: Fuzzy Logic Based Evacuation Models
Description 12: Outline different fuzzy logic-based approaches to evacuation modelling, including specific examples and case studies.
Section 13: Overview of Evacuation Models Using Artificial Neural Networks
Description 13: Discuss the application of artificial neural networks in evacuation scenarios and their effectiveness in modelling pedestrian behavior.
Section 14: Evacuation Modelling via Probabilistic Graphical Models
Description 14: Describe the use of probabilistic graphical models like Bayesian networks and Markov networks for evacuation predictions and decision-making.
Section 15: Typical Evolutionary Computing Based Evacuation Models
Description 15: Review evolutionary computing approaches, including genetic algorithms and swarm intelligence models, for solving evacuation problems.
Section 16: EVACUATION PATH GUIDELINES
Description 16: Describe methods and technologies for providing evacuation path guidelines, including the use of mobile phones, visual aids, and wireless sensor networks.
Section 17: DISCUSSION AND CONCLUSION
Description 17: Summarize the key findings and insights from the review, emphasizing the importance of integrating various methods for effective evacuation management.
Section 18: ACKNOWLEDGMENTS
Description 18: Acknowledge the contributions and support received for the research work, including any funding or institutional support. |
Literature review of visual representation of the results of benefit-risk assessments of medicinal products | 8 | ---
paper_title: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods
paper_content:
Abstract The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction...
---
paper_title: Balancing benefit and risk of medicines: a systematic review and classification of available methodologies
paper_content:
BACKGROUND ::: The need for formal and structured approaches for benefit-risk assessment of medicines is increasing, as is the complexity of the scientific questions addressed before making decisions on the benefit-risk balance of medicines. We systematically collected, appraised and classified available benefit-risk methodologies to facilitate and inform their future use. ::: ::: ::: METHODS ::: A systematic review of publications identified benefit-risk assessment methodologies. Methodologies were appraised on their fundamental principles, features, graphical representations, assessability and accessibility. We created a taxonomy of methodologies to facilitate understanding and choice. ::: ::: ::: RESULTS ::: We identified 49 methodologies, critically appraised and classified them into four categories: frameworks, metrics, estimation techniques and utility survey techniques. Eight frameworks describe qualitative steps in benefit-risk assessment and eight quantify benefit-risk balance. Nine metric indices include threshold indices to measure either benefit or risk; health indices measure quality-of-life over time; and trade-off indices integrate benefits and risks. Six estimation techniques support benefit-risk modelling and evidence synthesis. Four utility survey techniques elicit robust value preferences from relevant stakeholders to the benefit-risk decisions. ::: ::: ::: CONCLUSIONS ::: Methodologies to help benefit-risk assessments of medicines are diverse and each is associated with different limitations and strengths. There is not a 'one-size-fits-all' method, and a combination of methods may be needed for each benefit-risk assessment. The taxonomy introduced herein may guide choice of adequate methodologies. Finally, we recommend 13 of 49 methodologies for further appraisal for use in the real-life benefit-risk assessment of medicines.
---
paper_title: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods
paper_content:
Abstract The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction...
---
paper_title: Helping Patients Decide: Ten Steps to Better Risk Communication
paper_content:
With increasing frequency, patients are being asked to make complex decisions about cancer screening, prevention, and treatment. These decisions are fraught with emotion and cognitive difficulty simultaneously. Many Americans have low numeracy skills making the cognitive demands even greater whenever, as is often the case, patients are presented with risk statistics and asked to make comparisons between the risks and benefits of multiple options and to make informed medical decisions. In this commentary, we highlight 10 methods that have been empirically shown to improve patients' understanding of risk and benefit information and/or their decision making. The methods range from presenting absolute risks using frequencies (rather than presenting relative risks) to using a risk format that clarifies how treatment changes risks from preexisting baseline levels to using plain language. We then provide recommendations for how health-care providers and health educators can best to communicate this complex medical information to patients, including using plain language, pictographs, and absolute risks instead of relative risks.
---
paper_title: Graph Literacy: A Cross-Cultural Comparison
paper_content:
BACKGROUND ::: Visual displays are often used to communicate important medical information to patients. However, even the simplest graphs are not understood by everyone. ::: ::: ::: OBJECTIVE ::: To develop and test a scale to measure health-related graph literacy and investigate the level of graph literacy in the United States and Germany. ::: ::: ::: DESIGN ::: Experimental and questionnaire studies. Setting. Computerized studies in the laboratory and on probabilistic national samples in the United States and Germany. Participants. Nationally representative samples of people 25 to 69 years of age in Germany (n = 495) and the United States (n = 492). Laboratory pretest on 60 younger and 60 older people. Measurements. Psychometric properties of the scale (i.e., reliability, validity, discriminability) and level of graph literacy in the two countries. ::: ::: ::: RESULTS ::: The new graph literacy scale predicted which patients can benefit from visual aids and had promising measurement properties. Participants in both countries completed approximately 9 of 13 items correctly (in Germany, x¯ = 9.4, s = 2.6; in the United States, x¯ = 9.3, s = 2.9). Approximately one third of the population in both countries had both low graph literacy and low numeracy skills. Limitations. The authors focused on basic graph literacy only. They used a computerized scale; comparability with paper-and-pencil versions should be checked. ::: ::: ::: CONCLUSIONS ::: The new graph literacy scale seems to be a suitable tool for assessing whether patients understand common graphical formats and shows that not everyone profits from standard visual displays. Research is needed on communication formats that can overcome the barriers of both low numeracy and graph literacy.
---
paper_title: The impact of the format of graphical presentation on health-related knowledge and treatment choices.
paper_content:
OBJECTIVE ::: To evaluate the ability of six graph formats to impart knowledge about treatment risks/benefits to low and high numeracy individuals. ::: ::: ::: METHODS ::: Participants were randomized to receive numerical information about the risks and benefits of a hypothetical medical treatment in one of six graph formats. Each described the benefits of taking one of two drugs, as well as the risks of experiencing side effects. Main outcome variables were verbatim (specific numerical) and gist (general impression) knowledge. Participants were also asked to rate their perceptions of the graphical format and to choose a treatment. ::: ::: ::: RESULTS ::: 2412 participants completed the survey. Viewing a pictograph was associated with adequate levels of both types of knowledge, especially for lower numeracy individuals. Viewing tables was associated with a higher likelihood of having adequate verbatim knowledge vs. other formats (p<0.001) but lower likelihood of having adequate gist knowledge (p<0.05). All formats were positively received, but pictograph was trusted by both high and low numeracy respondents. Verbatim and gist knowledge were significantly (p<0.01) associated with making a medically superior treatment choice. ::: ::: ::: CONCLUSION ::: Pictographs are the best format for communicating probabilistic information to patients in shared decision making environments, particularly among lower numeracy individuals. ::: ::: ::: PRACTICE IMPLICATIONS ::: Providers can consider using pictographs to communicate risk and benefit information to patients of different numeracy levels.
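For readers unfamiliar with the pictograph (icon array) format evaluated above, the following minimal Python sketch draws one with matplotlib; the 8-in-100 risk, grid size and colours are illustrative assumptions and are not taken from the cited study.

import matplotlib.pyplot as plt

def icon_array(events, total=100, cols=10):
    # Draw a grid of `total` icons with the first `events` icons highlighted.
    rows = total // cols
    fig, ax = plt.subplots(figsize=(4, 4))
    for i in range(total):
        r, c = divmod(i, cols)
        affected = i < events
        ax.scatter(c, rows - 1 - r, s=200, marker='o',
                   color='firebrick' if affected else 'lightgrey')
    ax.set_xticks([]); ax.set_yticks([])
    ax.set_title(f'{events} out of {total} people affected')
    return fig

icon_array(8)   # hypothetical example: 8 in 100 experience the side effect
plt.show()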
---
paper_title: The Visual Communication of Risk
paper_content:
This paper 1) provides reasons why graphics should be effective aids to communicate risk; 2) reviews the use of visuals, especially graphical displays, to communicate risk; 3) discusses issues to consider when designing graphs to communicate risk; and 4) provides suggestions for future research. Key articles and materials were obtained from MEDLINE(R) and PsychInfo(R) databases, from reference article citations, and from discussion with experts in risk communication. Research has been devoted primarily to communicating risk magnitudes. Among the various graphical displays, the risk ladder appears to be a promising tool for communicating absolute and relative risks. Preliminary evidence suggests that people understand risk information presented in histograms and pie charts. Areas that need further attention include 1) applying theoretical models to the visual communication of risk, 2) testing which graphical displays can be applied best to different risk communication tasks (e.g., which graphs best convey absolute or relative risks), 3) communicating risk uncertainty, and 4) testing whether the lay public's perceptions and understanding of risk varies by graphical format and whether the addition of graphical displays improves comprehension substantially beyond numerical or narrative translations of risk and, if so, by how much. There is a need to ascertain the extent to which graphics and other visuals enhance the public's understanding of disease risk to facilitate decision-making and behavioral change processes. Nine suggestions are provided to help achieve these ends.
---
paper_title: The effect of format on parents' understanding of the risks and benefits of clinical research: a comparison between text, tables, and graphics.
paper_content:
There is a paucity of information regarding the optimal method of presenting risk/benefit information to parents of pediatric research subjects. This study, therefore, was designed to examine the effect of different message formats on parents' understanding of research risks and benefits. An Internet-administered survey was completed by 4,685 parents who were randomized to receive risk/benefit information about a study of pediatric postoperative pain control presented in different message formats (text, tables, and pictographs). Survey questions assessed participants' gist and verbatim understanding of the information and their perceptions of the risks and benefits. Pictographs were associated with significantly (p < .05) greater likelihood of adequate gist and verbatim understanding compared with text and tables regardless of the participants' numeracy. Parents who received the information in pictograph format perceived the risks to be lower and the benefits to be higher compared with the other formats (p < .001). Furthermore, compared with text and tables, pictographs were perceived as more "effective," "helpful," and "trustworthy" in presenting risk/benefit information. These results underscore the difficulties associated with presenting risk/benefit information for clinical research but suggest a simple method for enhancing parents' informed understanding of the relevant statistics.
---
paper_title: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods
paper_content:
Abstract The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction...
---
paper_title: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods
paper_content:
Abstract The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction...
---
paper_title: Graph Literacy: A Cross-Cultural Comparison
paper_content:
BACKGROUND ::: Visual displays are often used to communicate important medical information to patients. However, even the simplest graphs are not understood by everyone. ::: ::: ::: OBJECTIVE ::: To develop and test a scale to measure health-related graph literacy and investigate the level of graph literacy in the United States and Germany. ::: ::: ::: DESIGN ::: Experimental and questionnaire studies. Setting. Computerized studies in the laboratory and on probabilistic national samples in the United States and Germany. Participants. Nationally representative samples of people 25 to 69 years of age in Germany (n = 495) and the United States (n = 492). Laboratory pretest on 60 younger and 60 older people. Measurements. Psychometric properties of the scale (i.e., reliability, validity, discriminability) and level of graph literacy in the two countries. ::: ::: ::: RESULTS ::: The new graph literacy scale predicted which patients can benefit from visual aids and had promising measurement properties. Participants in both countries completed approximately 9 of 13 items correctly (in Germany, x¯ = 9.4, s = 2.6; in the United States, x¯ = 9.3, s = 2.9). Approximately one third of the population in both countries had both low graph literacy and low numeracy skills. Limitations. The authors focused on basic graph literacy only. They used a computerized scale; comparability with paper-and-pencil versions should be checked. ::: ::: ::: CONCLUSIONS ::: The new graph literacy scale seems to be a suitable tool for assessing whether patients understand common graphical formats and shows that not everyone profits from standard visual displays. Research is needed on communication formats that can overcome the barriers of both low numeracy and graph literacy.
---
paper_title: Frequency or Probability? A Qualitative Study of Risk Communication Formats Used in Health Care
paper_content:
Background. The communication of probabilistic outcomes is an essential aspect of shared medical decision making.Methods. The authors conducted a qualitative study using focus groups to evaluate the response of women to various formats used in the communication of breast cancer risk.Findings. Graphic discrete frequency formats using highlighted human figures had greater salience than continuous probability formats using bar graphs. Potential biases in the estimation of risk magnitude were associated with the use of highlighted human figures versus bar graphs and the denominator size in graphics using highlighted human figures. The presentation of uncertainty associated with risk estimates caused some to loose trust in the information, whereas others were accepting of uncertainty in scientific data.Conclusion. The qualitative study identified new constructs with regard to how patients process probabilistic information. Further research in the clinical setting is needed to provide a theoretical justificatio...
---
paper_title: Individual Differences in Graph Literacy: Overcoming Denominator Neglect in Risk Comprehension
paper_content:
Graph literacy is an often neglected skill that influences decision making performance. We conducted an experiment to investigate whether individual differences in graph literacy affect the extent to which people benefit from visual aids (icon arrays) designed to reduce a common judgment bias (i.e., denominator neglect—a focus on numerators in ratios while neglecting denominators). Results indicated that icon arrays more often increased risk comprehension accuracy and confidence among participants with high graph literacy as compared with those with low graph literacy. Results held regardless of how the health message was framed (chances of dying versus chances of surviving). Findings contribute to our understanding of the ways in which individual differences in cognitive abilities interact with the comprehension of different risk representation formats. Theoretical, methodological, and prescriptive implications of the results are discussed (e.g., the effective communication of quantitative medical data). Copyright © 2011 John Wiley & Sons, Ltd.
---
paper_title: A Longitudinal Model and Graphic for Benefit-risk Analysis, with Case Study
paper_content:
A novel method for simultaneously visualizing benefit and risk over time is presented. The underlying model represents a subject's benefit-risk state at a given time as one of five discrete clinical states, one being premature study withdrawal. The new graphic uses colors to represent each subject's changing state over the course of the clinical trial. The user can grasp how a treatment affects subjects in aggregate, then further examine how individuals are affected. It is possible to tell whether the beneficial and harmful outcomes are correlated. The method is particularly appropriate for treatments that provide only symptomatic relief. An approved drug for chronic pain is presented as a worked example.
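As a rough sketch of the kind of display described above, the snippet below plots made-up subject-by-visit data, with colours encoding five hypothetical clinical states (one of them premature withdrawal); the state labels, colours and data are assumptions, not material from the cited paper.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Hypothetical state labels; the fifth state stands for premature withdrawal.
states = ['benefit, no harm', 'benefit with harm', 'no benefit, no harm',
          'harm, no benefit', 'withdrawn']
rng = np.random.default_rng(0)
data = rng.integers(0, len(states), size=(20, 12))   # 20 subjects x 12 visits, random

cmap = ListedColormap(['seagreen', 'gold', 'lightgrey', 'indianred', 'black'])
fig, ax = plt.subplots()
im = ax.imshow(data, aspect='auto', cmap=cmap, interpolation='nearest')
ax.set_xlabel('visit'); ax.set_ylabel('subject')
cbar = fig.colorbar(im, ticks=range(len(states)))
cbar.ax.set_yticklabels(states)
plt.show()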
---
paper_title: Explaining risks: turning numerical data into meaningful pictures
paper_content:
The way in which information is presented affects both how health professionals introduce it and how patients use it ::: ::: The “information age” has profound implications for the way we work. The volume of information derives from biomedical and clinical evaluative sciences and is increasingly available to clinicians and patients through the world wide web. We need to process information, derive knowledge, and disseminate the knowledge into clinical practice. This is particularly challenging for doctors in the context of the consultation. Information often highlights uncertainties, including collective professional uncertainty, which we address with more and better research; individual professional uncertainty, which we address with professional education and support for decisions; and stochastic uncertainty (the irreducible element of chance), which we address with effective risk communication about the harms and benefits of different options for treatment or care. ::: ::: In this article we discuss whether the shift towards a greater use of information in consultations is helpful and summarise the current literature on risk communication. We also explore how information can be used without losing the benefits that are traditionally associated with the art, rather than the science, of medicine. ::: ::: Summary points: ::: ::: Patients often desire more information than is currently provided ::: ::: Communicating about risks should be a two way process in which professionals and patients exchange information and opinions about those risks ::: ::: Professionals need to support patients in making choices by turning raw data into information that is more helpful to the discussions than the data ::: ::: “Framing” manipulations of information, such as using information about relative risk in isolation of base rates, to achieve professionally determined goals should be avoided ::: ::: “Decision aids” can be useful as they often include visual presentations of risk information and relate the information to more familiar risks ::: ::: This paper draws on systematic reviews and other key literature in the …
---
paper_title: Effective visualization of integrated knowledge and data to enable informed decisions in drug development and translational medicine
paper_content:
Integrative understanding of preclinical and clinical data is imperative to enable informed decisions and reduce the attrition rate during drug development. The volume and variety of data generated during drug development have increased tremendously. A new information model and visualization tool was developed to effectively utilize all available data and current knowledge. The Knowledge Plot integrates preclinical, clinical, efficacy and safety data by adding two concepts: knowledge from the different disciplines and protein binding. Internal and publicly available data were gathered and processed to allow flexible and interactive visualizations. The exposure was expressed as the unbound concentration of the compound and the treatment effect was normalized and scaled by including expert opinion on what a biologically meaningful treatment effect would be. The Knowledge Plot has been applied both retrospectively and prospectively in project teams in a number of different therapeutic areas, resulting in closer collaboration between multiple disciplines discussing both preclinical and clinical data. The Plot allows head to head comparisons of compounds and was used to support Candidate Drug selections and differentiation from comparators and competitors, back translation of clinical data, understanding the predictability of preclinical models and assays, reviewing drift in primary endpoints over the years, and evaluate or benchmark compounds in due diligence comparing multiple attributes. The Knowledge Plot concept allows flexible integration and visualization of relevant data for interpretation in order to enable scientific and informed decision-making in various stages of drug development. The concept can be used for communication, decision-making, knowledge management, and as a forward and back translational tool that will result in an improved understanding of the competitive edge for a particular project or disease area portfolio. In addition, it also builds up a knowledge and translational continuum, which in turn will reduce the attrition rate and costs of clinical development by identifying poor candidates early.
---
paper_title: Communicating contraceptive effectiveness: A randomized controlled trial to inform a World Health Organization family planning handbook
paper_content:
Objective: The objective of the study was to compare 3 different approaches for increasing clients' understanding of contraceptive effectiveness. Study design: We randomized 900 reproductive-age women in India and Jamaica to 1 of 3 charts presenting pregnancy risk. Results: The most important reason for choosing a contraceptive was how well it prevents pregnancy (54%) followed by few side effects (17%). At baseline, knowledge about contraceptive effectiveness was poor. About half knew oral contraceptive pills are more effective than condoms (46%) and intrauterine devices are more effective than injectables (50%). All 3 charts improved knowledge significantly (P .05). The chart ranking contraceptive methods on a continuum was judged slightly easier to understand than the other 2 charts. Conclusion: Only with accurate understanding of pregnancy risk can clients make informed choices. Our results have already informed a global handbook for family planning providers to use the chart ranking contraceptive methods on a continuum.
---
paper_title: The impact of the format of graphical presentation on health-related knowledge and treatment choices.
paper_content:
OBJECTIVE ::: To evaluate the ability of six graph formats to impart knowledge about treatment risks/benefits to low and high numeracy individuals. ::: ::: ::: METHODS ::: Participants were randomized to receive numerical information about the risks and benefits of a hypothetical medical treatment in one of six graph formats. Each described the benefits of taking one of two drugs, as well as the risks of experiencing side effects. Main outcome variables were verbatim (specific numerical) and gist (general impression) knowledge. Participants were also asked to rate their perceptions of the graphical format and to choose a treatment. ::: ::: ::: RESULTS ::: 2412 participants completed the survey. Viewing a pictograph was associated with adequate levels of both types of knowledge, especially for lower numeracy individuals. Viewing tables was associated with a higher likelihood of having adequate verbatim knowledge vs. other formats (p<0.001) but lower likelihood of having adequate gist knowledge (p<0.05). All formats were positively received, but pictograph was trusted by both high and low numeracy respondents. Verbatim and gist knowledge were significantly (p<0.01) associated with making a medically superior treatment choice. ::: ::: ::: CONCLUSION ::: Pictographs are the best format for communicating probabilistic information to patients in shared decision making environments, particularly among lower numeracy individuals. ::: ::: ::: PRACTICE IMPLICATIONS ::: Providers can consider using pictographs to communicate risk and benefit information to patients of different numeracy levels.
---
paper_title: Graph Design for the Eye and Mind
paper_content:
1. Looking with the Eye and Mind 2. Choosing a Graph Format 3. Creating the Framework, Labels, and Title 4. Creating the Pie Graphs, Divided-Bar Charts, and Visual Tables 5. Creating Bar-Graph Variants 6. Creating Line-Graph Variants and Scatterplots 7. Creating color, filling, and optional components 8. How People Lie with Graphs 9. Beyond the Graph Appendix 1: Elementary Statistics for Graphs Appendix 2: Analyzing Graphics Programs Appendix 3: Summary of Psychological Principles References Sources of Data and Figures
---
paper_title: Formats for Improving Risk Communication in Medical Tradeoff Decisions
paper_content:
ABSTRACT To make treatment decisions, patients should consider not only a treatment option's potential consequences but also the probability of those consequences. Many laypeople, however, have difficulty using probability information. This Internet-based study (2,601 participants) examined a hypothetical medical tradeoff situation in which a treatment would decrease one risk but increase another. Accuracy was assessed in terms of the ability to determine correctly whether the treatment would increase or decrease the total risk. For these tradeoff problems, accuracy was greater when the following occurred: (1) the amount of cognitive effort required to evaluate the tradeoff was reduced; (2) probability information was presented as a graphical display rather than as text only; and (3) information was presented as percentages rather than as frequencies (n in 100). These findings provide suggestions of ways to present risk probabilities that may help patients understand their treatment options.
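A tiny worked example of the tradeoff judgement studied above, with entirely hypothetical numbers: the treatment lowers one risk while raising another, and the question is the sign of the net change.

# Hypothetical figures only: does the treatment raise or lower the total risk?
risk_a_without, risk_a_with = 8, 5   # risk A per 100 people, without vs with treatment
risk_b_without, risk_b_with = 2, 4   # risk B per 100 people, without vs with treatment

net_change = (risk_a_with + risk_b_with) - (risk_a_without + risk_b_without)
print(net_change)                    # -1, so the total risk falls by 1 per 100 people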
---
paper_title: Influence of framing and graphic format on comprehension of risk information among American Indian tribal college students.
paper_content:
We evaluated methods for presenting risk information by administering six versions of an anonymous survey to 489 American Indian tribal college students. All surveys presented identical numeric information, but framing varied. Half expressed prevention benefits as relative risk reduction, half as absolute risk reduction. One third of surveys used text to describe prevention benefits; one third used text plus bar graph; one third used text plus modified bar graph incorporating a culturally tailored image. The odds ratio (OR) for correct risk interpretation for absolute risk framing vs. relative risk framing was 1.40 (95 % CI = 1.01, 1.93). The OR for correct interpretation of text plus bar graph vs. text only was 2.16 (95 % CI = 1.46, 3.19); OR for text plus culturally tailored bar graph vs. text only was 1.72 (95 % CI = 1.14, 2.60). Risk information including a bar graph was better understood than text-only information; a culturally tailored graph was no more effective than a standard graph.
---
paper_title: The Visual Communication of Risk
paper_content:
This paper 1) provides reasons why graphics should be effective aids to communicate risk; 2) reviews the use of visuals, especially graphical displays, to communicate risk; 3) discusses issues to consider when designing graphs to communicate risk; and 4) provides suggestions for future research. Key articles and materials were obtained from MEDLINE(R) and PsychInfo(R) databases, from reference article citations, and from discussion with experts in risk communication. Research has been devoted primarily to communicating risk magnitudes. Among the various graphical displays, the risk ladder appears to be a promising tool for communicating absolute and relative risks. Preliminary evidence suggests that people understand risk information presented in histograms and pie charts. Areas that need further attention include 1) applying theoretical models to the visual communication of risk, 2) testing which graphical displays can be applied best to different risk communication tasks (e.g., which graphs best convey absolute or relative risks), 3) communicating risk uncertainty, and 4) testing whether the lay public's perceptions and understanding of risk varies by graphical format and whether the addition of graphical displays improves comprehension substantially beyond numerical or narrative translations of risk and, if so, by how much. There is a need to ascertain the extent to which graphics and other visuals enhance the public's understanding of disease risk to facilitate decision-making and behavioral change processes. Nine suggestions are provided to help achieve these ends.
---
paper_title: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods
paper_content:
Abstract The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction...
---
paper_title: Review article: explaining risks of inflammatory bowel disease therapy to patients.
paper_content:
BACKGROUND ::: Medical treatment for inflammatory bowel disease (IBD) has advanced significantly over the past decade, but it is important to communicate effectively the balance of benefits and risks of therapy to patients to facilitate informed medical decisions. ::: ::: ::: AIM ::: To review the available data describing the risk of side effects of IBD medications and to describe effective methods for communicating risk. ::: ::: ::: METHODS ::: To identify relevant articles for this review, a PubMed search was conducted using relevant key words and phrases. In addition, reference lists from identified manuscripts were searched and recent abstracts from National meetings were reviewed. ::: ::: ::: RESULTS ::: The steroid-sparing medications used for the treatment of IBD all carry risks of both common and rare adverse events. Trade-offs need to be made between the risks of these medications vs. the risks of poorly treated disease and corticosteroids. There has been significant research on how best to present risk data to patients, which is summarized in this review. ::: ::: ::: CONCLUSIONS ::: To ensure that our patients understand their choices and feel comfortable with their treatment, we need to communicate risk data to patients clearly. Patients comprehend absolute numbers better than relative risk, and when available, pictorial representations of data are preferred over solely presenting numerical outcomes.
---
paper_title: Individual Differences in Graph Literacy: Overcoming Denominator Neglect in Risk Comprehension
paper_content:
Graph literacy is an often neglected skill that influences decision making performance. We conducted an experiment to investigate whether individual differences in graph literacy affect the extent to which people benefit from visual aids (icon arrays) designed to reduce a common judgment bias (i.e., denominator neglect—a focus on numerators in ratios while neglecting denominators). Results indicated that icon arrays more often increased risk comprehension accuracy and confidence among participants with high graph literacy as compared with those with low graph literacy. Results held regardless of how the health message was framed (chances of dying versus chances of surviving). Findings contribute to our understanding of the ways in which individual differences in cognitive abilities interact with the comprehension of different risk representation formats. Theoretical, methodological, and prescriptive implications of the results are discussed (e.g., the effective communication of quantitative medical data). Copyright © 2011 John Wiley & Sons, Ltd.
---
paper_title: Helping doctors and patients make sense of health statistics
paper_content:
Many doctors, patients, journalists, and politicians alike do not understand what health statistics mean or draw wrong conclusions without noticing. Collective statistical illiteracy refers to the widespread inability to understand the meaning of numbers. For instance, many citizens are unaware that higher survival rates with cancer screening do not imply longer life, or that the statement that mammography screening reduces the risk of dying from breast cancer by 25% in fact means that 1 less woman out of 1,000 will die of the disease. We provide evidence that statistical illiteracy (a) is common to patients, journalists, and physicians; (b) is created by nontransparent framing of information that is sometimes an unintentional result of lack of understanding but can also be a result of intentional efforts to manipulate or persuade people; and (c) can have serious consequences for health.The causes of statistical illiteracy should not be attributed to cognitive biases alone, but to the emotional nature of ...
---
paper_title: What are the chances? Evaluating risk and benefit information in consumer health materials.
paper_content:
Much consumer health information addresses issues of disease risk or treatment risks and benefits, addressing questions such as "How effective is this treatment?" or "What is the likelihood that this test will give a false positive result?" Insofar as it addresses outcome likelihood, this information is essentially quantitative in nature, which is of critical importance, because quantitative information tends to be difficult to understand and therefore inaccessible to consumers. Information professionals typically examine reading level to determine the accessibility of consumer health information, but this measure does not adequately reflect the difficulty of quantitative information, including materials addressing issues of risk and benefit. As a result, different methods must be used to evaluate this type of consumer health material. There are no standard guidelines or assessment tools for this task, but research in cognitive psychology provides insight into the best ways to present risk and benefit information to promote understanding and minimize interpretation bias. This paper offers an interdisciplinary bridge that brings these results to the attention of information professionals, who can then use them to evaluate consumer health materials addressing risks and benefits.
---
paper_title: The Visual Communication of Risk
paper_content:
This paper 1) provides reasons why graphics should be effective aids to communicate risk; 2) reviews the use of visuals, especially graphical displays, to communicate risk; 3) discusses issues to consider when designing graphs to communicate risk; and 4) provides suggestions for future research. Key articles and materials were obtained from MEDLINE(R) and PsychInfo(R) databases, from reference article citations, and from discussion with experts in risk communication. Research has been devoted primarily to communicating risk magnitudes. Among the various graphical displays, the risk ladder appears to be a promising tool for communicating absolute and relative risks. Preliminary evidence suggests that people understand risk information presented in histograms and pie charts. Areas that need further attention include 1) applying theoretical models to the visual communication of risk, 2) testing which graphical displays can be applied best to different risk communication tasks (e.g., which graphs best convey absolute or relative risks), 3) communicating risk uncertainty, and 4) testing whether the lay public's perceptions and understanding of risk varies by graphical format and whether the addition of graphical displays improves comprehension substantially beyond numerical or narrative translations of risk and, if so, by how much. There is a need to ascertain the extent to which graphics and other visuals enhance the public's understanding of disease risk to facilitate decision-making and behavioral change processes. Nine suggestions are provided to help achieve these ends.
---
| Title: Literature review of visual representation of the results of benefit-risk assessments of medicinal products
Section 1: Abstract
Description 1: Summarize the primary objectives, methods, results, and conclusions of the paper.
Section 2: Background
Description 2: Provide context for the review, including the objectives of the PROTECT Benefit-risk group and the significance of visual representation in benefit-risk assessment.
Section 3: Methods
Description 3: Describe the approach of the systematic review, including literature search strategy, inclusion criteria, and data extraction process.
Section 4: Results
Description 4: Discuss the findings of the review, including the variety of visual formats identified, their classifications, and their respective strengths and weaknesses.
Section 5: Discussion
Description 5: Analyze the implications of the findings, emphasizing the absence of a universally superior visual format and the importance of context-specific visual creation.
Section 6: Recommendations
Description 6: Present recommendations on creating effective visual displays for benefit-risk communication, focusing on audience-visual compatibility and specific visual formats to explore.
Section 7: Conclusions
Description 7: Summarize the main insights and final recommendations derived from the review.
Section 8: Supplemental Materials
Description 8: Reference additional detailed descriptions and tables used to support the appraisals and recommendations in the paper. |
Scheduling problems with a common due window assignment: A survey | 6 | ---
paper_title: COMMON DUE-WINDOW SCHEDULING
paper_content:
In this paper, we solve common due-window scheduling problems within the just-in-time window concept, i.e., scheduling problems including both earliness and tardiness penalties. We assume that jobs share the same due window and incur no penalty as long as they are completed within the due window. We further assume that the earliness and tardiness penalty factors are constant and that the size of the window is a given parameter. ::: ::: ::: ::: For cases where the location of the due window is a decision variable, we provide a polynomial algorithm with complexity O(n * log (n)) to solve the problem. For cases where the location of the due window is a given parameter, we use dynamic programming with pseudopolynomial complexity to solve the problem.
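To make the cost model concrete, here is a naive Python sketch that evaluates the earliness/tardiness cost of one job sequence against a common due window; the penalty factors and data are assumptions, and this only evaluates the objective, it is not the polynomial or dynamic-programming algorithms of the paper.

# Naive evaluation of a common due-window earliness/tardiness objective (illustration only).
def due_window_cost(proc_times, window_start, window_end, alpha=1.0, beta=1.0):
    # alpha and beta are assumed constant earliness and tardiness penalty factors.
    t, cost = 0, 0.0
    for p in proc_times:                         # jobs run in the given order, no idle time
        t += p                                   # completion time of the current job
        if t < window_start:
            cost += alpha * (window_start - t)   # completed early
        elif t > window_end:
            cost += beta * (t - window_end)      # completed late
    return cost

print(due_window_cost([3, 2, 4, 1], window_start=4, window_end=7))   # prints 6.0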
---
paper_title: Modern approaches to modeling user requirements on resource and task allocation in hierarchical computational grids
paper_content:
Tasks scheduling and resource allocation are among crucial issues in any large scale distributed system, including Computational Grids (CGs). These issues are commonly investigated using traditional computational models and resolution methods that yield near-optimal scheduling strategies. One drawback of such approaches is that they cannot effectively tackle the complex nature of CGs. On the one hand, such systems account for many administrative domains with their own access policies, user privileges, etc. On the other, CGs have hierarchical nature and therefore any computational model should be able to effectively express the hierarchical architecture in the optimization model. Recently, researchers have been investigating the use of game theory for modeling user requirements regarding task and resource allocation in grid scheduling problems. In this paper we present two general non-cooperative game approaches, namely, the symmetric non-zero sum game and the asymmetric Stackelberg game for modeling grid user behavior defined as user requirements. In our game-theoretic approaches we are able to cast new requirements arising in allocation problems, such as asymmetric users relations, security and reliability restrictions in CGs. For solving the games, we designed and implemented GA-based hybrid schedulers for approximating the equilibrium points for both games. The proposed hybrid resolution methods are experimentally evaluated through the grid simulator under heterogeneity, and large-scale and dynamics conditions. The relative performance of the schedulers is measured in terms of the makespan and flowtime metrics. The experimental analysis showed high efficiency of meta-heuristics in solving the game-based models, especially in the case of an additional cost of secure task scheduling to be paid by the users.
---
paper_title: COMMON DUE-WINDOW SCHEDULING
paper_content:
In this paper, we solve common due-window scheduling problems within the just-in-time window concept, i.e., scheduling problems including both earliness and tardiness penalties. We assume that jobs share the same due window and incur no penalty as long as they are completed within the due window. We further assume that the earliness and tardiness penalty factors are constant and that the size of the window is a given parameter. ::: ::: ::: ::: For cases where the location of the due window is a decision variable, we provide a polynomial algorithm with complexity O(n * log (n)) to solve the problem. For cases where the location of the due window is a given parameter, we use dynamic programming with pseudopolynomial complexity to solve the problem.
---
paper_title: Single-machine scheduling with time windows and earliness/tardiness penalties
paper_content:
Abstract We study the single machine earliness/tardiness problem with arbitrary time windows (STW). We show that STW is NP-hard and then decompose it into the subproblems of finding a good job sequence and optimally inserting idle time into a given sequence. We propose heuristics for the sequencing subproblem by appropriately modifying heuristics originally developed for other single machine scheduling problems. Experimentation with randomly generated problems shows that one of the proposed heuristics is computationally efficient and capable of finding good solutions to problems of arbitrary size. We also propose an algorithm to optimally insert idle time into a given job sequence.
---
paper_title: A production scheduling strategy with a common due window
paper_content:
This paper studies a production scheduling problem in a multinational company in Hong Kong, the headquarters, sales branches and factories of which are located in different countries and regions. Some orders are required to go for processing in a particular factory. Based on a Just-In-Time (JIT) strategy, an objective function is defined for the production scheduling problem with a due window. A series of Lemmas and a polynomial-time algorithm are presented to determine the optimal due window and sequence. The algorithm is tested on data collected from the company. Computational results demonstrate the effectiveness of the algorithm.
---
paper_title: Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey
paper_content:
The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography.
---
paper_title: Minmax scheduling problems with a common due-window
paper_content:
This paper focuses on a minmax due-window assignment problem. The goal is to schedule the jobs and the due-window such that the highest cost among all jobs is minimized. The objective function contains four cost components: for earliness, tardiness, due-window starting time and due-window size. We present a polynomial time solution for the case of a single machine and for a two-machine flow-shop. The cases of parallel identical machines and uniform machines are NP-hard, and simple heuristics and lower bounds are introduced and tested numerically. Scope and purpose: Contracts between suppliers and customers often contain a time interval (a due-window), such that goods delivered within this interval are considered to be on time and are not penalized. The due-window starting time and size are determined during sales negotiations with customers. A late and/or large due-window clearly makes the supplier less attractive, and hence the total cost function should increase in the due-window starting time and size. This paper focuses on due-window assignment problems on several classical machine settings: a single machine, a two-machine flow-shop and parallel identical and uniform machines.
---
paper_title: Due‐window assignment with unit processing‐time jobs
paper_content:
In due-window assignment problems, jobs completed within a designated time interval are regarded as being on time, whereas early and tardy jobs are penalized. The objective is to determine the location and size of the due-window, as well as the job schedule. We address a common due-window assignment problem on parallel identical machines with unit processing time jobs. We show that the number of candidate values for the optimal due-window starting time and for the optimal due-window completion time are bounded by 2. We also prove that the starting time of the first job on each of the machines is either 0 or 1, thus introducing a fairly simple, constant-time solution for the problem. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004
---
paper_title: Soft Due Window Assignment and Scheduling on Parallel Machines
paper_content:
We study problems of scheduling jobs on identical parallel machines, in which a due window has to be assigned to each job. If a job is completed within its due window, then it incurs no scheduling cost. Otherwise, it incurs earliness or tardiness cost. Two due window models are considered. In both models, the due window size is a decision variable common for all jobs. In the first model, called a constant due window, the due window starting time is a decision variable common for all jobs, and in the second, called a slack due window, the due window starting time is equal to the job processing time plus a decision variable common for all jobs. The objective is to find a job schedule as well as the size and location(s) of the due window(s) such that a weighted maximum or sum of costs associated with job earliness, job tardiness, and due window size is minimized. We establish the properties of optimal solutions of these minmax and minsum problems. For a constant due window model, we prove that the minmax problem with arbitrary weights and the minsum problem with equal weights are polynomially equivalent to the classical parallel machine scheduling problem to minimize the makespan. We further show that the problems for a constant due window model and slack due window model with the same objective function are reversible in the sense that their optimal solutions are mirror images of each other. These results imply O(n) and O(n log n) time algorithms for the considered problems when m=1.
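The two window models described above can be stated compactly; the predicates below are a sketch of the distinction, with the symbols s, q and D (the common decision variables) chosen here for illustration rather than taken from the paper.

# On-time tests for the two due-window models; s, q and D are common decision variables.
def on_time_constant(completion, s, D):
    # Constant due window: every job shares the interval [s, s + D].
    return s <= completion <= s + D

def on_time_slack(completion, proc_time, q, D):
    # Slack due window: job j gets the interval [p_j + q, p_j + q + D].
    return proc_time + q <= completion <= proc_time + q + D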
---
paper_title: COMMON DUE-WINDOW SCHEDULING
paper_content:
In this paper, we solve common due-window scheduling problems within the just-in-time window concept, i.e., scheduling problems including both earliness and tardiness penalties. We assume that jobs share the same due window and incur no penalty as long as they are completed within the due window. We further assume that the earliness and tardiness penalty factors are constant and that the size of the window is a given parameter. ::: ::: ::: ::: For cases where the location of the due window is a decision variable, we provide a polynomial algorithm with complexity O(n * log (n)) to solve the problem. For cases where the location of the due window is a given parameter, we use dynamic programming with pseudopolynomial complexity to solve the problem.
---
paper_title: A due-window assignment problem with position-dependent processing times
paper_content:
We extend a classical single-machine due-window assignment problem to the case of position-dependent processing times. In addition to the standard job scheduling decisions, one has to assign a time interval (due-window), such that jobs completed within this interval are assumed to be on time and not penalized. The cost components are: total earliness, total tardiness and due-window location and size. We introduce an O(n^3) solution algorithm, where n is the number of jobs. We also investigate several special cases, and examine numerically the sensitivity of the solution (schedule and due-window) to the different cost parameters.
---
paper_title: Common due window size and location determination in a single machine scheduling problem
paper_content:
We consider a single machine static and deterministic scheduling problem in which jobs have a common due window. Jobs completed within the window incur no penalties, other jobs incur either earliness or tardiness penalties. The objective is to find the optimal size and location of the window as well as an optimal sequence to minimise a cost function based on earliness, tardiness, window size, and window location. We propose an O(n log n) algorithm to solve the problem.
---
paper_title: Scheduling Problems with Optimal Due Interval Assignment Subject to Some Generalized Criteria
paper_content:
The paper deals with two problems of scheduling jobs on identical parallel machines, in which a due interval should be assigned to each job. Due interval is a generalization of the well-known classical due date and describes a time interval in which a job should be finished. In the first problem, we have to find a schedule of jobs and a common due interval such that the sum of the total tardiness, the total earliness and due interval parameters is minimized. The second problem is to find a schedule of jobs and an assignment of due interval to all jobs, which minimize the maximum of the following three parts: the maximum tardiness, the maximum earliness and the due interval parameters. We proved that the considered problems are NP-hard and outlined some methods for solving them approximately as well as optimally.
---
paper_title: Single processor scheduling problems with various models of a due window assignment
paper_content:
In the paper we investigate four single processor scheduling problems, which deal with the process of the negotiation between a producer and a customer about delivery time of final products. This process is modeled by a due window, which is a generalization of well known classical due date and describes a time interval, in which a job should be finished. Due window assignment is a new approach, which has been investigated in the scientific literature for a few years. In this paper we consider various models of due window assignment. To solve the formulated problems we have to find such a schedule of jobs and such an assignment of due windows to each job, which minimizes a given criterion dependent on the maximum or total earliness and tardiness of jobs and due window parameters. One of the main results is the mirror image of the solutions of the considered problems and other problems presented in the scientific literature. The wide survey of the literature is also given.
---
paper_title: COMMON DUE-WINDOW SCHEDULING
paper_content:
In this paper, we solve common due-window scheduling problems within the just-in-time window concept, i.e., scheduling problems including both earliness and tardiness penalties. We assume that jobs share the same due window and incur no penalty as long as they are completed within the due window. We further assume that the earliness and tardiness penalty factors are constant and that the size of the window is a given parameter. ::: ::: ::: ::: For cases where the location of the due window is a decision variable, we provide a polynomial algorithm with complexity O(n * log (n)) to solve the problem. For cases where the location of the due window is a given parameter, we use dynamic programming with pseudopolynomial complexity to solve the problem.
---
paper_title: Scheduling about an unrestricted common due window with arbitrary earliness/tardiness penalty rates
paper_content:
We consider the NP-hard problem of scheduling jobs on a single machine about an unrestricted due window to minimize total weighted earliness and tardiness cost. Each job has an earliness penalty rate and a tardiness penalty rate that are allowed to be arbitrary. Earliness or tardiness cost is assessed when a job completes outside the due window, which may be an instant in time or a time increment defining acceptable job completion. In this paper we present properties that characterize the structure of an optimal schedule, present a lower bound, propose a two-step branch and bound algorithm, and report results from a computational experiment. We find that optimal solutions can be quickly obtained for medium-sized problem instances.
---
paper_title: Earliness-tardiness scheduling problems with a common delivery window
paper_content:
This paper deals with the scheduling of n jobs on a single machine to minimize the sum of weighted earliness and weighted number of tardy jobs given a delivery window. Penalties are not incurred if jobs are completed within the delivery window. The length of this delivery window (which corresponds to the time period within which the customer is willing to take deliveries) is a given constant. We consider two cases of the problem: one where the position of the delivery window is a decision variable (the unrestricted window case) and one where it is a given parameter (the restricted window case). We present some optimality properties, prove that the problem (even for the unrestricted window case) is NP-complete, and present dynamic programming algorithms for both cases.
---
paper_title: Earliness-Tardiness Scheduling Problems, I: Weighted Deviation of Completion Times About a Common Due Date
paper_content:
This paper and its companion Part II concern the scheduling of jobs with cost penalties for both early and late completion. In Part I, we consider the problem of minimizing the weighted sum of earliness and tardiness of jobs scheduled on a single processor around a common due date, d. We assume that d is not early enough to constrain the scheduling decision. The weight of a job does not depend on whether the job is early or late, but weights may vary between jobs. We prove that the recognition version of this problem is NP-complete in the ordinary sense. We describe optimality conditions, and present a computationally efficient dynamic programming algorithm. When the weights are bounded by a polynomial function of the number of jobs, a fully polynomial approximation scheme is given. We also describe four special cases for which the problem is polynomially solvable. Part II provides similar results for the unweighted version of this problem, where d is arbitrary.
---
paper_title: A note: a due-window assignment problem on parallel identical machines
paper_content:
We solve a due-window assignment problem on parallel identical machines. In addition to the standard objective of finding the optimal job schedule, in due-window assignment problems one has to assign a time interval during which goods are delivered to customers with no cost. Jobs scheduled prior to or after the due-window are penalized according to their earliness/tardiness value. We assume that jobs have identical processing times, but may have job-dependent earliness and tardiness costs (e.g., due to possible different destinations). We show that the problem can be reduced to a non-standard asymmetric assignment problem, and introduce an efficient (O(n^4)) solution procedure.
---
paper_title: Scheduling with a common due-window: Polynomially solvable cases
paper_content:
The single machine scheduling problem to minimize maximum weighted absolute deviations of job completion times from a common due-date, is known to be NP-hard. However, two special cases have been shown to have polynomial time solutions: the case of unit processing time jobs, and the case of due-date assignment for a given job sequence. We extend both cases to a setting of a common due-window. We show that the unit-job problem includes 12 different sub-cases, depending on the size and location of the (given) due-window. Scheduling and due-window assignment for a given job sequence is solved for a single machine, for parallel identical machines and for flow-shops. For each of the above cases, an appropriate special-structured linear program is presented.
---
paper_title: A single machine scheduling problem with common due window and controllable processing times
paper_content:
A static deterministic single machine scheduling problem with a common due window is considered. Job processing times are controllable to the extent that they can be reduced, up to a certain limit, at a cost proportional to the reduction. The window location and size, along with the associated job schedule that minimizes a certain cost function, are to be determined. This function is made up of costs associated with the window location, its size, processing time reduction as well as job earliness and tardiness. We show that the problem can be formulated as an assignment problem and thus can be solved with well-known algorithms.
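As a generic illustration of the "formulate as an assignment problem, then apply a well-known algorithm" step, the sketch below solves an assignment formulation with an off-the-shelf solver (requires SciPy). The cost matrix here is a made-up placeholder, not the one derived in the paper from earliness/tardiness, window and processing-time-reduction costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[j, k] = cost of placing job j in sequence position k (placeholder values)
cost = np.array([[4.0, 2.0, 8.0],
                 [6.0, 4.0, 3.0],
                 [5.0, 7.0, 6.0]])

jobs, positions = linear_sum_assignment(cost)   # Hungarian-type solver
print(list(zip(jobs, positions)), cost[jobs, positions].sum())
```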
---
paper_title: Impact of learning and fatigue factors on single machine scheduling with penalties for tardy jobs
paper_content:
We consider a single machine scheduling problem in which the machine experiences the effects of learning or fatigue as it continues to work, and the jobs have due dates and are subject to penalties if they are not completed on time. Because of the effects of learning or fatigue, the performance rate of the machine varies over time. As a result, the processing time of a job depends on its work content as well as the total work content of the jobs completed prior to its loading. In this paper, we prove that even when the machine works at a variable rate, the pair-wise interchange of jobs minimizes the maximum tardiness and a simple modification to the well-known Moore-Hodgson's algorithm yields the minimum number of tardy jobs. Further, we formulate the problem of minimizing the total penalty for tardy jobs as a 0–1 knapsack problem with nested constraints, and solve it by using dynamic programming recursion as well as the maximum-weighted network path algorithm. Then we combine these two techniques and solve the 0–1 knapsack problem, by inducing a nested constraint structure and constructing a network with fewer nodes.
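For reference, here is a minimal sketch of the classical Moore–Hodgson rule for minimizing the number of tardy jobs with fixed processing times; the paper's contribution is the modification needed when the machine rate varies, which is not reproduced here.

```python
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date).
    Returns the maximum number of jobs that can complete on time
    (classical fixed-rate version); rejected jobs are processed last."""
    t, heap = 0, []                                  # heap holds accepted processing times (negated)
    for p, d in sorted(jobs, key=lambda j: j[1]):    # earliest-due-date order
        heapq.heappush(heap, -p)
        t += p
        if t > d:                                    # current job would be tardy:
            t += heapq.heappop(heap)                 # drop the longest accepted job
    return len(heap)

jobs = [(2, 3), (4, 5), (3, 7), (1, 8)]
print(len(jobs) - moore_hodgson(jobs))   # minimum number of tardy jobs: 1
```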
---
paper_title: Minimizing Flow Time on Parallel Identical Processors with Variable Unit Processing Time
paper_content:
We show that minimizing total flow time on parallel identical processors with nonincreasing unit processing time is achieved with shortest processing time (SPT) scheduling: sequence the tasks in order of nondecreasing lengths. Following this order and starting with the shortest task, process any task by the first processor that becomes available. If the unit processing time is allowed to be increasing, e.g., quadratically, the problem becomes NP-hard.
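A minimal sketch of the SPT list-scheduling rule described above, written for a constant unit processing rate for simplicity (the paper's result concerns a nonincreasing rate, handled by the same rule); names are ours.

```python
import heapq

def spt_parallel(proc_times, m):
    """Schedule jobs on m identical machines by the SPT rule and
    return the total flow time (sum of completion times)."""
    machines = [0.0] * m                  # time at which each machine becomes free
    heapq.heapify(machines)
    total_flow = 0.0
    for p in sorted(proc_times):          # shortest processing time first
        start = heapq.heappop(machines)   # first machine to become available
        finish = start + p
        total_flow += finish
        heapq.heappush(machines, finish)
    return total_flow

print(spt_parallel([3, 1, 4, 1, 5, 9, 2, 6], m=2))   # 56.0
```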
---
paper_title: Scheduling a maintenance activity and due-window assignment on a single machine
paper_content:
We study a single machine scheduling and due-window assignment problem. In addition to the traditional decisions regarding sequencing the jobs and scheduling the due-window, we allow an option for performing a maintenance activity. This activity requires a fixed time interval during which the machine is turned off and no production is performed. On the other hand, after the maintenance time, the machine becomes more efficient, as reflected in the new shortened job processing times. The objective is to schedule the jobs, the due-window and the maintenance activity, so as to minimize the total cost consisting of earliness, tardiness, and due-window starting time and size. We introduce an efficient (polynomial time) solution for this problem.
---
paper_title: Common due-window assignment and scheduling of linear time-dependent deteriorating jobs and a deteriorating maintenance activity
paper_content:
Due-window assignment and production scheduling are important issues in operations management. In this study we investigate the problem of common due-window assignment and scheduling of deteriorating jobs and a maintenance activity simultaneously on a single-machine. We assume that the maintenance duration depends on its starting time. We provide polynomial time solutions for the problem and some of its special cases, where the objective is to simultaneously minimize the earliness, tardiness, due-window starting time, and due-window size costs.
---
paper_title: A bi-criteria two-machine flowshop scheduling problem with a learning effect
paper_content:
This paper addresses a bi-criteria two-machine flowshop scheduling problem when the learning effect is present. The objective is to find a sequence that minimizes a weighted sum of the total completion time and the maximum tardiness. In this article, a branch-and-bound method, incorporating several dominance properties and a lower bound, is presented to search for the exact solution for small job-size problems. In addition, two heuristic algorithms are proposed to overcome the inefficiency of the branch-and-bound algorithm for large job-size problems. Finally, computational results for this problem are provided to evaluate the performance of the proposed algorithms.
---
paper_title: Flow-shop scheduling with a learning effect
paper_content:
The paper is devoted to some flow-shop scheduling problems with a learning effect. The objective is to minimize one of two regular performance criteria, namely makespan and total flowtime. A heuristic algorithm with worst-case bound m for each criterion is given, where m is the number of machines. Furthermore, a polynomial algorithm is proposed for both of the special cases: identical processing time on each machine and an increasing series of dominating machines. An example is also constructed to show that the classical Johnson's rule is not the optimal solution for the two-machine flow-shop scheduling to minimize makespan with a learning effect. Some extensions of the problem are also shown.
---
paper_title: Single-machine scheduling with learning considerations
paper_content:
The focus of this work is to analyze learning in single-machine scheduling problems. It is surprising that the well-known learning effect has never been considered in connection with scheduling problems. It is shown in this paper that even with the introduction of learning to job processing times two important types of single-machine problems remain polynomially solvable.
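For context, the position-based model commonly used in this line of work expresses the actual processing time of a job with normal processing time $p_j$ scheduled in position $r$ as

$$p_{j,r} = p_j \, r^{a}, \qquad a \le 0,$$

where $a$ is the learning index. This is offered only as background; the abstract above does not spell out the exact model used.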
---
paper_title: A makespan study of the two-machine flowshop scheduling problem with a learning effect
paper_content:
This paper studies the problem of minimizing the makespan in a two-machine flowshop scheduling problem with learning considerations. Johnson's rule might not provide the optimal solution for this problem. Thus, this paper not only develops several dominance relations and proposes a branch-and-bound algorithm based on them but also establishes two heuristic algorithms. Computational experiments show the accuracy of Johnson's rule and both proposed heuristics, and a comparison of the results to the optimal solution obtained by a branch-and-bound technique is provided for small job sizes. In addition, the performances of the three heuristics, including Johnson's rule, are also reported for large job sizes.
---
paper_title: Soft Due Window Assignment and Scheduling on Parallel Machines
paper_content:
We study problems of scheduling jobs on identical parallel machines, in which a due window has to be assigned to each job. If a job is completed within its due window, then it incurs no scheduling cost. Otherwise, it incurs earliness or tardiness cost. Two due window models are considered. In both models, the due window size is a decision variable common for all jobs. In the first model, called a constant due window, the due window starting time is a decision variable common for all jobs, and in the second, called a slack due window, the due window starting time is equal to the job processing time plus a decision variable common for all jobs. The objective is to find a job schedule as well as the size and location(s) of the due window(s) such that a weighted maximum or sum of costs associated with job earliness, job tardiness, and due window size is minimized. We establish the properties of optimal solutions of these minmax and minsum problems. For a constant due window model, we prove that the minmax problem with arbitrary weights and the minsum problem with equal weights are polynomially equivalent to the classical parallel machine scheduling problem to minimize the makespan. We further show that the problems for a constant due window model and slack due window model with the same objective function are reversible in the sense that their optimal solutions are mirror images of each other. These results imply O(n) and O(n log n) time algorithms for the considered problems when m=1.
---
paper_title: Common due window size and location determination in a single machine scheduling problem
paper_content:
We consider a single machine static and deterministic scheduling problem in which jobs have a common due window. Jobs completed within the window incur no penalties, other jobs incur either earliness or tardiness penalties. The objective is to find the optimal size and location of the window as well as an optimal sequence to minimise a cost function based on earliness, tardiness, window size, and window location. We propose an O(n log n) algorithm to solve the problem.
---
paper_title: Single processor scheduling problems with various models of a due window assignment
paper_content:
In the paper we investigate four single processor scheduling problems, which deal with the process of negotiation between a producer and a customer about the delivery time of final products. This process is modeled by a due window, which is a generalization of the well-known classical due date and describes a time interval in which a job should be finished. Due window assignment is a new approach, which has been investigated in the scientific literature for a few years. In this paper we consider various models of due window assignment. To solve the formulated problems we have to find a schedule of jobs and an assignment of due windows to each job which minimizes a given criterion dependent on the maximum or total earliness and tardiness of jobs and on the due window parameters. One of the main results is the mirror image of the solutions of the considered problems and other problems presented in the scientific literature. A wide survey of the literature is also given.
---
| Title: Scheduling problems with a common due window assignment: A survey
Section 1: Introduction
Description 1: This section introduces the context and importance of the scheduling problems with a common due window assignment, elaborating on the practical applications and challenges faced in the manufacturing systems.
Section 2: Problem notation
Description 2: This section presents the problem notation used in the paper, defining key variables and parameters essential for understanding the scheduling problems discussed.
Section 3: Classical models with job-independent earliness/tardiness penalty functions
Description 3: This section analyzes the problems with common due window assignment and job-independent earliness/tardiness penalty functions, discussing various sum type criteria and min-max type criteria.
Section 4: Classical models with job-dependent earliness/tardiness penalty functions
Description 4: This section delves into more complex scheduling problems with job-dependent penalty functions, discussing various models and their computational complexities.
Section 5: Other models
Description 5: This section explores scheduling models with a common due window assignment extended to include additional complexities like changeable job processing times, deteriorating jobs, learning and aging effects, and maintenance activities.
Section 6: Discussion and recommendations
Description 6: This section discusses the current state of research, practical applications, and provides recommendations for future studies, emphasizing the need for real-life examples in certain models.
Section 7: Conclusion
Description 7: This section presents a summary of the state of the art in scheduling problems with a common due window assignment and earliness/tardiness penalty functions, along with observations on computational complexities and solution algorithms. |
The Nuts and Bolts of Micropayments: A Survey | 9 | ---
paper_title: A Framework for Micropayment Evaluation
paper_content:
The lack of suitable payment systems has become a bottleneck for the vision of the Information Economy. In many cases, payments of a fraction of a cent, the so-called micropayments, are of particular interest. In this paper we propose a framework to evaluate payment systems. The framework consists of a well-structured parameter vector of the desired attributes. For the evaluation of attribute values, we suggest using VTS diagrams from object-oriented analysis and design. The framework is applied to DigiCash, SET and First Virtual.
---
paper_title: User Perceptions of Sharing, Advertising, and Tracking
paper_content:
Extending earlier work, we conducted an online user study to investigate users' understanding of online behavioral advertising (OBA) and tracking prevention tools (TPT), and whether users' willingness to share data with advertising companies varied depending on the type of first party website. We present results of 368 participant responses across four types of websites: an online banking site, an online shopping site, a search engine and a social networking site. In general, we identified that participants had positive responses for OBA and that they demonstrated clear preferences for which classes of information they would like to disclose online. Our results generalize over a variety of website categories containing data with different levels of sensitivity, as opposed to only the medical context as was shown in previous work by Leon et al. In our study, participants' privacy attitudes significantly dominated their sharing willingness. Interestingly, participants appreciated the idea of user-customized targeted ads and some would be more willing to share data if given prior control mechanisms for tracking protection tools.
---
paper_title: The Case against Micropayments
paper_content:
Micropayments are likely to continue disappointing their advocates. They are an interesting technology. However, there are many non-technological reasons why they will take far longer than is generally expected to be widely used, and most probably will play only a minor role in the economy.
---
paper_title: MICRO PAYMENT GATEWAYS
paper_content:
The main objective of this thesis is to develop an architecture for a hybrid payment system.
---
paper_title: Chrg-http: A Tool for Micropayments on the World Wide Web
paper_content:
Chrg-http is a simple and secure protocol for electronic payments over the Internet, especially in an intranet environment. It is designed to support micropayments (more specifically, for electronic publishing), which have costs ranging from pennies to a few dollars. A widely used secure system, Kerberos V5, has been incorporated into the http protocol. The security and authentication of a transaction is provided by Kerberos, without expensive public key cryptographic computations or on-line processing through a centralized payment processing server, which is the case for most of the existing electronic payment systems on the World Wide Web. Our implementation is based on the billing model (or the subscription model). The simplicity of the model also helps to reduce the charging cost overhead.
---
paper_title: NetCents : A Lightweight Protocol for Secure Micropayments
paper_content:
NetCents is a lightweight, flexible and secure protocol for electronic commerce over the Internet that is designed to support purchases ranging in value from a fraction of a penny and up. NetCents differs from previous protocols in several respects: NetCents uses (vendor-independent) floating scrips as signed containers of electronic currency, passed from vendor to vendor. This allows NetCents to incorporate decentralized verification of electronic currency at a vendor's server with offline payment capture. Customer trust is not required within this protocol and a probabilistic verification scheme is used to effectively limit vendor fraud. An online arbiter is implemented that will ensure proper delivery of purchased goods and that can settle most customer/vendor disputes. NetCents can be extended to support fully anonymous payments. In this paper we describe the NetCents protocol and present experimental results of a prototype implementation.
---
paper_title: A Payment Scheme Using Vouchers
paper_content:
Electronic payment schemes are traditionally based on the physical model of commerce where customers withdraw cash from bank accounts and spend it at merchants' establishments in return for goods and services. This may not be the most ideal model on which to base electronic payment. A new payment scheme using a different paradigm has been developed. Vouchers are prepared by the bank and the merchant. These are distributed to customers who can redeem them for electronic goods with the help of the bank. This new scheme requires fewer online messages to be transmitted than previous payment schemes involving electronic goods such as NetBill and thus also requires less online processing. The voucher scheme also provides some properties desired by payment schemes based on electronic coins.
---
paper_title: Agora: a minimal distributed protocol for electronic commerce
paper_content:
Agora is a Web protocol for electronic commerce, which is intended to support a high volume of transactions, each with low incurred cost. Agora has the following novel properties: • Minimal. The incurred cost of Agora transactions is close to free Web browsing, where cost is determined by the number of messages. • Distributed. Agora is fully distributed. Merchants can authenticate customers without access to a central authority. Customers with valid accounts can purchase from any merchant without any preparations (such as prior registration at the merchant or at a broker). • On-line arbitration. An on-line arbiter can settle certain customer/merchant disputes. • Fraud control. Agora can limit the degree of fraud to a pre-determined (low) level. Agora is authenticated, secure and cannot be repudiated. It can use regular (insecure) communication channels.
---
paper_title: Pricing via processing or combatting junk mail
paper_content:
We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing functions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme.
---
paper_title: The S/KEY One-Time Password System
paper_content:
This document describes the S/KEY* One-Time Password system as released for public use by Bellcore and as described in reference [3]. A reference implementation and documentation are available by anonymous ftp from ftp.bellcore.com in the directories pub/nmh/...
---
paper_title: NetPay: An off-line, decentralized micro-payment system for thin-client applications
paper_content:
Micro-payment systems have become popular in recent times as the desire to support low-value, high-volume transactions of text, music, clip-art, video and other media has increased. We describe NetPay, a micro-payment system characterized by de-centralized, off-line processing, customer anonymity and relatively high performance and security using one-way hashing functions for encryption. We describe the motivation for NetPay and its basic protocol, describe a software architecture and two NetPay prototypes we have developed, and report the results of several evaluations of these prototypes.
---
paper_title: MiMi: A Java Implementation of the MicroMint Scheme
paper_content:
In this paper we describe an experimental implementation of the MicroMint micropayment scheme in Java. We apply this scheme to purchasing Web pages. A prerequisite was to accomplish this without having to change the code of either the Web server or the Web client. We discuss the implementation issues and security considerations. Our implementation requires the local protocol handler feature offered by Sun Microsystems’ HotJava 1.0 browser.
---
paper_title: A PayWord-based micropayment protocol supporting multiple payments
paper_content:
In this paper, we propose an efficient micropayment protocol by improving PayWord, which is one of the representative micropayment protocols. In the original PayWord system, it was designed for a customer who generates paywords by performing hash chain operation for payments to an only designated vendor. In other words, the customer has to create new paywords in order to establish commercial transactions with different vendors on the Internet. To supplement this drawback, our proposed scheme provides a useful method to do business with multiple vendors for a customer with only one hash chain operation. In our proposed protocol, a broker creates a new series of hash chain values along with a certificate for the certificate request of a customer. This certificate is signed by the broker to give authority enabling the customer to make paywords. Our proposed scheme provides an efficient means for the customer to do business with multiple vendors.
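A minimal sketch of the payword hash-chain idea underlying this protocol (illustrative only; the broker certificate handling and the multi-vendor extension proposed in the paper are omitted, and SHA-256 stands in for the generic hash function).

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build a payword chain w_0, ..., w_n where w_{i-1} = h(w_i).
    w_0 is the root committed to the vendor; revealing w_i pays i units."""
    chain = [seed]                      # seed plays the role of w_n
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()                     # chain[0] = w_0 (root), chain[i] = w_i
    return chain

def verify(root: bytes, w_i: bytes, i: int) -> bool:
    """Vendor check: hashing w_i exactly i times must reproduce the root w_0."""
    x = w_i
    for _ in range(i):
        x = h(x)
    return x == root

chain = make_chain(b"customer-secret", n=100)
print(verify(chain[0], chain[7], 7))    # True: the 7th payword spends 7 units
```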
---
paper_title: Hashcash - A Denial of Service Counter-Measure
paper_content:
Hashcash was originally proposed as a mechanism to throttle systematic abuse of un-metered internet resources such as email, and anonymous remailers in May 1997. Five years on, this paper captures in one place the various applications, improvements suggested and related subsequent publications, and describes initial experience from experiments using hashcash. The hashcash CPU cost-function computes a token which can be used as a proof-of-work. Interactive and non-interactive variants of cost-functions can be constructed which can be used in situations where the server can issue a challenge (connection oriented interactive protocol), and where it can not (where the communication is store-and-forward, or packet oriented) respectively.
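A minimal sketch of the hashcash-style cost function: search for a counter so that the hash of the resource string plus counter has at least k leading zero bits. SHA-256 is used here for simplicity (the original proposal was defined over SHA-1), and the stamp format below is a simplification of the real hashcash header.

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def mint(resource: str, k: int) -> str:
    """Find a stamp whose SHA-256 hash has at least k leading zero bits."""
    for counter in count():
        stamp = f"{resource}:{counter}"
        if leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= k:
            return stamp

def check(stamp: str, k: int) -> bool:
    return leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= k

s = mint("alice@example.com", k=16)     # ~2^16 hash attempts on average
print(s, check(s, 16))                  # checking costs a single hash
```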
---
paper_title: Electronic Payments of Small Amounts
paper_content:
This note considers the application of electronic cash to transactions in which many small amounts must be paid to the same payee and in which it is not possible to just pay the total amount afterwards. The most notable example of such a transaction is payment for phone calls. If currently published electronic cash systems are used and a full payment protocol is executed for each of the small amounts, the overall complexity of the system will be prohibitively large (time, storage and communication). This note describes how such payments can be handled in a wide class of payment systems. The solution is very easy to adapt as it only influences the payment and deposit transactions involving such payments. Furthermore, making and verifying each small payment requires very little computation and communication, and the total complexity of both transactions is comparable to that of a payment of a fixed amount.
---
paper_title: An Implementation of MicroMint
paper_content:
This thesis describes a prototype implementation of MicroMint, an Internet micropayment system designed to facilitate very small scale monetary transactions over the World Wide Web. By implementing a proposed system, we can determine its feasibility for real commercial applications, and identify advantages and disadvantages inherent in different systems. This prototype implementation pays special attention to details necessary for end users to adopt the payment system easily.
---
paper_title: Password authentication with insecure communication
paper_content:
A method of user password authentication is described which is secure even if an intruder can read the system's data, and can tamper with or eavesdrop on the communication between the user and the system. The method assumes a secure one-way encryption function and can be implemented with a microcomputer in the user's terminal.
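A minimal sketch of the hash-chain one-time-password idea described above (SHA-256 stands in for the generic one-way function; all names are ours). The server stores only the current chain value, and an eavesdropper who sees a password cannot derive the next one.

```python
import hashlib

def h(x: bytes, times: int = 1) -> bytes:
    for _ in range(times):
        x = hashlib.sha256(x).digest()
    return x

secret, n = b"user-secret", 1000
server_state = {"count": n, "value": h(secret, n)}   # server stores h^n(secret)

def login(one_time_password: bytes) -> bool:
    """Accept h^(count-1)(secret): hashing it once must match the stored value."""
    if hashlib.sha256(one_time_password).digest() == server_state["value"]:
        server_state["value"] = one_time_password    # roll the chain forward
        server_state["count"] -= 1
        return True
    return False

print(login(h(secret, n - 1)))   # True
print(login(h(secret, n - 1)))   # False: a replayed password is rejected
```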
---
paper_title: Transactions Using Bets
paper_content:
Small cash transactions, electronic or otherwise, can have their overhead costs reduced by Transactions Using Bets (TUB), using probabilistic expectation (betting) as a component. Other types of protocols may also benefit from this idea.
---
paper_title: An Efficient Micropayment System Based on Probabilistic Polling
paper_content:
Existing software proposals for electronic payments can be divided into “on-line” schemes that require participation of a trusted party (the bank) in every transaction and are secure against overspending, and the “off-line” schemes that do not require a third party and guarantee only that overspending is detected when vendors submit their transaction records to the bank (usually at the end of the day). We propose a new hybrid scheme that combines the advantages of both of the above traditional design strategies. It allows for control of overspending at a cost of only a modest increase in communication compared to the off-line schemes. Our protocol is based on probabilistic polling. During each transaction, with some small probability, the vendor forwards information about this transaction to the bank. This enables the bank to maintain an accurate approximation of a customer's spending. The frequency of polling messages is related to the monetary value of transactions and the amount of overspending the bank is willing to risk.
---
paper_title: Electronic lottery tickets as micropayments
paper_content:
We present a new micropayment scheme based on the use of “electronic lottery tickets.” This scheme is exceptionally efficient since the bank handles only winning tickets, instead of handling each micropayment.
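A toy simulation of the underlying idea, not Rivest's actual ticket construction: a payment worth an expected v is realized as a probability v/V chance of paying a macro-amount V, so the bank only ever processes the rare winning tickets. Values below are illustrative.

```python
import random

def probabilistic_payment(v_cents: float, V_cents: int = 1000) -> int:
    """Pay an expected v_cents by paying V_cents with probability v_cents / V_cents."""
    return V_cents if random.random() < v_cents / V_cents else 0

random.seed(0)
n = 100_000
paid = sum(probabilistic_payment(0.1) for _ in range(n))   # 0.1-cent micropayments
print(paid / n)   # close to 0.1 on average; only ~10 "winning" tickets reach the bank
```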
---
paper_title: A p2p market place based on aggregate signatures
paper_content:
A peer-to-peer market place is likely to be based on some underlying micro-payment scheme where each user can act both as a customer and as a merchant. Such systems, even when designed for largely distributed domains, may be implemented according to hybrid topologies where trusted third intermediaries (e.g. the broker) are single points of failure. For this reason it is crucial that such central entities scale well w.r.t. the overall number of transactions. In this paper, we focus on PPay as a case study, to show how the broker would greatly benefit in terms of computational cost if aggregate signatures are adopted instead of RSA signatures.
---
paper_title: To Share or not to Share: An Analysis of Incentives to Contribute in Collaborative File Sharing Environments
paper_content:
Projects developing infrastructure for the pooling of distributed resources (data, storage, or computation) [1, 2] often assume that resource owners have committed their resources and that the chief task is to integrate and use them efficiently. Such projects frequently ignore the question of whether individual resource owners are willing to share their personal resources for the overall good of the community. However, experiences [3-6] with peer-to-peer (P2P) file sharing systems like Gnutella, Napster, and Kazaa suggest that users are not altruistic. In Gnutella, for example, 70% of all users do not share files, and 50% of all requests are satisfied by the top 1% sharing hosts. Thus, incentive mechanisms that motivate users to contribute resources may be critical to eventual success of such systems. Various approaches to incentives have been proposed, including pricing or micro-currency schemes [7] and so-called “soft incentive” or nonpricing schemes [8]. However, the effectiveness of these different schemes is not well understood. In this paper we take a step towards understanding the performance of incentive schemes by defining and applying an analytic model based on Schelling’s Multi-Person Prisoner’s Dilemma (MPD) [9]. We use both this framework and simulations to study the effectiveness of different schemes for encouraging sharing in distributed file sharing systems. We consider three such schemes: the soft-incentive, reputation-based Peer-Approved and Service-Quality, and the Token-Exchange pricing scheme. After introducing the MPD model (Section 2), we use it to explain the rational behavior of users in a P2P file sharing community without incentives (Section 3). We then analyze user behavior when Peer-Approved is used as the incentive mechanism and find that it is effective in incentivizing rational users to share more files (Section 4). We then shift to the use of simulations so as to look beyond the assumptions of the MPD model. We measure the effectiveness of the reputation-based soft-incentive Peer-Approved scheme and compare it with that of the Token-Exchange pricing scheme (in Section 5). We find that even simple soft-incentive schemes can motivate users of P2P file-sharing systems to increase contributions in a way that benefits all users, including themselves.
---
paper_title: P2P-NetPay: An Off-line Micro-payment System for Content Sharing in P2P-Networks
paper_content:
Micro-payment systems have the potential to provide non-intrusive, high-volume and low-cost pay-as-you-use services for a wide variety of web-based applications. We propose a new model, P2P-NetPay, a micro-payment protocol characterized by off-line processing and suitable for peer-to-peer network service charging. P2P micro-payment systems must provide a secure, highly efficient, flexible, usable and reliable environment; these are the key issues in P2P micro-payment system development. Therefore, in order to assist in the design and implementation of an efficient micro-payment system suitable for P2P networks, we describe a prototype architecture for a new P2P-based micro-payment model based on the NetPay micro-payment system. We present an object-oriented design and describe a prototype implementation of P2P-NetPay for a file-sharing P2P system. We report on initial evaluation results deploying our P2P-NetPay prototype and outline directions for future research in P2P micro-payment implementations.
---
paper_title: Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies
paper_content:
Besides attracting a billion dollar economy, Bitcoin revolutionized the field of digital currencies and influenced many adjacent areas. This also induced significant scientific interest. In this survey, we unroll and structure the manifold results and research directions. We start by introducing the Bitcoin protocol and its building blocks. From there we continue to explore the design space by discussing existing contributions and results. In the process, we deduce the fundamental structures and insights at the core of the Bitcoin protocol and its applications. As we show and discuss, many key ideas are likewise applicable in various other fields, so that their impact reaches far beyond Bitcoin itself.
---
paper_title: Scalability evaluation of a peer-to-peer market place based on micro payments
paper_content:
A fair peer-to-peer market place should protect intellectual properties as well as account peers that act as distributors of the source. FairPeers is a scheme in which some central authorities are necessary, with the drawback that when the number of transactions grows, these entities can represent single points of failure. This paper proposes a generic model to analytically evaluate such a market place and estimate its performance in terms of scalability w.r.t. the total number of printed coins and the overall transactions that can occur in the given peer-to-peer system.
---
paper_title: WhoPay: A Scalable and Anonymous Payment System for Peer-to-Peer Environments
paper_content:
An electronic payment system ideally should provide security, anonymity, fairness, transferability and scalability. Existing payment schemes often lack either anonymity or scalability. In this paper we propose WhoPay, a peer-to-peer payment system that provides all the above properties. For anonymity, we represent coins with public keys; for scalability, we distribute coin transfer load across all peers, rather than rely on a central entity such as the broker. This basic version of WhoPay is as secure and scalable as existing peer-to-peer payment schemes, while providing a much higher level of user anonymity. We also introduce the idea of real-time double spending detection by making use of distributed hash tables (DHT). Simulation results show that the majority of the system load is handled by the peers under typical peer availability, indicating that WhoPay should scale well.
---
paper_title: Off-Line micro-payment system for content sharing in p2p networks
paper_content:
Micro-payment systems have the potential to provide non-intrusive, high-volume and low-cost pay-as-you-use services for a wide variety of web-based applications. We propose an extension, P2P-NetPay, a micro-payment protocol characterized by off-line processing, suitable for peer-to-peer network services sharing. Our approach provides high performance and security using one-way hashing functions for e-coin encryption. In our P2P-NetPay protocol, each peer's transaction does not involve any broker and double spending is detected during the redeeming transaction. We describe the motivation for P2P-NetPay and describe three transactions of the P2P-NetPay protocol in detail to illustrate the approach. We then discuss future research on this protocol.
---
paper_title: KARMA: A Secure Economic Framework for Peer-to-Peer Resource Sharing
paper_content:
Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their karma. A set of nodes, called a bankset, keeps track of each node's karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer file-sharing application.
---
paper_title: Constructing Fair-Exchange P2P File Market
paper_content:
P2P is a promising technology to construct the underlying supporting layer of a Grid. It is known that contribution from all the peers is vital for the sustainability of a P2P community, but peers are often selfish and unwilling to contribute. In this paper we describe how to construct a fair file-exchanging P2P community. We name this community a P2P file market. Our scheme forces peers to contribute by a micropayment-based incentive mechanism.
---
paper_title: Regarding timeliness in the context of fair exchange
paper_content:
In this paper we discuss the often overlooked timeliness property of fair exchange protocols. We gather different available definitions of this property, and propose a new and stronger interpretation for timeliness in the context of security protocols. We discuss common timeliness-related pitfalls in fair exchange protocol design, and show a particular timeliness attack effective in several optimistic protocols proposed in the literature. Finally, we provide guidelines that may help to avoid common mistakes in protocol design, and propose our own protocol that ensures both fairness and timeliness.
---
paper_title: PPay: micropayments for peer-to-peer systems
paper_content:
Emerging economic P2P applications share the common need for an efficient, secure payment mechanism. In this paper, we present PPay, a micropayment system that exploits unique characteristics of P2P systems to maximize efficiency while maintaining security properties. We show how the basic PPay protocol far outperforms existing micropayment schemes, while guaranteeing that all coin fraud is detectable, traceable and unprofitable. We also present and analyze several extensions to PPay that further improve efficiency.
---
paper_title: A fistful of bitcoins: characterizing payments among men with no names
paper_content:
Bitcoin is a purely online virtual currency, unbacked by either physical commodities or sovereign obligation; instead, it relies on a combination of cryptographic protection and a peer-to-peer protocol for witnessing settlements. Consequently, Bitcoin has the unintuitive property that while the ownership of money is implicitly anonymous, its flow is globally visible. In this paper we explore this unique characteristic further, using heuristic clustering to group Bitcoin wallets based on evidence of shared authority, and then using re-identification attacks (i.e., empirical purchasing of goods and services) to classify the operators of those clusters. From this analysis, we characterize longitudinal changes in the Bitcoin market, the stresses these changes are placing on the system, and the challenges for those seeking to use Bitcoin for criminal or fraudulent purposes at scale.
---
paper_title: Micropayments for Decentralized Currencies
paper_content:
Electronic financial transactions in the US, even those enabled by Bitcoin, have relatively high transaction costs. As a result, it becomes infeasible to make micropayments, i.e. payments that are pennies or fractions of a penny. In order to circumvent the cost of recording all transactions, Wheeler (1996) and Rivest (1997) suggested the notion of a probabilistic payment, that is, one implements payments that have expected value on the order of micro pennies by running an appropriately biased lottery for a larger payment. While there have been quite a few proposed solutions to such lottery-based micropayment schemes, all these solutions rely on a trusted third party to coordinate the transactions; furthermore, to implement these systems in today's economy would require a global change to how either banks or electronic payment companies (e.g., Visa and Mastercard) handle transactions. We put forth a new lottery-based micropayment scheme for any ledger-based transaction system, that can be used today without any change to the current infrastructure. We implement our scheme in a sample web application and show how a single server can handle thousands of micropayment requests per second. We provide an analysis for how the scheme can work at Internet scale.
---
paper_title: Decentralized Anonymous Micropayments
paper_content:
Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to “micro” sums of money.
---
paper_title: Towards Bitcoin Payment Networks
paper_content:
Bitcoin as deployed today does not scale. Scalability research has focused on two directions: (1) redesigning the Blockchain protocol, and (2) facilitating 'off-chain transactions' and only consulting the Blockchain if an adjudicator is required. In this paper we focus on the latter and provide an overview of Bitcoin payment networks. These consist of two components: payment channels to facilitate off-chain transactions between two parties, and the capability to fairly exchange bitcoins across multiple channels. We compare Duplex Micropayment Channels and Lightning Channels, before discussing Hashed Time-Locked Contracts which enable Bitcoin-based payment networks. Finally, we highlight challenges for route discovery in these networks.
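A plain-Python sketch of the hashed time-locked contract (HTLC) condition that such payment networks rely on. The real construct is a Bitcoin script, which is not reproduced here; the class, field names and amounts below are illustrative only.

```python
import hashlib
import time

class HTLC:
    """Funds can be claimed with the hash preimage before the timeout,
    or refunded to the sender after it (illustrative logic only)."""
    def __init__(self, amount: int, hash_lock: bytes, timeout: float):
        self.amount, self.hash_lock, self.timeout = amount, hash_lock, timeout
        self.settled = False

    def claim(self, preimage: bytes, now: float) -> bool:
        if (not self.settled and now < self.timeout
                and hashlib.sha256(preimage).digest() == self.hash_lock):
            self.settled = True          # receiver gets self.amount
            return True
        return False

    def refund(self, now: float) -> bool:
        if not self.settled and now >= self.timeout:
            self.settled = True          # sender gets self.amount back
            return True
        return False

secret = b"payment-secret"
htlc = HTLC(50_000, hashlib.sha256(secret).digest(), timeout=time.time() + 3600)
print(htlc.claim(secret, now=time.time()))   # True while the time lock is still open
```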
---
paper_title: Beware the Middleman: Empirical Analysis of Bitcoin-Exchange Risk
paper_content:
Bitcoin has enjoyed wider adoption than any previous crypto-currency; yet its success has also attracted the attention of fraudsters who have taken advantage of operational insecurity and transaction irreversibility. We study the risk investors face from Bitcoin exchanges, which convert between Bitcoins and hard currency. We examine the track record of 40 Bitcoin exchanges established over the past three years, and find that 18 have since closed, with customer account balances often wiped out. Fraudsters are sometimes to blame, but not always. Using a proportional hazards model, we find that an exchange's transaction volume indicates whether or not it is likely to close. Less popular exchanges are more likely to be shut than popular ones. We also present a logistic regression showing that popular exchanges are more likely to suffer a security breach.
---
paper_title: Bitcoin: Perils of an Unregulated Global P2P Currency
paper_content:
Bitcoin has, since 2009, become an increasingly popular online currency, in large part because it resists regulation and provides anonymity. We discuss how Bitcoin has become both a highly useful tool for criminals and a lucrative target for crime, and argue that this arises from the same essential ideological and design choices that have driven Bitcoin's success to date. In this paper, we survey the landscape of Bitcoin-related crime, such as dark markets and bitcoin theft, and speculate about possible future possibilities, including tax evasion and money laundering.
---
paper_title: For sale : your data: by : you
paper_content:
Monetizing personal information is a key economic driver of online industry. End-users are becoming more concerned about their privacy, as evidenced by increased media attention. This paper proposes a mechanism called 'transactional' privacy that can be applied to personal information of users. Users decide what personal information about themselves is released and put on sale while receiving compensation for it. Aggregators purchase access to exploit this information when serving ads to a user. Truthfulness and efficiency, attained through an unlimited supply auction, ensure that the interests of all parties in this transaction are aligned. We demonstrate the effectiveness of transactional privacy for web-browsing using a large mobile trace from a major European capital. We integrate transactional privacy in a privacy-preserving system that curbs leakage of information. These mechanisms combine to form a market of personal information that can be managed by a trusted third party.
---
paper_title: Digital goods and markets: Emerging issues and challenges
paper_content:
This research commentary examines the changing landscape of digital goods, and discusses important emerging issues for IS researchers to explore. We begin with a discussion of the major technological milestones that have shaped digital goods industries such as music, movies, software, books, video games, and recently emerging digital goods. Our emphasis is on economic and legal issues, rather than on design science or sociological issues. We explore how research has been influenced by the major technological milestones and discuss the major findings of prior research. Based on this, we offer a roadmap for future researchers to explore the emergent changes in the digital goods arena, covering different aspects of digital goods industries such as risk management, value chain, legal aspects, transnational and cross-cultural issues.
---
paper_title: Collaborative Micropayment Systems
paper_content:
Around the world many different micropayment systems are in use. Because of this variety, content providers and customers may rely on different systems. As a result, customers may be unable to buy content from providers using a different system. This paper proposes a novel approach that allows existing micropayment systems to collaborate. This collaboration is realized by introducing an intermediate system, called a Payment Gateway, that interconnects different payment systems. This payment gateway enables content providers and customers to use their micropayment system of choice.
---
paper_title: SEMOPS: design of a new payment service
paper_content:
One of the most promising future applications in the domain of mCommerce is mobile payment. Different approaches have come to the market and try to address existing needs, but to date no global solution exists. In this paper we provide an insight into the SEMOPS project, analyzing some of the requirements that have been its guiding force as well as the business model it supports.
---
| Title: The Nuts and Bolts of Micropayments: A Survey
Section 1: INTRODUCTION
Description 1: In this section, introduce the topic of micropayments, discuss the reasons for renewed interest, and outline the contributions of this survey.
Section 2: BACKGROUND
Description 2: This section provides a definition of micropayments, discusses their properties, and traces the historical development of micropayment systems.
Section 3: A Short History of Micropayment Systems
Description 3: Summarize the different generations of micropayment systems and the innovations and shortcomings associated with each.
Section 4: CRYPTOGRAPHY-BASED SYSTEMS
Description 4: Discuss various cryptographic solutions for micropayments, classified into categories such as centralized solutions, voucher-based solutions, commitment-based systems, crypto-token solutions, probabilistic mechanisms, peer-to-peer systems, and enabling micropayments on Bitcoin.
Section 5: COMMERCIAL SOLUTIONS
Description 5: Examine commercial micropayment solutions, categorized by their different operational models including pre-paid systems, facilitating merchants, facilitating customers, and aggregation.
Section 6: OUTSTANDING CHALLENGES AND FUTURE DIRECTIONS
Description 6: Outline the key challenges in micropayment system design and deployment, including security concerns, legal and ethical issues, cognitive costs, business models, and deployment strategies.
Section 7: CONCLUSION
Description 7: Summarize the survey, highlighting the current state of micropayment systems, the lessons learned from past implementations, and the future direction of research and development in this area.
Section 8: APPENDIX A: A PRIMER ON BUSINESS MODELS FOR WEB PUBLISHING
Description 8: Provide an overview of various business models for web publishing, including advertising, paywalls, and subscription-based models.
Section 9: APPENDIX B: STANDARDIZATION
Description 9: Discuss standardization efforts by organizations like the World Wide Web Consortium (W3C) and other industry groups aimed at developing universal micropayment protocols. |
Jpeg Image Compression Using Discrete Cosine Transform - A Survey | 11 | ---
paper_title: Introduction to Data Compression
paper_content:
Preface 1 Introduction 1.1 Compression Techniques 1.1.1 Lossless Compression 1.1.2 Lossy Compression 1.1.3 Measures of Performance 1.2 Modeling and Coding 1.3 Organization of This Book 1.4 Summary 1.5 Projects and Problems 2 Mathematical Preliminaries 2.1 Overview 2.2 A Brief Introduction to Information Theory 2.3 Models 2.3.1 Physical Models 2.3.2 Probability Models 2.3.3. Markov Models 2.3.4 Summary 2.5 Projects and Problems 3 Huffman Coding 3.1 Overview 3.2 "Good" Codes 3.3. The Huffman Coding Algorithm 3.3.1 Minimum Variance Huffman Codes 3.3.2 Length of Huffman Codes 3.3.3 Extended Huffman Codes 3.4 Nonbinary Huffman Codes 3.5 Adaptive Huffman Coding 3.5.1 Update Procedure 3.5.2 Encoding Procedure 3.5.3 Decoding Procedure 3.6 Applications of Huffman Coding 3.6.1 Lossless Image Compression 3.6.2 Text Compression 3.6.3 Audio Compression 3.7 Summary 3.8 Projects and Problems 4 Arithmetic Coding 4.1 Overview 4.2 Introduction 4.3 Coding a Sequence 4.3.1 Generating a Tag 4.3.2 Deciphering the Tag 4.4 Generating a Binary Code 4.4.1 Uniqueness and Efficiency of the Arithmetic Code 4.4.2 Algorithm Implementation 4.4.3 Integer Implementation 4.5 Comparison of Huffman and Arithmetic Coding 4.6 Applications 4.6.1 Bi-Level Image Compression-The JBIG Standard 4.6.2 Image Compression 4.7 Summary 4.8 Projects and Problems 5 Dictionary Techniques 5.1 Overview 5.2 Introduction 5.3 Static Dictionary 5.3.1 Diagram Coding 5.4 Adaptive Dictionary 5.4.1 The LZ77 Approach 5.4.2 The LZ78 Approach 5.5 Applications 5.5.1 File Compression-UNIX COMPRESS 5.5.2 Image Compression-the Graphics Interchange Format (GIF) 5.5.3 Compression over Modems-V.42 bis 5.6 Summary 5.7 Projects and Problems 6 Lossless Image Compression 6.1 Overview 6.2 Introduction 6.3 Facsimile Encoding 6.3.1 Run-Length Coding 6.3.2 CCITT Group 3 and 4-Recommendations T.4 and T.6 6.3.3 Comparison of MH, MR, MMR, and JBIG 6.4 Progressive Image Transmission 6.5 Other Image Compression Approaches 6.5.1 Linear Prediction Models 6.5.2 Context Models 6.5.3 Multiresolution Models 6.5.4 Modeling Prediction Errors 6.6 Summary 6.7 Projects and Problems 7 Mathematical Preliminaries 7.1 Overview 7.2 Introduction 7.3 Distortion Criteria 7.3.1 The Human Visual System 7.3.2 Auditory Perception 7.4 Information Theory Revisted 7.4.1 Conditional Entropy 7.4.2 Average Mutual Information 7.4.3 Differential Entropy 7.5 Rate Distortion Theory 7.6 Models 7.6.1 Probability Models 7.6.2 Linear System Models 7.6.3 Physical Models 7.7 Summary 7.8 Projects and Problems 8 Scalar Quantization 8.1 Overview 8.2 Introduction 8.3 The Quantization Problem 8.4 Uniform Quantizer 8.5 Adaptive Quantization 8.5.1 Forward Adaptive Quantization 8.5.2 Backward Adaptive Quantization 8.6 Nonuniform Quantization 8.6.1 pdf-Optimized Quantization 8.6.2 Companded Quantization 8.7 Entropy-Coded Quantization 8.7.1 Entropy Coding of Lloyd-Max Quantizer Outputs 8.7.2 Entropy-Constrained Quantization 8.7.3 High-Rate Optimum Quantization 8.8 Summary 8.9 Projects and Problems 9 Vector Quantization 9.1 Overview 9.2 Introduction 9.3 Advantages of Vector Quantization over Scalar Quantization 9.4 The Linde-Buzo-Gray Algorithm 9.4.1 Initializing the LBG Algorithm 9.4.2 The Empty Cell Problem 9.4.3 Use of LBG for Image Compression 9.5 Tree-Structured Vector Quantizers 9.5.1 Design of Tree-Structured Vector Quantizers 9.6 Structured Vector Quantizers 9.6.1 Pyramid Vector Quantization 9.6.2 Polar and Spherical Vector Quantizers 9.6.3 Lattice Vector Quantizers 9.7 Variations on the Theme 9.7.1 Gain-Shape 
Vector Quantization 9.7.2 Mean-Removed Vector Quantization 9.7.3 Classified Vector Quantization 9.7.4 Multistage Vector Quantization 9.7.5 Adaptive Vector Quantization 9.8 Summary 9.9 Projects and Problems 10 Differential Encoding 10.1 Overview 10.2 Introduction 10.3 The Basic Algorithm 10.4 Prediction in DPCM 10.5 Adaptive DPCM (ADPCM) 10.5.1 Adaptive Quantization in DPCM 10.5.2 Adaptive Prediction in DPCM 10.6 Delta Modulation 10.6.1 Constant Factor Adaptive Delta Modulation (CFDM) 10.6.2 Continuously Variable Slope Delta Modulation 10.7 Speech Coding 10.7.1 G.726 10.8 Summary 10.9 Projects and Problems 11 Subband Coding 11.1 Overview 11.2 Introduction 11.3 The Frequency Domain and Filtering 11.3.1 Filters 11.4 The Basic Subband Coding Algorithm 11.4.1 Bit Allocation 11.5 Application to Speech Coding-G.722 11.6 Application to Audio Coding-MPEG Audio 11.7 Application to Image Compression 11.7.1 Decomposing an Image 11.7.2 Coding the Subbands 11.8 Wavelets 11.8.1 Families of Wavelets 11.8.2 Wavelets and Image Compression 11.9 Summary 11.10 Projects and Problems 12 Transform Coding 12.1 Overview 12.2 Introduction 12.3 The Transform 12.4 Transforms of Interest 12.4.1 Karhunen-Loeve Transform 12.4.2 Discrete Cosine Transform 12.4.3 Discrete Sine Transform 12.4.4 Discrete Walsh-Hadamard Transform 12.5 Quantization and Coding of Transform Coefficients 12.6 Application to Image Compression-JPEG 12.6.1 The Transform 12.6.2 Quantization 12.6.3 Coding 12.7 Application to Audio Compression 12.8 Summary 12.9 Projects and Problems 13 Analysis/Synthesis Schemes 13.1 Overview 13.2 Introduction 13.3 Speech Compression 13.3.1 The Channel Vocoder 13.3.2 The Linear Predictive Coder (Gov.Std.LPC-10) 13.3.3 Code Excited Linear Prediction (CELP) 13.3.4 Sinusoidal Coders 13.4 Image Compression 13.4.1 Fractal Compression 13.5 Summary 13.6 Projects and Problems 14 Video Compression 14.1 Overview 14.2 Introduction 14.3 Motion Compensation 14.4 Video Signal Representation 14.5 Algorithms for Videoconferencing and Videophones 14.5.1 ITU_T Recommendation H.261 14.5.2 Model-Based Coding 14.6 Asymmetric Applications 14.6.1 The MPEG Video Standard 14.7 Packet Video 14.7.1 ATM Networks 14.7.2 Compression Issues in ATM Networks 14.7.3 Compression Algorithms for Packet Video 14.8 Summary 14.9 Projects and Problems A Probability and Random Processes A.1 Probability A.2 Random Variables A.3 Distribution Functions A.4 Expectation A.5 Types of Distribution A.6 Stochastic Process A.7 Projects and Problems B A Brief Review of Matrix Concepts B.1 A Matrix B.2 Matrix Operations C Codes for Facsimile Encoding D The Root Lattices Bibliography Index
---
paper_title: Applied Linear Algebra
paper_content:
Chapter 1. Linear Algebraic Systems 1.1. Solution of Linear Systems 1.2. Matrices and Vectors 1.3. Gaussian Elimination - Regular Case 1.4. Pivoting and Permutations 1.5. Matrix Inverses 1.6. Transposes and Symmetric Matrices 1.7. Practical Linear Algebra 1.8. General Linear Systems 1.9. Determinants Chapter 2. Vector Spaces and Bases 2.1. Vector Spaces 2.2. Subspaces 2.3. Span and Linear Independence 2.4. Bases 2.5. The Fundamental Matrix Subspaces 2.6. Graphs and Incidence Matrices Chapter 3. Inner Products and Norms 3.1. Inner Products 3.2. Inequalities 3.3. Norms 3.4. Positive Definite Matrices 3.5. Completing the Square 3.6. Complex Vector Spaces Chapter 4. Minimization and Least Squares Approximation 4.1. Minimization Problems 4.2. Minimization of Quadratic Functions 4.3. Least Squares and the Closest Point 4.4. Data Fitting and Interpolation Chapter 5. Orthogonality 5.1. Orthogonal Bases 5.2. The Gram-Schmidt Process 5.3. Orthogonal Matrices 5.4. Orthogonal Polynomials 5.5. Orthogonal Projections and Least Squares 5.6. Orthogonal Subspaces Chapter 6. Equilibrium 6.1. Springs and Masses 6.2. Electrical Networks 6.3. Structures Chapter 7. Linearity 7.1. Linear Functions 7.2. Linear Transformations 7.3. Affine Transformations and Isometries 7.4. Linear Systems 7.5. Adjoints Chapter 8. Eigenvalues 8.1. Simple Dynamical Systems 8.2. Eigenvalues and Eigenvectors 8.3. Eigenvector Bases and Diagonalization 8.4. Eigenvalues of Symmetric Matrices 8.5. Singular Values 8.6. Incomplete Matrices and the Jordan Canonical Form Chapter 9. Linear Dynamical Systems 9.1. Basic Solution Methods 9.2. Stability of Linear Systems 9.3. Two-Dimensional Systems 9.4. Matrix Exponentials 9.5. Dynamics of Structures 9.6. Forcing and Resonance Chapter 10. Iteration of Linear Systems 10.1. Linear Iterative Systems 10.2. Stability 10.3. Matrix Norms 10.4. Markov Processes 10.5. Iterative Solution of Linear Systems 10.6. Numerical Computation of Eigenvalues Chapter 11. Boundary Value Problems in One Dimension 11.1. Elastic Bars 11.2. Generalized Functions and the Green's Function 11.3. Adjoints and Minimum Principles 11.4. Beams and Splines 11.5. Sturm-Liouville Boundary Value Problems 11.6. Finite Elements
---
| Title: Jpeg Image Compression Using Discrete Cosine Transform - A Survey
Section 1: INTRODUCTION
Description 1: Write about the significance of image compression in the digital age and explain the necessity for efficient digital information management.
Section 2: Principles Behind Compression
Description 2: Discuss the key principles behind image compression techniques, including spatial redundancy, spectral redundancy, and psycho-visual redundancy, and describe various compression categorizations.
Section 3: A Typical Image Coder
Description 3: Describe the components of a typical lossy image compression system, including the source encoder, quantizer, and entropy encoder.
Section 4: Performance Criteria in Image Compression
Description 4: Introduce the performance criteria for image compression, focusing on compression ratio (CR) and quality measurement of the reconstructed image (PSNR).
Section 5: DCT TRANSFORMATION
Description 5: Explain the Discrete Cosine Transform (DCT) and its importance in image compression, including its applications and advantages.
Section 6: JPEG COMPRESSION
Description 6: Detail the JPEG compression process, its steps, and the significance of JPEG as a standard for image compression using the DCT.
Section 7: Discrete Cosine Transform
Description 7: Describe the transformation of image data into frequency components using DCT and the subsequent steps in the JPEG process.
Section 8: Quantization in JPEG
Description 8: Explain the quantization step in JPEG compression, including the process of data reduction and the role of frequency components.
Section 9: Huffman Encoding
Description 9: Describe the entropy encoding process in JPEG compression, particularly Huffman coding, and its effectiveness in compressing data.
Section 10: Decompression
Description 10: Outline the steps involved in the decompression process, reversing the compression steps, and restoring the image data.
Section 11: CONCLUSION & FUTURE WORK
Description 11: Summarize the survey's findings on JPEG image compression using DCT and discuss future work, including comparisons with other compression techniques like Discrete Wavelet Transform. |
A survey of the S-lemma | 19 | ---
paper_title: Topics in matrix analysis
paper_content:
1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.
---
paper_title: Trust Region Methods
paper_content:
Preface 1. Introduction Part I. Preliminaries: 2. Basic Concepts 3. Basic Analysis and Optimality Conditions 4. Basic Linear Algebra 5. Krylov Subspace Methods Part II. Trust-Region Methods for Unconstrained Optimization: 6. Global Convergence of the Basic Algorithm 7. The Trust-Region Subproblem 8. Further Convergence Theory Issues 9. Conditional Models 10. Algorithmic Extensions 11. Nonsmooth Problems Part III. Trust-Region Methods for Constrained Optimization with Convex Constraints: 12. Projection Methods for Convex Constraints 13. Barrier Methods for Inequality Constraints Part IV. Trust-Region Methods for General Constrained Optimization and Systems of Nonlinear Equations: 14. Penalty-Function Methods 15. Sequential Quadratic Programming Methods 16. Nonlinear Equations and Nonlinear Fitting Part V. Final Considerations: Practicalities Afterword Appendix: A Summary of Assumptions Annotated Bibliography Subject and Notation Index Author Index.
---
paper_title: On Cones of Nonnegative Quadratic Functions
paper_content:
We derive linear matrix inequality (LMI) characterizations and dual decomposition algorithms for certain matrix cones which are generated by a given set using generalized co-positivity. These matrix cones are in fact cones of nonconvex quadratic functions that are nonnegative on a certain domain. As a domain, we consider for instance the intersection of a (upper) level-set of a quadratic function and a half-plane. Consequently, we arrive at a generalization of Yakubovich's S-procedure result. Although the primary concern of this paper is to characterize the matrix cones by LMIs, we show, as an application of our results, that optimizing a general quadratic function over the intersection of an ellipsoid and a half-plane can be formulated as semidefinite programming (SDP), thus proving the polynomiality of this class of optimization problems, which arise, e.g., from the application of the trust region method for nonlinear programming. Other applications are in control theory and robust optimization.
---
paper_title: On a subproblem of trust region algorithms for constrained optimization
paper_content:
We study a subproblem that arises in some trust region algorithms for equality constrained optimization. It is the minimization of a general quadratic function with two special quadratic constraints. Properties of such subproblems are given. It is proved that the Hessian of the Lagrangian has at most one negative eigenvalue, and an example is presented to show that the Hessian may have a negative eigenvalue when one constraint is inactive at the solution.
---
paper_title: Multivariate nonnegative quadratic mappings
paper_content:
In this paper, we study several issues related to the characterization of specific classes of multivariate quadratic mappings that are nonnegative over a given domain, with nonnegativity defined by a prespecified conic order. In particular, we consider the set (cone) of nonnegative quadratic mappings, defined with respect to the positive semidefinite matrix cone, and study when it can be represented by linear matrix inequalities. We also discuss the applications of the results in robust optimization, especially the robust quadratic matrix inequalities and the robust linear programming models. In the latter application the implementational errors of the solution are taken into account, and the problem is formulated as a semidefinite program.
---
paper_title: On Cones of Nonnegative Quadratic Functions
paper_content:
We derive linear matrix inequality (LMI) characterizations and dual decomposition algorithms for certain matrix cones which are generated by a given set using generalized co-positivity. These matrix cones are in fact cones of nonconvex quadratic functions that are nonnegative on a certain domain. As a domain, we consider for instance the intersection of a (upper) level-set of a quadratic function and a half-plane. Consequently, we arrive at a generalization of Yakubovich's S-procedure result. Although the primary concern of this paper is to characterize the matrix cones by LMIs, we show, as an application of our results, that optimizing a general quadratic function over the intersection of an ellipsoid and a half-plane can be formulated as semidefinite programming (SDP), thus proving the polynomiality of this class of optimization problems, which arise, e.g., from the application of the trust region method for nonlinear programming. Other applications are in control theory and robust optimization.
---
paper_title: Convexity of quadratic transformations and its use in control and optimization
paper_content:
Quadratic transformations have the hidden convexity property which allows one to deal with them as if they were convex functions. This phenomenon was encountered in various optimization and control problems, but it was not always recognized as a consequence of some general property. We present a theory on convexity and closedness of a 3D quadratic image of ℝ^n, n≥3, which explains many disjoint known results and provides some new ones.
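As a concrete reference point for the hidden convexity mentioned above (a classical fact added here for illustration, not taken from the abstract), Dines' theorem states that the joint range of any two real quadratic forms is convex: $$\{(x^T A x,\; x^T B x) : x \in \mathbb{R}^n\} \subseteq \mathbb{R}^2 \ \text{is convex for all symmetric matrices } A, B.$$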
---
paper_title: Degenerate Nonlinear Programming with a Quadratic Growth Condition
paper_content:
We show that the quadratic growth condition and the Mangasarian--Fromovitz constraint qualification (MFCQ) imply that local minima of nonlinear programs are isolated stationary points. As a result, when started sufficiently close to such points, an $L_\infty$ exact penalty sequential quadratic programming algorithm will induce at least R-linear convergence of the iterates to such a local minimum. We construct an example of a degenerate nonlinear program with a unique local minimum satisfying the quadratic growth and the MFCQ but for which no positive semidefinite augmented Lagrangian exists. We present numerical results obtained using several nonlinear programming packages on this example and discuss its implications for some algorithms.
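For the reader's convenience, the quadratic growth condition invoked above is usually written as follows (a standard formulation; the constant $c$ and neighborhood $N$ are generic symbols rather than notation from the paper): there exist $c>0$ and a neighborhood $N$ of the local minimizer $x^*$ such that $$f(x) \ge f(x^*) + c\,\mathrm{dist}(x, S^*)^2 \quad \text{for all feasible } x \in N,$$ where $S^*$ denotes the set of local minimizers attaining the value $f(x^*)$.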
---
paper_title: An Energy Amplification Condition for Decentralized Adaptive Stabilization
paper_content:
The author is interested here in the problem of global decentralized adaptive regulation (of the plant output to zero) of square multivariable linear time-invariant systems without any restrictions on relative degrees or matching assumptions. The first solution to this problem was recently reported by the author using Morse's new dynamic certainty equivalent adaptive controller to prove that global stabilization is possible if the unmodeled interconnections do not induce "amplification of the energy of the signals in all channels". In this paper the author shows that to preserve global convergence, it is actually enough to have only one "nonamplifying channel". Instrumental for the establishment of the authors' result is the fundamental S-procedure losslessness theorem of Megretsky and Treil, together with some basic loop transformations and D-scalings.
---
paper_title: A Remark On The Rank Of Positive Semidefinite Matrices Subject To Affine Constraints
paper_content:
Let $K_n$ be the cone of positive semidefinite $n \times n$ matrices and let $A$ be an affine subspace of the space of symmetric matrices such that the intersection $K_n \cap A$ is nonempty and bounded. Suppose that $n \ge 3$ and that $\mathrm{codim}\, A = \binom{r+2}{2}$ for some $1 \le r \le n-2$. Then there is a matrix $X \in K_n \cap A$ such that $\mathrm{rank}\, X \le r$. We give a short geometric proof of this result, use it to improve a bound on realizability of weighted graphs as graphs of distances between points in Euclidean space, and describe its relation to theorems of Bohnenblust, Friedland and Loewy, and Au-Yeung and Poon.
---
paper_title: Ridge regression: biased estimation for nonorthogonal problems
paper_content:
In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X′X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X′X to obtain biased estimates with smaller mean square error.
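In matrix notation the estimator sketched in this abstract is usually written as (the standard ridge formula, with $k \ge 0$ the ridge parameter; notation assumed here only for illustration) $$\hat{\beta}(k) = (X^T X + k I)^{-1} X^T y,$$ which reduces to ordinary least squares at $k=0$ and shrinks the coefficient estimates as $k$ grows.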
---
paper_title: A Superlinearly Convergent Sequential Quadratically Constrained Quadratic Programming Algorithm For Degenerate Nonlinear Programming
paper_content:
We present an algorithm that achieves superlinear convergence for nonlinear programs satisfying the Mangasarian--Fromovitz constraint qualification and the quadratic growth condition. This convergence result is obtained despite the potential lack of a locally convex augmented Lagrangian. The algorithm solves a succession of subproblems that have quadratic objectives and quadratic constraints, both possibly nonconvex. By the use of a trust-region constraint we guarantee that any stationary point of the subproblem induces superlinear convergence, which avoids the problem of computing a global minimum. We compare this algorithm with sequential quadratic programming algorithms on several degenerate nonlinear programs.
---
paper_title: Robust Portfolio Selection Problems
paper_content:
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce "uncertainty structures" for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures employed to estimate the market parameters. Finally, we demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
---
paper_title: On some generalizations of convex sets and convex functions
paper_content:
A set $C$ in a topological vector space is said to be weakly convex if for any $x,y$ in $C$ there exists $p$ in $(0,1)$ such that $(1-p)x+py\in C$. If the same holds with $p$ independent of $x,y$, then $C$ is said to be $p$-convex. Some basic results are established for such sets, for instance: any weakly convex closed set is convex.
---
paper_title: ON THE RANK OF EXTREME MATRICES IN SEMIDEFINITE PROGRAMS AND THE MULTIPLICITY OF OPTIMAL EIGENVALUES
paper_content:
We derive some basic results on the geometry of semidefinite programming (SDP) and eigenvalue-optimization, i.e., the minimization of the sum of the k largest eigenvalues of a smooth matrix-valued function. We provide upper bounds on the rank of extreme matrices in SDPs, and the first theoretically solid explanation of a phenomenon of intrinsic interest in eigenvalue-optimization. In the spectrum of an optimal matrix, the kth and (k + 1)st largest eigenvalues tend to be equal and frequently have multiplicity greater than two. This clustering is intuitively plausible and has been observed as early as 1975. When the matrix-valued function is affine, we prove that clustering must occur at extreme points of the set of optimal solutions, if the number of variables is sufficiently large. We also give a lower bound on the multiplicity of the critical eigenvalue. These results generalize to the case of a general matrix-valued function under appropriate conditions.
---
paper_title: Convexity of the joint numerical range
paper_content:
Let $A=(A_1, \dots, A_m)$ be an $m$-tuple of $n \times n$ Hermitian matrices. For $1 \le k \le n$, the $k$th joint numerical range of $A$ is defined by $$W_k(A) = \{ (\mathrm{tr}(X^*A_1X), \dots, \mathrm{tr}(X^*A_mX) ): X \in \mathbf{C}^{n\times k}, X^*X = I_k \}.$$ We consider linearly independent families of Hermitian matrices $\{A_1, \dots, A_m\}$ so that $W_k(A)$ is convex. It is shown that $m$ can reach the upper bound $2k(n-k)+1$. A key idea in our study is relating the convexity of $W_k(A)$ to the problem of constructing rank $k$ orthogonal projections under linear constraints determined by $A$. The techniques are extended to study the convexity of other generalized numerical ranges and the corresponding matrix construction problems.
---
paper_title: Convexity of quadratic transformations and its use in control and optimization
paper_content:
Quadratic transformations have the hidden convexity property which allows one to deal with them as if they were convex functions. This phenomenon was encountered in various optimization and control problems, but it was not always recognized as a consequence of some general property. We present a theory on convexity and closedness of a 3D quadratic image of ℝ^n, n≥3, which explains many disjoint known results and provides some new ones.
---
paper_title: ON THE FIELD OF VALUES OF A MATRIX
paper_content:
W is clearly a compact and connected set. Toeplitz showed in [3] that W has a convex outer boundary, and a short time later F. Hausdorff [1] proved that W itself is convex. Since then several investigations have been made concerning the geometry of W. An example of a recent one is the dissertation of R. Kippenhahn [2] in which W is described as the convex hull of a certain algebraic curve of degree n obtainable from C. Writing
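The set W discussed in this excerpt is the field of values (numerical range) of the matrix C; for completeness, the standard definition is $$W(C) = \{\, x^* C x : x \in \mathbb{C}^n,\ x^* x = 1 \,\},$$ and the Toeplitz–Hausdorff theorem referred to above asserts that this set is convex.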
---
paper_title: Convexity of the joint numerical range
paper_content:
Let $A=(A_1, \dots, A_m)$ be an $m$-tuple of $n \times n$ Hermitian matrices. For $1 \le k \le n$, the $k$th joint numerical range of $A$ is defined by $$W_k(A) = \{ (\mathrm{tr}(X^*A_1X), \dots, \mathrm{tr}(X^*A_mX) ): X \in \mathbf{C}^{n\times k}, X^*X = I_k \}.$$ We consider linearly independent families of Hermitian matrices $\{A_1, \dots, A_m\}$ so that $W_k(A)$ is convex. It is shown that $m$ can reach the upper bound $2k(n-k)+1$. A key idea in our study is relating the convexity of $W_k(A)$ to the problem of constructing rank $k$ orthogonal projections under linear constraints determined by $A$. The techniques are extended to study the convexity of other generalized numerical ranges and the corresponding matrix construction problems.
---
paper_title: ON THE FIELD OF VALUES OF A MATRIX
paper_content:
W is clearly a compact and connected set. Toeplitz showed in [3] that W has a convex outer boundary, and a short time later F. Hausdorff [1] proved that W itself is convex. Since then several investigations have been made concerning the geometry of W. An example of a recent one is the dissertation of R. Kippenhahn [2] in which W is described as the convex hull of a certain algebraic curve of degree n obtainable from C. Writing
---
paper_title: A Superlinearly Convergent Sequential Quadratically Constrained Quadratic Programming Algorithm For Degenerate Nonlinear Programming
paper_content:
We present an algorithm that achieves superlinear convergence for nonlinear programs satisfying the Mangasarian--Fromovitz constraint qualification and the quadratic growth condition. This convergence result is obtained despite the potential lack of a locally convex augmented Lagrangian. The algorithm solves a succession of subproblems that have quadratic objectives and quadratic constraints, both possibly nonconvex. By the use of a trust-region constraint we guarantee that any stationary point of the subproblem induces superlinear convergence, which avoids the problem of computing a global minimum. We compare this algorithm with sequential quadratic programming algorithms on several degenerate nonlinear programs.
---
paper_title: Convexity of the joint numerical range
paper_content:
Let $A=(A_1, \dots, A_m)$ be an $m$-tuple of $n \times n$ Hermitian matrices. For $1 \le k \le n$, the $k$th joint numerical range of $A$ is defined by $$W_k(A) = \{ (\mathrm{tr}(X^*A_1X), \dots, \mathrm{tr}(X^*A_mX) ): X \in \mathbf{C}^{n\times k}, X^*X = I_k \}.$$ We consider linearly independent families of Hermitian matrices $\{A_1, \dots, A_m\}$ so that $W_k(A)$ is convex. It is shown that $m$ can reach the upper bound $2k(n-k)+1$. A key idea in our study is relating the convexity of $W_k(A)$ to the problem of constructing rank $k$ orthogonal projections under linear constraints determined by $A$. The techniques are extended to study the convexity of other generalized numerical ranges and the corresponding matrix construction problems.
---
paper_title: Jordan‐Algebraic Approach to Convexity Theorems for Quadratic Mappings
paper_content:
We describe a Jordan-algebraic version of results related to convexity of images of quadratic mappings as well as related results on exactness of symmetric relaxations of certain classes of nonconvex optimization problems. The exactness of relaxations is proved based on rank estimates. Our approach provides a unifying viewpoint on a large number of classical results related to cones of Hermitian matrices over real and complex numbers. We describe (apparently new) results related to cones of Hermitian matrices with quaternion entries and to the exceptional 27-dimensional Euclidean Jordan algebra.
---
paper_title: On Cones of Nonnegative Quadratic Functions
paper_content:
We derive linear matrix inequality (LMI) characterizations and dual decomposition algorithms for certain matrix cones which are generated by a given set using generalized co-positivity. These matrix cones are in fact cones of nonconvex quadratic functions that are nonnegative on a certain domain. As a domain, we consider for instance the intersection of a (upper) level-set of a quadratic function and a half-plane. Consequently, we arrive at a generalization of Yakubovich's S-procedure result. Although the primary concern of this paper is to characterize the matrix cones by LMIs, we show, as an application of our results, that optimizing a general quadratic function over the intersection of an ellipsoid and a half-plane can be formulated as semidefinite programming (SDP), thus proving the polynomiality of this class of optimization problems, which arise, e.g., from the application of the trust region method for nonlinear programming. Other applications are in control theory and robust optimization.
---
paper_title: Jordan‐Algebraic Approach to Convexity Theorems for Quadratic Mappings
paper_content:
We describe a Jordan-algebraic version of results related to convexity of images of quadratic mappings as well as related results on exactness of symmetric relaxations of certain classes of nonconvex optimization problems. The exactness of relaxations is proved based on rank estimates. Our approach provides a unifying viewpoint on a large number of classical results related to cones of Hermitian matrices over real and complex numbers. We describe (apparently new) results related to cones of Hermitian matrices with quaternion entries and to the exceptional 27-dimensional Euclidean Jordan algebra.
---
paper_title: A Remark On The Rank Of Positive Semidefinite Matrices Subject To Affine Constraints
paper_content:
Let $K_n$ be the cone of positive semidefinite $n \times n$ matrices and let $A$ be an affine subspace of the space of symmetric matrices such that the intersection $K_n \cap A$ is nonempty and bounded. Suppose that $n \ge 3$ and that $\mathrm{codim}\, A = \binom{r+2}{2}$ for some $1 \le r \le n-2$. Then there is a matrix $X \in K_n \cap A$ such that $\mathrm{rank}\, X \le r$. We give a short geometric proof of this result, use it to improve a bound on realizability of weighted graphs as graphs of distances between points in Euclidean space, and describe its relation to theorems of Bohnenblust, Friedland and Loewy, and Au-Yeung and Poon.
---
paper_title: Hidden convexity in some nonconvex quadratically constrained quadratic programming
paper_content:
We consider the problem of minimizing an indefinite quadratic objective function subject to two-sided indefinite quadratic constraints. Under a suitable simultaneous diagonalization assumption (which trivially holds for trust region type problems), we prove that the original problem is equivalent to a convex minimization problem with simple linear constraints. We then consider a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints, which is also shown to be equivalent to a minimax convex problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. Special cases and applications are also discussed. We outline interior-point polynomial-time algorithms for the solution of the equivalent convex programs.
---
paper_title: Trust Region Methods
paper_content:
Preface 1. Introduction Part I. Preliminaries: 2. Basic Concepts 3. Basic Analysis and Optimality Conditions 4. Basic Linear Algebra 5. Krylov Subspace Methods Part II. Trust-Region Methods for Unconstrained Optimization: 6. Global Convergence of the Basic Algorithm 7. The Trust-Region Subproblem 8. Further Convergence Theory Issues 9. Conditional Models 10. Algorithmic Extensions 11. Nonsmooth Problems Part III. Trust-Region Methods for Constrained Optimization with Convex Constraints: 12. Projection Methods for Convex Constraints 13. Barrier Methods for Inequality Constraints Part IV. Trust-Region Methods for General Constrained Optimization and Systems of Nonlinear Equations: 14. Penalty-Function Methods 15. Sequential Quadratic Programming Methods 16. Nonlinear Equations and Nonlinear Fitting Part V. Final Considerations: Practicalities Afterword Appendix: A Summary of Assumptions Annotated Bibliography Subject and Notation Index Author Index.
---
paper_title: Optimality Conditions for the Minimization of a Quadratic with Two Quadratic Constraints
paper_content:
The trust region method has been proven to be very successful in both unconstrained and constrained optimization. It requires the global minimum of a general quadratic function subject to ellipsoid constraints. In this paper, we generalize the trust region subproblem by allowing two general quadratic constraints. Conditions and properties of its solution are discussed.
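As background for this two-constraint generalization, the classical single-ball trust region subproblem $\min_x \{ g^T x + \tfrac{1}{2} x^T H x : \|x\| \le \Delta \}$ admits the following well-known global optimality characterization (a textbook result, quoted here only for reference): $x^*$ is a global minimizer if and only if there exists $\lambda \ge 0$ with $$(H + \lambda I)\,x^* = -g, \qquad H + \lambda I \succeq 0, \qquad \lambda\,(\Delta - \|x^*\|) = 0.$$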
---
paper_title: Indefinite Trust Region Subproblems And Nonsymmetric Eigenvalue Perturbations
paper_content:
This paper extends the theory of trust region subproblems in two ways: (i) it allows indefinite inner products in the quadratic constraint, and (ii) it uses a two-sided (upper and lower bound) quadratic constraint. Characterizations of optimality are presented that have no gap between necessity and sufficiency. Conditions for the existence of solutions are given in terms of the definiteness of a matrix pencil. A simple dual program is introduced that involves the maximization of a strictly concave function on an interval. This dual program simplifies the theory and algorithms for trust region subproblems. It also illustrates that the trust region subproblems are implicit convex programming problems, and thus explains why they are so tractable. The duality theory also provides connections to eigenvalue perturbation theory. Trust region subproblems with zero linear term in the objective function correspond to eigenvalue problems, and adding a linear term in the objective function is seen to correspond to a p...
---
paper_title: On a subproblem of trust region algorithms for constrained optimization
paper_content:
We study a subproblem that arises in some trust region algorithms for equality constrained optimization. It is the minimization of a general quadratic function with two special quadratic constraints. Properties of such subproblems are given. It is proved that the Hessian of the Lagrangian has at most one negative eigenvalue, and an example is presented to show that the Hessian may have a negative eigenvalue when one constraint is inactive at the solution.
---
paper_title: Global optimality conditions in maximizing a convex quadratic function under convex quadratic constraints
paper_content:
For the problem of maximizing a convex quadratic function under convex quadratic constraints, we derive conditions characterizing a globally optimal solution. The method consists in exploiting the global optimality conditions, expressed in terms of ε-subdifferentials of convex functions and ε-normal directions to convex sets. By specializing the problem of maximizing a convex function over a convex set, we find explicit conditions for optimality.
---
paper_title: Conditions for Global Optimality 2
paper_content:
In this paper bearing the same title as our earlier survey-paper [11] we pursue the goal of characterizing the global solutions of an optimization problem, i.e. getting at necessary and sufficient conditions for a feasible point to be a global minimizer (or maximizer) of the objective function. We emphasize nonconvex optimization problems presenting some specific structures like ‘convex-anticonvex’ ones or quadratic ones.
---
paper_title: On the finite convergence of successive SDP relaxation methods
paper_content:
Let F be a subset of the n-dimensional Euclidean space R^n represented in terms of a compact convex subset of R^n and a set of finitely or infinitely many quadratic inequalities. This paper investigates some fundamental properties related to the finite convergence of the successive semidefinite programming relaxation method proposed by the authors for approximating the convex hull of F.
---
paper_title: Semidefinite Programming Relaxation for NonconvexQuadratic Programs
paper_content:
This paper applies the SDP (semidefinite programming) relaxation originally developed for a 0-1 integer program to a general nonconvex QP (quadratic program) having a linear objective function and quadratic inequality constraints, and presents some fundamental characterizations of the SDP relaxation including its equivalence to a relaxation using convex-quadratic valid inequalities for the feasible region of the QP.
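A minimal sketch of the lifting behind this type of SDP relaxation (a standard construction, with symmetric matrices $A_i$ assumed here only for illustration): the quadratic terms $x^T A_i x$ are rewritten as $\mathrm{tr}(A_i\, x x^T)$, the rank-one matrix $x x^T$ is replaced by a matrix variable $X$, and the rank constraint is dropped, $$\min_x\; c^T x \ \ \text{s.t.}\ \ x^T A_i x + b_i^T x \le d_i \quad\longrightarrow\quad \min_{x,X}\; c^T x \ \ \text{s.t.}\ \ \mathrm{tr}(A_i X) + b_i^T x \le d_i,\ \ \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \succeq 0.$$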
---
paper_title: On Semidefinite Bounds for Maximization of a Non-convex Quadratic Objective over the L1-Unit Ball
paper_content:
We consider the non-convex quadratic maximization problem subject to the $\ell_1$ unit ball constraint. The nature of the $\ell_1$ norm structure makes this problem extremely hard to analyze, and as a consequence, the same difficulties are encountered when trying to build suitable approximations for this problem by some tractable convex counterpart formulations. We explore some properties of this problem, derive SDP-like relaxations and raise open questions.
---
paper_title: New Results on Quadratic Minimization
paper_content:
In this paper we present several new results on minimizing an indefinite quadratic function under quadratic/linear constraints. The emphasis is placed on the case in which the constraints are two quadratic inequalities. This formulation is termed the extended trust region subproblem in this paper, to distinguish it from the ordinary trust region subproblem, in which the constraint is a single ellipsoid. The computational complexity of the extended trust region subproblem in general is still unknown. In this paper we consider several interesting cases related to this problem and show that for those cases the corresponding semidefinite programming relaxation admits no gap with the true optimal value, and consequently we obtain polynomial-time procedures for solving those special cases of quadratic optimization. For the extended trust region subproblem itself, we introduce a parameterized problem and prove the existence of a trajectory that will lead to an optimal solution. Combining this with a result obtained in the first part of the paper, we propose a polynomial-time solution procedure for the extended trust region subproblem arising from solving nonlinear programs with a single equality constraint.
---
paper_title: Recent improvements in the complexity of the effective Nullstellensatz
paper_content:
We bring up to date the estimates on the complexity of the effective Nullstellensatz and the membership problem.
---
paper_title: On the intrinsic complexity of the arithmetic Nullstellensatz
paper_content:
We show several arithmetic estimates for Hilbert's Nullstellensatz. This includes an algorithmic procedure computing the polynomials and constants occurring in a Bezout identity, whose complexity is polynomial in the geometric degree of the system. Moreover, we show for the first time height estimates of intrinsic type for the polynomials and constants appearing, again polynomial in the geometric degree and linear in the height of the system. These results are based on a suitable representation of polynomials by straight-line programs and duality techniques using the Trace Formula for Gorenstein algebras. As an application we show more precise upper bounds for the function $\pi_S(x)$ counting the number of primes yielding an inconsistent modular polynomial equation system. We also give a computationally interesting lower bound for the density of small prime numbers of controlled bit length for the reduction to positive characteristic of inconsistent systems. Again, this bound is given in terms of intrinsic parameters.
---
paper_title: Algorithms in Real Algebraic Geometry
paper_content:
Algebraically Closed Fields.- Real Closed Fields.- Semi-Algebraic Sets.- Algebra.- Decomposition of Semi-Algebraic Sets.- Elements of Topology.- Quantitative Semi-algebraic Geometry.- Complexity of Basic Algorithms.- Cauchy Index and Applications.- Real Roots.- Cylindrical Decomposition Algorithm.- Polynomial System Solving.- Existential Theory of the Reals.- Quantifier Elimination.- Computing Roadmaps and Connected Components of Algebraic Sets.- Computing Roadmaps and Connected Components of Semi-algebraic Sets.
---
paper_title: Principles of Algebraic Geometry
paper_content:
A comprehensive, self-contained treatment presenting general results of the theory. Establishes a geometric intuition and a working facility with specific geometric practices. Emphasizes applications through the study of interesting examples and the development of computational tools. Coverage ranges from analytic to geometric. Treats basic techniques and results of complex manifold theory, focusing on results applicable to projective varieties, and includes discussion of the theory of Riemann surfaces and algebraic curves, algebraic surfaces and the quadric line complex as well as special topics in complex manifolds.
---
paper_title: Real Algebraic Geometry
paper_content:
1. Ordered Fields, Real Closed Fields.- 2. Semi-algebraic Sets.- 3. Real Algebraic Varieties.- 4. Real Algebra.- 5. The Tarski-Seidenberg Principle as a Transfer Tool.- 6. Hilbert's 17th Problem. Quadratic Forms.- 7. Real Spectrum.- 8. Nash Functions.- 9. Stratifications.- 10. Real Places.- 11. Topology of Real Algebraic Varieties.- 12. Algebraic Vector Bundles.- 13. Polynomial or Regular Mappings with Values in Spheres.- 14. Algebraic Models of $C^\infty$ Manifolds.- 15. Witt Rings in Real Algebraic Geometry.- Index of Notation.
---
paper_title: Feasibility Testing for Systems of Real Quadratic Equations
paper_content:
We consider the problem of deciding whether a given system of quadratic homogeneous equations over the reals has nontrivial solution. We design an algorithm which, for a fixed number of equations, uses a number of arithmetic operations bounded by a polynomial in the number of variables only.
---
paper_title: ON THE S-PROCEDURE AND SOME VARIANTS
paper_content:
We give a concise review and extension of the S-procedure, which is an instrumental tool in control theory and robust optimization analysis. We also discuss the approximate S-Lemma as well as its applications in robust optimization.
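For the reader's convenience, the basic S-lemma around which these variants revolve can be stated as follows (a standard form with a Slater-type assumption): let $f, g : \mathbb{R}^n \to \mathbb{R}$ be quadratic functions and suppose $g(\bar{x}) > 0$ for some $\bar{x}$; then $$\big( g(x) \ge 0 \ \Rightarrow\ f(x) \ge 0 \big) \quad\Longleftrightarrow\quad \exists\, \lambda \ge 0 \ \text{such that}\ f(x) - \lambda\, g(x) \ge 0 \ \ \forall x \in \mathbb{R}^n.$$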
---
paper_title: Polynomial-time computing over quadratic maps I: sampling in real algebraic sets
paper_content:
Given a quadratic map $Q:\mathbb{K}^n \to \mathbb{K}^k$ defined over a computable subring $D$ of a real closed field $\mathbb{K}$, and $p \in D[Y_1,\dots,Y_k]$ of degree $d$, we consider the zero set $Z = Z(p(Q(X)), \mathbb{K}^n) \subseteq \mathbb{K}^n$ of $p(Q(X_1,\dots,X_n)) \in D[X_1,\dots,X_n]$. We present a procedure that computes, in $(dn)^{O(k)}$ arithmetic operations in $D$, a set $\mathcal{S}$ of (real univariate representations of) sampling points in $\mathbb{K}^n$ that intersects nontrivially each connected component of $Z$. As soon as $k=o(n)$, this is faster than the known methods that all have exponential dependence on $n$ in the complexity. In particular, our procedure is polynomial-time for constant $k$. In contrast, the best previously known procedure is only capable of deciding in $n^{O(k^2)}$ operations the nonemptiness (rather than constructing sampling points) of the set $Z$ in the case of $p(Y)=\sum_i Y_i^2$ and homogeneous $Q$. A by-product of our procedure is a bound $(dn)^{O(k)}$ on the number of connected components of $Z$. The procedure consists of exact symbolic computations in $D$ and outputs vectors of algebraic numbers. It involves extending $\mathbb{K}$ by infinitesimals and subsequent limit computation by a novel procedure that utilizes knowledge of an explicit isomorphism between real algebraic sets.
---
| Title: A Survey of the S-Lemma
Section 1: Proofs for the Basic S-Lemma
Description 1: In this section, we present three proofs for the basic S-lemma. The proofs include the original proof of Yakubovich, a modern proof based on LMIs, and an elementary analytic proof.
Section 2: The Two Faces of the S-Lemma
Description 2: Here, the central result of the theory is presented, showing the main directions of the rest of the survey. This includes both the logical consequence and feasibility problem perspectives.
Section 3: The Traditional Approach
Description 3: This section elucidates the traditional proof of the S-lemma using convexity results and associated propositions.
Section 4: A Modern Approach
Description 4: We present a modern proof which extends the classical proof to nonhomogeneous cases using advanced linear algebra techniques.
Section 5: An Elementary Proof
Description 5: This section delivers an elementary proof based on simpler concepts and direct verifications, applicable in specific settings.
Section 6: Special Results and Counterexamples
Description 6: This section contains related results, counterexamples, and showcases the limitations and scope of the S-lemma under various conditions.
Section 7: Other Variants
Description 7: In this part, we enumerate and discuss various useful forms of the basic S-lemma which arise from slight modifications in the original reasoning.
Section 8: General Results
Description 8: This section discusses known results on the system solvability and dual vector existence, with a focus on the failures and limitations in a general setting.
Section 9: Counterexamples
Description 9: Specific counterexamples are provided to illustrate the failure of the S-lemma under certain general conditions and assumptions.
Section 10: Stability Analysis
Description 10: Real-world applications of the S-lemma in stability analysis are presented, demonstrating its utility in practical problems.
Section 11: Sum of Two Ellipsoids
Description 11: This section describes the application of the S-lemma to the problem of finding an ellipsoid that contains the sum of two other ellipsoids.
Section 12: Convexity of the Joint Numerical Range
Description 12: We investigate the theory and the general convexity results related to the joint numerical range, highlighting its relevance to the S-lemma.
Section 13: Results over Real Numbers
Description 13: This section provides a deep dive into convexity results over real numbers and their implications for the S-lemma.
Section 14: Results over Complex Numbers
Description 14: An exploration of results over complex numbers, illustrating how these findings affect the applications and extensions of the S-lemma.
Section 15: Implications
Description 15: The implications of convexity results for practical problems and theoretical investigations are discussed here.
Section 16: Further Extensions
Description 16: We discuss the generalization of the S-lemma to more advanced and generalized forms, including polynomial systems and more complex transformations.
Section 17: Miscellaneous Topics
Description 17: Various related topics, such as trust region problems and SDP relaxation, are discussed to illustrate broader applications of the S-lemma.
Section 18: Summary
Description 18: A concise summary of the survey’s findings, emphasizing key points and insights derived from the in-depth analysis of the S-lemma.
Section 19: Future Research
Description 19: This section outlines open problems and suggests directions for future research in areas related to the S-lemma and its applications. |
Structure based Data Extraction from Hidden Web Sources: A Review | 13 | ---
paper_title: Automatic Data Records Extraction from List Page in Deep Web Sources
paper_content:
With the explosive growth and popularity of the World Wide Web, a wealth of online e-commerce information resources has become available. List pages in these web sites are usually automatically generated from the back-end DBMS using scripts. In order to provide value-added services and convenience for users, it is necessary to integrate web sources of the same domain. Given the huge number of these web pages, it is difficult or even impossible to use a manual approach to extract data records from these list pages on a large scale. According to the characteristics of template-based list pages, in this paper we propose a LBDRF algorithm to solve the problem of automatic data record extraction from web pages in the Deep Web. Our experimental results show that the proposed method performs well.
---
paper_title: Mining data records in Web pages
paper_content:
A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.
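Since the MDR technique rests on comparing tag strings of candidate data records, a small illustration may help; the sketch below computes the normalized edit distance typically used to decide whether two candidate records are similar (a minimal Python sketch; the similarity threshold is an assumed parameter, not a value taken from the paper).

def edit_distance(a, b):
    # Dynamic-programming Levenshtein distance between two tag sequences.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def is_similar(tags_a, tags_b, threshold=0.3):
    # Normalize by the longer sequence; treat the pair as similar data records
    # when the normalized distance is below the (assumed) threshold.
    if not tags_a and not tags_b:
        return True
    dist = edit_distance(tags_a, tags_b) / max(len(tags_a), len(tags_b))
    return dist <= threshold

# Example: two product rows rendered with slightly different markup.
print(is_similar(["tr", "td", "img", "td", "b"], ["tr", "td", "img", "td"]))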
---
paper_title: Automatic Data Records Extraction from List Page in Deep Web Sources
paper_content:
With the explosive growth and popularity of the World Wide Web, a wealth of online e-commerce information resources has become available. List pages in these web sites are usually automatically generated from the back-end DBMS using scripts. In order to provide value-added services and convenience for users, it is necessary to integrate web sources of the same domain. Given the huge number of these web pages, it is difficult or even impossible to use a manual approach to extract data records from these list pages on a large scale. According to the characteristics of template-based list pages, in this paper we propose a LBDRF algorithm to solve the problem of automatic data record extraction from web pages in the Deep Web. Our experimental results show that the proposed method performs well.
---
paper_title: VIPS : a Vision-based Page Segmentation Algorithm
paper_content:
A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Compared to other existing techniques such as the DOM tree, our approach is independent of the HTML document representation. Our method can work well even when the HTML structure is quite different from the visual layout structure. Several experiments show the effectiveness of our method.
---
| Title: Structure based Data Extraction from Hidden Web Sources: A Review
Section 1: Introduction
Description 1: Introduce the two main categories of the World Wide Web (Surface web and Hidden web) and the importance of extracting data from the Hidden web.
Section 2: Understanding Data Structures in Web Pages
Description 2: Describe the different types of data structures found in web pages: unstructured, semi-structured, and structured.
Section 3: Layout Based Data Region Finding (LBDRF)
Description 3: Explain the LBDRF system for data region identification in structured web pages, including its components and processes.
Section 4: Extracting the Data Records in the Data Region
Description 4: Discuss the method of extracting data records from identified data regions using the LBDRF algorithm.
Section 5: Mining Data Records in Web Pages (MDR)
Description 5: Introduce the MDR method proposed by Liu and Grossman for mining data records, including steps like building an HTML tag tree and identifying data regions.
Section 6: Building the HTML Tag Tree
Description 6: Detail the process of constructing an HTML tag tree to facilitate data record extraction.
Section 7: Mining Data Regions and Identifying Data Records
Description 7: Describe the steps for identifying and extracting data regions and records from a structured web page using the MDR technique.
Section 8: String Comparison Using Edit Distance
Description 8: Explain the string comparison method using normalized edit distance for identifying similar data regions.
Section 9: Machine Learning Based Approach for Table Detection
Description 9: Explore the machine learning approach proposed by Yalin Wang and Jianying Hu for detecting data-rich tables in web pages.
Section 10: Vision Based Approach for Data Extraction
Description 10: Present the vision-based approach by P. S. Hiremath and Siddu P. Algur for extracting data items using visual clues.
Section 11: Identification and Extraction of Data Regions
Description 11: Detail the method for identifying and extracting the most relevant data regions from web pages using visual-based techniques.
Section 12: Identification of Data Records and Items
Description 12: Explain the techniques for identifying data records and items from the extracted data regions.
Section 13: Conclusion and Future Scope
Description 13: Summarize the paper, discussing the advantages and disadvantages of the reviewed techniques, and propose future directions for data extraction systems. |
Survey on 3D channel modeling: From theory to standardization | 7 | ---
paper_title: Wireless flexible personalised communications : COST 259 : European co-operation in mobile radio research
paper_content:
Preface. Table of Contents. List of Acronyms. Introduction. Radio Systems Aspects. Antennas and Propagation. Network Aspects. Annex I: List of Contributors. Annex II: List of Participating Institutions. Index.
---
paper_title: 3D beamforming trials with an active antenna array
paper_content:
Beamforming techniques for mobile wireless communication systems like LTE using multiple antenna arrays for transmission and reception to increase the signal-to-noise-and-interference ratio (SINR) are state of the art. The increased SINR is not only due to a larger gain in the direction of the desired user, but also due to a better control of the spatial distribution of interference in the cell. To further enhance the system performance not only the horizontal, but also the vertical dimension can be exploited for beam pattern adaptation, thus giving an additional degree of freedom for interference avoidance among adjacent cells. This horizontal and vertical beam pattern adaptation is also referred to as 3D beamforming in the following. This paper describes investigations of the potential of 3D beamforming with lab and field trial setups and provides initial performance results.
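To make the joint horizontal and vertical beam adaptation concrete, one commonly used (convention-dependent, assumed here only for illustration) model is the steering vector of an $M \times N$ uniform planar array with element spacing $d$ at wavelength $\lambda$, whose $(m,n)$-th entry is $$[a(\theta,\phi)]_{m,n} = \exp\!\Big(j\,\tfrac{2\pi d}{\lambda}\,(m \sin\theta\cos\phi + n \sin\theta\sin\phi)\Big), \quad 0 \le m < M,\ 0 \le n < N,$$ so that the element weights steer the main lobe jointly in elevation $\theta$ and azimuth $\phi$.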
---
paper_title: A comparison of specific space diversity techniques for reduction of fast fading in UHF mobile radio systems
paper_content:
Over the past few years a variety of space diversity system techniques have been considered for the purpose of reducing the rapid fading encountered in microwave mobile radio systems. Basic diversity methods are first reviewed in the framework of mobile propagation effects, and then specific techniques are compared from the standpoint of transmitter power required to achieve a certain performance. Criteria of comparison used included baseband SNR while moving and reliability when the vehicle stops at random. System parameters are type and order of diversity and transmission bandwidth. Tradeoffs between performance properties and system parameters are indicated. The calculations show that relatively modest use of diversity techniques can afford savings in transmitter power of 10-20 dB. For example, at a range of 2 mi, to obtain 30-dB baseband SNR while moving and 99.9-percent reliability when stopped requires a transmitted power of 8 W for a conventional FM system with no diversity. Two-branch selection diversity provides the same performance for a transmitter power of only 300 mW.
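As a quantitative reference point for the diversity gains discussed in this abstract (a textbook result, not taken from the paper): with $L$ independent Rayleigh-fading branches of equal mean SNR $\bar{\gamma}$, selection combining yields the outage probability $$P\big[\gamma_{\mathrm{SC}} < \gamma_{\mathrm{th}}\big] = \big(1 - e^{-\gamma_{\mathrm{th}}/\bar{\gamma}}\big)^{L},$$ which is the usual basis for quantifying the transmitter-power savings achievable with low-order diversity.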
---
paper_title: MIMO Wireless Networks: Channels, Techniques and Standards for Multi-Antenna, Multi-User and Multi-Cell Systems
paper_content:
The second edition of MIMO Wireless Networks is unique in bridging the gap between multiple-input/multiple-output (MIMO) radio propagation and signal processing techniques and presenting robust MIMO network designs for real-world wireless channels. The book emphasizes how propagation mechanisms impact MIMO system performance under practical power constraints, examining how real-world propagation affects the capacity and the error performance of MIMO transmission schemes. Combining a solid mathematical analysis with a physical and intuitive approach, the book progressively derives innovative designs, taking into consideration that MIMO channels are usually far from ideal. Reflecting developments since the first edition was published, the book has been updated and now includes several new chapters covering Multi-user MIMO, Multi-Cell MIMO, Receiver Design, MIMO in 3GPP and WiMAX and realistic system level evaluations of MIMO network performance. It reviews physical models and analytical representations of MIMO propagation channels, highlighting their strengths and weaknesses; gives an overview of space-time coding techniques, covering both classical and more recent schemes; and shows innovative and practical designs of robust space-time coding, precoding and antenna selection techniques for realistic propagation (including single-carrier and MIMO-OFDM transmissions).
---
paper_title: An interim channel model for beyond-3G systems: extending the 3GPP spatial channel model (SCM)
paper_content:
This paper reports on the interim beyond-3G (B3G) channel model developed by and used within the European WINNER project. The model is a comprehensive spatial channel model for 2 and 5 GHz frequency bands and supports bandwidths up to 100 MHz in three different outdoor environments. It further features time-evolution of system-level parameters for challenging advanced communication algorithms, as well as a reduced-variability tapped delay-line model for improved usability in calibration and comparison simulations.
---
paper_title: Wireless Information Networks
paper_content:
Preface. PART I INTRODUCTION TO WIRELESS NETWORKS. 1 Overview of Wireless Networks. 1.1 Introduction. 1.2 Network Architecture and Design Issues. 1.3 Key Trends in Wireless Networking. 1.4 Outline of the Book. Questions. 2 Evolution of the Wireless Industry. 2.1 Introduction. 2.2 Three Views of the Wireless Industry. 2.3 Three Generations of Cellular Networks. 2.4 Trends in Wireless Technologies. Questions. PART II CHARACTERISTICS OF RADIO PROPAGATION. 3 Characterization of Radio Propagation. 3.1 Introduction. 3.2 Multipath Fading and the Distance-Power Relationship. 3.3 Local Movements and Doppler Shift. 3.4 Multipath for Wideband Signals. 3.5 Classical Uncorrelated Scattering Model. 3.6 Indoor and Urban Radio Propagation Modeling. Questions. Problems. Projects. 4 Modeling and Simulation of Narrowband Signal Characteristics. 4.1 Introduction. 4.2 Modeling Path Loss and Slow Shadow Fading. 4.3 Doppler Spectrum of Fast Envelope Fading. 4.4 Statistical Behavior of Fast Envelope Fading. 4.5 Simulation of Fast Envelope Fading. Questions. Problems. Projects. 5 Measurement of Wideband and UWB Channel Characteristics. 5.1 Introduction. 5.2 Time-Domain Measurement Techniques. 5.3 Frequency-Domain Measurement Techniques. 5.4 Advances in Frequency-Domain Channel Measurement. Questions. Problems. Project. 6 Modeling of Wideband Radio Channel Characteristics. 6.1 Introduction. 6.2 Wideband Time-Domain Statistical Modeling. 6.3 Wideband Frequency-Domain Channel Modeling. 6.4 Comparison Between Statistical Models. 6.5 Ray-Tracing Algorithms. 6.6 Direct Solution of Radio Propagation Equations. 6.7 Comparison of Deterministic and Statistical Modeling. 6.8 Site-Specific Statistical Model. Appendix 6A: GSM-Recommended Multipath Propagation Models. Appendix 6B: Wideband Multipath Propagation Models. Questions. Problems. Projects. PART III MODEM DESIGN. 7 Narrowband Modem Technology. 7.1 Introduction. 7.2 Basic Modulation Techniques. 7.3 Theoretical Limits and Practical Impairments. 7.4 Traditional Modems for Wide-Area Wireless Networks. 7.5 Other Aspects of Modem Implementation. Questions. Problems. Projects. 8 Fading, Diversity, and Coding. 8.1 Introduction. 8.2 Radio Communication on Flat Rayleigh Fading Channels. 8.3 Diversity Combining. 8.4 Error-Control Coding for Wireless Channels. 8.5 Space-Time Coding. 8.6 MIMO and STC. Questions. Problems. Projects. 9 Broadband Modem Technologies. 9.1 Introduction. 9.2 Effects of Frequency-Selective Multipath Fading. 9.3 Discrete Multipath Fading Channel Model. 9.4 Adaptive Discrete Matched Filter. 9.5 Adaptive Equalization. 9.6 Sectored Antennas. 9.7 Multicarrier, OFDM, and Frequency Diversity. 9.8 Comparison of Traditional Broadband Modems. 9.9 MIMO in Frequency-Selective Fading. Appendix 9A: Analysis of the Equalizers. Questions. Problems. Projects. 10 Spread-Spectrum and CDMA Technology. 10.1 Introduction. 10.2 Principles of Frequency-Hopping Spread Spectrum. 10.3 Principles of Direct-Sequence Spread Spectrum. 10.4 Interference in Spread-Spectrum Systems. 10.5 Performance of CDMA Systems. Questions. Problems. PART IV SYSTEMS ASPECTS. 11 Topology, Medium Access, and Performance. 11.1 Introduction. 11.2 Topologies for Local Networks. 11.3 Cellular Topology for Wide-Area Networks. 11.4 Centrally Controlled Assigned Access Methods. 11.5 Distributed Contention-Based Access Control. Questions. Problems. Project. 12 Ultrawideband Communications. 12.1 Introduction. 12.2 UWB Channel Characteristics. 12.3 Impulse Radio and Time-Hopping Access. 12.4 Direct-Sequence UWB. 
12.5 Multiband OFDM. Questions. Problems. 13 RF Location Sensing. 13.1 Introduction. 13.2 RF Location-Sensing Techniques. 13.3 Modeling The Behavior of RF Sensors. 13.4 Wireless Positioning Algorithms. Questions. Problems. 14 Wireless Optical Networks. 14.1 Introduction. 14.2 Implementation. 14.3 Eye Safety. 14.4 IR Channel Characterization and Data-Rate Limitations. 14.5 Modulation Techniques for Optical Communications. 14.6 Multiple Access and Data Rate. Questions. 15 Systems and Standards. 15.1 Introduction. 15.2 GSM, GPRS, and EDGE. 15.3 CDMA and HDR. 15.4 Other Historical Systems. 15.5 Wireless LANs. 15.6 Speech Coding in Wireless Systems. Questions. References. Index. About the Authors.
---
paper_title: A generic channel model in multi-cluster environments
paper_content:
We present a new method of defining spatio-temporal clusters using a geometric modeling approach, which simulates phenomena that have a geometric analog. Various clusters are given different shapes or forms of scatterer distributions based on their logical appeal to certain radio environments. This classification methodology is based on the semi-geometrically-based statistical model developed in this paper. We then derive the analytical expressions of the power azimuthal spectrum, power delay spectrum and cluster power using the generic model. Armed with the model preliminaries, we further parameterize some important geometric clusters, which reproduce the significant effects of typical scattering environments. The modeling approach presented in this paper is useful for system-level simulation and performance evaluation.
---
paper_title: Geometrical-based statistical macrocell channel model for mobile environments
paper_content:
We develop a statistical geometric propagation model for a macrocell mobile environment that provides the statistics of angle-of-arrival (AOA) of the multipath components, which are required to test adaptive array algorithms for cellular applications. This channel model assumes that each multipath component of the propagating signal undergoes only one bounce traveling from the transmitter to the receiver and that scattering objects are located uniformly within a circle around the mobile. This geometrically based single bounce macrocell (GBSBM) channel model provides three important parameters that characterize a channel: the power of the multipath components, the time-of-arrival (TOA) of the components, and the AOA of the components. Using the GBSBM model, we analyze the effect of directional antennas at the base station on the fading envelopes. The level crossing rate of the fading envelope is reduced and the envelope correlation increases significantly if a directional antenna is employed at the base station.
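A minimal numerical sketch of the single-bounce geometry described above, with scatterers drawn uniformly in a disc of radius R around the mobile. All names, distances and the distance-power exponent below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
D, R, n = 2000.0, 200.0, 50      # BS-MS distance (m), scatterer radius (m), path count
c = 3e8                          # speed of light (m/s)

# Uniform scatterer positions in a disc of radius R centred on the mobile at (D, 0);
# the base station sits at the origin.
r = R * np.sqrt(rng.uniform(size=n))
phi = rng.uniform(0, 2 * np.pi, n)
sx, sy = D + r * np.cos(phi), r * np.sin(phi)

# Single-bounce path length BS -> scatterer -> MS
d_bs = np.hypot(sx, sy)
d_ms = np.hypot(sx - D, sy)
path_len = d_bs + d_ms

aoa = np.degrees(np.arctan2(sy, sx))          # angle of arrival seen at the BS
toa = path_len / c                            # time of arrival of each component
power = path_len ** -3.5                      # illustrative distance-power law

print("AOA spread at BS (deg):", aoa.max() - aoa.min())
print("max excess delay (us) :", 1e6 * (toa - D / c).max())
```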
---
paper_title: A generic model for MIMO wireless propagation channels in macro- and microcells
paper_content:
This paper derives a generic model for the multiple-input multiple-output (MIMO) wireless channel. The model incorporates important effects, including i) interdependency of directions-of-arrival and directions-of-departure, ii) large delay and angle dispersion by propagation via far clusters, and iii) rank reduction of the transfer function matrix. We propose a geometry-based model that includes the propagation effects that are critical for MIMO performance: i) single scattering around the BS and MS, ii) scattering by far clusters, iii) double-scattering, iv) waveguiding, and v) diffraction by roof edges. The required parameters for the complete definition of the model are enumerated, and typical parameter values in macro and microcellular environments are discussed.
---
paper_title: The COST259 Directional Channel Model-Part I: Overview and Methodology
paper_content:
This paper describes a model for mobile radio channels that includes consideration of directions of arrival and is thus suitable for simulations of the performance of wireless systems that use smart antennas. The model is specified for 13 different types of environments, covering macro-, micro- and picocells. In this paper, a hierarchy of modeling concepts is described, as well as implementation aspects that are valid for all environments. The model is based on the specification of directional channel impulse response functions, from which the impulse response functions at all antenna elements can be obtained. A layered approach, which distinguishes between external (fixed), large-scale and small-scale parameters, allows an efficient parameterization. Different implementation methods, based on either a tapped-delay line or a geometrical model, are described. The paper also derives the transformation between those two approaches. Finally, the concepts of clusters and visibility regions are used to account for large delay and angular spreads that have been measured. In two companion papers, the environment-specific values of the model parameters are explained and justified.
---
paper_title: The double-directional radio channel
paper_content:
We introduce the concept of the double-directional mobile radio channel. It is called this because it includes angular information at both link ends, e.g., at the base station and at the mobile station. We show that this angular information can be obtained with synchronized antenna arrays at both link ends. In wideband high-resolution measurements, we use a switched linear array at the receiver and a virtual-cross array at the transmitter. We evaluate the raw measurement data with a technique that alternately used estimation and beamforming, and that relied on ESPRIT (estimation of signal parameters via rotational invariance techniques) to obtain superresolution in both angular domains and in the delay domain. In sample microcellular scenarios (open and closed courtyard, line-of-sight and obstructed line-of-sight), up to 50 individual propagation paths are determined. The major multipath components are matched precisely to the physical environment by geometrical considerations. Up to three reflection/scattering points per propagation path are identified and localized, lending insight into the multipath spreading properties in a microcell. The extracted multipath parameters allow unambiguous scatterer identification and channel characterization, independently of a specific antenna, its configuration (single/array), and its pattern. The measurement results demonstrate a considerable amount of power being carried via multiply reflected components, thus suggesting revisiting the popular single-bounce propagation models. It turns out that the wideband double-directional evaluation is a most complete method for separating multipath components. Due to its excellent spatial resolution, the double-directional concept provides accurate estimates of the channel's multipath-richness, which is the important parameter for the capacity of multiple-input multiple-output (MIMO) channels.
---
paper_title: Wireless flexible personalised communications : COST 259 : European co-operation in mobile radio research
paper_content:
Preface. Table of Contents. List of Acronyms. Introduction. Radio Systems Aspects. Antennas and Propagation. Network Aspects. Annex I: List of Contributors. Annex II: List of Participating Institutions. Index.
---
paper_title: Wireless Networking: Understanding Internetworking Challenges
paper_content:
This book focuses on providing a detailed and practical explanation of key existing and emerging wireless networking technologies and trends, while minimizing the amount of theoretical background information. The book also goes beyond simply presenting what the technology is, but also examines why the technology is the way it is, the history of its development, standardization, and deployment. The book also describes how each technology is used, what problems it was designed to solve, what problems it was not designed to solve, how it relates to other technologies in the marketplace, and internetworking challenges faced within the context of the Internet, as well as providing deployment trends and standardization trends. Finally, this book decomposes evolving wireless technologies to identify key technical and usage trends in order to discuss the likely characteristics of future wireless networks.
---
paper_title: Propagation factors controlling mean field strength on urban streets
paper_content:
Calculation of mean field strength for urban mobile radio has been made on a ray-theoretical basis assuming an ideal city structure with uniform building heights. The result shows that building height, street width, and street orientation as well as mobile station antenna height are controlling propagation parameters in addition to the ordinary factors. The major theoretical characteristics agree approximately with experimental data including conventional empirical predictions. This suggests a way of theoretically predicting mean field strength in an urban area.
---
| Title: Survey on 3D Channel Modeling: From Theory to Standardization
Section 1: INTRODUCTION
Description 1: Introduce the motivation, importance, and evolution of 3D channel modeling in the context of wireless communication technologies.
Section 2: MODELS
Description 2: Discuss the principles of existing channel models, focusing on large-scale effects and both SISO and MIMO channel models.
Section 3: STANDARDIZED CHANNEL MODELS
Description 3: Explain the generation of channels within system-level approach based standards, and highlight how standardized channel models differ in scenarios, frequency ranges, and other parameters.
Section 4: Antenna configuration
Description 4: Describe the antenna configurations proposed in current standardization works, including the organization of antenna ports and elements in LTE.
Section 5: 3D Beamforming
Description 5: Detail the ongoing work on 3D beamforming by the TSG-RAN-WG1 group and its implications for future wireless communication systems.
Section 6: System level simulations
Description 6: Discuss the system level simulations being prepared by 3GPP, including new elements, calibration phases, and the expected outcomes of these simulations.
Section 7: CONCLUSION
Description 7: Summarize the key points of the survey, emphasizing the current state and future directions of 3D channel modeling and standardization efforts. |
Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey | 6 | ---
paper_title: Agent-Based Cloud Computing
paper_content:
Agent-based cloud computing is concerned with the design and development of software agents for bolstering cloud service discovery, service negotiation, and service composition. The significance of this work is introducing an agent-based paradigm for constructing software tools and testbeds for cloud resource management. The novel contributions of this work include: 1) developing Cloudle: an agent-based search engine for cloud service discovery, 2) showing that agent-based negotiation mechanisms can be effectively adopted for bolstering cloud service negotiation and cloud commerce, and 3) showing that agent-based cooperative problem-solving techniques can be effectively adopted for automating cloud service composition. Cloudle consists of 1) a service discovery agent that consults a cloud ontology for determining the similarities between providers' service specifications and consumers' service requirements, and 2) multiple cloud crawlers for building its database of services. Cloudle supports three types of reasoning: similarity reasoning, compatibility reasoning, and numerical reasoning. To support cloud commerce, this work devised a complex cloud negotiation mechanism that supports parallel negotiation activities in interrelated markets: a cloud service market between consumer agents and broker agents, and multiple cloud resource markets between broker agents and provider agents. Empirical results show that using the complex cloud negotiation mechanism, agents achieved high utilities and high success rates in negotiating for cloud resources. To automate cloud service composition, agents in this work adopt a focused selection contract net protocol (FSCNP) for dynamically selecting cloud services and use service capability tables (SCTs) to record the list of cloud agents and their services. Empirical results show that using FSCNP and SCTs, agents can successfully compose cloud services by autonomously selecting services.
---
paper_title: A Distributed Algorithm for Anytime Coalition Structure Generation
paper_content:
A major research challenge in multi-agent systems is the problem of partitioning a set of agents into mutually disjoint coalitions, such that the overall performance of the system is optimized. This problem is difficult because the search space is very large: the number of possible coalition structures increases exponentially with the number of agents. Although several algorithms have been proposed to tackle this Coalition Structure Generation (CSG) problem, all of them suffer from being inherently centralized, which leads to the existence of a performance bottleneck and a single point of failure. In this paper, we develop the first decentralized algorithm for solving the CSG problem optimally. In our algorithm, the necessary calculations are distributed among the agents, instead of being carried out centrally by a single agent (as is the case in all the available algorithms in the literature). In this way, the search can be carried out in a much faster and more robust way, and the agents can share the burden of the calculations. The algorithm combines, and improves upon, techniques from two existing algorithms in the literature, namely DCVC [5] and IP [9], and applies novel techniques for filtering the input and reducing the inter-agent communication load.
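To make the size of the search space concrete: the number of ways to partition n agents into mutually disjoint coalitions is the Bell number B(n). The toy computation below (an editorial illustration using the standard Bell-triangle recurrence, not part of the paper's algorithm) shows how quickly it grows:

```python
def bell(n: int) -> int:
    """Number of coalition structures (set partitions) of n agents."""
    row = [1]                      # Bell triangle, starting from B(0) = 1
    for _ in range(n):
        nxt = [row[-1]]            # next row starts with the last entry of the previous row
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[0]

for n in (5, 10, 20):
    print(n, "agents ->", bell(n), "coalition structures")
```

Already for 20 agents there are more than 5 * 10^13 coalition structures, which is why distributing the calculations and filtering the input matter.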
---
paper_title: Self-organization through bottom-up coalition formation
paper_content:
We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system.
---
paper_title: Agent Organized Networks Redux
paper_content:
Individual robots or agents will often need to form coalitions to accomplish shared tasks, e.g., in sensor networks or markets. Furthermore, in most real systems it is infeasible for entities to interact with all peers. The presence of a social network can alleviate this problem by providing a neighborhood system within which entities interact with a reduced number of peers. Previous research has shown that the topology of the underlying social network has a dramatic effect on the quality of coalitions formed and consequently on system performance (Gaston & desJardins 2005a). It has also been shown that it is feasible to develop agents which dynamically alter connections to improve an organization's ability to form coalitions on the network. However, those studies have not analysed the network topologies that result from connectivity adaptation strategies. In this paper the resulting network topologies were analysed, and it was found that high performance and rapid convergence were attained because scale-free networks were being formed. However, it was observed that organizational performance is not impacted by limiting the number of links per agent to the total number of skills available within the population, implying that bandwidth was wasted by previous approaches. We used these observations to inform the design of a token-based algorithm that attains higher performance using an order of magnitude fewer messages for both uniform and non-uniform distributions of skills.
---
paper_title: Task inference and distributed task management in the Centibots robotic system
paper_content:
We describe the Centibots system, a very large scale distributed robotic system, consisting of more than 100 robots, that has been successfully deployed in large, unknown indoor environments, over extended periods of time (i.e., durations corresponding to several power cycles). Unlike most multiagent systems, the set of tasks about which teams must collaborate is not given a priori. We first describe a task inference algorithm that identifies potential team commitments that collectively balance constraints such as reachability, sensor coverage, and communication access. We then describe a dispatch algorithm for task distribution and management that assigns resources depending on either task density or replacement requirements stemming from failures or power shortages. The targeted deployment environments are expected to lack a supporting communication infrastructure; robots manage their own network and reason about the concomitant localization constraints necessary to maintain team communication. Finally, we present quantitative results in terms of a "search and rescue problem" and discuss the team-oriented aspects of the system in the context of prevailing theories of multiagent collaboration.
---
paper_title: Distributed Sensor Networks: A Multiagent Perspective
paper_content:
1. Introduction to a Multiagent Perspective V. Lesser, C.L. Ortiz, Jr., M. Tambe. Part I: The Sensor Network Challenge Problem. 2. The Radsim Simulator J.H. Lawton. 3. Challenge Problem Testbed P. Zemany, M. Gaughan. 4. Visualization and Debugging Tools A. Egyed, B. Horling, R. Becker, R. Balzer. 5. Target Tracking with Bayesian Estimation J.E. Vargas, K. Tvalarparti, Zhaojun Wu. Part II: Distributed Resource Allocation: Architectures and Protocols. 6. Dynamic resource-bounded negotiation in non-additive domains C.L. Ortiz, Jr., T.W. Rauenbusch, E. Hsu, R. Vincent. 7. A satisficing, negotiated, and learning coalition formation architecture Leen-Kiat Soh, C. Tsatsoulis, H. Sevay. 8. Using Autonomy, Organizational Design and Negotiation in a DSN B. Horling, R. Mailler, Jiaying Shen, R. Vincent, V. Lesser. 9. Scaling-up Distributed Sensor Networks O. Yadgar, S. Kraus, C.L. Ortiz, Jr. 10. Distributed Resource Allocation P.J. Modi, P. Scerri, Wei-Min Shen, M. Tambe. 11. Distributed Coordination through Anarchic Optimization S. Fitzpatrick, L. Meertens. Part III: Insights into Distributed Resource Allocation Protocols based on Formal Analyses. 12. Communication and Computation in Distributed CSP Algorithms C. Fernandez, R. Bejar, B. Krishnamachari, C. Gomes, B. Selman. 13. A Comparative Study of Distributed Constraint Algorithms Weixiong Zhang, Guandong Wang, Zhao Xing, L. Wittenburg. 14. Analysis of Negotiation Protocols by Distributed Search Guandong Wang, Weixiong Zhang, R. Mailler, V. Lesser.
---
paper_title: Self-Organisation and Emergence in MAS: An Overview
paper_content:
The spread of the Internet and the evolution of mobile communication have created new possibilities for software applications such as ubiquitous computing, dynamic supply chains and medical home care. Such systems need to operate in dynamic, heterogeneous environments and face the challenge of handling frequently changing requirements; therefore they must be flexible, robust and capable of adapting to the circumstances. It is widely believed that multi-agent systems coordinated by self-organisation and emergence mechanisms are an effective way to design these systems. This paper aims to define the concepts of self-organisation and emergence and to provide a state-of-the-art survey of the different classes of self-organisation mechanisms applied in the multi-agent systems domain. Furthermore, the strengths and limits of these approaches are examined and research issues are provided. Summary: The article presents an overview of self-organisation in MAS.
---
paper_title: Coalition Formation with Spatial and Temporal Constraints
paper_content:
The coordination of emergency responders and robots to undertake a number of tasks in disaster scenarios is a grand challenge for multi-agent systems. Central to this endeavour is the problem of forming the best teams (coalitions) of responders to perform the various tasks in the area where the disaster has struck. Moreover, these teams may have to form, disband, and reform in different areas of the disaster region. This is because in most cases there will be more tasks than agents. Hence, agents need to schedule themselves to attempt each task in turn. Second, the tasks themselves can be very complex: requiring the agents to work on them for different lengths of time and having deadlines by when they need to be completed. The problem is complicated still further when different coalitions perform tasks with different levels of efficiency. Given all these facets, we define this as The Coalition Formation with Spatial and Temporal constraints problem (CFSTP). We show that this problem is NP-hard---in particular, it contains the well-known complex combinatorial problem of Team Orienteering as a special case. Based on this, we design a Mixed Integer Program to optimally solve small-scale instances of the CFSTP and develop new anytime heuristics that can, on average, complete 97% of the tasks for large problems (20 agents and 300 tasks). In so doing, our solutions represent the first results for CFSTP.
---
paper_title: THE EFFECT OF NETWORK STRUCTURE ON DYNAMIC TEAM FORMATION IN MULTI‐AGENT SYSTEMS
paper_content:
Previous studies of team formation in multi-agent systems have typically assumed that the agent social network underlying the agent organization is either not explicitly described or the social network is assumed to take on some regular structure such as a fully connected network or a hierarchy. However, recent studies have shown that real-world networks have a rich and purposeful structure, with common properties being observed in many different types of networks. As multi-agent systems continue to grow in size and complexity, the network structure of such systems will become increasingly important for designing efficient, effective agent communities. We present a simple agent-based computational model of team formation, and analyze the theoretical performance of team formation in two simple classes of networks (ring and star topologies). We then give empirical results for team formation in more complex networks under a variety of conditions. From these experiments, we conclude that a key factor in effective team formation is the underlying agent interaction topology that determines the direct interconnections among agents. Specifically, we identify the property of diversity support as a key factor in the effectiveness of network structures for team formation. Scale-free networks, which were developed as a way to model real-world networks, exhibit short average path lengths and hub-like structures. We show that these properties, in turn, result in higher diversity support; as a result, scale-free networks yield higher organizational efficiency than the other classes of networks we have studied.
---
paper_title: Methods for Task Allocation Via Agent Coalition Formation
paper_content:
Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However it may also be beneficial when groups perform more efficiently with respect to the single agents' performance. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms, they are simple, efficient and easy to implement.
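A heavily simplified, centralized greedy sketch of the task-oriented coalition idea (each task needs a vector of capabilities; agents are added to a coalition while they still reduce the unmet demand). This is only an editorial illustration of the flavour of such algorithms, with made-up agent names and capability vectors, not the authors' distributed, ratio-bounded procedure:

```python
import numpy as np

def form_coalition(task_req, agent_caps):
    """Greedily pick agents until their summed capabilities cover the task."""
    remaining = np.array(task_req, dtype=float)
    free = dict(agent_caps)                    # agent id -> capability vector
    coalition = []
    while remaining.max() > 0 and free:
        # pick the agent whose capabilities reduce the remaining demand the most
        best = max(free, key=lambda a: np.minimum(free[a], remaining).sum())
        if np.minimum(free[best], remaining).sum() == 0:
            break                              # nobody left can still contribute
        remaining = np.maximum(remaining - free[best], 0)
        coalition.append(best)
        del free[best]
    return coalition, remaining

agents = {"a1": np.array([2., 0.]), "a2": np.array([1., 1.]), "a3": np.array([0., 3.])}
print(form_coalition([3., 2.], agents))        # coalition covering both capability types
```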
---
paper_title: Agent-organized networks for dynamic team formation
paper_content:
Many multi-agent systems consist of a complex network of autonomous yet interdependent agents. Examples of such networked multi-agent systems include supply chains and sensor networks. In these systems, agents have a select set of other agents with whom they interact based on environmental knowledge, cognitive capabilities, resource limitations, and communications constraints. Previous findings have demonstrated that the structure of the artificial social network governing the agent interactions is strongly correlated with organizational performance. As multi-agent systems are typically embedded in dynamic environments, we wish to develop distributed, on-line network adaptation mechanisms for discovering effective network structures. Therefore, within the context of dynamic team formation, we propose several strategies for agent-organized networks (AONs) and evaluate their effectiveness for increasing organizational performance.
---
paper_title: An anytime algorithm for optimal coalition structure generation
paper_content:
In multi-agent systems, finding the coalition structure that yields the greatest cooperative profit in the shortest time has received much attention. If no restriction is placed on the search space, finding the optimal coalition structure requires searching all coalition structures. LVAA (Lateral and Vertical Anytime Algorithm), the anytime algorithm designed in this paper, uses a branch-and-bound technique and a pruning function to reduce the search space beyond the L1, L2 and Ln layers, both laterally and vertically, and then finds the optimal coalition structure value. Experimental results show that it greatly reduces the search space: when coalition values follow a uniform and a normal distribution respectively, the number of searches performed by LVAA is reduced by 78% and 82% compared with Sandholm's algorithm for 18 and 23 agents.
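For very small agent sets, the optimal coalition structure can also be found by exhaustively enumerating all set partitions while keeping the best structure seen so far, which is what makes such searches "anytime". The sketch below is a generic brute-force baseline under an arbitrary illustrative coalition value function; it is not the LVAA pruning scheme itself:

```python
def partitions(agents):
    """Yield all coalition structures (set partitions) of a list of agents."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for smaller in partitions(rest):
        # put `first` into each existing coalition in turn
        for i, coal in enumerate(smaller):
            yield smaller[:i] + [coal + [first]] + smaller[i + 1:]
        # or give `first` its own singleton coalition
        yield smaller + [[first]]

def coalition_value(coal):
    return len(coal) ** 1.5        # illustrative value function, not from the paper

best_value, best_cs = float("-inf"), None
for cs in partitions(list("ABCD")):
    value = sum(coalition_value(c) for c in cs)
    if value > best_value:         # anytime: the best-so-far answer is always available
        best_value, best_cs = value, cs
print(best_cs, best_value)
```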
---
paper_title: Negotiation decision functions for autonomous agents
paper_content:
We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated.
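One family of tactics in this line of work makes the offer a function of the time remaining before the negotiation deadline. The sketch below implements a generic polynomial time-dependent concession curve of that flavour; the function name, the parameters k and beta, and the numeric example are illustrative choices rather than the paper's exact notation:

```python
def time_dependent_offer(t, t_max, x_min, x_max, k=0.1, beta=2.0, buyer=True):
    """Offer at time t for a polynomial time-dependent concession tactic.

    beta > 1 concedes quickly near the start (conceder-like behaviour),
    beta < 1 holds firm until close to the deadline (boulware-like behaviour).
    """
    alpha = k + (1 - k) * (min(t, t_max) / t_max) ** (1.0 / beta)
    if buyer:   # a buyer starts low and concedes upward toward its reservation price
        return x_min + alpha * (x_max - x_min)
    else:       # a seller starts high and concedes downward
        return x_max - alpha * (x_max - x_min)

for t in range(0, 11, 2):
    print(t, round(time_dependent_offer(t, 10, 100.0, 200.0), 2))
```

Counter-proposals generated this way can then be scored by the opponent's evaluation function, which is the proposal/evaluation loop the model above formalizes.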
---
paper_title: Agent-Based Cloud Computing
paper_content:
Agent-based cloud computing is concerned with the design and development of software agents for bolstering cloud service discovery, service negotiation, and service composition. The significance of this work is introducing an agent-based paradigm for constructing software tools and testbeds for cloud resource management. The novel contributions of this work include: 1) developing Cloudle: an agent-based search engine for cloud service discovery, 2) showing that agent-based negotiation mechanisms can be effectively adopted for bolstering cloud service negotiation and cloud commerce, and 3) showing that agent-based cooperative problem-solving techniques can be effectively adopted for automating cloud service composition. Cloudle consists of 1) a service discovery agent that consults a cloud ontology for determining the similarities between providers' service specifications and consumers' service requirements, and 2) multiple cloud crawlers for building its database of services. Cloudle supports three types of reasoning: similarity reasoning, compatibility reasoning, and numerical reasoning. To support cloud commerce, this work devised a complex cloud negotiation mechanism that supports parallel negotiation activities in interrelated markets: a cloud service market between consumer agents and broker agents, and multiple cloud resource markets between broker agents and provider agents. Empirical results show that using the complex cloud negotiation mechanism, agents achieved high utilities and high success rates in negotiating for cloud resources. To automate cloud service composition, agents in this work adopt a focused selection contract net protocol (FSCNP) for dynamically selecting cloud services and use service capability tables (SCTs) to record the list of cloud agents and their services. Empirical results show that using FSCNP and SCTs, agents can successfully compose cloud services by autonomously selecting services.
---
paper_title: Self-organization through bottom-up coalition formation
paper_content:
We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system.
---
paper_title: Automated negotiation with decommitment for dynamic resource allocation in cloud computing
paper_content:
We consider the problem of allocating networked resources in dynamic environments, such as cloud computing platforms, where providers strategically price resources to maximize their utility. Resource allocation in these environments, where both providers and consumers are selfish agents, presents numerous challenges since the number of consumers and their resource demand is highly dynamic. While numerous auction-based approaches have been proposed in the literature, this paper explores an alternative approach where providers and consumers automatically negotiate resource leasing contracts. Since resource demand and supply can be dynamic and uncertain, we propose a distributed negotiation mechanism where agents negotiate over both a contract price and a decommitment penalty, which allows agents to decommit from contracts at a cost. We compare our approach experimentally, using representative scenarios and workloads, to both combinatorial auctions and the fixed-price model used by Amazon's Elastic Compute Cloud, and show that the negotiation model achieves a higher social welfare.
---
paper_title: Methods for Task Allocation Via Agent Coalition Formation
paper_content:
Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However it may also be beneficial when groups perform more efficiently with respect to the single agents' performance. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms, they are simple, efficient and easy to implement.
---
paper_title: Random geometric graphs
paper_content:
We analyze graphs in which each vertex is assigned random coordinates in a geometric space of arbitrary dimensionality and only edges between adjacent points are present. The critical connectivity is found numerically by examining the size of the largest cluster. We derive an analytical expression for the cluster coefficient, which shows that the graphs are distinctly different from standard random graphs, even for infinite dimensionality. Insights relevant for graph bipartitioning are included.
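A small sketch of the construction analysed above: points placed uniformly at random in the unit square, edges between pairs closer than a radius r, and the size of the largest connected cluster measured. This is only the 2-D special case with illustrative parameter values:

```python
import numpy as np

def largest_cluster(n=500, r=0.06, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, 2))                       # random coordinates in [0,1]^2
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = (d2 <= r * r) & ~np.eye(n, dtype=bool)         # edges between adjacent points

    seen, best = np.zeros(n, dtype=bool), 0
    for start in range(n):                               # traverse each component once
        if seen[start]:
            continue
        stack, size = [start], 0
        seen[start] = True
        while stack:
            v = stack.pop()
            size += 1
            for u in np.flatnonzero(adj[v]):
                if not seen[u]:
                    seen[u] = True
                    stack.append(u)
        best = max(best, size)
    return best

for r in (0.03, 0.06, 0.09):
    print("r =", r, "largest cluster =", largest_cluster(r=r))
```

Sweeping r and watching the largest-cluster size jump is the numerical procedure by which the critical connectivity is located.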
---
paper_title: Self-organization through bottom-up coalition formation
paper_content:
We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system.
---
paper_title: Agent Organized Networks Redux
paper_content:
Individual robots or agents will often need to form coalitions to accomplish shared tasks, e.g., in sensor networks or markets. Furthermore, in most real systems it is infeasible for entities to interact with all peers. The presence of a social network can alleviate this problem by providing a neighborhood system within which entities interact with a reduced number of peers. Previous research has shown that the topology of the underlying social network has a dramatic effect on the quality of coalitions formed and consequently on system performance (Gaston & desJardins 2005a). It has also been shown that it is feasible to develop agents which dynamically alter connections to improve an organization's ability to form coalitions on the network. However, those studies have not analysed the network topologies that result from connectivity adaptation strategies. In this paper the resulting network topologies were analysed, and it was found that high performance and rapid convergence were attained because scale-free networks were being formed. However, it was observed that organizational performance is not impacted by limiting the number of links per agent to the total number of skills available within the population, implying that bandwidth was wasted by previous approaches. We used these observations to inform the design of a token-based algorithm that attains higher performance using an order of magnitude fewer messages for both uniform and non-uniform distributions of skills.
---
| Title: Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey
Section 1: INTRODUCTION
Description 1: Introduce the background, motivation, and main contribution of the paper regarding self-adaptation-based dynamic coalition formation in multiagent systems.
Section 2: AGENT NETWORK MODEL
Description 2: Define the formal structure of the agent network, including agents, their resources, states, and roles.
Section 3: COALITION FORMATION MECHANISM
Description 3: Describe the proposed mechanism for dynamic coalition formation, including the steps involved and the negotiation protocol used.
Section 4: How Does an Agent Adjust DoI Values?
Description 4: Explain the procedure and algorithm for agents to adjust their degree of involvement (DoI) values to minimize penalties while joining new coalitions.
Section 5: EXPERIMENTS AND ANALYSIS
Description 5: Present the experimental setup, results, and analysis, comparing the proposed mechanism with other coalition formation mechanisms.
Section 6: CONCLUSION
Description 6: Summarize the findings, contributions, and potential applications of the proposed mechanism, and suggest future research directions. |
A survey of technologies for parsing and indexing digital video | 17 | ---
paper_title: Visual reasoning for information retrieval from very large databases
paper_content:
When the database grows larger and larger, the user no longer knows what is in the database and nor does the user know clearly what should be retrieved. How to get at the data becomes a central problem for very large databases. We suggest an approach based upon data visualization and visual reasoning. The idea is to transform the data objects and present sample data objects in a visual space. The user can then incrementally formulate the information retrieval request in the visual space. By combining data visualization, visual query, visual examples and visual clues, we hope to come up with better ways for formulating and modifying a user's query. A prototype system using the Visual Language Compiler and the VisualNet is then described.
---
paper_title: SELECTION AND DISSEMINATION OF DIGITAL VIDEO VIA THE VIRTUAL VIDEO BROWSER
paper_content:
The Virtual Video Browser (VVB) is a manifestation of our mechanisms for the location, identification, and delivery of digital audio and video in a distributed system which can be extended to several application domains including multimedia-based home entertainment, catalog shopping, and distance learning. In the following sections we describe the VVB software application designed to allow the interactive browsing and content-based query of a video database and to facilitate the subsequent playout of selected titles.
---
paper_title: A similarity retrieval method for image databases using simple graphics
paper_content:
A retrieval method that uses key images is proposed for image databases. The key images represent layout information of the target image. Therefore, key images can appear explicitly in the target image. The layout information presentation scheme is also discussed. The key-image-matching method used to search for the target image in the database is described. Applying the concept of ambiguity to the matching method, users' requests can be reflected more accurately and flexibly. >
---
paper_title: Indexes for user access to large video databases
paper_content:
Video-on-Demand systems have received a good deal of attention recently. Few studies, however, have addressed the problem of locating a video of interest in a large video database. This paper describes the design and implementation of a metadata database and query interface that attempts to solve this information retrieval problem.© (1994) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
---
paper_title: A three-dimensional iconic environment for image database querying
paper_content:
Retrieval by contents of images from pictorial databases can be effectively performed through visual icon-based systems. In these systems, the representation of pictures with 2D strings, which are derived from symbolic projections, provides an efficient and natural way to construct iconic indexes for pictures and is also an ideal representation for the visual query. With this approach, retrieval is reduced to matching two symbolic strings. However, using 2D-string representations, spatial relationships between the objects represented in the image might not be exactly specified. Ambiguities arise for the retrieval of images of 3D scenes. In order to allow the unambiguous description of object spatial relationships, in this paper, following the symbolic projections approach, images are referred to by considering spatial relationships in the 3D imaged scene. A representation language is introduced that expresses positional and directional relationships between objects in three dimensions, still preserving object spatial extensions after projections. Iconic retrieval from pictorial databases with 3D interfaces is discussed and motivated. A system for querying by example with 3D icons, which supports this language, is also presented. >
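The 2D-string idea referred to above can be illustrated with a toy sketch: each labelled object is projected onto the x- and y-axes, the labels are read off in order, and retrieval reduces to (sub)string matching between the query string and the stored picture string. The code below handles only this simplest case, ignoring the relational operators, the 3D extension and the ambiguity handling discussed in the paper, and the object labels are made up for the example:

```python
def two_d_string(objects):
    """objects: dict label -> (x, y). Returns the (u, v) string pair."""
    u = "".join(sorted(objects, key=lambda o: objects[o][0]))  # order along x
    v = "".join(sorted(objects, key=lambda o: objects[o][1]))  # order along y
    return u, v

def is_subsequence(query, string):
    it = iter(string)
    return all(ch in it for ch in query)       # `in` advances the iterator

picture = {"t": (1, 5), "h": (3, 2), "c": (6, 1)}   # e.g. tree, house, car
query   = {"h": (0, 3), "c": (4, 1)}                # "a house left of and above a car"

pu, pv = two_d_string(picture)
qu, qv = two_d_string(query)
print(is_subsequence(qu, pu) and is_subsequence(qv, pv))   # True: the query matches
```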
---
paper_title: Video query formulation
paper_content:
For developing advanced query formulation methods for general multimedia data, we describe the issues related to video data. We distinguish between the requirements for image retrieval and video retrieval by identifying queryable attributes unique to video data, namely audio, temporal structure, motion, and events. Our approach is based on visual query methods to describe predicates interactively while providing feedback that is as similar as possible to the video data. An initial prototype of our visual query system for video data is presented.© (1995) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
---
paper_title: IconicBrowser: An iconic retrieval system for object-oriented databases
paper_content:
From the point of view of the ability to manage complex data, object-oriented database systems are superior to conventional database systems. Query languages for object-oriented databases, however, tend to be overly complicated for the casual user unfamiliar with sophisticated database manipulation. This paper describes an IconicBrowser which allows the user to retrieve objects in a database by means of icons. Icons represent classes and objects in the database. Queries are specified by overlapping one icon over another. The system then interprets them into the database operations depending on their combination, and at the same time generates predicate-based (text-base) queries which can be used in other applications of the database.
---
paper_title: PICQUERY: a high level query language for pictorial database management
paper_content:
A reasonably comprehensive set of data accessing and manipulation operations that should be supported by a generalized pictorial database management system (PDBMS) is proposed. A corresponding high-level query language, PICQUERY, is presented and illustrated through examples. PICQUERY has been designed with a flavor similar to QBE as the highly nonprocedural and conversational language for the pictorial database management system PICDMS. PICQUERY and a relational QBE-like language would form the language by which a user could access conventional relational databases and at the same time pictorial databases managed by PICDMS or other robust PDBMS. This language interface is part of an architecture aimed toward data heterogeneity transparency over pictorial and nonpictorial databases. >
---
paper_title: Media Streams: an iconic visual language for video annotation
paper_content:
In order to enable the search and retrieval of video from large archives, we need a representation of video content. Although some aspects of video can be automatically parsed, a detailed representation requires that video be annotated. We discuss the design criteria for a video annotation language with special attention to the issue of creating a global, reusable video archive. Our prototype system, Media Streams, enables users to create multi-layered, iconic annotations of streams of video data. Within Media Streams, the organization and categories of the Director's Workshop allow users to browse and compound over 2200 iconic primitives by means of a cascading hierarchical structure which supports compounding icons across branches of the hierarchy. The problems of creating a representation of action for video are given special attention, as well as describing transitions in video. >
---
paper_title: PROBE Spatial Data Modeling in an Image Database and Query Processing Application
paper_content:
The PROBE research project has produced results in the areas of data modeling, spatial/temporal query processing, recursive query processing, and database system architecture for “nontraditional” application areas, many of which involve spatial data and data with complex structure. PROBE provides the point set as a construct for modeling spatial data. This abstraction is compatible with notions of spatial data found in a wide variety of applications. PROBE is extensible and supports a generalization hierarchy, so it is possible to incorporate application-specific implementations of the point set abstraction. PROBE's query processor supports point sets with the geometry filter, an optimizer of spatial queries. Spatial queries are processed by decomposing them into 1) a set-at-a-time portion that is evaluated efficiently by the geometry filter and 2) a portion that involves detailed manipulations of individual spatial objects by functions supplied with the application-specific representation. The output from the first step is an approximate answer, which is refined in the second step. The data model and the geometry filter are valid in all dimensions, and they are compatible with a wide variety of representations. PROBE's spatial data model and geometry filter are described, and it is shown how these facilities can be used to support image database applications.
---
paper_title: SELECTION AND DISSEMINATION OF DIGITAL VIDEO VIA THE VIRTUAL VIDEO BROWSER
paper_content:
The Virtual Video Browser (VVB) is a manifestation of our mechanisms for the location, identification, and delivery of digital audio and video in a distributed system which can be extended to several application domains including multimedia-based home entertainment, catalog shopping, and distance learning. In the following sections we describe the VVB software application designed to allow the interactive browsing and content-based query of a video database and to facilitate the subsequent playout of selected titles.
---
paper_title: Query by image and video content: the QBIC system
paper_content:
Research on ways to extend and improve query methods for image databases is widespread. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information. Two key properties of QBIC are (1) its use of image and video content-computable properties of color, texture, shape and motion of images, videos and their objects-in the queries, and (2) its graphical query language, in which queries are posed by drawing, selecting and other graphical means. This article describes the QBIC system and demonstrates its query capabilities. QBIC technology is part of several IBM products. >
---
paper_title: Indexes for user access to large video databases
paper_content:
Video-on-Demand systems have received a good deal of attention recently. Few studies, however, have addressed the problem of locating a video of interest in a large video database. This paper describes the design and implementation of a metadata database and query interface that attempts to solve this information retrieval problem.© (1994) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
---
paper_title: Delivering interactive multimedia documents over networks
paper_content:
A family of applications that consists of interactive multimedia documents, such as electronic magazines and interactive TV shows, is examined and the links between application architecture, user behavior, and network performance are investigated. The kinds of application-specific information that influence the end-to-end quality of service are discussed. The architecture and dynamics of the interactive document in terms of presentation objects (P-Objects), which are the segments of information accessed by the application and which are described according to their size, media composition, and access links, are described. The same structural characteristics that may make an interactive multimedia document appealing to the end user are the characteristics that are helpful during dynamic network performance optimization. This observation is based on the hypothesis that the P-Objects' access graph, together with viewing time statistics, is the information most useful to the network delivery control mechanism for optimizing network performance. Preliminary guidelines for both network and application designers to address each other's concerns are presented. >
---
paper_title: Cinematic primitives for multimedia
paper_content:
The development of robust frameworks in interactive multimedia for representing story elements to the machine so that they can be retrieved in multiple contexts is addressed. Interactive multimedia is discussed as a user-directed form of storytelling, and the nature of cinematic storytelling is examined. It is proposed that content can be represented in layers. This model for layered information will allow programs to take advantage of the relation between cinematic sequences and the world they represent. The collection of content by the camera and microphone is considered in this context. The use of the methodology to build meaningful, context-rich sequences is discussed. >
---
paper_title: Multiresolution video indexing for subband coded video databases
paper_content:
In this paper we present a multiresolution approach for video indexing and feature matching of subband coded video databases. Subband coding refers to a coding technique where the input images are quantized after being decomposed into several narrow spatial frequency bands by filtering and decimation. Five different approaches were tested for scene change detection which is applied only on the lowest subband for computational efficiency. Two kinds of scene changes, abrupt and smoothly accumulated scene changes, mark the beginning of new scene segments. An index for each scene segment is the histogram of two representative frames, which we take to be the first and the last frame of the scene for simplicity. Using the approach of query by example, the index matching algorithm takes a multi-resolution approach by hierarchically comparing histograms at different resolutions. The search algorithm for the match between example query and its target scene segment starts from the coarsest resolution, and moves to the next finer resolution until a single match is obtained or the finest resolution is reached. Experimental results are presented, and the proposed indexing technique appears to be promising for its computational efficiency and its inherent hierarchical search procedure.
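The coarse-to-fine matching strategy can be sketched independently of the subband machinery. In the sketch below, histograms with progressively more bins stand in for histograms computed on progressively finer subbands; the bin counts, pruning width and toy "segments" are all illustrative assumptions.
```python
# Hedged sketch of coarse-to-fine index matching: candidate scene segments
# are pruned with coarse (few-bin) histograms before finer ones are compared.
import numpy as np

def grey_histogram(frame, bins):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def hierarchical_match(query, segments, levels=(4, 16, 64), keep=2):
    candidates = list(segments)
    for bins in levels:                       # coarse -> fine
        qh = grey_histogram(query, bins)
        scored = sorted(candidates,
                        key=lambda s: np.abs(grey_histogram(segments[s], bins)
                                             - qh).sum())
        candidates = scored[:keep]            # prune before the next level
        if len(candidates) == 1:
            break
    return candidates

rng = np.random.default_rng(1)
segments = {f"scene_{i}": rng.integers(0, 256, (32, 32)) for i in range(6)}
print(hierarchical_match(segments["scene_2"], segments))
```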
---
paper_title: Scene change detection and content-based sampling of video sequences
paper_content:
Digital images and image sequences (video) are a significant component of multimedia information systems, and by far the most demanding in terms of storage and transmission requirements. Content-based temporal sampling of video frames is proposed as an efficient method for representing the visual information contained in the video sequence by using only a small subset of the video frames. This involves the identification and retention of frames at which the contents of the scene is `significantly' different from the previously retained frames. It is argued that the criteria used to measure the significance of a change in the contents of the video frames are subjective, and performing the task of content-based sampling of image sequences, in general, requires a high level of image understanding. However, a significant subset of the points at which the contextual information in the video frames change significantly can be detected by a `scene change detection' method. The definition of a scene change is generalized to include not only the abrupt transitions between shots, but also gradual transitions between shots resulting from video editing modes, and inter-shot changes induced by camera operations. A method for detecting abrupt and gradual scene changes is discussed. The criteria for detecting camera-induced scene changes from camera operations are proposed. Scene matching is proposed as a means of achieving further reductions in the storage and transmission requirements.
---
paper_title: Automatic partitioning of full-motion video
paper_content:
Partitioning a video source into meaningful segments is an important step for video indexing. We present a comprehensive study of a partitioning system that detects segment boundaries. The system is based on a set of difference metrics and it measures the content changes between video frames. A twin-comparison approach has been developed to solve the problem of detecting transitions implemented by special effects. To eliminate the false interpretation of camera movements as transitions, a motion analysis algorithm is applied to determine whether an actual transition has occurred. A technique for determining the threshold for a difference metric and a multi-pass approach to improve the computation speed and accuracy have also been developed.
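A simplified version of the twin-comparison idea is sketched below: a high threshold declares an abrupt cut, while a lower threshold opens a candidate gradual transition whose accumulated difference must eventually reach the high threshold. The thresholds, the difference values and the exact accumulation rule are illustrative simplifications, not the parameters from the paper.
```python
# Illustrative twin-comparison pass over a list of frame-difference values.
# diffs[i] is the difference between frame i and frame i+1; Tb and Ts are
# the high and low thresholds (values here are made up for the example).

def twin_comparison(diffs, Tb=50.0, Ts=15.0):
    cuts, transitions = [], []
    start = None            # start of a candidate gradual transition
    accumulated = 0.0
    for i, d in enumerate(diffs):
        if d >= Tb:                         # abrupt camera break
            cuts.append(i)
            start, accumulated = None, 0.0
        elif d >= Ts:                       # possible gradual transition
            if start is None:
                start, accumulated = i, 0.0
            accumulated += d
            if accumulated >= Tb:           # accumulated change large enough
                transitions.append((start, i))
                start, accumulated = None, 0.0
        else:
            start, accumulated = None, 0.0  # candidate rejected
    return cuts, transitions

diffs = [3, 2, 60, 4, 20, 22, 25, 3, 2]      # one cut, one dissolve-like run
print(twin_comparison(diffs))                # -> ([2], [(4, 6)])
```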
---
paper_title: Video parsing using compressed data
paper_content:
Parsing video content is an important first step in the video indexing process. This paper presents algorithms to automate the video parsing task, including video partitioning and video clip classification according to camera operations using compressed video data. We have studied and implemented two algorithms for partitioning video data compressed according to the MPEG standard. The first one is based on discrete cosine transform coefficients of video frames, and the other based on correlation of motion vectors. Algorithms to detect camera operations using motion vectors are presented.
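The flavour of a DCT-domain partitioning metric can be sketched as follows: a few low-frequency coefficients per block are compared between corresponding frames, and a break is declared when a large fraction of blocks change. The coefficient arrays, tolerance and break threshold below are invented for illustration and are not the metrics of the implemented algorithms.
```python
# Sketch of a DCT-domain partitioning metric: compare a few low-frequency
# DCT coefficients of corresponding 8x8 blocks and flag a camera break when
# many blocks change. Coefficients are faked with random arrays here.
import numpy as np

def block_change_fraction(dct_a, dct_b, coeffs=4, tol=5.0):
    """dct_a, dct_b: arrays of shape (num_blocks, 64) with per-block DCTs."""
    a = dct_a[:, :coeffs]                      # keep low-frequency terms only
    b = dct_b[:, :coeffs]
    changed = np.abs(a - b).mean(axis=1) > tol
    return changed.mean()

rng = np.random.default_rng(2)
frame1 = rng.normal(0, 10, (300, 64))          # stand-in DCT coefficients
frame2 = frame1 + rng.normal(0, 1, (300, 64))  # same shot, small changes
frame3 = rng.normal(0, 10, (300, 64))          # different shot

for pair in [(frame1, frame2), (frame1, frame3)]:
    frac = block_change_fraction(*pair)
    print(f"changed blocks: {frac:.2f}", "-> break" if frac > 0.5 else "")
```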
---
paper_title: The JPEG still picture compression standard
paper_content:
For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method.
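The 8x8 forward DCT at the core of the Baseline method can be computed directly from the DCT-II definition, as sketched below. The uniform quantiser is a deliberate simplification; the standard's actual quantisation tables and entropy coding are omitted.
```python
# Worked example of the 8x8 forward DCT used by baseline JPEG, written
# directly from the DCT-II definition (the quantiser below is a uniform
# stand-in, not the table from the standard).
import numpy as np

N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                 # orthonormal scaling of row 0

def dct2(block):
    """2-D DCT-II of an 8x8 block of level-shifted samples."""
    return C @ block @ C.T

rng = np.random.default_rng(3)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128.0   # level shift
coeffs = dct2(block)
quantised = np.round(coeffs / 16.0)        # crude uniform quantiser
print(int(quantised[0, 0]), "DC term;",
      int((quantised != 0).sum()), "non-zero coefficients")
```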
---
paper_title: A Guided Tour Of Computer Vision
paper_content:
An introduction to computer vision, covering the structure and properties of the visual world. This concise guide stresses fundamental concepts, and also provides details and pointers with respect to recent developments. The author pursues the narrow view of vision covering the structure and properties of the visual world, thereby providing a lucid introduction for the novice and a fresh perspective to the expert.
---
paper_title: Segmentation of Frame Sequences Obtained by a Moving Observer
paper_content:
In many applications of computer vision, a frame sequence may be acquired using a moving camera. We propose ego-motion polar transformation for segmentation of such sequences. It is shown that segmentation and extraction of motion information become easier in the transformed domain. Our experience with a translating camera indicates that this technique can play a very important role in the analysis of moving observer dynamic scenes.
---
paper_title: Multiresolution video indexing for subband coded video databases
paper_content:
In this paper we present a multiresolution approach for video indexing and feature matching of subband coded video databases. Subband coding refers to a coding technique where the input images are quantized after being decomposed into several narrow spatial frequency bands by filtering and decimation. Five different approaches were tested for scene change detection which is applied only on the lowest subband for computational efficiency. Two kinds of scene changes, abrupt and smoothly accumulated scene changes, mark the beginning of new scene segments. An index for each scene segment is the histogram of two representative frames, which we take to be the first and the last frame of the scene for simplicity. Using the approach of query by example, the index matching algorithm takes a multi-resolution approach by hierarchically comparing histograms at different resolutions. The search algorithm for the match between example query and its target scene segment starts from the coarsest resolution, and moves to the next finer resolution until a single match is obtained or the finest resolution is reached. Experimental results are presented, and the proposed indexing technique appears to be promising for its computational efficiency and its inherent hierarchical search procedure.
---
paper_title: Digital video segmentation
paper_content:
The data driven, bottom up approach to video segmentation has ignored the inherent structure that exists in video. This work uses the model driven approach to digital video segmentation. Mathematical models of video based on video production techniques are formulated. These models are used to classify the edit effects used in video and film production. The classes and models are used to systematically design the feature detectors for detecting edit effects in digital video. Digital video segmentation is formulated as a feature based classification problem. Experimental results from segmenting cable television programming with cuts, fades, dissolves and page translate edits are presented.
---
paper_title: Projection-detecting filter for video cut detection
paper_content:
This paper discusses a video cut detection method. Cut detection is an important technique for making videos easier to handle. First, this paper analyzes the distribution of the image difference V to clarify the characteristics that make V suitable for cut detection. We propose a cut detection method that uses a projection (an isolated sharp peak) detecting filter. A motion-sensitive V is used to stabilize V projections at cuts, and cuts are detected more reliably with this filter. The method can achieve high detection rates without increasing the rate of misdetection. Experimental results confirm the effectiveness of the filter.
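The projection idea, treating a difference value as a cut only when it is an isolated sharp peak relative to its neighbourhood, can be sketched as follows. The window size and dominance ratio are illustrative parameters, not the filter design from the paper.
```python
# Sketch of projection (isolated sharp peak) detection in a frame-difference
# series: a value is a cut candidate only if it dominates its neighbourhood.

def detect_projections(diffs, window=2, ratio=3.0):
    cuts = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        neighbours = [diffs[j] for j in range(lo, hi) if j != i]
        if neighbours and d > ratio * max(neighbours):
            cuts.append(i)
    return cuts

# Object motion gives a run of moderate differences (no isolated peak),
# while the real cut at index 7 towers over its neighbours.
diffs = [2, 3, 14, 15, 16, 14, 3, 55, 4, 2]
print(detect_projections(diffs))    # -> [7]
```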
---
paper_title: Automatic partitioning of full-motion video
paper_content:
Partitioning a video source into meaningful segments is an important step for video indexing. We present a comprehensive study of a partitioning system that detects segment boundaries. The system is based on a set of difference metrics and it measures the content changes between video frames. A twin-comparison approach has been developed to solve the problem of detecting transitions implemented by special effects. To eliminate the false interpretation of camera movements as transitions, a motion analysis algorithm is applied to determine whether an actual transition has occurred. A technique for determining the threshold for a difference metric and a multi-pass approach to improve the computation speed and accuracy have also been developed.
---
paper_title: The R*-tree: an efficient and robust access method for points and rectangles
paper_content:
The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree, which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants: Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive for two reasons: (1) it efficiently supports point and spatial data at the same time, and (2) its implementation cost is only slightly higher than that of other R-trees.
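The three quantities the R*-tree split heuristic trades off, area, margin and overlap of the enclosing rectangles, are straightforward to compute for axis-aligned rectangles. The sketch below shows only this bookkeeping for two hypothetical entry groups; it is not an R*-tree implementation.
```python
# The quantities behind the R*-tree split heuristic for a candidate split
# into two groups of entries: area, margin (perimeter), and overlap of the
# two enclosing rectangles. Rectangles are (xmin, ymin, xmax, ymax).

def mbr(rects):
    xs0, ys0, xs1, ys1 = zip(*rects)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def margin(r):
    return 2 * ((r[2] - r[0]) + (r[3] - r[1]))

def overlap(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

group1 = [(0, 0, 2, 2), (1, 1, 3, 2)]       # hypothetical entry groups
group2 = [(4, 0, 6, 1), (5, 0, 7, 2)]
m1, m2 = mbr(group1), mbr(group2)
print("area:", area(m1) + area(m2),
      "margin:", margin(m1) + margin(m2),
      "overlap:", overlap(m1, m2))
```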
---
paper_title: Description and performance analysis of signature file methods for office filing
paper_content:
Signature files have attracted a lot of interest as an access method for text and specifically for messages in the office environment. Messages are stored sequentially in the message file, whereas their hash-coded abstractions (signatures) are stored sequentially in the signature file. To answer a query, the signature file is examined first, and many nonqualifying messages are immediately rejected. In this paper we examine the problem of designing signature extraction methods and studying their performance. We describe two old methods, generalize another one, and propose a new method and its variation. We provide exact and approximate formulas for the dependency between the false drop probability and the signature size for all the methods, and we show that the proposed method (VBC) achieves approximately ten times smaller false drop probability than the old methods, whereas it is well suited for collections of documents with variable document sizes.
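A minimal superimposed-coding sketch is given below, assuming an invented 64-bit signature width and three bits per word: word signatures are OR-ed into a document signature, and a document qualifies for a query word when all of the word's bits are set, which is exactly where false drops can arise.
```python
# Minimal superimposed-coding signature file: each word sets a few bits of a
# fixed-width signature; a document is a candidate for a query word when all
# of the word's bits are set. Width and bits-per-word are illustrative.
import hashlib

WIDTH, BITS_PER_WORD = 64, 3

def word_signature(word):
    sig = 0
    for i in range(BITS_PER_WORD):
        digest = hashlib.sha256(f"{word}:{i}".encode()).digest()
        sig |= 1 << (int.from_bytes(digest[:4], "big") % WIDTH)
    return sig

def doc_signature(text):
    sig = 0
    for word in text.lower().split():
        sig |= word_signature(word)        # superimpose word signatures
    return sig

docs = {"m1": "budget meeting moved to friday",
        "m2": "printer out of toner again",
        "m3": "friday demo for the steering committee"}
sigs = {k: doc_signature(v) for k, v in docs.items()}

query = word_signature("friday")
candidates = [k for k, s in sigs.items() if s & query == query]
print(candidates)   # m1 and m3 qualify; any extra hit would be a false drop
```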
---
paper_title: Indexes for user access to large video databases
paper_content:
Video-on-Demand systems have received a good deal of attention recently. Few studies, however, have addressed the problem of locating a video of interest in a large video database. This paper describes the design and implementation of a metadata database and query interface that attempts to solve this information retrieval problem.
---
paper_title: A three-dimensional iconic environment for image database querying
paper_content:
Retrieval by contents of images from pictorial databases can be effectively performed through visual icon-based systems. In these systems, the representation of pictures with 2D strings, which are derived from symbolic projections, provides an efficient and natural way to construct iconic indexes for pictures and is also an ideal representation for the visual query. With this approach, retrieval is reduced to matching two symbolic strings. However, using 2D-string representations, spatial relationships between the objects represented in the image might not be exactly specified. Ambiguities arise for the retrieval of images of 3D scenes. In order to allow the unambiguous description of object spatial relationships, in this paper, following the symbolic projections approach, images are referred to by considering spatial relationships in the 3D imaged scene. A representation language is introduced that expresses positional and directional relationships between objects in three dimensions, still preserving object spatial extensions after projections. Iconic retrieval from pictorial databases with 3D interfaces is discussed and motivated. A system for querying by example with 3D icons, which supports this language, is also presented. >
---
paper_title: Prefix B-trees
paper_content:
Two modifications of B-trees are described, simple prefix B-trees and prefix B-trees. Both store only parts of keys, namely prefixes, in the index part of a B*-tree. In simple prefix B-trees those prefixes are selected carefully to minimize their length. In prefix B-trees the prefixes need not be fully stored, but are reconstructed as the tree is searched. Prefix B-trees are designed to combine some of the advantages of B-trees, digital search trees, and key compression techniques while reducing the processing overhead of compression techniques.
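The source of the space saving, storing only the shortest prefix that still separates two neighbouring pages, can be shown with a tiny helper; the surrounding B-tree machinery is omitted and the example keys are made up.
```python
# The separator rule behind simple prefix B-trees: when a leaf page splits,
# the index node needs only the shortest prefix of the first key of the
# right page that is still strictly greater than the last key of the left
# page. (The B-tree insertion and search code is omitted.)

def shortest_separator(last_left, first_right):
    for i in range(1, len(first_right) + 1):
        prefix = first_right[:i]
        if prefix > last_left:
            return prefix
    return first_right

print(shortest_separator("Cookiemonster", "Cucumber"))   # -> "Cu"
print(shortest_separator("programmer", "programming"))   # -> "programmi"
```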
---
paper_title: ACCESS METHODS OF IMAGE DATABASE
paper_content:
The perception of spatial relationships among objects in a picture is one of the important selection criteria to discriminate and retrieve images in an image database system. The data structure called 2-D string, proposed by Chang et al., is adopted to represent the symbolic pictures. When there are a large number of images in the image database and each image contains many objects, the processing time for image retrievals is tremendous. It is essential to develop efficient access methods for these retrievals. In this paper, the efficient methods for retrieval by objects, retrieval by pairwise spatial relationships and retrieval by subpicture are proposed. All the methods are based on the superimposed coding technique.
---
paper_title: The K-D-B-tree: a search structure for large multidimensional dynamic indexes
paper_content:
The problem of retrieving multikey records via range queries from a large, dynamic index is considered. By large it is meant that most of the index must be stored on secondary memory. By dynamic it is meant that insertions and deletions are intermixed with queries, so that the index cannot be built beforehand. A new data structure, the K-D-B-tree, is presented as a solution to this problem. K-D-B-trees combine properties of K-D-trees and B-trees. It is expected that the multidimensional search effieciency of balanced K-D-trees and the I/O efficiency of B-trees should both be approximated in the K-D-B-tree. Preliminary experimental results that tend to support this are reported.
---
paper_title: Representation of multi-resolution symbolic and binary pictures using 2D H-strings
paper_content:
A type of quadtree data structure, called a 2D H-string, is proposed for representing symbolic and binary pictures. The approach has an embedding property that allows for spatial-relational operators to be easily inserted into the structure. Algorithms are presented for traversing 2D H-strings to carry out the processing needed and for intersecting two pictures directly on their 2D H-string representations. The space complexity of 2D H-strings is analyzed and compared with other quadtree data structures. It is shown to be efficient in terms of space complexity and effective in terms of cooperativeness with other spatial-relational operators. >
---
paper_title: A pictorial index mechanism for model-based matching
paper_content:
Abstract We are currently developing unified query processing strategies for image databases. To perform this task, model-based representations of images by content are being used, as well as a hierarchical generalization of a relatively new object-recognition technique called data-driven indexed hypotheses. As the name implies, it is index-based, from which its efficiency derives. Earlier approaches to data-driven model-based object recognition techniques were not capable of handling complex image data containing overlapping, partially visible, and touching objects due to the limitations of the features used for building models. Recently, a few data-driven techniques capable of handling complex image data have been proposed. In these techniques, as in traditional databases, iconic index structures are employed to store the image and shape representation in such a way that searching for a given shape or image feature can be conducted efficiently. Some of these techniques handle the insertion and deletion of shapes and/or image representations very efficiently and with very little influence on the overall system performance. However, the main disadvantage of all previous data-driven implementations is that they are main memory based. In the present paper, we describe a secondary memory implementation of data-driven indexed hypotheses along with some performance studies we have conducted.
---
paper_title: Access methods for text
paper_content:
This paper compares text retrieval methods intended for office systems. The operational requirements of the office environment are discussed, and retrieval methods from database systems and from information retrieval systems are examined. We classify these methods and examine the most interesting representatives of each class. Attempts to speed up retrieval with special purpose hardware are also presented, and issues such as approximate string matching and compression are discussed. A qualitative comparison of the examined methods is presented. The signature file method is discussed in more detail.
---
paper_title: Video parsing using compressed data
paper_content:
Parsing video content is an important first step in the video indexing process. This paper presents algorithms to automate the video parsing task, including video partitioning and video clip classification according to camera operations using compressed video data. We have studied and implemented two algorithms for partitioning video data compressed according to the MPEG standard. The first one is based on discrete cosine transform coefficients of video frames, and the other based on correlation of motion vectors. Algorithms to detect camera operations using motion vectors are presented.
---
paper_title: Multidimensional binary search trees used for associative searching
paper_content:
This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n-record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.
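A compact sketch of the structure is given below: a textbook build with a cycling splitting dimension and a nearest-neighbour search with subtree pruning. It illustrates the idea only and does not reproduce the paper's optimization or deletion algorithms.
```python
# Compact k-d tree sketch (cycling splitting dimension) with nearest-
# neighbour search; a textbook construction, not an optimised variant.
import math

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    if node is None:
        return best
    if best is None or math.dist(node["point"], target) < math.dist(best, target):
        best = node["point"]
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < math.dist(best, target):      # other side may still win
        best = nearest(far, target, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
print(nearest(build(pts), (6, 3)))   # -> (7, 2); (5, 4) is equally close
```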
---
paper_title: The Grid File: An Adaptable, Symmetric Multikey File Structure
paper_content:
Traditional file structures that provide multikey access to records, for example, inverted files, are extensions of file structures originally designed for single-key access. They manifest various deficiencies in particular for multikey access to highly dynamic files. We study the dynamic aspects of file structures that treat all keys symmetrically, that is, file structures which avoid the distinction between primary and secondary keys. We start from a bitmap approach and treat the problem of file design as one of data compression of a large sparse matrix. This leads to the notions of a grid partition of the search space and of a grid directory , which are the keys to a dynamic file structure called the grid file . This file system adapts gracefully to its contents under insertions and deletions, and thus achieves an upper bound of two disk accesses for single record retrieval; it also handles range queries and partially specified queries efficiently. We discuss in detail the design decisions that led to the grid file, present simulation results of its behavior, and compare it to other multikey access file structures.
---
paper_title: Conceptual modeling of multimedia documents
paper_content:
An approach to the document-retrieval problem that aims to increase the efficiency and effectiveness of document-retrieval systems by exploiting the semantic contents of the documents is presented. The document retrieval problem is delineated, and conceptual document modeling basics and requirements are discussed. An experimental system, the Multimedia Office Server (Multos), which implements some of the document-model concepts described, is presented. >
---
paper_title: A unified approach to data modeling and retrieval for a class of image database applications
paper_content:
Recently, there has been widespread interest in various kinds of database management systems for managing information from images. Image Retrieval problem is concerned with retrieving images that are relevant to users’ requests from a large collection of images, referred to as the image database. Since the application areas are very diverse, there seems to be no consensus as to what an image database system really is. Consequently, the characteristics of the existing image database systems have essentially evolved from domain specific considerations [20]. In response to this situation, we have introduced a unified framework for retrieval in image databases in [17]. Our approach to the image retrieval problem is based on the premise that it is possible to develop a data model and an associated retrieval model that can address the needs of a class of image retrieval applications. For this class of applications, from the perspective of the end users, image processing and image retrieval are two orthogonal issues and this distinction contributes toward domain-independence. In this paper, we analyze the existing approaches to image data modeling and establish a taxonomy based on which these approaches can be systematically studied and understood. Then we investigate a class of image retrieval applications from the view point of their retrieval requirements to establish both a taxonomy for image attributes and generic retrieval types. To support the generic retrieval types, we have proposed a data model/framework referred to as AIR. AIR data model employs multiple logical representations. The logical representations can be viewed as abstractions of physical images at various levels. They are stored as persistent data in the image database. We then discuss how image database systems can be developed based on the AIR framework. Development of two image database retrieval applications based on our implementation of AIR framework are briefly described. Finally, we identify several research issues in AIR and our proposed solutions to some of them are indicated.
---
paper_title: Chabot: Retrieval from a Relational Database of Images
paper_content:
Selecting from a large, expanding collection of images requires carefully chosen search criteria. We present an approach that integrates a relational database retrieval system with a color analysis technique. The Chabot project was initiated at our university to study storage and retrieval of a vast collection of digitized images. These images are from the State of California Department of Water Resources. The goal was to integrate a relational database retrieval system with content analysis techniques that would give our querying system a better method for handling images. Our simple color analysis method, if used in conjunction with other search criteria, improves our ability to retrieve images efficiently. The best result is obtained when text-based search criteria are combined with content-based criteria and when a coarse granularity is used for content analysis. >
---
paper_title: SELECTION AND DISSEMINATION OF DIGITAL VIDEO VIA THE VIRTUAL VIDEO BROWSER
paper_content:
The Virtual Video Browser (VVB) is a manifestation of our mechanisms for the location, identification, and delivery of digital audio and video in a distributed system which can be extended to several application domains including multimedia-based home entertainment, catalog shopping, and distance learning. In the following sections we describe the VVB software application designed to allow the interactive browsing and content-based query of a video database and to facilitate the subsequent playout of selected titles.
---
paper_title: Query by image and video content: the QBIC system
paper_content:
Research on ways to extend and improve query methods for image databases is widespread. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information. Two key properties of QBIC are (1) its use of image and video content-computable properties of color, texture, shape and motion of images, videos and their objects-in the queries, and (2) its graphical query language, in which queries are posed by drawing, selecting and other graphical means. This article describes the QBIC system and demonstrates its query capabilities. QBIC technology is part of several IBM products. >
---
| Title: A survey of technologies for parsing and indexing digital video
Section 1: Introduction
Description 1: This section introduces the background and the necessity for advanced technologies in parsing and indexing digital video.
Section 2: Video Data Indexing
Description 2: This section discusses the requirements for effectively indexing video data for easy retrieval, and mentions the primary approaches and systems used for video indexing.
Section 3: Video Data Modeling
Description 3: This section covers the methods of modeling video data, highlighting the differences between structured and unstructured data, and introduces various models for video data.
Section 4: Information Extraction
Description 4: This section describes the processes and techniques used for extracting useful information from video data, including automatic, manual, and hybrid extraction methods.
Section 5: Video Scene Dynamics
Description 5: This section explains the dynamics of video scenes, particularly focusing on the challenges of segmenting video data based on scene transitions.
Section 6: Pixel-Level Change Detection
Description 6: This section delves into the methods of detecting changes at the pixel level for video segmentation, discussing various algorithms and their robustness.
Section 7: Likelihood Ratio
Description 7: This section presents the likelihood ratio approach for detecting changes in video frames based on statistical characteristics of pixel intensities.
Section 8: Histogram Comparison
Description 8: This section explains the use of histogram comparison methods for detecting scene changes, highlighting gray-level and color histogram techniques.
Section 9: Twin-Comparison
Description 9: This section introduces the twin-comparison method for detecting gradual transitions in video scenes with special camera effects.
Section 10: Detection of Camera Motion
Description 10: This section describes the techniques used to detect camera motion, such as optical flow, for accurate video scene segmentation.
Section 11: Using DCT Coefficients in MPEG Encoded Video
Description 11: This section discusses the use of Discrete Cosine Transform (DCT) coefficients from MPEG encoded video for detecting camera breaks and segmentation.
Section 12: Segmentation Based on Object Motion
Description 12: This section covers the segmentation of video data based on the analysis of object motion within the video frames.
Section 13: Segmentation Based on Subband-Coded Video Data
Description 13: This section explains segmentation techniques for subband-coded video data, detailing various metrics used for video segmentation.
Section 14: Segmentation Based on Features
Description 14: This section outlines feature-based segmentation methods, including intensity edge analysis and chromatic scaling models.
Section 15: Data Representation and Organization
Description 15: This section reviews various data representation and organization techniques for indexing and retrieving video data, and compares different data structures and methods.
Section 16: Examples
Description 16: This section provides practical examples of video data indexing and retrieval for different types of video content, such as newscasts, distance learning, and surveillance.
Section 17: Conclusion
Description 17: This section concludes the paper, summarizing the findings and future directions in the field of video parsing and indexing technologies. |
Modular Self-Reconfigurable Robotic Systems: A Survey on Hardware Architectures | 9 | ---
paper_title: Modular robot systems towards the execution of cooperative tasks in large facilities
paper_content:
Large facilities present with wide range of tasks and modular robots present as a flexible robot solution. Some of the tasks to be performed in large facilities can vary from, achieving locomotion with different modular robot (M-Robot) configurations or the execution of cooperative tasks such as moving objects or manipulating objects with multiple modular robot configurations (M-Robot colony) and existing robot deployments. The coordination mechanisms enable the M-Robots to perform cooperative tasks as efficiently as specialised or standard robots. The approach is based on the combination of two communication types i.e., Inter Robot and Intra Robot communications. Through this communication architecture, tight and loose cooperation strategies are implemented to synchronise modules within an M-Robot configuration and to coordinate M-Robots belonging to the colony. These cooperation strategies are based on a closed-loop discrete time method, a remote clock reading method and a negotiation protocol. The coordination mechanisms and cooperation strategies are implemented into a real modular robotic system, SMART. The need for using such a mechanism in hazardous section of large scientific facilities is presented along with constraints and tasks. Locomotion execution of the mobile M-Robots colony in a bar-pushing task is used as an example for cooperative task execution of the coordination mechanisms and results are presented. We present the results of cooperative task execution using heterogeneous modular robotic system (MRS).It is performed using tight and loosely coupled inter and intra robot configurations.The approach is based on the use of an inventory of three types of basic modules of the modular robot system.The heterogeneity in MRS is an advantageous property for the diverse tasks in large facilities.
---
paper_title: Modular and reconfigurable mobile robotics
paper_content:
With increasing demand on reliable robotic platforms that can alleviate the burden of daily painstaking tasks, researchers have focused their effort towards developing robotic platforms that possess a high level of autonomy and versatility in function. These robots, capable of operating either individually or in a group, also possess the structural modular morphology that enables them to adapt to the unstructured nature of a real environment. Over the past two decades, significant work has been published in this field, particularly in the aspects of autonomy, mobility and docking. This paper reviews the primary methods in the literature related to the fields of modular and reconfigurable mobile robotics. By bringing together aspects of modularity, including docking and autonomy, and synthesizing the most relevant findings, there is optimism that a more complete understanding of this field will serve as a starting ground for innovation and integration of such technology in the urban environment.
---
paper_title: Get back in shape! [SMA self-reconfigurable microrobots]
paper_content:
Self-reconfigurable robotic systems composed of multiple modules have been investigated intensively with respect to their versatility, flexibility, and fault-tolerance. Although some microscale self-assembly systems have been reported, they are passively assembled to predetermined shape by surface tension in an irreversible manner and cannot form arbitrary shapes. To develop a modular microrobot that can actively reconfigure itself, we adopt an actuating mechanism driven by a shape memory alloy (SMA). One of the advantages of an SMA actuator is that it keeps a higher power-weight ratio on microscales than electromagnetic motors.
---
paper_title: Scalable modular self-reconfigurable robots using external actuation
paper_content:
This paper presents a method for scaling down the size and scaling up the number of modules of self- re configurable systems by focusing on the actuation mechanism. Rather than developing smaller actuators, the main actuator is removed entirely. Energy instead comes from the environment to provide motion in prescribed synchronous ways. Prescribed synchronous motions allow much faster assembly times than random Brownian motion which has been used before. An instantiation of this idea is presented using a motion platform to induce motions based on the inertial properties of the modules and the timed actuation of small latching mechanisms.
---
paper_title: A physical implementation of the self-reconfiguring crystalline robot
paper_content:
We discuss a physical implementation of the crystalline robot system. Crystalline robots consist of modules that can aggregate together to form distributed robot systems. Crystalline modules are actuated by expanding and contracting each unit. This actuation mechanism permits automated shape metamorphosis. We describe the crystalline module concept and a physical implementation of a robot system with ten units. We describe experiments with this robot.
---
paper_title: A Modular Self-Reconfigurable Bipartite Robotic System: Implementation and Motion Planning
paper_content:
In this manuscript, we discuss i>I-Cubes, a class of modular robotic system that is capable of reconfiguring itself in 3-D to adapt to its environment. This is a bipartite system, i.e., a collection of (i) active elements for actuation, and (ii) passive elements acting as connectors. Active elements (i>links) are 3-DOF manipulators that are capable of attaching/detaching from/to the passive elements (i>cubes), which can be positioned and oriented using links. Self-reconfiguration capability enables the system to perform locomotion tasks over difficult terrains the shape and size can be changed according to the task. This paper describes the design of the system, and 3-D reconfiguration properties. Specifics of the hardware implementation, results of the experiments with the current prototypes, our approach to motion planning and problems related to 3-D motion planning are given.
---
paper_title: PetRo: Development of a modular pet robot
paper_content:
Modular robots present many advantages that open opportunities for novel applications and greater variability in operational parameters. We present PetRo (which stands for Pet Robot) as a modular throwable, self-assembling and reconfigurable pet robot. Some of the design challenges and solutions selected are explained as well as the two main applications for which the robot architecture is being developed: namely as a companion pet and for search and rescue operations. Although both applications are being considered, this paper places emphasis on the first and the potential for human robot interactive communication is highlighted.
---
paper_title: M-blocks: Momentum-driven, magnetic modular robots
paper_content:
In this paper, we describe a novel self-assembling, self-reconfiguring cubic robot that uses pivoting motions to change its intended geometry. Each individual module can pivot to move linearly on a substrate of stationary modules. The modules can use the same operation to perform convex and concave transitions to change planes. Each module can also move independently to traverse planar unstructured environments. The modules achieve these movements by quickly transferring angular momentum accumulated in a self-contained flywheel to the body of the robot. The system provides a simplified realization of the modular actions required by the sliding cube model using pivoting. We describe the principles, the unit-module hardware, and extensive experiments with a system of eight modules.
---
paper_title: Self-reconfiguring robots
paper_content:
At the Dartmouth Robotics Laboratory, we have a vision of the future: a modular robot that can assume a snake shape to traverse a tunnel, reconfigure upon exiting as a six-legged robot to traverse rough terrain, and change shape and gait to climb stairs and enter a building. The key to such versatility is self-reconfiguration-a new way of thinking about robots that offers a rich class of questions about robot design, control, and use. To help realize our vision, we designed the Molecule, a small and simple robotic module, capable of self-reconfiguration in three dimensions.
---
paper_title: Kinematics of a metamorphic robotic system
paper_content:
A metamorphic robotic system is a collection of mechatronic modules, each of which has the ability to connect, disconnect, and climb over adjacent modules. A change in the macroscopic morphology results from the locomotion of each module over its neighbors. That is, a metamorphic system can dynamically self-reconfigure. Metamorphic systems can therefore be viewed as a large swarm of physically connected robotic modules which collectively act as a single entity. What separates metamorphic systems from other reconfigurable robots is that they possess all of the following properties: (1) self-reconfigurability without outside help; (2) a large number of homogeneous modules; and (3) physical constraints ensure contact between modules. In this paper, the kinematic constraints governing a particular metamorphic robot are addressed. >
---
paper_title: 3D M-Blocks: Self-reconfiguring robots capable of locomotion via pivoting in three dimensions
paper_content:
This paper presents the mechanical design of a modular robot called the 3D M-Block, a 50mm cubic module capable of both independent and lattice-based locomotion. The first M-Blocks described in [1] could pivot about one axis of rotation only. In contrast, the 3D M-blocks can exert on demand both forward and backward torques about three orthogonal axes, for a total of six directions. The 3D M-Blocks transform these torques into pivoting motions which allow the new 3D M-Blocks to move more freely than their predecessors. Individual modules can employ pivoting motions to independently roll across a wide variety of surfaces as well as to join and move relative to other M-Blocks as part of a larger collective structure. The 3D M-Block maintains the same form factor and magnetic bonding system as the one-dimensional M-Blocks [1], but a new fabrication process supports more efficient and precise production. The 3D M-blocks provide a robust and capable modular self-reconfigurable robotic platform able to support swarm robot applications through individual module capabilities and self-reconfiguring robot applications using connected lattices of modules.
---
paper_title: Mechatronic design of a modular self-reconfiguring robotic system
paper_content:
Design and implementation of I-Cubes, a modular self-reconfigurable robotic system, is discussed. I-Cubes is a bipartite collection of individual modules that can be independently controlled. The group consists of active elements, called links, which are 3-DOF manipulators capable of attaching to/detaching from the passive elements (cubes) acting as connectors. The cubes can be oriented and positioned by the links. Using actuation and attachment properties of the link and the cubes, the system can self-reconfigure to adapt to its environment. Tasks such as moving over obstacles, climbing stairs can be performed by changing the relative position and connection of the modules. The links are actuated using servomotors and worm gear mechanisms. Mechanical encoders and rotary switches provide position feedback for semi-autonomous control of the system. The cubes are equipped with a novel mechanism that provides inter-module attachment. Design and hardware implementation of the system as well as experimental results are presented.
---
paper_title: Micro Self-Reconfigurable Robotic System using Shape Memory Alloy
paper_content:
This paper presents micro self-reconfigurable modular robotic systems using shape memory alloy (SMA). The system is designed so that various shapes can be autonomously formed by a group of identical mechanical units. The unit realizes rotational motion by using an actuator mechanism composed of two SMA torsion coil springs. We have realized micro-size prototype units and examined their fundamental functions by experiments. An extended 3D system and its self-reconfiguration algorithm are also discussed.
---
paper_title: Em-cube: cube-shaped, self-reconfigurable robots sliding on structure surfaces
paper_content:
Many previous works simulate cube-shaped modular robots to explain their systems and algorithms. This paper explores a cube-shaped, self-reconfigurable system composed of EM-Cube robot modules. The paper describes the system's design, implementation, movement algorithms, and experimentation. It reports on the hardware and software, and presents the algorithms of linear walking, convex and concave transition, and locomotion. Finally, it discusses EM-Cube locomotion experiments.
---
paper_title: A 3-D self-reconfigurable structure
paper_content:
A three-dimensional, self-reconfigurable structure is proposed. The structure is a fully distributed system composed of many identical 3-D units. Each unit has functions of changing local connection, information processing, and communication among neighborhood units. Groups of units cooperate to change their connection so that the shape of the whole solid structure transforms into an arbitrary shape. Also, the structure can repair itself by rejecting faulty units, replacing them with spare units. This kind of self-maintainability is essential to structure's longevity in hazardous or remote environments such as space or deep sea where human operators cannot approach. We have designed and built a prototype unit to examine the feasibility of the 3-D self-reconfigurable concept. The design of the unit, method of reconfiguration, hardware implementation, and results of preliminary experiments are shown. In the last part of the paper, distributed software for self-reconfiguration is discussed.
---
paper_title: Useful metrics for modular robot motion planning
paper_content:
In this paper the problem of dynamic self-reconfiguration of a class of modular robotic systems referred to as metamorphic systems is examined. A metamorphic robotic system is a collection of mechatronic modules, each of which has the ability to connect, disconnect, and climb over adjacent modules. We examine the near-optimal reconfiguration of a metamorphic robot from an arbitrary initial configuration to a desired final configuration. Concepts of distance between metamorphic robot configurations are defined, and shown to satisfy the formal properties of a metric. These metrics, called configuration metrics, are then applied to the automatic self-reconfiguration of metamorphic systems in the case when one module is allowed to move at a time. There is no simple method for computing the optimal sequence of moves required to reconfigure. As a result, heuristics which can give a near optimal solution must be used. We use the technique of simulated annealing to drive the reconfiguration process with configuration metrics as cost functions. The relative performance of simulated annealing with different cost functions is compared and the usefulness of the metrics developed in this paper is demonstrated.
---
paper_title: Modular ATRON: modules for a self-reconfigurable robot
paper_content:
This paper describes the mechanical and electrical design of a new lattice based self-reconfigurable robot, called the ATRON. The ATRON system consists of several fully self-contained robot modules, each having their own processing power, power supply, sensors and actuators. The ATRON modules are roughly spheres with equatorial rotation. Each module can be connected to up to eight neighbors through four male and four female connectors. In this paper, we describe the realization of the design, both the mechanics and the electronics. Details on power sharing and power consumption is given. Finally, this paper includes a brief outline of our future work on the ATRON system.
---
paper_title: Reliable External Actuation for Extending Reachable Robotic Modular Self-Reconfiguration
paper_content:
External actuation in self-reconfigurable modular robots promises to allow modules to shrink down in size. Synchronous external motions promise to allow fast convergence and assembly times. XBot is a modular system that uses synchronous external actuation, but has a limited range of reachable configurations stemming from a single motion primitive of a module rotating about another. This paper proposes to extend the motion primitives by using moves with two modules swinging in a dynamic chain. The feasibility of these motion primitives is proven experimentally. A parameterization of the external actuation motion profiles is explored to define a space of physically valid motion profiles. The larger the space, the more robust the motion primitives will be to inexact initial conditions and to imprecision in the external actuation mechanisms. Additionally, this paper proves a configuration of XBot meta-modules can reach any configuration using just these motion primitives.
---
paper_title: A 3-D self-reconfigurable structure and experiments
paper_content:
A three-dimensional self-reconfigurable structure made of identical units is proposed. Each unit has six arms on the surface of its base cube which can connect to neighboring units mechanically. By the connection, cubic lattice structure is formed. A unit can carry its neighbor unit from one node of the lattice to another by rotating its arm by 90 degrees. Repeating this movement, the structure can reconfigure itself to realize various 3D structures. General process of reconfiguration were proposed for this system. Four units were made and basic motions of self-reconfiguration were verified.
---
paper_title: Robust and reversible self-reconfiguration
paper_content:
Modular, self-reconfigurable robots are robots that can change their own shape by physically rearranging the modules from which they are built. Self-reconfiguration can be controlled by e.g. an off-line planner, but numerous implementation issues hamper the actual self-reconfiguration process: the continuous evolution of the communication topology increases the risk of communications failure, generating code that correctly controls the self-reconfiguration process is non-trivial, and hand-tuning the self-reconfiguration process is tedious and error-prone. To address these issues, we have developed a distributed scripting language that controls self-reconfiguration of the ATRON robot using a robust communication scheme that relies on local broadcast of shared state. This language can be used as the target of a planner, offers direct support for parallelization of independent operations while maintaining correct sequentiality of dependent operations, and compiles to a robust and efficient implementation. Moreover, a novel feature of this language is its reversibility: once a self-reconfiguration sequence is described the reverse sequence is automatically available to the programmer, significantly reducing the amount of work needed to deploy self-reconfiguration in larger scenarios. We demonstrate our approach with long-running (reversible) self-reconfiguration experiments using the ATRON robot and a reversible self-reconfiguration experiment using simulated MTRAN modules.
---
paper_title: Programmable parts: a demonstration of the grammatical approach to self-organization
paper_content:
In this paper, we introduce a robotic implementation of the theory of graph grammars (Klavins et al., 2005), which we use to model and direct self-organization in a formal, predictable and provably-correct fashion. The robots, which we call programmable parts, float passively on an air table and bind to each other upon random collisions. Once attached, they execute local rules that determine how their internal states change and whether they should remain bound. We demonstrate through experiments how they can self-organize into a global structure by executing a common graph grammar in a completely distributed fashion. The system also presents a challenge to the grammatical method (and to distributed systems approaches in general) due to the stochastic nature of its dynamics. We conclude by discussing these challenges and our initial approach to addressing them.
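The rule-driven binding behaviour described above can be mimicked with a small rewrite table; the states, rules and collision loop in this Python sketch are invented for illustration and are not the grammar run on the actual programmable parts.

    import random

    # Toy graph-grammar rule set: a collision between parts in states (a, b)
    # yields new states and a flag saying whether the bond is kept.
    RULES = {
        ("free", "free"): ("left", "right", True),   # two free parts pair up
        ("free", "left"): ("free", "left", False),   # pairing rejected
        ("left", "free"): ("left", "free", False),
    }

    class Part:
        def __init__(self, pid):
            self.pid, self.state, self.bonds = pid, "free", set()

    def collide(a, b):
        # Apply the local rule for this pair of states, if one exists;
        # otherwise the collision has no effect and the parts drift apart.
        rule = RULES.get((a.state, b.state))
        if rule is None:
            return
        a.state, b.state, keep = rule
        if keep:
            a.bonds.add(b.pid)
            b.bonds.add(a.pid)

    if __name__ == "__main__":
        random.seed(1)
        parts = [Part(i) for i in range(6)]
        for _ in range(40):              # random pairwise collisions on the "air table"
            collide(*random.sample(parts, 2))
        for p in parts:
            print(p.pid, p.state, sorted(p.bonds))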
---
paper_title: Miche: Modular Shape Formation by Self-Disassembly
paper_content:
We describe the design, implementation, and experimentation with a collection of robots that, starting from an amorphous arrangement, can be assembled into arbitrary shapes and then commanded to self-disassemble in an organized manner. Each of the 28 modules in the system is implemented as a 1.8-inch autonomous cube-shaped robot able to connect to and communicate with its immediate neighbors. Two cooperating microprocessors control each module's magnetic connection mechanisms and infrared communication interfaces. When assembled into a structure, the modules form a system that can be virtually sculpted using a computer interface. We report on the hardware design and experiments from hundreds of trials.
---
paper_title: Robot pebbles: One centimeter modules for programmable matter through self-disassembly
paper_content:
This paper describes the design, fabrication, and experimental results of a programmable matter system capable of 2D shape formation through subtraction. The system is composed of autonomous 1cm modules which use custom-designed electropermanent magnets to bond, communicate, and share power with their neighbors. Given an initial block composed of many of these modules latched together in a regular crystalline structure, our system is able to form shapes by detaching the unnecessary modules. Many experiments show that the modules in our system are able to distribute data at 9600bps to their neighbors with a 98.5% success rate after four retries, and the connectors are able to support over 85 times the weight of a single module.
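A much-simplified, centralized version of this shape-formation-by-subtraction idea is sketched below: every module outside the target shape is detached from a solid block, and a flood fill checks that the kept modules remain one connected piece. The 5 x 5 block and triangular target are invented examples, and the real system performs this with distributed messaging rather than a single loop.

    from collections import deque

    def carve(block, target, root):
        # Detach every module outside the target, then verify by flood fill
        # from a root module that the remaining shape stays connected.
        keep, detach = block & target, block - target
        seen, frontier = {root}, deque([root])
        while frontier:
            x, y = frontier.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in keep and n not in seen:
                    seen.add(n)
                    frontier.append(n)
        return keep, detach, seen == keep

    if __name__ == "__main__":
        block = {(x, y) for x in range(5) for y in range(5)}     # 5 x 5 block of modules
        target = {(x, y) for (x, y) in block if x >= y}          # triangular target shape
        keep, detach, connected = carve(block, target, root=(0, 0))
        print(len(keep), "modules kept,", len(detach), "detached, connected:", connected)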
---
paper_title: Self-organizing collective robots with morphogenesis in a vertical plane
paper_content:
This paper presents a novel concept of self-organizing collective robots with morphogenesis in a vertical plane. It is potentially applicable to autonomous mobile robots. For physical reconfiguration of a swarm of robots against gravity, new types of mechanisms and control strategies are proposed and demonstrated. Prototype robots have been fabricated in order to confirm the basic feasibility of the mechanisms. Each robot is composed of a body and a pair of arms. The body can be regarded as a cube with edge length of 90 mm, and is equipped with permanent magnets for bonding with another robot. The arms change the bonding configuration by rotating and sliding motions. As for the control strategies, we proposed the algorithms which can locally generate specific global formations of robots, with minimum interactions between neighboring robots. The overall scheme is similar to cellular automata. The control algorithms proposed have been tested by simulations.
---
paper_title: Self-assembling machine
paper_content:
The design of a machine which is composed of homogeneous mechanical units is described. We show the design of both the hardware and the control software of the unit. Each unit can connect with other units and change the connection by itself. In spite of its simple mechanism, a set of these units realizes various mechanical functions. We developed the control software of the unit which realizes "self-assembly," one of the basic functions of this machine. A set of these units can form a given shape of the whole system by themselves. The units exchange information about local geometric relations through communication, and cooperate to form the whole shape through a diffusion-like process. There is no upper-level controller to supervise these units, and the software of each unit is completely the same. Three actual units have been built to test the basic movements, and the function of self-assembly has been verified by computer simulation.
---
paper_title: Three dimensional stochastic reconfiguration of modular robots
paper_content:
Here we introduce one simulated and two physical three-dimensional stochastic modular robot systems, all capable of self-assembly and self-reconfiguration. We assume that individual units can only draw power when attached to the growing structure, and have no means of actuation. Instead they are subject to random motion induced by the surrounding medium when unattached. We present a simulation environment with a flexible scripting language that allows for parallel and serial selfassembly and self-reconfiguration processes. We explore factors that govern the rate of assembly and reconfiguration, and show that self-reconfiguration can be exploited to accelerate the assembly of a particular shape, as compared with static self-assembly. We then demonstrate the ability of two different physical three-dimensional stochastic modular robot systems to self-reconfigure in a fluid. The second physical implementation is only composed of technologies that could be scaled down to achieve stochastic self-assembly and self-reconfiguration at the microscale.
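The following Python fragment is a minimal model of the stochastic assembly process described above: unattached modules drift at random and latch on when they land next to the growing structure. The grid size, module count and drift rule are arbitrary choices for illustration, not parameters of the physical systems.

    import random

    def step(free, structure, size):
        # One time step: every unattached module drifts to a random neighbouring
        # cell (torus wrap-around) and attaches if it ends up next to the structure.
        moved = set()
        for (x, y) in free:
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            nx, ny = (x + dx) % size, (y + dy) % size
            if (nx, ny) in structure or (nx, ny) in moved:
                nx, ny = x, y                          # blocked: stay put
            if any(((nx + a) % size, (ny + b) % size) in structure
                   for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                structure.add((nx, ny))                # latch onto the structure
            else:
                moved.add((nx, ny))
        return moved, structure

    if __name__ == "__main__":
        random.seed(0)
        size = 20
        structure = {(size // 2, size // 2)}           # powered seed module
        free = {(random.randrange(size), random.randrange(size)) for _ in range(30)}
        free -= structure
        steps = 0
        while free and steps < 2000:
            free, structure = step(free, structure, size)
            steps += 1
        print("assembled", len(structure), "modules after", steps, "steps")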
---
paper_title: Telecubes: mechanical design of a module for self-reconfigurable robotics
paper_content:
A Telecube is a cubic module that has six prismatic degrees of freedom whose sides can expand more than twice its original length and has the ability to magnetically (de)attach to other modules. Many of these modules can be connected together to form a modular self-reconfigurable robot. The paper presents the intended functions, discusses the physical requirements of the modules and describes two key mechanical components: a compact telescoping linear actuator and a switching permanent magnet device.
---
paper_title: Millibot Trains for Enhanced Mobility
paper_content:
The objective of this work is to enhance the mobility of small mobile robots by enabling them to link into a train configuration capable of crossing relatively large obstacles. In particular, we are building on Millibots, semiautonomous, tracked mobile sensing/communication platforms at the 5-cm scale previously developed at Carnegie Mellon University. The Millibot Train concept provides couplers that allow the Millibot modules to engage/disengage under computer control and joint actuators that allow lifting of one module by another and control of the whole train shape in two dimensions. A manually configurable train prototype demonstrated the ability to climb standard stairs and vertical steps nearly half the train length. A fully functional module with powered joints has been developed and several have been built and tested. Construction of a set of six modules is well underway and will allow testing of the complete train in the near future. This paper focuses on the development, design, and construction of the electromechanical hardware for the Millibot Train.
---
paper_title: Hormone-based control for self-reconfigurable robots
paper_content:
Self-reconfigurable or metamorphic robots can change their individual and collective shape and size to meet operational demands. Since these robots are constructed from a set of autonomous and connectable modules (or agents), controlling them is a challenging task. The difficulties stem from the facts that all locomotion, perception, and decision making must be distributed among a network of modules, that this network has a dynamic topology, that each individual module has only limited resources, and that the coordination between modules is highly complex and diverse. To meet these challenges, this paper presents a distributed control mechanism inspired by the concept of hormones in biological systems. We view hormones as special messages that can trigger different actions in different modules, and we exploit such properties to coordinate motions and perform reconfiguration in the context of limited communications and dynamic network topologies. The paper develops a primitive theory of hormone-based control, reports the experimental results of applying such a control mechanism to our CONRO metamorphic robots, and discusses the generality of the approach for a larger class of distributed autonomous systems.
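A toy rendering of the hormone idea is given below: a single message propagates down a chain of modules and each module reacts according to its local topology (head, body or tail). The hormone name and the actions are invented here and are not the actual CONRO hormone set.

    class Module:
        def __init__(self, name):
            self.name, self.front, self.back = name, None, None

        def topology(self):
            # Local topology is the only information a module needs to interpret
            # a hormone; there is no global identifier or central controller.
            if self.front is None:
                return "head"
            return "tail" if self.back is None else "body"

        def receive(self, hormone, phase):
            # The same hormone triggers different actions in different modules.
            actions = {
                ("caterpillar", "head"): "lift front segment",
                ("caterpillar", "body"): f"bend joint with phase {phase}",
                ("caterpillar", "tail"): "push off",
            }
            print(self.name, "->", actions[(hormone, self.topology())])
            if self.back is not None:
                self.back.receive(hormone, phase + 1)   # relay the hormone down the chain

    if __name__ == "__main__":
        chain = [Module(f"m{i}") for i in range(5)]
        for a, b in zip(chain, chain[1:]):
            a.back, b.front = b, a
        chain[0].receive("caterpillar", phase=0)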
---
paper_title: Method of Autonomous Approach, Docking and Detaching Between Cells for Dynamically Reconfigurable Robotic System CEBOT
paper_content:
The concept of the new dynamically reconfigurable robotic system (DRRS) and its realization as the cell-structured distributed robotic system CEBOT are discussed in this paper. CEBOT can reconfigure itself into an optimal structure depending on purpose and environment. These robotic systems are very advantageous for space robotics, since the available resources are limited in space. This robotic system is made possible by the use of separable self-controlled units called "cells"; thus the robotic system is called a cell-structured robot (CEBOT). CEBOT has unique features such as a dynamically reconfigurable structure, on-line task adaptability, fault tolerance, etc. With simple elementary cells clustered into large structures or modules, complicated tasks can be executed (this concept is found in biological organisms). We propose this concept, the control and sensor system structure, and automatic approach control. The efficiency of CEBOT is shown by the results of automatic approaching, connecting and separating experiments.
---
paper_title: M3Express: A low-cost independently-mobile reconfigurable modular robot
paper_content:
This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.
---
paper_title: Study on three-dimensional active cord mechanism: development of ACM-R2
paper_content:
This paper describes the development of ACM-R2, a new version of the active cord mechanism with 3D mobility. ACM-R2 incorporates a driving mechanism called the "M-drive", which gives it a high output/mass ratio and torque-limiting joints that work as normal joints but deform under excessive torque. A new type of torque sensor, the "float differential torque sensor", is installed on each joint. Additionally, ACM-R2 has a self-contained structure, with the control computer and battery housed in the joint units. These driving mechanisms and the self-contained structure enable ACM-R2 to demonstrate new propulsion methods and motions that combine the abilities of a manipulator and a locomotor.
---
paper_title: Hormone-inspired adaptive communication and distributed control for conro self-reconfigurable robots
paper_content:
This paper presents a biologically inspired approach to two basic problems in modular self-reconfigurable robots: adaptive communication in self-reconfigurable and dynamic networks, and distributed collaboration between the physically coupled modules to accomplish global effects such as locomotion and reconfiguration. Inspired by the biological concept of hormones, the paper develops the adaptive communication (AC) protocol, which enables modules to continuously discover changes in their local topology, and the adaptive distributed control (ADC) protocol, which allows modules to use hormone-like messages to coordinate their actions and accomplish locomotion and self-reconfiguration. These protocols are implemented and evaluated, and experiments on the CONRO self-reconfigurable robot and in a Newtonian simulation environment have shown that the protocols are robust and scalable when configurations change dynamically and unexpectedly, and that they can support online reconfiguration, module-level behavior shifting, and locomotion. The paper also discusses the implications of the hormone-inspired approach for distributed multi-robot and self-reconfigurable systems in general.
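In the spirit of the AC protocol summarized above, the sketch below has a module probe its connectors every cycle, compare the result with the previously known local topology, and report any change. The connector names, the probe stand-in and the two-module scenario are invented for illustration.

    CONNECTORS = ("front", "back", "left", "right")

    def probe(world, me, connector):
        # Stand-in for sending a probe message out of one connector and waiting
        # for a reply; returns the neighbouring module id or None.
        return world.get((me, connector))

    def discovery_cycle(me, known, world):
        # Compare the freshly probed topology with the last known one and
        # report every connector whose neighbour has changed.
        current = {c: probe(world, me, c) for c in CONNECTORS}
        for c in CONNECTORS:
            if current[c] != known.get(c):
                print(f"module {me}: {c} changed {known.get(c)} -> {current[c]}")
        return current

    if __name__ == "__main__":
        world = {(0, "back"): 1, (1, "front"): 0}     # modules 0 and 1 are docked
        known = discovery_cycle(0, {}, world)
        world[(0, "left")] = 2                        # a third module docks later
        known = discovery_cycle(0, known, world)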
---
paper_title: Docking among independent and autonomous CONRO self-reconfigurable robots
paper_content:
Docking between independent groups of self-reconfigurable robotic modules enables the merger of two or more independent self-reconfigurable robots. This ability allows independent reconfigurable robots in the same environment to join together to complete a task that would otherwise not be possible for the individual robots prior to merging. The challenges of this task include: (1) coordinating and aligning two independent self-reconfigurable robots using the docking guidance system available only at the connectors of the docking modules; (2) overcoming the inevitable alignment errors through novel, coordinated movements from both docking ends; (3) ensuring a secure connection at the end of docking; and (4) switching configuration and letting the modules discover the changes and new connections so that the two docked robots move as a single coherent robot. We have developed methods to overcome these challenging problems and accomplished, for the first time, an actual docking between two independent CONRO robots, each with multiple modules.
---
paper_title: Cooperation through self-assembly in multi-robot systems
paper_content:
This paper illustrates the methods and results of two sets of experiments in which a group of mobile robots, called s-bots, are required to physically connect to each other, i.e., to self-assemble, to cope with environmental conditions that prevent them from carrying out their task individually. The first set of experiments is a pioneering study of the utility of self-assembling robots for relatively complex scenarios, such as cooperative object transport. The results of our work suggest that the s-bots possess hardware characteristics which facilitate the design of control mechanisms for autonomous self-assembly. The second set of experiments is an attempt to integrate into the behavioural repertoire of an s-bot the decision-making mechanisms that allow the robot to autonomously decide whether or not environmental contingencies require self-assembly. The results show that it is possible to synthesise, using evolutionary computation techniques, artificial neural networks that integrate both the sensory-motor coordination and the decision-making mechanisms required by the robots in the context of self-assembly.
---
paper_title: Cooperative Hole Avoidance in a Swarm-bot
paper_content:
In this paper, we study coordinated motion in a swarm robotic system, called a swarm-bot. A swarm-bot is a self-assembling and self-organising artifact, composed of a swarm of s-bots, mobile robots with the ability to connect to and disconnect from each other. The swarm-bot concept is particularly suited for tasks that require all-terrain navigation abilities, such as space exploration or rescue in collapsed buildings. As a first step toward the development of more complex control strategies, we investigate the case in which a swarm-bot has to explore an arena while avoiding falling into holes. In such a scenario, individual s-bots have sensory-motor limitations that prevent them from navigating efficiently and that can be overcome by exploiting the physical connections and the cooperation among the s-bots. In order to synthesise the s-bots' controller, we rely on artificial evolution, which we show to be a powerful tool for the production of simple and effective solutions to the hole avoidance task.
---
paper_title: Ultrasonic based autonomous docking on plane for mobile robot
paper_content:
Autonomous docking between independent robotic modules is critical for self-reconfigurable robots. This ability allows independent reconfigurable robots in the same environment to join together to complete a task that would otherwise not be possible for the individual robots prior to merging. This paper presents an easy and inexpensive implementation of such a system using only one ultrasonic transmitter and one receiver on each of two docking modules of JL-1. The sensitivity of the ultrasonic signal to the distance and angle between transmitter and receiver is exploited to align the two robots. Experiments have indicated that alternating active rotation with ultrasonic-based guidance can align the two robots in one line, ready for docking. The method proposed in this paper is general and can be applied to other planar docking processes.
---
paper_title: Autonomous Self-Assembly in Swarm-Bots
paper_content:
In this paper, we discuss the self-assembling capabilities of the swarm-bot, a distributed robotics concept that lies at the intersection between collective and self-reconfigurable robotics. A swarm-bot is comprised of autonomous mobile robots called s-bots. S-bots can either act independently or self-assemble into a swarm-bot by using their grippers. We report on experiments in which we study the process that leads a group of s-bots to self-assemble. In particular, we present results of experiments in which we vary the number of s-bots (up to 16 physical robots), their starting configurations, and the properties of the terrain on which self-assembly takes place. In view of the very successful experimental results, swarm-bot qualifies as the current state of the art in autonomous self-assembly
---
paper_title: JL-2: A Mobile Multi-robot System with Docking and Manipulating Capabilities
paper_content:
This paper presents a new version of the JL series reconfigurable multi-robot system called JL-2. By virtue of the docking manipulator composed of a parallel mechanism and a cam gripper, every mobile robot in the JL-2 system is able to not only perform tasks in parallel, e.g. moving and grasping, but also dock with each other even if there are large misalignments between two robots. A motorized spherical joint is formed between two docked robots to enhance the locomotion capability of JL-2. To fulfill the demands of reconfiguration, a distributed control system and sonar based docking guidance system are designed for the JL-2 prototype. Based on the above design, the JL-2 prototype has been built and successfully demonstrated to confirm the validity and functionality of the proposed capabilities.
---
paper_title: Swarm-Bot: A New Distributed Robotic Concept
paper_content:
The swarm intelligence paradigm has proven to have very interesting properties such as robustness, flexibility and ability to solve complex problems exploiting parallelism and self-organization. Several robotics implementations of this paradigm confirm that these properties can be exploited for the control of a population of physically independent mobile robots. ::: ::: The work presented here introduces a new robotic concept called swarm-bot in which the collective interaction exploited by the swarm intelligence mechanism goes beyond the control layer and is extended to the physical level. This implies the addition of new mechanical functionalities on the single robot, together with new electronics and software to manage it. These new functionalities, even if not directly related to mobility and navigation, allow to address complex mobile robotics problems, such as extreme all-terrain exploration. ::: ::: The work shows also how this new concept is investigated using a simulation tool (swarmbot3d) specifically developed for quickly designing and evaluating new control algorithms. Experimental work shows how the simulated detailed representation of one s-bot has been calibrated to match the behaviour of the real robot.
---
paper_title: Performance benefits of self-assembly in a swarm-bot
paper_content:
Mobile robots are said to be capable of self- assembly when they can autonomously form physical connections with each other. Despite the recent proliferation of self- assembling systems, little work has been done on using self- assembly to add functional value to a robotic system, and even less on quantifying the contribution of self-assembly to system performance. In this study we demonstrate and quantify the performance benefits of i) acting as a physically larger self-assembled entity, ii) using self-assembly adaptively and iii) making the robots morphologically aware (the self-assembled robots leverage their new connected morphology in a task specific way). In our experiments, two real robots must navigate to a target over a-priori unknown terrain. In some cases the terrain can only be overcome by a self-assembled connected entity. In other cases, the robots can reach the target faster by navigating individually.
---
paper_title: New locomotion gaits
paper_content:
This paper investigates new modes of robot land locomotion, in particular statically stable non-wheeled, non-tracked locomotion. These locomotion gaits are accomplished by a reconfigurable modular robot called Polypod using a control scheme combining a small number of primitive control modes for each module. The design of Polypod is first reviewed, then two- and three-dimensional locomotion gaits are described along with two "exotic" gaits. These gaits have been implemented on Polypod or simulated on a graphic workstation.
---
paper_title: Design of transmote: A modular self-reconfigurable robot with versatile transformation capabilities
paper_content:
This paper presents the design and implementation of a new modular self-reconfigurable robot, called Transmote. The proposed robot has several novel features that make it suitable for search and rescue operations in complex environments. A single module can move independently by coordinating three joints. Multiple modules can connect with each other to form versatile assembled robotic structures with more powerful locomotion capabilities. The assembled structures can also transform between each other or fully disassemble into individual modules again. The modules, which communicate with each other through ZigBee compliant protocols, can build emergency communication and monitoring networks when spread to the areas without communication infrastructures. Preliminary experimental results demonstrate locomotion, mechanical docking, and transformation of several example configurations.
---
paper_title: Swarm-bot: An experiment in swarm robotics
paper_content:
This paper provides an overview of the SWARM-BOTS project, a robotics project sponsored by the Future and Emerging Technologies program of the European Commission (IST-2000-31010). We describe the s-bot, a small autonomous robot with self-assembling capabilities that we designed and built within the project. Then we illustrate the cooperative object transport scenario that we chose to use as a test-bed for our robots. Last, we report on results of experiments in which a group of s-bots perform a variety of tasks within the scenario which may require self-assembling, physical cooperation and coordination.
---
paper_title: Object transport by modular robots that self-assemble
paper_content:
We present a first attempt to accomplish a simple object manipulation task using the self-reconfigurable robotic system swarm-bot. The number of modular entities involved, their global shape or size and their internal structure are not pre-determined, but result from a self-organized process in which the modules autonomously grasp each other and/or an object. The modules are autonomous in perception, control, action, and power. We present quantitative results, obtained with six physical modules, that confirm the utility of self-assembling robots in a concrete task
---
paper_title: Development of active cord mechanism ACM-R3 with agile 3D mobility
paper_content:
This paper describes the development of ACM-R3, a new version of the active cord mechanism with three-dimensional mobility. ACM-R3 is equipped with large passive wheels that wrap around its whole body and give it frictional characteristics similar to snake skin. It is also equipped with radio-controlled, geared servomotors held tightly by shell frames so that it can move steadily and with high power. Each unit of ACM-R3 consists of a very simple structure, batteries, and the electronic circuits to control itself. Consequently, ACM-R3 can be built from as many units as desired, and it is suitable as a platform for snake-like robotics research. ACM-R3 can demonstrate new propulsion methods and motions, combining the abilities of a manipulator and a propulsion unit.
---
paper_title: PolyBot: A Modular Reconfigurable Robot
paper_content:
Modular, self-reconfigurable robots show the promise of great versatility, robustness and low cost. This paper presents examples and issues in realizing those promises. PolyBot is a modular, self-reconfigurable system that is being used to explore the hardware reality of a robot with a large number of interchangeable modules. Three generations of PolyBot have been built over the last three years which include ever increasing levels of functionality and integration. PolyBot has shown versatility, by demonstrating locomotion over a variety of terrain and manipulating a variety of objects. PolyBot is the first robot to demonstrate sequentially two topologically distinct locomotion modes by self-reconfiguration. PolyBot has raised issues regarding software scalability and hardware dependency and as the design evolves the issues of low cost and robustness are being addressed while exploring the potential of modular, selfreconfigurable robots.
---
paper_title: AMOEBA-I: A Shape-Shifting Modular Robot for Urban Search and Rescue
paper_content:
This work intends to enhance the mobility and flexibility of a tracked mobile robot through changing its shape in unstructured environments. A shape-shifting mobile robot, AMOEBA-I, has been developed. With three tracked modules, AMOEBA-I has nine locomotion configurations and three of them are symmetrical configurations. The key advantage of this design over other mobile robots is its adaptability and flexibility because of its various configurations. It can change its configuration fluently and automatically to adapt to different environments or missions. A modularized structure of the control system is proposed and designed for AMOEBA-I to improve the fault tolerance and substitutability of the system. The strategies of cooperative control, including cooperative shape shifting, cooperative turning and cooperative obstacle negotiation, have been proposed to improve the capability of shape shifting, locomotion and obstacle negotiation for AMOEBA-I. A series of experiments have been carried out, and demon...
---
paper_title: Development of a genderless and fail-safe connection system for autonomous modular robots
paper_content:
One of the most challenging tasks in developing the hardware of modular robots is to design a reliable and flexible connection system. This paper presents the design and implementation of a GENderless and FAil-safe (GENFA) connection system for autonomous modular robots that possess self-assembly, self-reconfiguration and self-healing capabilities. By using the GENFA connector, a group of autonomous modular robots can assemble into a single robotic organism with robust mechanical and electrical connections, and a single robotic organism can disassemble into a set of disconnected units. In this paper, the detailed design of the connection system is presented. Then, its mechanical strength and misalignment tolerance are studied theoretically. Experimental tests conducted with miniature mobile robots show that the GENFA connection system is power efficient, fail-safe, and tolerant of limited misalignment.
---
paper_title: Design and implementation of UBot: A modular Self-Reconfigurable Robot
paper_content:
The design and implementation of a novel modular Self-Reconfigurable Robot (SRR) called UBot is reviewed in this paper. First, the philosophy of the hardware design is presented. The module is designed with criteria such as a cubic shape, homogeneity, and strong connections to fulfill the requirements of complex three-dimensional reconfiguration and locomotion. Each robotic module has two degrees of freedom and four connecting surfaces with a hook-type connecting mechanism. A group of modules can transform between different configurations by changing their local connections, achieve complicated modes of motion, and accomplish a large variety of tasks. Second, a 3D dynamics simulator for the UBot SRR is developed, in which robot locomotion and reconfiguration can be simulated. Evolution of a worm-like robot is performed, yielding a variety of high-performance locomotion patterns. Finally, experiments are performed on autonomous docking, multi-mode locomotion and self-reconfiguration. The validity of the docking method, CPG-network control and reconfiguration planning method is verified through locomotion and transformation tests of configurations such as snake-type, quadruped walking-type, omni-directional cross-type and loop-type.
---
paper_title: Roombots—Towards decentralized reconfiguration with self-reconfiguring modular robotic metamodules
paper_content:
This paper presents our work towards a decentralized reconfiguration strategy for self-reconfiguring modular robots, assembling furniture-like structures from Roombots (RB) metamodules. We explore how reconfiguration by locomotion from a configuration A to a configuration B can be controlled in a distributed fashion. This is done using Roombots metamodules—two Roombots modules connected serially—that use broadcast signals, lookup tables of their movement space, assumptions about their neighborhood, and connections to a structured surface to collectively build desired structures without the need of a centralized planner.
---
paper_title: M-TRAN: self-reconfigurable modular robotic system
paper_content:
In this paper, a novel robotic system called modular transformer (M-TRAN) is proposed. M-TRAN is a distributed, self-reconfigurable system composed of homogeneous robotic modules. The system can change its configuration by changing each module's position and connection. Each module is equipped with an onboard microprocessor, actuators, intermodule communication/power transmission devices and intermodule connection mechanisms. The special design of M-TRAN module realizes both reliable and quick self-reconfiguration and versatile robotic motion. For instance, M-TRAN is able to metamorphose into robotic configurations such as a legged machine and hereby generate coordinated walking motion without any human intervention. An actual system with ten modules was built and basic operations of self-reconfiguration and motion generation were examined through experiments. A series of software programs has also been developed to drive M-TRAN hardware, including a simulator of M-TRAN kinematics, a user interface to design appropriate configurations and motion sequences for given tasks, and an automatic motion planner for a regular cluster of M-TRAN modules. These software programs are integrated into the M-TRAN system supervised by a host computer. Several demonstrations have proven its capability as a self-reconfigurable robot.
---
paper_title: Multimode locomotion via SuperBot robots
paper_content:
This paper presents a modular and reconfigurable robot for multiple locomotion modes based on reconfigurable modules. Each mode is characterized by the environment type, speed, turning ability, energy efficiency, and ability to recover from failures. The paper demonstrates this solution with the SuperBot robot, which combines advantages from M-TRAN, CONRO and others. Experimental results, both on real robots and in simulation, have shown the validity of the approach and demonstrated forward, backward, turning, sidewinding, and maneuvering movements, as well as travel on batteries for up to 500 meters on flat terrain. In physics-based simulation, SuperBot can perform as a snake, caterpillar, insect, spider, rolling track, H-walker, etc., move 1.0 meter/second on flat terrain with less than 6 W/module, and climb slopes of no less than 40 degrees.
---
paper_title: ORTHO-BOT : A Modular Reconfigurable Space Robot Concept
paper_content:
A new set of challenging tasks is envisaged for future robotic planetary space missions. In contrast to conventional exploration rovers, industrial robotic roles are required for object manipulation and transportation, e.g. in habitat construction. This prompts research into more robust, fail-safe robot designs with greater mission redundancy for cost-effectiveness and adjustable structures for multi-tasking. A modular reconfigurable design is investigated to meet these requirements using linear rather than revolute actuation, since this alternative approach to modular robotics can form truss-type structures that are inherently stable and appropriate to the given task type. For ease of reconfiguration, a connectivity solution is sought that may be simple enough to allow self-reconfiguration, thus enabling extremely remote autonomous operation. In an effort to meet this challenge, the ORTHO-BOT developmental concept is introduced in this paper. Based on the core module developed thus far, a walking design has been successfully demonstrated in simulation to fulfil the key requirement of locomotion. Though the focus of this research is on space-based roles, the conceptual solutions developed should also find useful application in remote or hazardous terrestrial environments.
---
paper_title: Design of iMobot, an intelligent reconfigurable mobile robot with novel locomotion
paper_content:
The design and novel features of a reconfigurable modular robot, called iMobot, with four controllable degrees of freedom is presented in this paper. iMobot, which is designed for search and rescue operations as well as other applications such as research and teaching, has versatile locomotion, including a unique feature of driving as though with wheels and lifting itself into a camera platform. Future work is envisioned for using these modules in clusters to achieve advanced mobility. The accompanying video demonstrates the various locomotion of the modular robot.
---
paper_title: Tetrobot: a modular system for hyper-redundant parallel robotics
paper_content:
A modular system for the design, implementation, and control of a class of highly redundant parallel robotic mechanisms is described. The Tetrobot system features a novel concentric multilink spherical (CMS) joint which allows an arbitrary number of links to share a common center of rotation. The CMS joint facilitates the construction of a wide variety of variable geometry truss mechanisms using an integrated control and computational framework. A distributed control algorithm is based on an iterative virtual force based solution which propagates the goal positions of target nodes to determine systematic adjustments of actuators. Capabilities of the Tetrobot system have been demonstrated by the construction of several configurations of up to 18 nodes, 48 links, and 15 actuators. Implementations of a double Stewart platform and a six-legged walker, constructed from the same set of parts, are described in this paper.
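One plausible reading of the iterative, virtual-force style adjustment mentioned above is sketched below: a target node is pulled toward its goal and every link is then repeatedly re-projected to its rest length so the adjustment propagates through the structure. The planar two-link chain is an invented example rather than the Tetrobot truss, and the relaxation scheme is only an approximation of the paper's algorithm.

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    def solve(nodes, links, target, goal, iters=300, step=0.1):
        rest = {(i, j): dist(nodes[i], nodes[j]) for i, j in links}
        for _ in range(iters):
            # Virtual force: pull the target node a little toward its goal.
            nodes[target] = lerp(nodes[target], goal, step)
            for (i, j) in links:                      # relax each link back to rest length
                d = dist(nodes[i], nodes[j])
                if d == 0:
                    continue
                corr = (d - rest[(i, j)]) / (2 * d)
                dx = (nodes[j][0] - nodes[i][0]) * corr
                dy = (nodes[j][1] - nodes[i][1]) * corr
                if i != 0:                            # node 0 is anchored to the base
                    nodes[i] = (nodes[i][0] + dx, nodes[i][1] + dy)
                if j != 0:
                    nodes[j] = (nodes[j][0] - dx, nodes[j][1] - dy)
        return nodes

    if __name__ == "__main__":
        nodes = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}   # anchored two-link chain
        links = [(0, 1), (1, 2)]
        out = solve(nodes, links, target=2, goal=(1.0, 1.5))
        print({k: (round(x, 2), round(y, 2)) for k, (x, y) in out.items()})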
---
paper_title: M-TRAN II: metamorphosis from a four-legged walker to a caterpillar
paper_content:
We have been developing a self-reconfigurable modular robotic system (M-TRAN) which can produce various 3-D configurations and motions. In the second prototype (M-TRAN II), various improvements are integrated in order to realize complicated reconfigurations and versatile whole-body motions. These include a reliable connection/detachment mechanism, on-board multi-computers, a high-speed inter-module communication system, low power consumption, precise motor control, etc. Programming environments are also integrated to design self-reconfiguration processes, to verify motions in dynamics simulation, and to realize distributed control on the hardware. The hardware design, developed software and experiments are presented in this paper.
---
paper_title: Self-Soldering Connectors for Modular Robots
paper_content:
The connection mechanism between neighboring modules is the most critical subsystem of each module in a modular robot. Here, we describe a strong, lightweight, and solid-state connection method based on heating a low melting point alloy to form reversible soldered connections. No external manipulation is required for forming or breaking connections between adjacent connectors, making this method suitable for reconfigurable systems such as self-reconfiguring modular robots. Energy is only consumed when switching connectivity, and the ability to transfer power and signal through the connector is inherent to the method. Soldering connectors have no moving parts, are orders of magnitude lighter than other connectors, and are readily mass manufacturable. The mechanical strength of the connector is measured as 173 N, which is enough to support many robot modules, and hundreds of connection cycles are performed before failure.
---
paper_title: Evolved and Designed Self-Reproducing Modular Robotics
paper_content:
Long-term physical survivability of most robotic systems today is achieved through durable hardware. In contrast, most biological systems are not made of robust materials; long-term sustainability and evolutionary adaptation in nature are provided through processes of self-repair and, ultimately, self-reproduction. Here we demonstrate a large space of possible robots capable of autonomous self-reproduction. These robots are composed of actuated modules equipped with electromagnets to selectively control the morphology of the robotic assembly. We show a variety of 2-D and 3-D machines from 3 to 2n modules, and two physical implementations that each achieves two generations of reproduction. We show both automatically generated and manually designed morphologies
---
paper_title: Emulating self-reconfigurable robots - design of the SMORES system
paper_content:
Self-reconfigurable robots are capable of changing their shape to suit a task. The design of one system called SMORES (Self-assembling MOdular Robot for Extreme Shape-shifting) is introduced. This system is capable of rearranging its modules in all three classes of reconfiguration; lattice style, chain style and mobile reconfiguration. This system is capable of emulating many of the other existing systems and promises to be a step towards a universal modular robot.
---
paper_title: A motion planning method for a self-reconfigurable modular robot
paper_content:
This paper addresses motion planning for a homogeneous modular robotic system. The modules have self-reconfiguration capability, so that a group of modules can construct a robotic structure. Motion planning for self-reconfiguration is computationally difficult because of the many combinatorial possibilities of module configuration and the restricted degrees of freedom of each module (only two rotation axes per module). We show a motion planning method for a class of multimodule structures. It is based on global planning and local motion scheme selection, which is effective for solving this complicated planning problem.
---
paper_title: SUPERBOT: A Deployable, Multi-Functional, and Modular Self-Reconfigurable Robotic System
paper_content:
Self-reconfigurable robots are modular robots that can autonomously change their shape and size to meet specific operational demands. Recently, there has been a great interest in using self-reconfigurable robots in applications such as reconnaissance, rescue missions, and space applications. Designing and controlling self-reconfigurable robots is a difficult task. Hence, the research has primarily been focused on developing systems that can function in a controlled environment. This paper presents a novel self-reconfigurable robotic system called SuperBot, which addresses the challenges of building and controlling deployable self-reconfigurable robots. Six prototype modules have been built and preliminary experimental results demonstrate that SuperBot is a flexible and powerful system that can be used in challenging real-world applications.
---
paper_title: Roombots-mechanical design of self-reconfiguring modular robots for adaptive furniture
paper_content:
We aim at merging technologies from information technology, roomware, and robotics in order to design adaptive and intelligent furniture. This paper presents design principles for our modular robots, called Roombots, as future building blocks for furniture that moves and self-reconfigures. The reconfiguration is done using dynamic connection and disconnection of modules and rotations of the degrees of freedom. We are furthermore interested in applying Roombots towards adaptive behaviour, such as online learning of locomotion patterns. To create coordinated and efficient gait patterns, we use a Central Pattern Generator (CPG) approach, which can easily be optimized by any gradient-free optimization algorithm. To provide a hardware framework, we present the mechanical design of the Roombots modules and an active connection mechanism based on physical latches. Further, we discuss the application of our Roombots modules as pieces of a homogeneous or heterogeneous mix of building blocks for static structures.
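A minimal coupled-oscillator CPG of the kind the abstract alludes to is sketched below: each joint carries a phase oscillator coupled to its chain neighbours, and its setpoint is a sine of the phase. The frequency, coupling strength, phase lag and four-joint chain are arbitrary illustration values rather than tuned Roombots parameters.

    import math

    def cpg_step(phases, dt, freq, coupling, lag):
        # Advance every oscillator: intrinsic frequency plus nearest-neighbour
        # coupling with a phase lag, which produces a travelling wave along the chain.
        n = len(phases)
        new = list(phases)
        for i in range(n):
            dphi = 2 * math.pi * freq
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    dphi += coupling * math.sin(phases[j] - phases[i] - lag * (i - j))
            new[i] = phases[i] + dphi * dt
        return new

    if __name__ == "__main__":
        phases = [0.0, 0.1, 0.2, 0.3]                 # four coupled joint oscillators
        for _ in range(1000):
            phases = cpg_step(phases, dt=0.01, freq=1.0, coupling=4.0, lag=math.pi / 4)
        setpoints = [30.0 * math.sin(p) for p in phases]   # joint angles in degrees
        print([round(s, 1) for s in setpoints])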
---
paper_title: The UBot modules for self-reconfigurable robot
paper_content:
The design philosophy of the UBot module is proposed in this paper, and a novel modular self-reconfigurable robot called UBot is presented. The UBot module is compact, strong, flexible, and capable of performing efficient locomotion, self-reconfiguration and manipulation tasks. This robot consists of several standard modules. Each module is a cubic structure based on a universal joint and has four connecting surfaces that can connect to or disconnect from adjacent modules. A hook-type connecting mechanism is designed, which can connect to or disconnect from adjacent modules quickly and reliably. The mechanism is self-locking once connected and energy-saving. Wireless communication technology is employed in the module, which avoids cable winding and improves the flexibility of locomotion and self-reconfiguration. A simple orientation-detection system is designed, which can detect four possible orientations using metal contact points. A group of UBot modules can adapt their configuration and function to a changing environment without external help by changing their connections and positions. To achieve a small overall size and mass, compact mechanical structures and electrical systems are adopted in the module design. Finally, experiments on the connecting mechanism and locomotion of UBot have been carried out.
---
paper_title: Towards robotic self-reassembly after explosion
paper_content:
This paper introduces a new challenge problem: designing robotic systems to recover after disassembly from high-energy events and a first implemented solution of a simplified problem. It uses vision-based localization for self- reassembly. The control architecture for the various states of the robot, from fully-assembled to the modes for sequential docking, are explained and inter-module communication details for the robotic system are described.
---
paper_title: TETROBOT: a modular approach to parallel robotics
paper_content:
The TETROBOT is an actuated robotic structure which may be reassembled into many different configurations while still being controlled by the same hardware and software architecture. The TETROBOT system addresses the needs of application domains, such as space, undersea, mining, and construction, where adaptation to unstructured and changing environments and custom design for rapid implementation are required.
---
paper_title: A Modular Self-Reconfigurable Robot with Enhanced Locomotion Performances: Design, Modeling, Simulations, and Experiments
paper_content:
This paper presents the design and implementation of a modular self-reconfigurable robot with enhanced locomotion capabilities. It is a small hexahedral robot, 160 mm × 140 mm × 60 mm in size and 405 g in weight. The robot is driven by three omnidirectional wheels and has a vertically symmetrical structure. The robot can perform rectilinear and rotational locomotion, and can turn clockwise and counterclockwise without limitation. A new docking mechanism that combines the advantages of falcula and pin-hole designs has been developed for attaching and detaching modules. Communication and image data transmission are based on a wireless network. The kinematics and dynamics of a single module have been analyzed, and the enhanced locomotion capabilities of the prototype robot are verified through experiments. The maximum linear velocity is 25.1 cm/s, which is much faster than that of other modular self-reconfigurable robots. The mobility of two connected modules is analyzed in the ADAMS simulator; the locomotion of the docked modules is more flexible. Simulations of the wheeled and crawling locomotion are conducted, the trajectories of the robot are shown, and the movement efficiency is analyzed. The docking mechanisms are tested through docking experiments, and their effectiveness has been verified. When the transmission interval between adjacent packets is more than 4 ms, the wireless network does not lose any packets at the maximum effective distance of 37 m in indoor environments.
---
paper_title: Soldercubes: a self-soldering self-reconfiguring modular robot system
paper_content:
Soldercubes are a self-reconfiguring modular robot (MR) system whose modules are light weight, low cost, and designed with manufacturability for large batch production in mind. The frequently cited promises of modular robotics--versatility, robustness, and low cost--assume the availability of large numbers of modules. However, modules in most MR prototypes are large, mechanically complex, expensive, and difficult to manufacture. Soldercubes partially overcome this contradiction through optimizing some components for volume manufacturing processes. With the integration of a soldering connector which weighs only 2 g and has no moving parts, Soldercubes are among the cheapest, lightest and smallest among comparable self-reconfiguring MR systems. This paper describes the Soldercube module design in detail, reports on experiments in a lattice configuration, explores non-lattice applications of the system, and discusses the effects of utilising volume manufacturing processes in module production. All Soldercubes design files are released as open source hardware.
---
paper_title: Distributed Self-Reconfiguration of M-TRAN III Modular Robotic System
paper_content:
A new prototype of a self-reconfigurable modular robot, M-TRAN III, has been developed with an improved, fast and rigid connection mechanism. Using a distributed controller, various control modes are possible: single-master, globally synchronous control, or parallel asynchronous control. Self-reconfiguration experiments using up to 24 modules were undertaken under centralized or decentralized control. The experiments using decentralized control examined how a modular structure is moved in a given direction as a flow produced by local self-reconfigurations. In all experiments, system homogeneity and scalability were maintained: the modules used identical software except for their ID numbers, and identical self-reconfiguration was realized when different modules were used in the initial configurations.
---
paper_title: Programming reconfigurable modular robots
paper_content:
Highly reconfigurable modular robots face unique control and programming challenges due to the large scale of the robotic systems, high level of reconfigurability of the systems, and high number of controllable degrees of freedom in the system. Modular robot systems such as iMobot must face these challenges in novel ways. This paper presents a unified software framework which facilitates the programming, coordination, and cooperation among multiple modular reconfigurable robots. The framework consists of Ch, a C/C++ scripting environment; Mobile-C, a C/C++ mobile agent framework; and CiMobot, an object-oriented C++ class capable of controlling multiple robot modules simultaneously. Three experiments with iMobots were performed using the new software framework. First, an iMobot is controlled autonomously using the software framework. Next, an iMobot is controlled in a master/slave scenario using the same code-base. Finally, the robot is controlled by a mobile agent using the software framework. The robotic system functions correctly and similarly for each of the experimental scenarios.
---
paper_title: A Multi-Sensory Autonomous Docking Approach for a Self-Reconfigurable Robot without Mechanical Guidance
paper_content:
The most important feature of a Self-Reconfigurable Robot (SRR) is that it is reconfigurable and self-repairing. At the centre of these capabilities is autonomous docking. One difficulty for docking is the alignment between two robots. Current strategies overcome this by integrating a mechanical guiding device within the connecting mechanism. This increases the robustness of docking but compromises the flexibility of reconfiguration. In this paper, we present a new autonomous docking strategy that can overcome the drawbacks of current approaches. The new strategy uses a novel hook-type connecting mechanism and multi-sensory guidance. The hook-type connecting mechanism is strong and rigid for reliable physical connection between the modules. The multi-sensory docking strategy, which includes visual-sensor-guided rough positioning, Hall-sensor-guided fine positioning, and the locking between moving and target modules, guarantees robust docking without sacrificing reconfigurability. The proposed strategy is ...
---
paper_title: Morpho: A self-deformable modular robot inspired by cellular structure
paper_content:
We present a modular robot design inspired by the creation of complex structures and functions in biology via deformation. Our design is based on the Tensegrity model of cellular structure, where active filaments within the cell contract and expand to control individual cell shape, and sheets of such cells undergo large-scale shape change through the cooperative action of connected cells. Such deformations play a role in many processes, e.g. early embryo shape change and lamprey locomotion. Modular robotic systems that replicate the basic deformable multicellular structure have the potential to quickly generate large-scale shape change and create dynamic shapes to achieve different global functions. Based on this principle, our design includes four different modular components: (1) active links, (2) passive links, (3) surface membranes, and (4) interfacing cubes. In hardware implementation, we show several self-deformable structures that can be generated from these components, including a self-deformable surface, expandable cube, terrain-adaptive bridge [C.-H. Yu et al., 2007]. We present experiments to demonstrate that such robotic structures are able to perform real time deformation to adapt to different environments. In simulation, we show that these components can be configured into a variety of bio-inspired robots, such as an amoeba-like robot and a tissue-inspired material. We argue that self-deformation is well-suited for dynamic and sensing-adaptive shape change in modular robotics.
---
paper_title: A robotically reconfigurable truss
paper_content:
This paper addresses the design of passive robotically-reconfigurable truss structures and progress towards a robot capable of manipulating such structures. The elements are designed to be inserted and removed singly in “random access,” thus eliminating some assembly order constraints and enabling a physical realization of the construction process. The proposed robot is a “hinge” robot that can demonstrate manipulating said elements. The robot is also designed to be able to traverse arbitrary scale truss structures. With the addition of reconfiguration algorithms discussed by Lobo in [1], we suggest that such reconfigurable structures and robots could open the door to a machine metabolic process where structures are decomposed and recomposed autonomously to meet varying needs to a variety of applications from infrastructure recovery to space exploration.
---
paper_title: Factory floor: A robotically reconfigurable construction platform
paper_content:
Passive robotically-reconfigurable truss structures offer considerable utility as they can quickly adjust to changing functional requirements and resources at a level of sophistication that no human builder could match. Furthermore, robot built structures can be constructed in environments such as surface of Mars or in micro-gravity, which would otherwise be too time consuming or dangerous for humans. In this paper we discuss some of the mechanical design challenges of developing a passive robotically-reconfigurable truss system, and present the concept of the factory floor, which can construct truss-like structures without climbing on them. In the proposed system, each level is constructed on a ground plane using a truss and node configuration and is elevated to make room for the next level. This process is repeated to create 3D truss structures or reversed to decompose the structure for the next task.
---
paper_title: Shape-shifting materials for programmable structures
paper_content:
In this paper we discuss how the jamming phenomenon of granular materials might be exploited to achieve programmable matter – a robotic material that permits user specified shape changes. Opportunities for applying this research to the field of architecture are discussed. Experimental results are presented, which reveal the performance of a wide variety of materials under jamming conditions.
---
paper_title: Mechanical design of odin, an extendable heterogeneous deformable modular robot
paper_content:
Highly sophisticated animals consist of a set of heterogenous modules decided by nature so that they can survive in a complex environment. In this paper we present a new modular robot inspired by biology called Odin. The Odin robot is based on a deformable lattice and consists of an extendable set of heterogeneous modules. We present the design and implementation of a cubic closed-packed (CCP) joint module, a telescoping link, and a flexible connection mechanism. The developed robot is highly versatile and opens up for a wide range of new research in modular robotics.
---
paper_title: A modular robot that exploits a spontaneous connectivity control mechanism
paper_content:
This paper discusses a fully decentralized algorithm able to control the morphology of a two-dimensional modular robot called "Slimebot", consisting of many identical modules, according to the environment encountered. One of the significant features of our approach is that we explicitly exploit "emergent phenomena" stemming from the interplay between control and mechanical systems in order to control the morphology in real time. To this end, we particularly focus on a "functional material" and a "mutual entrainment", the former of which is used as a spontaneous connectivity control mechanism between the modules, and the latter of which serves as the core of the control mechanism for the generation of locomotion. Simulation results indicate that the proposed algorithm can induce locomotion, which allows us to successfully control the morphology of the modular robot in real time according to the situation without losing the coherence of the entire system.
---
paper_title: Programmable matter
paper_content:
In the past 50 years, computers have shrunk from room-size mainframes to lightweight handhelds. This fantastic miniaturization is primarily the result of high-volume nanoscale manufacturing. While this technology has predominantly been applied to logic and memory, it's now being used to create advanced microelectromechanical systems using both top-down and bottom-up processes. One possible outcome of continued progress in high-volume nanoscale assembly is the ability to inexpensively produce millimeter-scale units that integrate computing, sensing, actuation, and locomotion mechanisms. A collection of such units can be viewed as a form of programmable matter.
---
paper_title: Development of a transformable mobile robot composed of homogeneous gear-type units
paper_content:
Recently, there has been significant research interest in homogeneous modular robots that can transform (i.e. reconfigure their overall shape). However, many of the proposed transformation mechanisms are too expensive and complex to be practical. The transformation process is also normally slow, and therefore the mechanisms are not suitable for situations where frequent, quick reconfiguration is required. To solve these problems, we have studied a transformable mobile robot composed of multiple homogeneous gear-type units. Each unit has only one actuator and cannot move independently. But when engaged in a swarm configuration, units are able to move rapidly by rotating around one another. The most important problem encountered when developing our multi-module robot was determining how units should join together. We designed a passive attachment mechanism that employs a single, six-pole magnet carried by each unit. Motion principles for the swarm were confirmed in simulation, and based on these results we constructed a series of hardware prototypes. In our teleoperation experiments we verified that a powered unit can easily transfer from one stationary unit to another, and that the swarm can move quickly in any direction while transforming.
---
paper_title: Claytronics: An Instance of Programmable Matter
paper_content:
Programmable matter refers to a technology that will allow one to control and manipulate three-dimensional physical artifacts (similar to how we already control and manipulate two-dimensional images with computer graphics). In other words, programmable matter will allow us to take a (big) step beyond virtual reality, to synthetic reality, an environment in which all the objects in a user’s environment (including the ones inserted by the computer) are physically realized. Note that the idea is not to transport objects nor is it to recreate an objects chemical composition, but rather to create a physical artifact that will mimic the shape, movement, visual appearance, sound, and tactile qualities of the original object. The enabling hardware technology behind synthetic reality is Claytronics, a form of programmable matter that can organize itself into the shape of an object and render its outer surface to match the visual appearance of that object. Claytronics is made up of individual components, called catoms—for Claytronic atoms—that can move in three dimensions (in relation to other catoms), adhere to other catoms to maintain a 3D shape, and compute state information (with possible assistance from other catoms in the ensemble). In our preliminary designs, each catom is a self-contained unit with a CPU, an energy store, a network device, a video output device, one or more sensors, a means of locomotion, and a mechanism for adhering to other catoms. Creating a physical replica of an arbitrary moving 3D object that can be updated in real time involves many challenges. The research involved in addressing these scientific challenges is likely to have broad impact beyond synthetic reality. Particularly relevant to the ASPLOS community, for example, is how to build and program a robust distributed system containing millions of computers that must cooperate extensively in a harsh environment where their configuration and goals are constantly changing? It is already possible to reproduce static 3D objects [5, 2]. To create a dynamic 3D object, however, we will build upon ideas from such diverse areas as modular and reconfigurable robotics [6, 3] and amorphous computing [1]. A Claytronics system forms a shape through the interaction of the individual catoms. For example, suppose we wish to synthesize a physical “copy” of a person. The catoms would first determine their relative location and orientation. Using that information they would then form a network in a distributed fashion and organize themselves into a hierarchical structure, both to improve locality and to facilitate the planning and coordination tasks. The goal (mimicking a human form) would then be specified abstractly, perhaps as a series of “snapshots” or as a collection of virtual deforming “forces”, and then broadcast to the catoms. Compilation of the specification would then provide each catom with a local plan for achieving the desired global shape. At this point, the catoms would start to move around each other using forces generated on-board, either magnetically or electrostatically, and adhere to each other using, for example, a nanofiber-adhesive mechanism [4]. Finally, the catoms on the surface would display an image; rendering the color and texture characteristics of the source object. If the source object begins to move, a concise description of the movements would be broadcast allowing the catoms to update their positions by moving around each other. 
The end result is that the system appears to be a single coordinated system. One key motivation for our work is that technology has reached a point where we can realistically build a programmable matter system which is guided by design principles which will allow it to ultimately scale to millions of sub-millimeter catoms. In fact, we expect our prototype for 2D Claytronics to be operational before ASPLOS’04. Our goal is that the system be usable now and scalable for the future. Thus, the guiding design principle, behind both the hardware and the software, is scalability. Hardware mechanisms need to scale towards micronsized catoms and million-catom ensembles. For example, the catom hardware minimizes static power consumption (e.g., no static power is used for adhesion) and avoids moving parts (e.g., the locomotion mechanism currently uses magnetic forces). Software mechanisms need to be scale invariant. For example, our localization and orientation algorithms are completely distributed, parallel, and, are indifferent to catom size. Claytronics will be a test-bed for solving some of the most challenging problems we face today: how to build complex, massively distributed dynamic systems. It is also a step towards truly integrating computers into our lives—by having them integrated into the very artifacts around us and allowing them to interact with the world.
---
paper_title: Stress-driven MEMS assembly + electrostatic forces = 1mm diameter robot
paper_content:
As the size of the modules in a self-reconfiguring modular robotic system shrinks and the number of modules increases, the flexibility of the system as a whole increases. In this paper, we describe the manufacturing methods and mechanisms for a 1 millimeter diameter module which can be manufactured en masse. The module is the first step towards realizing the basic unit of claytronics, a modular robotic system designed to scale to millions of units.
---
paper_title: A Development of a Modular Robot That Enables Adaptive Reconfiguration
paper_content:
This paper discusses experimental verifications of a two-dimensional modular robot called "Slimebot", consisting of many identical modules. The Slimebot exhibits adaptive reconfiguration by exploiting a fully decentralized algorithm able to control its morphology according to the environment encountered. One of the significant features of our approach is that we explicitly exploit "emergent phenomena" stemming from the interplay between control and mechanical systems in order to control the morphology in real time. To this end, we particularly focus on a "functional material" and a "mutual entrainment" among nonlinear oscillators, the former of which is used as a spontaneous connectivity control mechanism between the modules, and the latter of which acts as the core of the control mechanism for the generation of locomotion. Experimental results indicate that the proposed algorithm can induce locomotion, which allows us to successfully control the morphology of the modular robot in real time according to the situation without losing the coherence of the entire system.
---
paper_title: Planar Microassembly by Parallel Actuation of MEMS Microrobots
paper_content:
We present designs, theory, and results of fabrication and testing for a novel parallel microrobotic assembly scheme using stress-engineered MEMS microrobots. The robots are 240-280 μm × 60 μm × 7-20 μm in size and can be controlled to dock compliantly together, forming planar structures several times this size. The devices are classified into species based on the design of their steering arm actuators, and the species are further classified as independent if they can be maneuvered independently using a single global control signal. In this paper, we show that microrobot species are independent if the two transition voltages of their steering arms, i.e., the voltages at which the arms are raised or lowered, form a unique pair. We present control algorithms that can be applied to groups of independent microrobot species to direct their motion from arbitrary non-deadlock configurations to desired planar microassemblies. We present designs and fabrication for four independent microrobot species, each with a unique transition voltage. The fabricated microrobots are used to demonstrate directed assembly of five types of planar structures from two classes of initial conditions. We demonstrate an average docking accuracy of 5 μm and use self-aligning compliant interaction between the microrobots to further align and stabilize the intermediate assemblies. The final assemblies match their target shapes on average 96% by area.
---
paper_title: The robot is the tether: active, adaptive power routing modular robots with unary inter-robot connectors
paper_content:
This paper describes a novel approach to powering a radical type of microrobot. Our long-term aim is to enable the construction of ensembles of millions of coordinated near-spherical, submillimeter microrobots. Both the large number of potential simultaneous neighbors of each robot (12) and the difficulty of fine actuation at such small scales preclude the use of complex connectors previously developed in many modular robotics efforts. Instead, we propose to leverage multirobot cooperation to simplify the mechanics of modular robot docking. In our approach, the robots actively cooperate to route virtual power busses (both supply and ground) to all the robots in the ensemble using only unary (single conductor) electrical connectors between robots. A unary connector allows for larger tolerances in engagement angle, simplifies robot manufacture, speeds reconfiguration, and maximizes the proportion of the connector surface area useful for carrying current. The algorithms we present permit a robot ensemble to efficiently harvest and distribute power from sources discovered in the environment and/or carried by the ensemble. We evaluate these algorithms in a variety of simulated deployment conditions and report on the impact of hardware defects, limited on-board power storage, and the ensemble-environment interface.
---
paper_title: An untethered, electrostatic, globally controllable MEMS micro-robot
paper_content:
We present an untethered, electrostatic, MEMS micro-robot, with dimensions of 60 μm by 250 μm by 10 μm. The device consists of a curved, cantilevered steering arm, mounted on an untethered scratch drive actuator (USDA). These two components are fabricated monolithically from the same sheet of conductive polysilicon, and receive a common power and control signal through a capacitive coupling with an underlying electrical grid. All locations on the grid receive the same power and control signal, so that the devices can be operated without knowledge of their position on the substrate. Individual control of the component actuators provides two distinct motion gaits (forward motion and turning), which together allow full coverage of a planar workspace. These MEMS micro-robots demonstrate turning error of less than 3.7°/mm during forward motion, turn with radii as small as 176 μm, and achieve speeds of over 200 μm/s with an average step size as small as 12 nm. They have been shown to operate open-loop for distances exceeding 35 cm without failure, and can be controlled through teleoperation to navigate complex paths. The devices were fabricated through a multiuser surface micromachining process, and were postprocessed to add a patterned layer of tensile chromium, which curls the steering arms upward. After sacrificial release, the devices were transferred with a vacuum microprobe to the electrical grid for testing. This grid consists of a silicon substrate coated with 13-μm microfabricated electrodes, arranged in an interdigitated fashion with 2-μm spaces. The electrodes are insulated by a layer of electron-beam-evaporated zirconium dioxide, so that devices placed on top of the electrodes will experience an electrostatic force in response to an applied voltage. Control waveforms are broadcast to the device through the capacitive power coupling, and are decoded by the electromechanical response of the device body. Hysteresis in the system allows on-board storage of n=2 bits of state information in response to these electrical signals. The presence of on-board state information within the device itself allows each of the two device subsystems (USDA and steering arm) to be individually addressed and controlled. We describe this communication and control strategy and show necessary and sufficient conditions for voltage-selective actuation of all 2^n system states, both for our devices (n=2), and for the more general case (where n is larger).
---
paper_title: A Middleware Based Control Architecture for Modular Robot Systems
paper_content:
Robot technology has been widely used in industrial production and daily life. Unfortunately, due to the uncertainty of the environment and the diversity of functions, the complexity of robot software is growing dramatically, and much repetitive work is being done. To solve this problem, the modularization of robots and robot middleware has been proposed. From the perspective of modularization, interoperability, scalability, and ease of use, this paper proposes a loosely coupled, service-oriented modular robot middleware (MoRoM) architecture. In this architecture, we define standard interfaces to encapsulate the services each functional component provides, realize automatic system configuration (plug and play), and provide a message notification mechanism as well as program frameworks. To verify the proposed middleware, we implement it using ACE-TAO, which is a real-time CORBA implementation, and test it in a modular robot system. The test shows that this architecture is suitable for improving the reusability, interoperability, and scalability of robot software.
---
paper_title: Programming modular robots with the TOTA middleware
paper_content:
Modular robots represent a perfect application scenario for multiagent coordination. The autonomous modules composing the robot must coordinate their respective activities to enforce a specific global shape or a coherent motion gait. Here we show how the TOTA ("Tuples On The Air") middleware can be effectively exploited to support agents' coordination in this context. The key idea in TOTA is to rely on spatially distributed tuples, spread across the robot, to guide the agents' activities in moving and reshaping the robot. Three simulated examples are presented to support our claims.
---
| Title: Modular Self-Reconfigurable Robotic Systems: A Survey on Hardware Architectures
Section 1: Introduction
Description 1: Provide an overview of modular self-reconfigurable robots (MSRR), their advantages, and the motivation for research in this field.
Section 2: Modular Robots, Hardware Architectures
Description 2: Discuss the evolution of MSRR hardware architectures, various classification paradigms, and the major categories and subcategories of MSRR based on physical characteristics, abilities, and structural formations.
Section 3: Lattice Structured Systems
Description 3: Describe MSRR designs that follow a lattice structure, providing examples, design characteristics, and key technical details of representative systems.
Section 4: Chain Structured Systems
Description 4: Discuss MSRR designs following a chain structure, detailing their features, capabilities, and examples of significant prototypes.
Section 5: Hybrid Structured Systems
Description 5: Explore MSRR designs that incorporate both lattice and chain structures, elaborating on their design benefits, examples, and technical aspects.
Section 6: Truss Structured Systems
Description 6: Examine MSRR designs based on truss structures, detailing specific design aspects, examples, and the applications these systems are suited for.
Section 7: Free-Form Structured Systems
Description 7: Describe MSRR designs that allow free-form structural formations, including key design principles, examples, and the unique capabilities these systems provide.
Section 8: Conclusion
Description 8: Summarize the research presented, noting the creative and interdisciplinary nature of MSRR development, and outline the areas for future research and potential advancements in MSRR technologies.
Section 9: Conflicts of Interest
Description 9: State the authors' declaration regarding any conflicts of interest related to the research presented in the paper. |
Review: False alarm minimization techniques in signature-based intrusion detection systems: A survey | 15 | ---
paper_title: An Intrusion-Detection Model
paper_content:
A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system.
---
paper_title: Bro: A System for Detecting Network Intruders in Real-Time
paper_content:
We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an "event engine" that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a "policy script interpreter" that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications via syslog. We also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these, and give particulars of how Bro analyzes the six applications integrated into it so far: Finger, FTP, Portmapper, Ident, Telnet and Rlogin. The system is publicly available in source code form.
---
paper_title: Guide to Intrusion Detection and Prevention Systems (IDPS
paper_content:
This publication describes the characteristics of IDPS technologies and provides recommendations for designing, implementing, configuring, securing, monitoring, and maintaining them. The types of IDPS technologies are differentiated primarily by the types of events that they monitor and the ways in which they are deployed. This publication discusses the following four types of IDPS technologies: Network-Based, which monitors network traffic for particular network segments or devices and analyzes the network and application protocol activity to identify suspicious activity; Wireless, which monitors wireless network traffic and analyzes it to identify suspicious activity involving the wireless networking protocols themselves; Network Behavior Analysis (NBA), which examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of malware, and policy violations (e.g., a client system providing network services to other systems); and Host-Based, which monitors the characteristics of a single host and the events occurring within that host for suspicious activity. Implementing the publication's recommendations should facilitate more efficient and effective intrusion detection and prevention system use for Federal departments and agencies.
---
paper_title: Finding the Needle: Suppression of False Alarms in Large Intrusion Detection Data Sets
paper_content:
Managed security service providers (MSSPs) must manage and monitor thousands of intrusion detection sensors. The sensors often vary by manufacturer and software version, making the problem of creating generalized tools to separate true attacks from false positives particularly difficult. Often times it is useful from an operations perspective to know if a particular sensor is acting out of character. We propose a solution to this problem using anomaly detection techniques over the set of alarms produced by the sensors. Similar to the manner in which an anomaly based sensor detects deviations from normal user or system behavior, we establish the baseline behavior of a sensor and detect deviations from this baseline. We show that departures from this profile by a sensor have a high probability of being artifacts of genuine attacks. We evaluate a set of time-based Markovian heuristics against a simple compression algorithm and show that we are able to detect the existence of all attacks which were manually identified by security personnel, drastically reduce the number of false positives, and identify attacks which were overlooked during manual evaluation.
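As a rough, hedged illustration of the compression heuristic mentioned above (not the authors' implementation), the following Python sketch scores a window of alarms from one sensor by how much new information it adds relative to that sensor's historical alarm baseline; a window that compresses poorly against the baseline suggests the sensor is acting out of character. The alarm string format, window size, and scoring are illustrative assumptions.

import zlib

def compressed_size(text: str) -> int:
    # Size of the zlib-compressed byte representation of the alarm text.
    return len(zlib.compress(text.encode("utf-8")))

def out_of_character_score(baseline_alarms, window_alarms) -> float:
    # How much extra compressed size the new window adds beyond the sensor's
    # historical baseline, normalized by the window's own compressed size.
    baseline = "\n".join(baseline_alarms)
    window = "\n".join(window_alarms)
    c_base = compressed_size(baseline)
    c_both = compressed_size(baseline + "\n" + window)
    c_window = compressed_size(window)
    return (c_both - c_base) / max(c_window, 1)

# Hypothetical alarms encoded as "signature|source|destination" strings.
baseline = ["ICMP PING|10.0.0.5|10.0.0.1"] * 200
usual_window = ["ICMP PING|10.0.0.5|10.0.0.1"] * 20
unusual_window = ["WEB-MISC cmd.exe access|172.16.1.9|10.0.0.80"] * 20
print(out_of_character_score(baseline, usual_window))    # near 0: in character
print(out_of_character_score(baseline, unusual_window))  # near 1: out of character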
---
paper_title: Anomaly detection: A survey
paper_content:
Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.
---
paper_title: The Problem of False Alarms: Evaluation with Snort and DARPA 1999 Dataset
paper_content:
It is a common issue that an Intrusion Detection System (IDS) may generate thousands of alerts per day. The problem has been made worse by the fact that IT infrastructures have become larger and more complicated: the number of generated alarms that need to be reviewed can escalate rapidly, making the task very difficult to manage. Moreover, a significant problem facing current IDS technology is the high level of false alarms. The main purpose of this paper is to investigate the extent of the false alarm problem in Snort, using the 1999 DARPA IDS evaluation dataset. A thorough investigation has been carried out to assess the accuracy of alerts generated by the Snort IDS. Significantly, this experiment has revealed an unexpected result: 69% of the total generated alerts are considered to be false alarms.
---
paper_title: Using Adaptive Alert Classification to Reduce False Positives in Intrusion Detection
paper_content:
Intrusion Detection Systems (IDSs) are used to monitor computer systems for signs of security violations. Having detected such signs, IDSs trigger alerts to report them. These alerts are presented to a human analyst, who evaluates them and initiates an adequate response.
---
paper_title: The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection
paper_content:
Research in automated computer security intrusion detection, intrusion detection for short, is maturing. Several difficulties remain to be solved before intrusion detection systems become commonplace as part of real-world security solutions. One such difficulty regards the subject of effectiveness, how successful the intrusion detection system is at actually detecting intrusions with a high degree of certainty. With this as its starting point, this paper discusses the “base-rate fallacy” and how it influences the relative success of an intrusion detection system, under a set of reasonable circumstances. The conclusion is reached that the false-alarm rate quickly becomes a limiting factor.
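A minimal worked example of the base-rate effect described above, in Python; all probabilities are illustrative assumptions rather than figures from the paper. Even with a seemingly low false-alarm rate, Bayes' theorem shows that most alarms are false when intrusions are rare.

# P(intrusion | alarm) under an assumed, very low base rate of intrusions.
p_intrusion = 2e-5              # assumed prior probability of an intrusion event
p_alarm_given_intrusion = 0.7   # assumed true positive (detection) rate
p_alarm_given_benign = 0.01     # assumed false-alarm rate

p_alarm = (p_alarm_given_intrusion * p_intrusion
           + p_alarm_given_benign * (1 - p_intrusion))
bayesian_detection_rate = p_alarm_given_intrusion * p_intrusion / p_alarm
print(f"P(intrusion | alarm) = {bayesian_detection_rate:.4%}")
# With these numbers the result is roughly 0.14%, i.e. the overwhelming majority
# of alarms are false positives; the false-alarm rate dominates the outcome.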
---
paper_title: Alert correlation survey: framework and techniques
paper_content:
Managing raw alerts generated by various sensors are becoming of more significance to intrusion detection systems as more sensors with different capabilities are distributed spatially in the network. Alert Correlation addresses this issue by reducing, fusing and correlating raw alerts to provide a condensed, yet more meaningful view of the network from the intrusion standpoint. Techniques from a divers range of disciplines have been used by researchers for different aspects of correlation. This paper provides a survey of the state of the art in alert correlation techniques. Our main contribution is a two-fold classification of literature based on correlation framework and applied techniques. The previous works in each category have been described alongside with their strengths and weaknesses from our viewpoint.
---
paper_title: A survey of coordinated attacks and collaborative intrusion detection
paper_content:
Coordinated attacks, such as large-scale stealthy scans, worm outbreaks and distributed denial-of-service (DDoS) attacks, occur in multiple networks simultaneously. Such attacks are extremely difficult to detect using isolated intrusion detection systems (IDSs) that monitor only a limited portion of the Internet. In this paper, we summarize the current research directions in detecting such attacks using collaborative intrusion detection systems (CIDSs). In particular, we highlight two main challenges in CIDS research: CIDS architectures and alert correlation algorithms. We review the current CIDS approaches in terms of these two challenges. We conclude by highlighting opportunities for an integrated solution to large-scale collaborative intrusion detection.
---
paper_title: Enhancing byte-level network intrusion detection signatures with context
paper_content:
Many network intrusion detection systems (NIDS) use byte sequences as signatures to detect malicious activity. While being highly efficient, they tend to suffer from a high false-positive rate. We develop the concept of contextual signatures as an improvement of string-based signature-matching. Rather than matching fixed strings in isolation, we augment the matching process with additional context. When designing an efficient signature engine for the NIDS bro, we provide low-level context by using regular expressions for matching, and high-level context by taking advantage of the semantic information made available by bro's protocol analysis and scripting language. Therewith, we greatly enhance the signature's expressiveness and hence the ability to reduce false positives. We present several examples such as matching requests with replies, using knowledge of the environment, defining dependencies between signatures to model step-wise attacks, and recognizing exploit scans.To leverage existing efforts, we convert the comprehensive signature set of the popular freeware NIDS snort into bro's language. While this does not provide us with improved signatures by itself, we reap an established base to build upon. Consequently, we evaluate our work by comparing to snort, discussing in the process several general problems of comparing different NIDSs.
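The sketch below conveys the idea of a contextual signature in plain Python rather than in Bro's actual signature language: the byte-level pattern alone does not raise an alert; the match also requires protocol context (an HTTP request), environment context (the target actually runs the vulnerable server), and reply context (the request appears to have succeeded). The host inventory, pattern, and record fields are assumptions for illustration.

import re

IIS_HOSTS = {"10.0.0.80"}  # illustrative inventory of hosts running the targeted server

UNICODE_TRAVERSAL = re.compile(rb"%c0%af|%c1%1c")  # example byte-level pattern

def contextual_match(conn) -> bool:
    # `conn` is a hypothetical record such as:
    # {"dst": "10.0.0.80", "service": "http", "request": b"...", "status": 200}
    if conn["service"] != "http":                       # protocol context
        return False
    if conn["dst"] not in IIS_HOSTS:                    # environment context
        return False
    if not UNICODE_TRAVERSAL.search(conn["request"]):   # byte-level signature
        return False
    return conn.get("status") == 200                    # reply context: likely success

print(contextual_match({"dst": "10.0.0.80", "service": "http",
                        "request": b"GET /scripts/..%c0%af../cmd.exe", "status": 200}))
# True only because every context check passes; the same payload against a
# non-IIS host, or one answered with an error, would be suppressed.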
---
paper_title: Model-driven, network-context sensitive intrusion detection
paper_content:
Intrusion Detection Systems (IDSs) have the reputation of generating many false positives. Recent approaches, known as stateful IDSs, take the state of communication sessions into account to address this issue. A substantial reduction of false positives, however, requires some correlation between the state of the session, known vulnerabilities, and the gathering of more network context information by the IDS than what is currently done (e.g., configuration of a node, its operating system, running applications). In this paper we present an IDS approach that attempts to decrease the number of false positives by collecting more network context and combining this information with known vulnerabilities. The approach is model-driven as it relies on the modeling of packet and network information as UML class diagrams, and the definition of intrusion detection rules as OCL expressions constraining these diagrams. The approach is evaluated using real attacks on real systems, and appears to be promising.
---
paper_title: STATL: An Attack Language for State-based Intrusion Detection
paper_content:
STATL is an extensible state/transition-based attack description language designed to support intrusion detection. The language allows one to describe computer penetrations as sequences of actions that an attacker performs to compromise a computer system. A STATL description of an attack scenario can be used by an intrusion detection system to analyze a stream of events and detect possible ongoing intrusions. Since intrusion detection is performed in different domains (i.e., the network or the hosts) and in different operating environments (e.g., Linux, Solaris, or Windows NT), it is useful to have an extensible language that can be easily tailored to different target environments. STATL defines domain-independent features of attack scenarios and provides constructs for extending the language to describe attacks in particular domains and environments. The STATL language has been successfully used in describing both network-based and host-based attacks, and it has been tailored to very different environments, e.g., Sun Microsystems' Solaris and Microsoft's Windows NT. An implementation of the runtime support for the STATL language has been developed and a toolset of intrusion detection systems based on STATL has been implemented. The toolset was used in a recent intrusion detection evaluation effort, delivering very favorable results. This paper presents the details of the STATL syntax and its semantics. Real examples from both the host and network-based extensions of the language are also presented.
---
paper_title: A stateful intrusion detection system for World-Wide Web servers
paper_content:
Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom Web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, Web servers are a popular target for hackers. To mitigate the security exposure associated with Web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of Web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. We present WebSTAT, an intrusion detection system that analyzes Web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the Web server. By correlating different streams of events, it is possible to achieve more effective detection of Web-based attacks.
---
paper_title: Stateful intrusion detection for high-speed networks
paper_content:
As networks become faster there is an emerging need for security analysis techniques that can keep up with the increased network throughput. Existing network-based intrusion detection sensors can barely keep up with bandwidths of a few hundred Mbps. Analysis tools that can deal with higher throughput are unable to maintain state between different steps of an attack or they are limited to the analysis of packet headers. We propose a partitioning approach to network security analysis that supports in-depth, stateful intrusion detection on high-speed links. The approach is centered around a slicing mechanism that divides the overall network traffic into subsets of manageable size. The traffic partitioning is done so that a single slice contains all the evidence necessary to detect a specific attack, making sensor-to-sensor interactions unnecessary. This paper describes the approach and presents a first experimental evaluation of its effectiveness.
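A minimal sketch of the slicing idea under simplifying assumptions (it partitions by flow rather than by attack scenario, which is a simplification of the paper's mechanism): packets are assigned to analysis slices by hashing a direction-independent flow key, so both directions of a connection reach the same sensor and per-connection state can be kept there.

import hashlib

NUM_SLICES = 4  # assumed number of downstream analysis sensors

def slice_for_packet(src_ip, src_port, dst_ip, dst_port, proto) -> int:
    # Sort the endpoints so both directions of a connection hash identically.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{proto}|{a[0]}:{a[1]}|{b[0]}:{b[1]}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SLICES

# Both directions of the same TCP connection land on the same slice.
print(slice_for_packet("192.168.1.5", 51515, "10.0.0.80", 80, "tcp"))
print(slice_for_packet("10.0.0.80", 80, "192.168.1.5", 51515, "tcp"))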
---
paper_title: The NIDS cluster: Scalable, stateful network intrusion detection on commodity hardware
paper_content:
In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.
---
paper_title: NetShield: massive semantics-based vulnerability signature matching for high-speed networks
paper_content:
Accuracy and speed are the two most important metrics for Network Intrusion Detection/Prevention Systems (NIDS/NIPSes). Due to emerging polymorphic attacks and the fact that in many cases regular expressions (regexes) cannot capture the vulnerability conditions accurately, the accuracy of existing regex-based NIDS/NIPS systems has become a serious problem. In contrast, the recently-proposed vulnerability signatures (a.k.a data patches) can exactly describe the vulnerability conditions and achieve better accuracy. However, how to efficiently apply vulnerability signatures to high speed NIDS/NIPS with a large ruleset remains an untouched but challenging issue. This paper presents the first systematic design of vulnerability signature based parsing and matching engine, NetShield, which achieves multi-gigabit throughput while offering much better accuracy. Particularly, we made the following contributions: (i) we proposed a candidate selection algorithm which efficiently matches thousands of vulnerability signatures simultaneously requiring a small amount of memory; (ii) we proposed an automatic lightweight parsing state machine achieving fast protocol parsing. Experimental results show that the core engine of NetShield achieves at least 1.9+Gbps signature matching throughput on a 3.8GHz single-core PC, and can scale-up to at least 11+Gbps under a 8-core machine for 794 HTTP vulnerability signatures.
---
paper_title: Shield: vulnerability-driven network filters for preventing known vulnerability exploits
paper_content:
Software patching has not been effective as a first-line defense against large-scale worm attacks, even when patches have long been available for their corresponding vulnerabilities. Generally, people have been reluctant to patch their systems immediately, because patches are perceived to be unreliable and disruptive to apply. To address this problem, we propose a first-line worm defense in the network stack, using shields -- vulnerability-specific, exploit-generic network filters installed in end systems once a vulnerability is discovered, but before a patch is applied. These filters examine the incoming or outgoing traffic of vulnerable applications, and correct traffic that exploits vulnerabilities. Shields are less disruptive to install and uninstall, easier to test for bad side effects, and hence more reliable than traditional software patches. Further, shields are resilient to polymorphic or metamorphic variations of exploits [43].In this paper, we show that this concept is feasible by describing a prototype Shield framework implementation that filters traffic above the transport layer. We have designed a safe and restrictive language to describe vulnerabilities as partial state machines of the vulnerable application. The expressiveness of the language has been verified by encoding the signatures of several known vulnerabilites. Our evaluation provides evidence of Shield's low false positive rate and small impact on application throughput. An examination of a sample set of known vulnerabilities suggests that Shield could be used to prevent exploitation of a substantial fraction of the most dangerous ones.
---
paper_title: Alarm clustering for intrusion detection systems in computer networks
paper_content:
Until recently, network administrators manually arranged alarms produced by Intrusion Detection Systems (IDSs) to attain a high-level description of threats. As the number of alarms keeps growing, automatic tools for alarm clustering have been proposed to provide such a high-level description of the attack scenario. In addition, it has been shown that effective threat analysis requires the fusion of different sources of information, such as different IDSs, firewall logs, etc. In this paper, we propose a new alarm-clustering strategy that produces unified descriptions of attacks from multiple alarms. Tests have been performed on a live network where commercial and open-source IDSs analyzed network traffic.
---
paper_title: Mining intrusion detection alarms for actionable knowledge
paper_content:
In response to attacks against enterprise networks, administrators increasingly deploy intrusion detection systems. These systems monitor hosts, networks, and other resources for signs of security violations. The use of intrusion detection has given rise to another difficult problem, namely the handling of a generally large number of alarms. In this paper, we mine historical alarms to learn how future alarms can be handled more efficiently. First, we investigate episode rules with respect to their suitability in this approach. We report the difficulties encountered and the unexpected insights gained. In addition, we introduce a new conceptual clustering technique, and use it in extensive experiments with real-world data to show that intrusion detection alarms can be handled efficiently by using previously mined knowledge.
---
paper_title: Clustering intrusion detection alarms to support root cause analysis
paper_content:
It is a well-known problem that intrusion detection systems overload their human operators by triggering thousands of alarms per day. This paper presents a new approach for handling intrusion detection alarms more efficiently. Central to this approach is the notion that each alarm occurs for a reason, which is referred to as the alarm's root cause. This paper observes that a few dozen rather persistent root causes generally account for over 90% of the alarms that an intrusion detection system triggers. Therefore, we argue that alarms should be handled by identifying and removing the most predominant and persistent root causes. To make this paradigm practicable, we propose a novel alarm-clustering method that supports the human analyst in identifying root causes. We present experiments with real-world intrusion detection alarms to show how alarm clustering helped us identify root causes. Moreover, we show that the alarm load decreases quite substantially if the identified root causes are eliminated so that they can no longer trigger alarms in the future.
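The Python sketch below illustrates the flavor of generalization-based alarm clustering: alarm attributes are climbed up simple hierarchies (host to subnet, port to port class) and large resulting groups are kept as candidates for a common root cause. The hierarchies, thresholds, and alarm fields are assumptions for illustration, not the paper's exact algorithm.

from collections import Counter
import ipaddress

def generalize_ip(ip: str) -> str:
    # One-step generalization of a host address to its /24 network.
    return str(ipaddress.ip_network(ip + "/24", strict=False))

def generalize_port(port: int) -> str:
    return "privileged" if port < 1024 else "ephemeral"

def cluster_alarms(alarms, min_size=3):
    # Group alarms by (signature, source subnet, destination subnet, port class)
    # and keep only groups large enough to suggest a persistent root cause.
    buckets = Counter(
        (a["sig"], generalize_ip(a["src"]), generalize_ip(a["dst"]),
         generalize_port(a["dst_port"]))
        for a in alarms
    )
    return {key: n for key, n in buckets.items() if n >= min_size}

alarms = [
    {"sig": "SNMP request", "src": "10.1.2.3", "dst": "10.9.9.1", "dst_port": 161},
    {"sig": "SNMP request", "src": "10.1.2.7", "dst": "10.9.9.1", "dst_port": 161},
    {"sig": "SNMP request", "src": "10.1.2.9", "dst": "10.9.9.1", "dst_port": 161},
    {"sig": "FTP bounce",   "src": "172.16.0.5", "dst": "10.9.9.2", "dst_port": 21},
]
for generalized_alarm, count in cluster_alarms(alarms).items():
    print(count, generalized_alarm)  # the SNMP cluster hints at one root cause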
---
paper_title: IDS false alarm filtering using KNN classifier
paper_content:
Intrusion detection is one of the important aspects of computer security. Many commercial intrusion detection systems (IDSs) are available and are widely used by organizations. However, most of them suffer from the problem of a high false alarm rate, which adds a heavy workload for the security officers who are responsible for handling the alarms. In this paper, we propose a new method to reduce the number of false alarms. We model the normal alarm patterns of IDSs and detect anomalies in incoming alarm streams using a k-nearest-neighbor classifier. Preliminary experiments show that our approach successfully reduces up to 93% of the false alarms generated by a well-known IDS.
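A minimal sketch of the k-nearest-neighbor filtering idea using scikit-learn; the feature encoding, the value of k, and the toy training labels are assumptions for illustration rather than the authors' exact pipeline.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy feature vectors for past alarms: [signature id, alarms per window,
# distinct destination hosts, hour of day]; label 1 = true alarm, 0 = false alarm.
X_train = np.array([
    [12, 400, 1, 3],   # noisy, recurring alarm pattern -> false
    [12, 380, 1, 4],   # false
    [57,   3, 1, 14],  # rare, targeted alarm -> true
    [57,   5, 2, 15],  # true
])
y_train = np.array([0, 0, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# New alarms are forwarded to the analyst only if predicted to be true alarms.
X_new = np.array([[12, 410, 1, 2], [57, 4, 1, 16]])
print(knn.predict(X_new))  # e.g. [0 1]: the first alarm would be filtered out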
---
paper_title: Intrusion detection alarms reduction using root cause analysis and clustering
paper_content:
As soon as an Intrusion Detection System (IDS) detects any suspicious activity, it generates several alarms referring to possible security breaches. Unfortunately, the triggered alarms are usually accompanied by a huge number of false positives. In this paper, we use root cause analysis to discover the root causes that make the IDS trigger these false alarms; most of these root causes are not attacks. Removing the root causes enhances alarm quality in the future. A root cause instigates the IDS to trigger alarms that almost always have similar features. These similar alarms can be clustered together; consequently, we have designed a new clustering technique to group IDS alarms into clusters. Each cluster is then modeled by a generalized alarm. The generalized alarms related to root causes are converted (by the security analyst) to filters in order to reduce the future alarm load. The suggested system is a semi-automated system that helps the security analyst specify the root causes behind these false alarms and write accurate filtering rules. The proposed clustering method was verified with three different datasets, and the averaged reduction ratio was about 74% of the total alarms. Applying the new technique to the alarm log greatly helps the security analyst identify the root causes and then reduces the alarm load in the future.
---
paper_title: New data mining technique to enhance IDS alarms quality
paper_content:
Intrusion detection systems (IDSs) generate a large number of alarms, most of which are false positives. Fortunately, there are underlying reasons for triggering alarms, and most of these reasons are not attacks. In this paper, a new data mining technique has been developed to group alarms into clusters. Each cluster is then abstracted as a generalized alarm. The generalized alarms related to root causes are converted to filters to reduce the future alarm load. The proposed algorithm makes use of nearest-neighbor and generalization concepts to cluster alarms. As a clustering algorithm, it uses a new measure to compute distances between alarm feature values. This measure depends on background knowledge of the monitored network, making it robust and meaningful. The new data mining technique was verified with many datasets, and the averaged reduction ratio was about 82% of the total alarms. Applying the new technique to the alarm log greatly helps the security analyst identify the root causes and then reduces the alarm load in the future.
---
paper_title: Reducing IDS False Positives Using Incremental Stream Clustering Algorithm
paper_content:
Along with cryptographic protocols and digital signatures, Intrusion Detection Systems (IDSs) are considered to be the last line of defense in securing a network. The main problem with today's most popular commercial IDSs is the generation of a huge number of false positive alerts along with the true positive alerts, which makes it a cumbersome task for the operator to investigate them in order to initiate proper responses. There is therefore great demand to explore this area of research and to find a feasible solution. In this thesis, we have chosen this problem as our main area of research. We have tested the effectiveness of using the Incremental Stream Clustering Algorithm to reduce the number of false alerts in IDS output. The algorithm was tested on the output of one of the most popular network-based open-source IDSs, Snort, which was configured in playback mode to analyze the DARPA 1999 network traffic dataset. Our approach was evaluated and compared with the K-Nearest Neighbor Algorithm. The results show that the Incremental Stream Clustering Algorithm removes more of the false alarms (over 99%) than the K-Nearest Neighbor Algorithm (93%).
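A leader-style, single-pass clustering sketch conveys the general idea of incremental stream clustering (the exact algorithm evaluated in the thesis may differ): each incoming alert joins the nearest existing cluster if it lies within a radius, otherwise it starts a new cluster, and alerts absorbed by large, stable clusters become candidates for suppression. The features, radius, and suppression policy are assumptions.

import numpy as np

class IncrementalStreamClusterer:
    # Single-pass, leader-style clustering over a stream of alert feature vectors.

    def __init__(self, radius=1.5):
        self.radius = radius
        self.centroids = []   # running cluster centroids
        self.counts = []      # number of alerts absorbed by each cluster

    def add(self, x) -> int:
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                # Absorb the alert and update the centroid incrementally.
                self.counts[i] += 1
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(x)
        self.counts.append(1)
        return len(self.centroids) - 1

clusterer = IncrementalStreamClusterer(radius=1.5)
for alert in [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [9.0, 9.0]]:
    print(clusterer.add(alert))  # 0 0 0 1: the outlier starts a new cluster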
---
paper_title: Mining alarm clusters to improve alarm handling efficiency
paper_content:
It is a well-known problem that intrusion detection systems overload their human operators by triggering thousands of alarms per day. As a matter of fact, IBM Research's Zurich Research Laboratory has been asked by one of our service divisions to help them deal with this problem. This paper presents the results of our research, validated thanks to a large set of operational data. We show that alarms should be managed by identifying and resolving their root causes. Alarm clustering is introduced as a method that supports the discovery of root causes. The general alarm clustering problem is proved to be NP-complete, an approximation algorithm is proposed, and experiments are presented.
---
paper_title: A preliminary two-stage alarm correlation and filtering system using SOM neural network and K-means algorithm
paper_content:
Intrusion Detection Systems (IDSs) play a vital role in the overall security infrastructure. Although the IDS has become an essential part of corporate network infrastructure, the art of detecting intrusion is still far from perfect. A significant problem is that of false alarms, as generating a huge volume of such alarms could render the system inefficient. In this paper, we propose a new method to reduce the number of false alarms. We develop a two-stage classification system using a SOM neural network and K-means algorithm to correlate the related alerts and to further classify the alerts into classes of true and false alarms. Preliminary experiments show that our approach effectively reduces all superfluous and noisy alerts, which often contribute to more than 50% of false alarms generated by a common IDS.
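A compact sketch of a two-stage pipeline of this kind, using the third-party MiniSom package (assumed installed) for the SOM stage and scikit-learn's K-means for the second stage; the feature encoding, map size, and number of output classes are assumptions, and the code only shows how the stages fit together, not the paper's exact configuration.

import numpy as np
from minisom import MiniSom          # third-party SOM package, assumed available
from sklearn.cluster import KMeans

# Toy alert features: [alerts per minute, distinct sources, distinct dest ports, priority].
alerts = np.array([
    [200, 1, 1, 3], [190, 1, 1, 3], [210, 2, 1, 3],   # noisy, repetitive pattern
    [2, 1, 40, 1],  [3, 1, 35, 1],                    # scanning-like pattern
], dtype=float)
alerts = (alerts - alerts.mean(axis=0)) / (alerts.std(axis=0) + 1e-9)

# Stage 1: a small SOM groups correlated alerts onto map units.
som = MiniSom(4, 4, alerts.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(alerts, 500)
units = np.array([som.winner(a) for a in alerts], dtype=float)

# Stage 2: K-means over the winning map coordinates splits the alerts into two
# classes; which class corresponds to false alarms is decided by inspection.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(units)
print(labels)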
---
paper_title: Data Fusion and Cost Minimization for Intrusion Detection
paper_content:
Statistical pattern recognition techniques have recently been shown to provide a finer balance between misdetections and false alarms than the more conventional intrusion detection approaches, namely misuse detection and anomaly detection. A variety of classical machine learning and pattern recognition algorithms has been applied to intrusion detection with varying levels of success. We make two observations about intrusion detection. One is that intrusion detection is significantly more effective by using multiple sources of information in an intelligent way, which is precisely what human experts rely on. Second, different errors in intrusion detection have different costs associated with them-a simplified example being that a false alarm may be more expensive than a misdetection and, hence, the true objective function to be minimized is the cost of errors and not the error rate itself. We present a pattern recognition approach that addresses both of these issues. It utilizes an ensemble of a classifiers approach to intelligently combine information from multiple sources and is explicitly tuned toward minimizing the cost of the errors as opposed to the error rate itself. The information fusion approach dLEARNIN alone is shown to achieve state-of-the-art performances better than those reported in the literature so far, and the cost minimization strategy dCMS further reduces the cost with a significant margin.
---
paper_title: An intrusion detection and alert correlation approach based on revising probabilistic classifiers using expert knowledge
paper_content:
Bayesian networks are important knowledge representation tools for handling uncertain pieces of information. The success of these models is strongly related to their capacity to represent and handle dependence relations. Some forms of Bayesian networks have been successfully applied in many classification tasks. In particular, naive Bayes classifiers have been used for intrusion detection and alerts correlation. This paper analyses the advantage of adding expert knowledge to probabilistic classifiers in the context of intrusion detection and alerts correlation. As examples of probabilistic classifiers, we will consider the well-known Naive Bayes, Tree Augmented Naive Bayes (TAN), Hidden Naive Bayes (HNB) and decision tree classifiers. Our approach can be applied for any classifier where the outcome is a probability distribution over a set of classes (or decisions). In particular, we study how additional expert knowledge such as "it is expected that 80 % of traffic will be normal" can be integrated in classification tasks. Our aim is to revise probabilistic classifiers' outputs in order to fit expert knowledge. Experimental results show that our approach improves existing results on different benchmarks from intrusion detection and alert correlation areas.
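One simple, hedged way to make a probabilistic classifier respect a statement such as "it is expected that 80% of traffic will be normal" is to calibrate the decision threshold on the predicted attack probability so that the flagged fraction matches the expert's figure. The sketch below illustrates that idea with scikit-learn's naive Bayes on synthetic data; it is not the revision procedure defined in the paper.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Synthetic connection features; label 1 = attack, 0 = normal.
X = np.vstack([rng.normal(0, 1, (400, 3)), rng.normal(2, 1, (100, 3))])
y = np.array([0] * 400 + [1] * 100)

clf = GaussianNB().fit(X, y)
p_attack = clf.predict_proba(X)[:, 1]

# Expert knowledge: about 80% of traffic should end up classified as normal,
# so place the threshold at the 80th percentile of the predicted attack scores.
expected_normal_fraction = 0.80
threshold = np.quantile(p_attack, expected_normal_fraction)
revised_labels = (p_attack > threshold).astype(int)
print("fraction flagged as attack:", revised_labels.mean())  # close to 0.20 by construction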
---
paper_title: False Alarm Classification Model for Network-Based Intrusion Detection System
paper_content:
A network-based IDS (Intrusion Detection System) gathers network packet data and classifies it as attack or normal. However, such systems often output a large amount of low-level or incomplete alert information. Such alerts can be unmanageable and may also be mixed with false alerts. In this paper we propose a false alarm classification model to reduce the false alarm rate using classification analysis from data mining techniques. The model was implemented based on associative classification in the domain of DDoS attacks. We evaluated the false alarm classifier deployed in front of Snort with the DARPA 1998 dataset and verified the reduction of the false alarm rate. Our approach is useful for reducing false alerts and improving the detection rate of network-based intrusion detection systems.
---
paper_title: Performance enhancement of Intrusion Detection Systems using advances in sensor fusion
paper_content:
Various intrusion detection systems reported in literature have shown distinct preferences for detecting a certain class of attacks with improved accuracy, while performing moderately on the other classes. With the advances in sensor fusion, it has become possible to obtain a more reliable and accurate decision for a wider class of attacks, by combining the decisions of multiple intrusion detection systems. In this paper, an architecture using data-dependent decision fusion is proposed. The method gathers an in-depth understanding about the input traffic and also the behavior of the individual intrusion detection systems by means of a neural network supervised learner unit. This information is used to fine-tune the fusion unit, since the fusion depends on the input feature vector. For illustrative purposes, three intrusion detection systems namely PHAD, ALAD, and Snort have been considered using the DARPA 1999 dataset in order to validate the proposed architecture. The overall performance of the proposed sensor fusion system shows considerable improvement with respect to the performance of individual intrusion detection systems.
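A skeletal illustration of data-dependent decision fusion under assumed detectors and weights: the per-detector weights depend on a property of the input traffic, and the fused verdict is a weighted combination of the individual IDS decisions. In the paper the weighting is produced by a neural-network learner unit; here it is replaced by a simple lookup purely for illustration.

# Decisions from three detectors on one event, each expressed in [0, 1].
decisions = {"PHAD": 0.1, "ALAD": 0.2, "Snort": 0.9}

# Data-dependent weights (assumed): the learner trusts Snort more on web
# traffic and the anomaly detectors more on other application traffic.
WEIGHTS_BY_TRAFFIC = {
    "http":  {"PHAD": 0.2, "ALAD": 0.3, "Snort": 0.5},
    "other": {"PHAD": 0.3, "ALAD": 0.5, "Snort": 0.2},
}

def fuse(decisions, traffic_type, threshold=0.5) -> bool:
    weights = WEIGHTS_BY_TRAFFIC[traffic_type]
    score = sum(weights[name] * decisions[name] for name in decisions)
    return score >= threshold

print(fuse(decisions, "http"))   # True: the weighting favors Snort's verdict
print(fuse(decisions, "other"))  # False: same decisions, different gating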
---
paper_title: Critical Episode Mining in Intrusion Detection Alerts
paper_content:
One of the most important steps in attack detection using Intrusion Detection Systems (IDSs) is dealing with the huge number of alerts, which can be either critical single alerts and multi-step attack scenarios or false and non-critical alerts. In this paper we address the problem of managing alerts via a multi-layer alert correlation and filtering approach that can identify critical alerts after each step of correlation and filtering. After applying the approach to the LL DDoS 1.0 data set, we achieved very good results in terms of critical alert detection rates, the approach's running time, and its memory usage. Our method extracted all of the critical and multi-step attacks in the LL DDoS 1.0 data set while achieving almost a 90% reduction in the number of alerts.
---
paper_title: Applying Data Fusion in Collaborative Alerts Correlation
paper_content:
Due to various network intrusions, network security has always been a main concern of the network administrator. However, traditional security tools such as IDSs and firewalls can no longer play the role of effective defense mechanisms on their own. Instead, they only generate elementary alerts that lead to alert flooding, and they often have high false alert rates. Moreover, due to their weak collaboration-awareness, they cannot detect large distributed attacks such as a DDoS attack. In this paper, we present an efficient and effective model for collaborative alert analysis. Our system enhances alert verification using assets' contextual information. By applying alert fusion and using a precisely defined knowledge base in the correlation phase, it also provides a method to obtain general and synthetic alerts from the large volume of elementary alerts. Moreover, this system is able to reconstruct the attack scenarios of multi-step attacks. Experiments show the system can effectively distinguish false positives, and detect and predict large-scale attacks in their early stages.
---
paper_title: An Operational Framework for Alert Correlation using a Novel Clustering Approach
paper_content:
The Intrusion Detection System (IDS) is a well-known security feature that is widely implemented among practitioners. However, since the creation of IDSs, the enormous number of alerts generated by the detection sensors has always been a setback in the implementation environment. Moreover, this predicament has given rise to two further problems: the difficulty of processing the alerts accurately, and the degradation of performance in terms of time and memory capacity while processing these alerts. Thus, based on these problems, the purpose of our overall research is to construct a holistic solution that is able to reduce the number of alerts to be processed and at the same time produce high-quality attack scenarios that are meaningful to administrators in a timely manner. In this paper we present our proposed framework together with the results of our novel clustering method, architected solely with the intention of reducing the number of alerts generated by an IDS. The clustering method was tested against two datasets: the widely used DARPA dataset and a live dataset from a cyber attack monitoring unit that uses the Snort engine to capture alerts. The results obtained from the experiment are very promising; the clustering algorithm was able to reduce about 86.9% of the alerts used in the experiment. From these results we are able to highlight the contribution of the approach to practitioners in an actual working environment.
---
paper_title: Comprehensive approach to intrusion detection alert correlation
paper_content:
Alert correlation is a process that analyzes the alerts produced by one or more intrusion detection systems and provides a more succinct and high-level view of occurring or attempted intrusions. Even though the correlation process is often presented as a single step, the analysis is actually carried out by a number of components, each of which has a specific goal. Unfortunately, most approaches to correlation concentrate on just a few components of the process, providing formalisms and techniques that address only specific correlation issues. This paper presents a general correlation model that includes a comprehensive set of components and a framework based on this model. A tool using the framework has been applied to a number of well-known intrusion detection data sets to identify how each component contributes to the overall goals of correlation. The results of these experiments show that the correlation components are effective in achieving alert reduction and abstraction. They also show that the effectiveness of a component depends heavily on the nature of the data set analyzed.
---
paper_title: The Intrusion Detection Message Exchange Format (IDMEF)
paper_content:
The purpose of the Intrusion Detection Message Exchange Format (IDMEF) is to define data formats and exchange procedures for sharing information of interest to intrusion detection and response systems and to the management systems that may need to interact with them. This document describes a data model to represent information exported by intrusion detection systems and explains the rationale for using this model. An implementation of the data model in the Extensible Markup Language (XML) is presented, an XML Document Type Definition is developed, and examples are provided. This memo defines an Experimental Protocol for the Internet community.
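For illustration, the snippet below builds a heavily simplified IDMEF-like alert with the Python standard library; the element names follow the general structure described in RFC 4765, but the sketch is abridged and is not guaranteed to be schema-valid.

```python
import xml.etree.ElementTree as ET

msg = ET.Element("IDMEF-Message", version="1.0")
alert = ET.SubElement(msg, "Alert", messageid="abc123")
ET.SubElement(alert, "Analyzer", analyzerid="sensor-01")          # which sensor raised the alert
ET.SubElement(alert, "CreateTime").text = "2024-01-01T00:00:00Z"  # when it was raised
source_node = ET.SubElement(ET.SubElement(alert, "Source"), "Node")
address = ET.SubElement(source_node, "Address", category="ipv4-addr")
ET.SubElement(address, "address").text = "192.0.2.10"
ET.SubElement(alert, "Classification", text="Possible port scan")
print(ET.tostring(msg, encoding="unicode"))
```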
---
paper_title: IDS alerts correlation using grammar-based approach
paper_content:
Intrusion Detection System (IDS) is a security technology that attempts to identify intrusions. Defending against multi-step intrusions which prepare for each other is a challenging task. In this paper, we propose a novel approach to alert post-processing and correlation, the Alerts Parser. Different from most other alert correlation methods, our approach treats the alerts as tokens and uses modified version of the LR parser to generate parse trees representing the scenarii in the alerts. An Attribute Context-Free Grammar (ACF-grammar) is used for representing the multi-step attacks. Attack scenarii information and prerequisites/consequences knowledge are included together in the ACF-grammar enhancing the correlation results. The modified LR parser depends on these ACF-grammars to generate parse trees. The experiments were performed on two different sets of network traffic traces, using different open-source and commercial IDS sensors. The discovered scenarii are represented by Correlation Graphs (CGs). The experimental results show that Alerts Parser can work in parallel, effectively correlate related alerts with low false correlation rate, uncover the attack strategies, and generate concise CGs.
---
paper_title: Alert correlation in a cooperative intrusion detection framework
paper_content:
This paper presents the work we have done within the MIRADOR project to design CRIM, a cooperative module for intrusion detection systems (IDS). This module implements functions to manage, cluster, merge and correlate alerts. The clustering and merging functions recognize alerts that correspond to the same occurrence of an attack and create a new alert that merge data contained in these various alerts. Experiments show that these functions significantly reduce the number of alerts. However, we also observe that alerts we obtain are still too elementary to be managed by a security administrator. The purpose of the correlation function is thus to generate global and synthetic alerts. This paper focuses on the approach we suggest to design this function.
---
paper_title: Modeling network intrusion detection alerts for correlation
paper_content:
Signature-based network intrusion-detection systems (NIDSs) often report a massive number of simple alerts of low-level security-related events. Many of these alerts are logically involved in a single multi-stage intrusion incident and a security officer often wants to analyze the complete incident instead of each individual simple alert. This paper proposes a well-structured model that abstracts the logical relation between the alerts in order to support automatic correlation of those alerts involved in the same intrusion. The basic building block of the model is a logical formula called a capability. We use capability to abstract consistently and precisely all levels of accesses obtained by the attacker in each step of a multistage intrusion. We then derive inference rules to define logical relations between different capabilities. Based on the model and the inference rules, we have developed several novel alert correlation algorithms and implemented a prototype alert correlator. The experimental results of the correlator using several intrusion datasets demonstrate that the approach is effective in both alert fusion and alert correlation and has the ability to correlate alerts of complex multistage intrusions. In several instances, the alert correlator successfully correlated more than two thousand Snort alerts involved in massive scanning incidents. It also helped us find two multistage intrusions that were missed in auditing by the security officers.
---
paper_title: Modeling Multistep Cyber Attacks for Scenario Recognition
paper_content:
Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.
---
paper_title: A Novel Framework for Alert Correlation and Understanding
paper_content:
We propose a novel framework named Hidden Colored Petri-Net for Alert Correlation and Understanding (HCPN-ACU) for intrusion detection systems. This model is based upon the premise that intrusion detection may be viewed as an inference problem – in other words, we seek to show that system misusers are carrying out a sequence of steps to violate system security policies in some way, with earlier steps preparing for the later ones. In contrast with prior work, we separate actions from observations and assume that the attacker's actions themselves are unknown, but the attacker's behavior may result in alerts. These alerts are then used to infer the attacker's actions. We evaluate the model with the DARPA evaluation database. We conclude that HCPN-ACU can conduct alert fusion and intention recognition at the same time, reduce false positives and negatives, and provide a better understanding of the intrusion's progress by introducing confidence scores.
---
paper_title: Analyzing Intensive Intrusion Alerts via Correlation
paper_content:
Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive intrusions, not only will actual alerts be mixed with false alerts, but the number of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. Several complementary alert correlation methods have been proposed to address this problem. As one of these methods, we have developed a framework to correlate intrusion alerts using prerequisites of intrusions. In this paper, we continue this work to study the feasibility of this method in analyzing real-world, intensive intrusions. In particular, we develop three utilities (called adjustable graph reduction, focused analysis, and graph decomposition) to facilitate the analysis of large sets of correlated alerts. We study the effectiveness of the alert correlation method and these utilities through a case study with the network traffic captured at the DEF CON 8 Capture the Flag (CTF) event. Our results show that these utilities can simplify the analysis of large amounts of alerts, and also reveal several attack strategies that were repeatedly used in the DEF CON 8 CTF event.
---
paper_title: Techniques and tools for analyzing intrusion alerts
paper_content:
Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive attacks, not only will actual alerts be mixed with false alerts, but the amount of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. This paper presents a sequence of techniques to address this issue. The first technique constructs attack scenarios by correlating alerts on the basis of prerequisites and consequences of attacks. Intuitively, the prerequisite of an attack is the necessary condition for the attack to be successful, while the consequence of an attack is the possible outcome of the attack. Based on the prerequisites and consequences of different types of attacks, the proposed method correlates alerts by (partially) matching the consequences of some prior alerts with the prerequisites of some later ones. Moreover, to handle large collections of alerts, this paper presents a set of interactive analysis utilities aimed at facilitating the investigation of large sets of intrusion alerts. This paper also presents the development of a toolkit named TIAA, which provides system support for interactive intrusion analysis. This paper finally reports the experiments conducted to validate the proposed techniques with the 2000 DARPA intrusion detection scenario-specific datasets, and the data collected at the DEFCON 8 Capture the Flag event.
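The core correlation idea can be sketched as follows: each alert type is annotated with prerequisite and consequence predicates, and an earlier alert is linked to a later one when one of its consequences matches a prerequisite of the later alert on the same host. The rule base below is hypothetical and greatly simplified.

```python
RULES = {  # alert type -> (prerequisites, consequences), instantiated per target host
    "SCAN":         (set(),              {"KnowsService"}),
    "SADMIND_BOF":  ({"KnowsService"},   {"RootAccess"}),
    "INSTALL_DDOS": ({"RootAccess"},     {"DDoSDaemon"}),
}

def correlate(alerts):
    """alerts: list of (time, alert_type, host); returns (earlier_index, later_index) edges."""
    edges = []
    for i, (t1, a1, h1) in enumerate(alerts):
        for j, (t2, a2, h2) in enumerate(alerts):
            if t1 < t2 and h1 == h2 and RULES[a1][1] & RULES[a2][0]:
                edges.append((i, j))
    return edges

alerts = [(1, "SCAN", "10.0.0.5"), (2, "SADMIND_BOF", "10.0.0.5"), (3, "INSTALL_DDOS", "10.0.0.5")]
print(correlate(alerts))   # [(0, 1), (1, 2)] -> a three-step attack scenario
```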
---
paper_title: LAMBDA: A Language to Model a Database for Detection of Attacks
paper_content:
This article presents an attack description language. This language is based on logic and uses a declarative approach. In the language, the conditions and effects of an attack are described with logical formulas related to the state of the target computer system. The various steps of the attack process are associated to events, which may be combined using specific algebraic operators. These elements provide a description of the attack from the point of view of the attacker. They are complemented with additional elements corresponding to the point of view of intrusion detection systems and audit programs. These detection and verification aspects provide the language user with means to tailor the description of the attack to the needs of a specific intrusion detection system or a specific environment.
---
paper_title: A requires/provides model for computer attacks
paper_content:
Computer attacks are typically described in terms of a single exploited vulnerability or as a signature composed of a specific sequence of events. These approaches lack the ability to characterize complex scenarios or to generalize to unknown attacks. Rather than think of attacks as a series of events, we view attacks as a set of capabilities that provide support for abstract attack concepts that in turn provide new capabilities to support other concepts. This paper describes a flexible extensible model for computer attacks, a language for specifying the model, and how it can be used in security applications such as vulnerability analysis, intrusion detection and attack generation
---
paper_title: An Improved Framework for Intrusion Alert Correlation
paper_content:
Alert correlation analyzes the alerts from one or more collaborative Intrusion Detection Systems (IDSs) to produce a concise overview of security-related activity on the network. The process consists of multiple components, each responsible for a different aspect of the overall correlation goal. The order of the correlation components affects the performance of the correlation process. The total time needed for the whole process depends on the number of alerts processed in each component. This paper proposes a new correlation framework based on a model that reduces the number of processed alerts as early as possible by discarding the irrelevant and false alerts in the first phases. A new component is added to deal with the unrelated alerts. A modified algorithm for fusing the alerts is also proposed. The intruders' intentions are grouped into attack scenarios and then used to detect future attacks. The contribution of this paper includes an enhanced new framework for alert correlation, the implementation of the alert correlator model based on the framework, and the evaluation of the model using the DARPA 2000 intrusion detection scenario-specific datasets. The experimental results show that the correlation model is effective in achieving alert reduction and abstraction. The performance is improved after the attention is focused on correlating higher-severity alerts.
---
paper_title: TRINETR: an intrusion detection alert management systems
paper_content:
In response to the daunting threats of cyber attacks, a promising approach is computer and network forensics. Intrusion detection systems are an indispensable part of computer and network forensics. They are deployed to monitor network and host activities, including dataflows, information accesses, and so on. However, current intrusion detection products present many flaws, including alert flooding, too many false alerts, and isolated alerts. We describe an ongoing project to develop an intrusion alert management system, TRINETR. We present a collaborative architecture design for multiple intrusion detection systems to work together to detect real-time network intrusions. The architecture is composed of three parts: alert aggregation, knowledge-based alert evaluation and alert correlation. The architecture is aimed at reducing the alert overload by aggregating alerts from multiple sensors to generate condensed views, reducing false positives by integrating network and host system information into the alert evaluation process, and correlating events based on logical relations to generate global and synthesized alert reports. The first two parts of the architecture have been implemented and the implementation results are presented.
---
paper_title: M2D2: A formal data model for IDS alert correlation
paper_content:
At present, alert correlation techniques do not make full use of the information that is available. We propose a data model for IDS alert correlation called M2D2. It supplies four information types: information related to the characteristics of the monitored information system, information about the vulnerabilities, information about the security tools used for the monitoring, and information about the events observed. M2D2 is formally defined. As far as we know, no other formal model includes the vulnerability and alert parts of M2D2. Three examples of correlations are given. They are rigorously specified using the formal definition of M2D2. As opposed to already published correlation methods, these examples use more than the events generated by security tools; they make use of many concepts formalized in M2D2.
---
paper_title: Information modeling for intrusion report aggregation
paper_content:
The paper describes the SCYLLARUS approach to fusing reports from multiple intrusion detection systems (IDSes) to provide overall intrusion situation awareness. The overall view provided by SCYLLARUS centers around the site's security goals, aggregating large numbers of individual IDS reports based on their impact. The overall view reduces information overload by aggregating multiple IDS reports in a top-down view, and reduces false positives by weighing evidence provided by multiple IDSes and other information sources. Unlike previous efforts in this area, SCYLLARUS is centered around its intrusion reference model (IRM). The SCYLLARUS IRM contains both dynamic and static (configuration) information. A network entity/relationship database (NERD), providing information about the site's hardware and software; a security goal database, describing the site's objectives and security policy; and an event dictionary, describing important events, both intrusions and benign, comprise the static portion of the IRM. The set of IDS reports, the events SCYLLARUS hypothesizes to explain them, and the resulting judgment of the state of site security goals comprise the dynamic part of the IRM.
---
paper_title: NetSTAT: a network-based intrusion detection approach
paper_content:
Network-based attacks have become common and sophisticated. For this reason, intrusion detection systems are now shifting their focus from the hosts and their operating systems to the network itself. Network-based intrusion detection is challenging because network auditing produces large amounts of data, and different events related to a single intrusion may be visible in different places on the network. This paper presents NetSTAT, a new approach to network intrusion detection. By using a formal model of both the network and the attacks, NetSTAT is able to determine which network events have to be monitored and where they can be monitored.
---
paper_title: An approach to sensor correlation
paper_content:
We present an approach to intrusion detection (ID) sensor correlation that considers the problem in three phases: event aggregation, sensor coupling, and meta alert fusion. The approach is well suited to probabilistically based sensors such as EMERALD eBayes. We demonstrate the efficacy of the EMERALD alert thread mechanism, the sensor coupling in eBayes, and a prototype alert fusion capability towards achieving significant functionality in the field of ID sensor correlation.
---
paper_title: Analysis of Credential Stealing Attacks in an Open Networked Environment
paper_content:
This paper analyses the forensic data on credential stealing incidents over a period of 5 years across 5000 machines monitored at the National Center for Supercomputing Applications at the University of Illinois. The analysis conducted is the first attempt in an open operational environment (i) to evaluate the intricacies of carrying out SSH-based credential stealing attacks, (ii) to highlight and quantify key characteristics of such attacks, and (iii) to provide the system level characterization of such incidents in terms of distribution of alerts and incident consequences
---
paper_title: Alarm reduction and correlation in defence of IP networks
paper_content:
Society's critical infrastructures are increasingly dependent on IP networks. Intrusion detection and tolerance within data networks is therefore imperative for dependability in other domains such as telecommunications and future energy management networks. Today's data networks are protected by human operators who are overwhelmed by the massive information overload through false alarm rates of the protection mechanisms. This paper studies the role of alarm reduction and correlation in supporting the security administrator in an enterprise network. We present an architecture that incorporates intrusion detection systems as sensors, and provides improved alarm data to the human operator or to automated actuators. Alarm reduction and correlation via static and adaptive filtering, normalisation, and aggregation is demonstrated on the output from three sensors (Snort, Samhain and Syslog) used in a telecom test network.
---
paper_title: Aggregation and Correlation of Intrusion-Detection Alerts
paper_content:
This paper describes an aggregation and correlation algorithm used in the design and implementation of an intrusion-detection console built on top of the Tivoli Enterprise Console (TEC). The aggregation and correlation algorithm aims at acquiring intrusion-detection alerts and relating them together to expose a more condensed view of the security issues raised by intrusion-detection systems.
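A minimal sketch of the aggregation idea (the attribute set and window size are illustrative assumptions): alerts sharing the same signature, source, and target within a time window are fused into one meta-alert with a count.

```python
from collections import defaultdict

def aggregate(alerts, window=60):
    """alerts: dicts with keys time, sig, src, dst; returns meta-alerts with start/end/count."""
    groups = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        key = (a["sig"], a["src"], a["dst"])
        if groups[key] and a["time"] - groups[key][-1]["end"] <= window:
            meta = groups[key][-1]                      # extend the existing meta-alert
            meta["end"], meta["count"] = a["time"], meta["count"] + 1
        else:                                           # open a new meta-alert for this key
            groups[key].append({"sig": a["sig"], "src": a["src"], "dst": a["dst"],
                                "start": a["time"], "end": a["time"], "count": 1})
    return [m for metas in groups.values() for m in metas]

alerts = [{"time": t, "sig": "ICMP_SWEEP", "src": "198.51.100.7", "dst": "10.0.0.9"} for t in (0, 5, 12)]
print(aggregate(alerts))   # one meta-alert with count == 3
```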
---
paper_title: Analysis of security data from a large computing organization
paper_content:
This paper presents an in-depth study of the forensic data on security incidents that have occurred over a period of 5 years at the National Center for Supercomputing Applications at the University of Illinois. The proposed methodology combines automated analysis of data from security monitors and system logs with human expertise to extract and process relevant data in order to: (i) determine the progression of an attack, (ii) establish incident categories and characterize their severity, (iii) associate alerts with incidents, and (iv) identify incidents missed by the monitoring tools and examine the reasons for the escapes. The analysis conducted provides the basis for incident modeling and design of new techniques for security monitoring.
---
paper_title: On the use of different statistical tests for alert correlation – Short Paper
paper_content:
In this paper we analyze the use of different types of statistical tests for the correlation of anomaly detection alerts. We show that the Granger Causality Test, one of the few proposals that can be extended to the anomaly detection domain, strongly depends on good choices of a parameter which proves to be both sensitive and difficult to estimate. We propose a different approach based on a set of simpler statistical tests, and we prove that our criteria work well on a simplified correlation task, without requiring complex configuration parameters.
---
paper_title: Discovering novel attack strategies from INFOSEC alerts
paper_content:
Correlating security alerts and discovering attack strategies are important and challenging tasks for security analysts. Recently, there have been several proposed techniques to analyze attack scenarios from security alerts. However, most of these approaches depend on a priori and hard-coded domain knowledge that lead to their limited capabilities of detecting new attack strategies. In this paper, we propose an approach to discover novel attack strategies. Our approach includes two complementary correlation mechanisms based on two hypotheses of attack step relationship. The first hypothesis is that attack steps are directly related because an earlier attack enables or positively affects the later one. For this type of attack relationship, we develop a Bayesian-based correlation engine to correlate attack steps based on security states of systems and networks. The second hypothesis is that for some related attack steps, even though they do not have obvious and direct relationship in terms of security and performance measures, they still have temporal and statistical patterns. For this category of relationship, we apply time series and statistical analysis to correlate attack steps. The security analysts are presented with aggregated information on attack strategies from these two correlation engines. We evaluate our approach using DARPA’s Grand Challenge Problem (GCP) data sets. The results show that our approach can discover novel attack strategies and provide a quantitative analysis of attack scenarios.
---
paper_title: An online adaptive approach to alert correlation
paper_content:
The current intrusion detection systems (IDSs) generate a tremendous number of intrusion alerts. In practice, managing and analyzing this large number of low-level alerts is one of the most challenging tasks for a system administrator. In this context alert correlation techniques aiming to provide a succinct and high-level view of attacks gained a lot of interest. Although, a variety of methods were proposed, the majority of them address the alert correlation in the off-line setting. In this work, we focus on the online approach to alert correlation. Specifically, we propose a fully automated adaptive approach for online correlation of intrusion alerts in two stages. In the first online stage, we employ a Bayesian network to automatically extract information about the constraints and causal relationships among alerts. Based on the extracted information, we reconstruct attack scenarios on-the-fly providing network administrator with the current network view and predicting the next potential steps of the attacker. Our approach is illustrated using both the well known DARPA 2000 data set and the live traffic data collected from a Honeynet network.
---
paper_title: Processing intrusion detection alert aggregates with time series modeling
paper_content:
The main use of intrusion detection systems (IDS) is to detect attacks against information systems and networks. Normal use of the network and its functioning can also be monitored with an IDS. It can be used to control, for example, the use of management and signaling protocols, or the network traffic related to some less critical aspects of system policies. These complementary usages can generate large numbers of alerts, but still, in operational environment, the collection of such data may be mandated by the security policy. Processing this type of alerts presents a different problem than correlating alerts directly related to attacks or filtering incorrectly issued alerts. We aggregate individual alerts to alert flows, and then process the flows instead of individual alerts for two reasons. First, this is necessary to cope with the large quantity of alerts - a common problem among all alert correlation approaches. Second, individual alert's relevancy is often indeterminable, but irrelevant alerts and interesting phenomena can be identified at the flow level. This is the particularity of the alerts created by the complementary uses of IDSes. Flows consisting of alerts related to normal system behavior can contain strong regularities. We propose to model these regularities using non-stationary autoregressive models. Once modeled, the regularities can be filtered out to relieve the security operator from manual analysis of true, but low impact alerts. We present experimental results using these models to process voluminous alert flows from an operational network.
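As a simplified illustration of the flow-modeling idea (the paper uses non-stationary autoregressive models; the sketch below fits a plain AR(1) by least squares), regular alert flows are predicted from their own history and only intervals with unusually large residuals are escalated to the operator.

```python
def ar1_residual_flags(counts, k=2.0):
    """Fit x[t] ~ a*x[t-1] + b by least squares and flag residuals beyond k standard deviations."""
    x, y = counts[:-1], counts[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x) or 1e-9
    a = cov / var
    b = my - a * mx
    resid = [yi - (a * xi + b) for xi, yi in zip(x, y)]
    sd = (sum(r * r for r in resid) / n) ** 0.5 or 1e-9
    return [abs(r) > k * sd for r in resid]

# Per-interval counts of a routine alert flow with one unusual burst.
print(ar1_residual_flags([10, 12, 11, 13, 12, 60, 11, 12]))
```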
---
paper_title: IDS interoperability and correlation using IDMEF and commodity systems
paper_content:
Over the past decade Intrusion Detection Systems (IDS) have been steadily improving their efficiency and effectiveness in detecting attacks. This is particularly true with signature-based IDS due to progress in attack analysis and attack signature specification. At the same time system complexity, overall numbers of bugs and security vulnerabilities have increased. This has led to the recognition that in order to operate over the entire attack space, multiple IDS must be used, which need to interoperate with one another, and possibly also with other components of system security. This paper describes an experiment in IDS interoperation using the Intrusion Detection Message Exchange Format for the purpose of correlation analysis and in order to identify and address the problems associated with the effective use and management of multiple IDS. A study of the process of intrusion analysis demonstrates the benefits of multi-IDS interoperation and cooperation, as well as the significant benefits provided by alert analysis using a central relational database.
---
paper_title: Alert Fusion for a Computer Host Based Intrusion Detection System
paper_content:
Intrusions impose tremendous threats to today's computer hosts. Intrusions using security breaches to achieve unauthorized access or misuse of critical information can have catastrophic consequences. To protect computer hosts from the increasing threat of intrusion, various kinds of intrusion detection systems (IDSs) have been developed. The main disadvantages of current IDSs are a high false detection rate and the lack of post-intrusion decision support capability. To minimize these drawbacks, we propose an event-driven intrusion detection architecture which integrates subject-verb-object (SVO) multi-point monitors and an impact analysis engine. Alert fusion and verification models are implemented to provide more reasonable intrusion information from incomplete, inconsistent or imprecise alerts acquired by SVO monitors. DEVS formalism is used to describe the model based design approach. Finally we use the DEVS-JAVA simulation tool to show the feasibility of the proposed system
---
paper_title: Optimal IDS Sensor Placement and Alert Prioritization Using Attack Graphs
paper_content:
We optimally place intrusion detection system (IDS) sensors and prioritize IDS alerts using attack graph analysis. We begin by predicting all possible ways of penetrating a network to reach critical assets. The set of all such paths through the network constitutes an attack graph, which we aggregate according to underlying network regularities, reducing the complexity of analysis. We then place IDS sensors to cover the attack graph, using the fewest number of sensors. This minimizes the cost of sensors, including effort of deploying, configuring, and maintaining them, while maintaining complete coverage of potential attack paths. The sensor-placement problem we pose is an instance of the NP-hard minimum set cover problem. We solve this problem through an efficient greedy algorithm, which works well in practice. Once sensors are deployed and alerts are raised, our predictive attack graph allows us to prioritize alerts based on attack graph distance to critical assets.
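The sensor-placement step reduces to minimum set cover, approximated greedily: repeatedly pick the candidate location that monitors the most still-uncovered attack paths. The topology below is hypothetical.

```python
def greedy_sensor_placement(paths_covered_by):
    """paths_covered_by: {candidate_location: set of attack-path ids}; returns chosen locations."""
    uncovered = set().union(*paths_covered_by.values())
    chosen = []
    while uncovered:
        best = max(paths_covered_by, key=lambda loc: len(paths_covered_by[loc] & uncovered))
        gain = paths_covered_by[best] & uncovered
        if not gain:        # remaining paths cannot be covered by any candidate
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

candidates = {"fw-dmz": {1, 2, 3}, "core-switch": {3, 4}, "db-segment": {4, 5}, "wifi": {5}}
print(greedy_sensor_placement(candidates))   # ['fw-dmz', 'db-segment']
```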
---
paper_title: Time-efficient and cost-effective network hardening using attack graphs
paper_content:
Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally.
---
paper_title: Building Attack Scenarios through Integration of Complementary Alert Correlation Methods
paper_content:
Several alert correlation methods were proposed in the past several years to construct high-level attack scenarios from low-level intrusion alerts reported by intrusion detection systems (IDSs). These correlation methods have different strengths and limitations; none of them clearly dominate the others. However, all of these methods depend heavily on the underlying IDSs, and perform poorly when the IDSs miss critical attacks. In order to improve the performance of intrusion alert correlation and reduce the impact of missed attacks, this paper presents a series of techniques to integrate two complementary types of alert correlation methods: (1) those based on the similarity between alert attributes, and (2) those based on prerequisites and consequences of attacks. In particular, this paper presents techniques to hypothesize and reason about attacks possibly missed by IDSs based on the indirect causal relationship between intrusion alerts and the constraints they must satisfy. This paper also discusses additional techniques to validate the hypothesized attacks through raw audit data and to consolidate the hypothesized attacks to generate concise attack scenarios. The experimental results in this paper demonstrate the potential of these techniques in building high-level attack scenarios and reasoning about possibly missed attacks.
---
paper_title: A new alert correlation algorithm based on attack graph
paper_content:
Intrusion Detection Systems (IDS) are widely deployed in computer networks. As modern attacks are getting more sophisticated and the number of sensors and network nodes grows, the problem of false positives and alert analysis becomes more difficult to solve. Alert correlation was proposed to analyze alerts and to decrease false positives. Knowledge about the target system or environment is usually necessary for efficient alert correlation. For representing the environment information as well as potential exploits, the existing vulnerabilities and their Attack Graph (AG) is used. It is useful for networks to generate an AG and to organize certain vulnerabilities in a reasonable way. In this paper, we design a correlation algorithm based on AGs that is capable of detecting multiple attack scenarios for forensic analysis. It can be parameterized to adjust the robustness and accuracy. A formal model of the algorithm is presented and an implementation is tested to analyze the different parameters on a real set of alerts from a local network.
---
paper_title: An Intrinsic Graphical Signature Based on Alert Correlation Analysis for Intrusion Detection
paper_content:
We propose a graphical signature for intrusion detection given alert sequences. By correlating alerts with their temporal proximity, we build a probabilistic graph-based model to describe a group of alerts that form an attack or normal behavior. Using the models, we design a pairwise measure based on manifold learning to measure the dissimilarities between different groups of alerts. A large dissimilarity implies different behaviors between the two groups of alerts. Such measure can therefore be combined with regular classification methods for intrusion detection. We evaluate our framework mainly on Acer 2007, a private dataset gathered from a well-known Security Operation Center in Taiwan. The performance on the real data suggests that the proposed method can achieve high detection accuracy. Moreover, the graphical structures and the representation from manifold learning naturally provide the visualized result suitable for further analysis from domain experts.
---
paper_title: Alert Verification Determining the Success of Intrusion Attempts
paper_content:
Recently, intrusion detection systems (IDSs) have been increasingly brought to task for failing to meet the expectations that researchers and vendors were raising. Promises that IDSs would be capable of reliably identifying malicious activity never turned into reality. While virus scanners and firewalls have visible benefits and remain virtually unnoticed during normal operation, intrusion detection systems are known for producing a large number of alerts that are either not related to malicious activity (false positives) or not representative of a successful attack (non-relevant positives). Although tuning and proper configuration may eliminate the most obvious spurious alerts, the problem of the vast imbalance between actual and false or non-relevant alerts remains.
---
paper_title: Using Alert Verification to Identify Successful Intrusion Attempts
paper_content:
Intrusion detection systems monitor protected networks and attempt to identify evidence of malicious activity. When an attack is detected, an alert is produced, and, possibly, a countermeasure is executed. A perfect intrusion detection system would be able to identify all the attacks without raising any false alarms. In addition, a countermeasure would be executed only when an attack is actually successful. Unfortunately false alarms are commonplace in intrusion detection systems, and perfectly benign events are interpreted as malicious. In addition, non-relevant alerts are also common. These are alerts associated with attacks that were not successful. Such alerts should be tagged appropriately so that their priority can be lowered. The process of identifying alerts associated with successful attacks is called alert verification. This paper describes the different issues involved in alert verification and presents a tool that performs real-time verification of attacks detected by an intrusion detection system. The experimental evaluation of the tool shows that verification can dramatically reduce both false and non-relevant alerts.
---
paper_title: Verify results of network intrusion alerts using lightweight protocol analysis
paper_content:
We propose a method to verify the result of attacks detected by signature-based network intrusion detection systems using lightweight protocol analysis. The observation is that network protocols often have short meaningful status codes saved at the beginning of server responses upon client requests. A successful intrusion that alters the behavior of a network application server often results in an unexpected server response, which does not contain the valid protocol status code. This can be used to verify the result of the intrusion attempt. We then extend this method to verify the result of attacks that still generate valid protocol status code in the server responses. We evaluate this approach by augmenting Snort signatures and testing on real world data. We show that some simple changes to Snort signatures can effectively verify the result of attacks against the application servers, thus significantly improve the quality of alerts
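The verification idea can be sketched with a small table of protocol status codes that indicate a failed attempt (the codes below are illustrative, not the paper's signature augmentations): if the first bytes of the server's reply carry a failure code, the alert is downgraded.

```python
EXPECTED_FAILURE_CODES = {"http": {"400", "403", "404", "500"}, "ftp": {"500", "550"}}

def verify_alert(protocol, server_response):
    """Return 'likely-failed', 'likely-successful', or 'unknown' from the response status code."""
    parts = server_response.split()
    if protocol == "http" and len(parts) >= 2 and parts[0].startswith("HTTP/"):
        code = parts[1]
    elif protocol == "ftp" and parts and parts[0].isdigit():
        code = parts[0]
    else:
        return "unknown"
    return "likely-failed" if code in EXPECTED_FAILURE_CODES[protocol] else "likely-successful"

print(verify_alert("http", "HTTP/1.1 404 Not Found"))   # likely-failed
print(verify_alert("ftp", "230 Login successful."))     # likely-successful
```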
---
paper_title: ATLANTIDES: An Architecture for Alert Verification in Network Intrusion Detection Systems
paper_content:
We present an architecture designed for alert verification (i.e., to reduce false positives) in network intrusion-detection systems. Our technique is based on a systematic (and automatic) anomaly-based analysis of the system output, which provides useful context information regarding the network services. The false positives raised by the NIDS analyzing the incoming traffic (which can be either signature- or anomaly-based) are reduced by correlating them with the output anomalies. We designed our architecture for TCP-based network services which have a client/server architecture (such as HTTP). Benchmarks show a substantial reduction of false positives between 50% and 100%.
---
paper_title: Alert verification evasion through server response forging
paper_content:
Intrusion Detection Systems (IDSs) are necessary components in the defense of any computer network. Network administrators rely on IDSs to detect attacks, but ultimately it is their responsibility to investigate IDS alerts and determine the damage done. With the number of alerts increasing, IDS analysts have turned to automated methods to help with alert verification. This research investigates this next step of the intrusion detection process. Some alert verification mechanisms attempt to identify successful intrusion attempts based on server responses and protocol analysis. This research examines the server responses generated by four different exploits across four different Linux distributions. Next, three techniques capable of forging server responses on Linux operating systems are developed and implemented. This research shows that these new alert verification evasion methods can make attacks appear unsuccessful even though the exploitation occurs. This type of attack ignores detection and tries to evade the verification process.
---
paper_title: Monitoring IDS Background Noise Using EWMA Control Charts and Alert Information
paper_content:
Intrusion detection systems typically create large numbers of alerts, the processing of which is a time-consuming task for the user. This paper describes an application of exponentially weighted moving average (EWMA) control charts used to help the operator in alert processing. Depending on his objectives, some alerts are individually insignificant, but when aggregated they can provide important information on the monitored system's state. Thus it is not always the best solution to discard those alerts, for instance, by means of filtering, correlation, or by simply removing the signature. We deploy a widely used EWMA control chart for extracting trends and highlighting anomalies from alert information provided by sensors performing pattern matching. The aim is to make the output of verbose signatures more tolerable for the operator and yet allow him to obtain the useful information available. The applied method is described, and experimentation and its results on real-world data are presented. A test metric is proposed to evaluate the results.
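A minimal sketch of the chart (the baseline mean, standard deviation, and the parameters lam and L are illustrative assumptions): per-interval alert counts are exponentially smoothed and an interval is highlighted when the smoothed value leaves the control limits.

```python
def ewma_flags(counts, mu0, sigma0, lam=0.2, L=3.0):
    z, flags = mu0, []
    for i, x in enumerate(counts, start=1):
        z = lam * x + (1 - lam) * z                      # exponentially weighted moving average
        width = L * sigma0 * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))) ** 0.5
        flags.append(abs(z - mu0) > width)               # outside the time-varying control limits?
    return flags

# Hourly counts of a verbose signature: the later intervals drift well above the baseline of ~10.
print(ewma_flags([9, 11, 10, 12, 30, 34, 33, 10], mu0=10.0, sigma0=2.0))
```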
---
paper_title: Online Risk Assessment of Intrusion Scenarios Using D-S Evidence Theory
paper_content:
In the paper, an online risk assessment model based on D-S evidence theory is presented. The model can quantitate the risk caused by an intrusion scenario in real time and provide an objective evaluation of the target security state. The results of the online risk assessment show a clear and concise picture of both the intrusion progress and the target security state. The model makes full use of available information from both IDS alerts and protected targets. As a result, it can deal with uncertainties and subjectiveness very well in its evaluation process. In IDAM&IRS, the model serves as the foundation for intrusion response decision-making.
---
paper_title: Intrusion detection alert verification based on multi-level fuzzy comprehensive evaluation
paper_content:
Alert verification is a process which compares the information referred to by an alert with the configuration and topology information of its target system in order to determine if the alert is relevant to that system. It can reduce false positive alerts and irrelevant alerts. The paper presents an alert verification approach based on multi-level fuzzy comprehensive evaluation. It is effective in reducing false and irrelevant alerts, as demonstrated by our experiments. The algorithm can deal with uncertainties better than other alert verification approaches. The relevance score vectors obtained from the algorithm facilitate the formulation of fine-grained and flexible security policies, as well as further alert processing.
---
paper_title: Alert prioritization in Intrusion Detection Systems
paper_content:
Intrusion Detection Systems (IDSs) are designed to monitor user and/or network activity and generate alerts whenever abnormal activities are detected. The number of these alerts can be very large, making the task of security analysts difficult to manage. Furthermore, IDS alert management techniques, such as clustering and correlation, suffer from involving unrelated alerts in their processes and consequently provide imprecise results. In this paper, we propose a fuzzy-logic-based technique for scoring and prioritizing alerts generated by an IDS. In addition, we present an alert rescoring technique that leads to a further reduction of the number of alerts. The approach is validated using the 2000 DARPA intrusion detection scenario-specific datasets, and comparative results between the Snort IDS alert scoring and our scoring and prioritization scheme are presented.
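The scoring idea can be illustrated with hypothetical membership functions and weights (not the paper's actual rule base): alert attributes are mapped to fuzzy memberships in [0, 1] and aggregated into a single priority score.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def priority(severity, asset_value, relevance):
    """All inputs on a 0-10 scale; returns a fuzzy priority score in [0, 1]."""
    high_sev = triangular(severity, 4, 10, 16)      # membership in 'high severity'
    critical = triangular(asset_value, 5, 10, 15)   # membership in 'critical asset'
    applies = triangular(relevance, 3, 10, 17)      # membership in 'attack applies to target'
    return 0.4 * high_sev + 0.3 * critical + 0.3 * applies

print(round(priority(severity=8, asset_value=9, relevance=7), 2))   # ~0.68
```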
---
paper_title: A mission-impact-based approach to INFOSEC alarm correlation
paper_content:
We describe a mission-impact-based approach to the analysis of security alerts produced by spatially distributed heterogeneous information security (INFOSEC) devices, such as firewalls, intrusion detection systems, authentication services, and antivirus software. The intent of this work is to deliver an automated capability to reduce the time and cost of managing multiple INFOSEC devices through a strategy of topology analysis, alert prioritization, and common attribute-based alert aggregation. Our efforts to date have led to the development of a prototype system called the EMERALD Mission Impact Intrusion Report Correlation System, or M-Correlator. M-Correlator is intended to provide analysts (at all experience levels) a powerful capability to automatically fuse together and isolate those INFOSEC alerts that represent the greatest threat to the health and security of their networks.
---
paper_title: Reducing false positives in intrusion detection systems
paper_content:
A post-processing filter is proposed to reduce false positives in network-based intrusion detection systems. The filter comprises three components, each one of which is based upon statistical properties of the input alert set. Special characteristics of alerts corresponding to true attacks are exploited. These alerts may be observed in batches, which contain similarities in the source or destination IPs, or they may produce abnormalities in the distribution of alerts of the same signature. False alerts can be recognized by the frequency with which their signature triggers false positives. The filter architecture and design are discussed. Evaluation results using the DARPA 1999 dataset indicate that the proposed approach can significantly reduce the number and percentage of false positives produced by Snort (Roesch, 1999). Our filter reduced false positives by up to 75%.
---
paper_title: Snort - Lightweight Intrusion Detection for Networks
paper_content:
Network intrusion detection systems (NIDS) are an important part of any network security architecture. They provide a layer of defense which monitors network traffic for predefined suspicious activity or patterns, and alert system administrators when potential hostile traffic is detected. Commercial NIDS have many differences, but Information Systems departments must face the commonalities that they share such as significant system footprint, complex deployment and high monetary cost. Snort was designed to address these issues.
---
paper_title: Bro: A System for Detecting Network Intruders in Real-Time
paper_content:
We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an "event engine" that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a "policy script interpreter" that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications via syslog. We also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these, and give particulars of how Bro analyzes the six applications integrated into it so far: Finger, FTP, Portmapper, Ident, Telnet and Rlogin. The system is publicly available in source code form.
---
paper_title: VisFlowConnect: netflow visualizations of link relationships for security situational awareness
paper_content:
We present a visualization design to enhance the ability of an administrator to detect and investigate anomalous traffic between a local network and external domains. Central to the design is a parallel axes view which displays NetFlow records as links between two machines or domains while employing a variety of visual cues to assist the user. We describe several filtering options that can be employed to hide uninteresting or innocuous traffic such that the user can focus his or her attention on the more unusual network flows. This design is implemented in the form of VisFlowConnect, a prototype application which we used to study the effectiveness of our visualization approach. Using VisFlowConnect, we were able to discover a variety of interesting network traffic patterns. Some of these were harmless, normal behavior, but some were malicious attacks against machines on the network.
---
paper_title: Correlation between NetFlow System and Network Views for Intrusion Detection
paper_content:
We present several ways to correlate security events from two applications that visualize the same underlying data with two distinct views: system and network. Correlation of security events provide Security Engineers a better understanding of what is happening for enhanced security situational awareness. Visualization leverages human cognitive abilities and promotes quick mental connections between events that otherwise may be obscured in the volume of IDS alert messages.
---
paper_title: Network specific false alarm reduction in intrusion detection system
paper_content:
Intrusion Detection Systems (IDSs) are used to detect security violations in computer networks. IDSs usually produce a vast number of alarms, which include a large percentage of false alarms. One of the main reasons for such false alarm generation is that, in most cases, IDSs are run with a default set of signatures. In this paper, a scheme for network-specific false alarm reduction in IDSs is proposed. A threat profile of the network is created and IDS-generated alarms are correlated using a neural network. Experiments conducted in a test bed have successfully filtered out most of the false alarms for a range of attacks while maintaining the detection rate.
---
paper_title: NVisionIP: netflow visualizations of system state for security situational awareness
paper_content:
The number of attacks against large computer systems is currently growing at a rapid pace. Despite the best efforts of security analysts, large organizations are having trouble keeping on top of the current state of their networks. In this paper, we describe a tool called NVisionIP that is designed to increase the security analyst's situational awareness. As humans are inherently visual beings, NVisionIP uses a graphical representation of a class-B network to allow analysts to quickly visualize the current state of their network. We present an overview of NVisionIP along with a discussion of various types of security-related scenarios that it can be used to detect.
---
paper_title: Using Contextual Information for IDS Alarm Classification (Extended Abstract)
paper_content:
Signature-based intrusion detection systems are known to generate many noncritical alarms (alarms not related to a successful attack). Adding contextual information to IDSes is a promising avenue to identify noncritical alarms. Several approaches using contextual information have been suggested. However, it is not clear what are the benefits of using a specific approach. This paper establishes the effectiveness of using target configuration (i.e. operating system and applications) as contextual information for identifying noncritical alarms. Moreover, it demonstrates that current tools for OS discovery are not adequate for IDS context gathering.
---
paper_title: An Intrusion-Detection Model
paper_content:
A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system.
---
paper_title: MapReduce: simplified data processing on large clusters
paper_content:
MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
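Although the paper targets large clusters, the programming model itself is small; the sketch below (hypothetical log format) applies the word-count pattern to IDS logs by counting alerts per signature with a map step and a reduce step.

```python
from collections import defaultdict

def map_phase(log_lines):
    for line in log_lines:
        sig = line.split(",")[0]          # hypothetical CSV log: signature,src,dst
        yield sig, 1

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

logs = ["SCAN,10.0.0.1,10.0.0.9", "BOF,10.0.0.1,10.0.0.9", "SCAN,10.0.0.2,10.0.0.9"]
print(reduce_phase(map_phase(logs)))      # {'SCAN': 2, 'BOF': 1}
```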
---
| Title: Review: False alarm minimization techniques in signature-based intrusion detection systems: A survey
Section 1: Introduction
Description 1: Introduce the need for false alarm minimization in signature-based intrusion detection systems and the layout of the paper.
Section 2: Prior Work
Description 2: Discuss previous surveys and related works on false alarm minimization techniques in IDS.
Section 3: Overview
Description 3: Present an overview of various approaches proposed for false alarm minimization in signature-based IDS.
Section 4: Signature Enhancement
Description 4: Describe techniques to enhance IDS signatures with context information to reduce false alarms.
Section 5: Stateful Signatures
Description 5: Discuss the use of stateful signatures which analyze the state of the network to minimize false alarms.
Section 6: Vulnerability Signatures
Description 6: Explain the concept of using vulnerability signatures based on application semantics to improve detection and reduce false alarms.
Section 7: Alarm Mining
Description 7: Discuss various data mining techniques like clustering, classification, and frequent pattern mining used to minimize false alarms.
Section 8: Alarm Correlation
Description 8: Elaborate on different alarm correlation techniques to aggregate alarms and reconstruct attack scenarios for false alarm minimization.
Section 9: Alarm Verification
Description 9: Explain methods to verify the success of an attack corresponding to an alarm as a means to reduce false alarms.
Section 10: Flow Analysis
Description 10: Describe flow analysis used to study alarm patterns under normal and abnormal scenarios.
Section 11: Alarm Prioritization
Description 11: Discuss techniques that rate and prioritize alarms based on post-assessment or evaluation to reduce false alarms.
Section 12: Hybrid Approach
Description 12: Introduce hybrid approaches which combine multiple techniques to improve the detection accuracy and minimize false alarms.
Section 13: Commercial Tools
Description 13: Review commercial SIEM tools, their features, and performance in handling and minimizing false alarms.
Section 14: Research Questions
Description 14: List research questions that need to be addressed for future improvements in false alarm minimization techniques.
Section 15: Conclusion
Description 15: Summarize the survey, discuss the benefits of various techniques, and propose future directions for research in this domain. |
CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis | 11 | ---
paper_title: A CMOS Image Sensor With On-Chip Image Compression Based on Predictive Boundary Adaptation and Memoryless QTD Algorithm
paper_content:
This paper presents the architecture, algorithm, and VLSI hardware of image acquisition, storage, and compression on a single-chip CMOS image sensor. The image array is based on time domain digital pixel sensor technology equipped with nondestructive storage capability using an 8-bit static RAM device embedded at the pixel level. The pixel-level memory is used to store the uncompressed illumination data during the integration mode as well as the compressed illumination data obtained after the compression stage. An adaptive quantization scheme based on the fast boundary adaptation rule (FBAR) and a differential pulse code modulation (DPCM) procedure, followed by an online, least-storage quadrant tree decomposition (QTD) processing, is proposed, enabling a robust and compact image compression processor. A prototype chip including 64×64 pixels, read-out and control circuitry as well as an on-chip compression processor was implemented in 0.35 μm CMOS technology with a silicon area of 3.2×3.0 mm² and an overall power of 17 mW. Simulation and measurement results show compression figures corresponding to 0.6-1 bit per pixel (BPP), while maintaining reasonable peak signal-to-noise ratio levels.
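To make the DPCM idea concrete, the following Python sketch encodes a row of pixels with a sign-only (1-bit) residual and a simple step-adaptation rule; it is only a generic stand-in for the paper's FBAR/QTD scheme, and the parameters step, grow and shrink are illustrative assumptions:

def dpcm_adaptive(pixels, step=8.0, grow=1.25, shrink=0.9):
    # Sign-only DPCM sketch: each sample is coded by one bit (the sign of the
    # residual) and the quantizer step adapts to track the signal.
    pred, bits, recon = 0.0, [], []
    for p in pixels:
        residual = p - pred
        bit = 1 if residual >= 0 else 0
        bits.append(bit)
        pred += step if bit else -step
        pred = max(0.0, min(255.0, pred))
        # Adapt the step: widen on large residuals, narrow on small ones.
        step = step * grow if abs(residual) > step else max(1.0, step * shrink)
        recon.append(pred)
    return bits, recon

bits, recon = dpcm_adaptive([100, 104, 108, 112, 90])  # toy input row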
---
paper_title: Subband image coding using watershed and watercourse lines of the wavelet transform.
paper_content:
Reports progress in primitive-based image coding using nonorthogonal dyadic wavelets. A 3D isotropic wavelet is used to approximate the difference-of-Gaussians (D-o-G) operator. Convolution of the image with dilated versions of the wavelet produces three band-pass signals that approximate multiscale smoothed second derivatives. An additional convolution of the image with a Gaussian-shaped low-pass wavelet creates a fourth subband signal that preserves low-frequency information not described by the three band-pass signals. The authors show that the original image can be recovered from the watershed and watercourse lines of the three band-pass signals plus the lowpass subband signal. By thresholding the watershed/watercourse representation, subsampling the low-pass subband, and using edge post emphasis, the authors achieve data reduction with little loss of fidelity. Further compression of the watersheds and watercourses is achieved by chain coding their shapes and predictive coding their amplitudes prior to lossless arithmetic coding. Results are presented for grey-level test images at data rates between 0.1 and 0.3 b/pixel.
---
paper_title: A CMOS Image Sensor for Multi-Level Focal Plane Image Decomposition
paper_content:
An alternative image decomposition method that exploits prediction via nearby pixels has been integrated on the CMOS image sensor focal plane. The proposed focal plane decomposition is compared to the 2-D discrete wavelet transform (DWT) decomposition commonly used in state of the art compression schemes such as SPIHT and JPEG2000. The method achieves comparable compression performance with much lower computational complexity and allows image compression to be implemented directly on the sensor focal plane in a completely pixel parallel structure. A CMOS prototype chip has been fabricated and tested. The test results validate the pixel design and demonstrate that lossy prediction based focal plane image compression can be realized inside the sensor pixel array to achieve a high frame rate with much lower data readout volume. The features of the proposed decomposition scheme also benefit real-time, low rate and low power applications.
---
paper_title: Image Compression Based On Best Wavelet Packet Bases
paper_content:
An adaptive subband decomposition technique for efficient signal representation and compression is introduced and tested in a rate-distortion framework. The best wavelet packet bases exhibit an SFFT (short-time fast Fourier transform) subband decomposition at one source instance, a wavelet decomposition at another instance, or any intermediate wavelet packet decomposition at yet other instances to best match the signal's characteristics. For a given subband decomposition, commonly used information measures such as entropy, distortion (MSE), and rate (number of coefficients) are minimized over all subband decompositions to find the most efficient wavelet packet tree for the signal, and a method to construct such a wavelet packet tree is proposed. Image coding results using the joint rate-distortion cost measure demonstrated superior performance over the entropy-only information cost measure.
---
paper_title: A CMOS Imager With Focal Plane Compression Using Predictive Coding
paper_content:
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
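A minimal Python sketch of Golomb-Rice coding of prediction residuals, assuming a fixed parameter k >= 1 and a zigzag mapping of signed residuals (the chip performs the equivalent steps in hardware alongside the single-slope ADC, with its own parameter adaptation):

def golomb_rice_encode(value, k):
    # Golomb-Rice code with parameter k (divisor 2**k, k >= 1 assumed):
    # quotient in unary (q ones + terminating zero), remainder in k binary bits.
    q = value >> k
    r = value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0" + str(k) + "b")

def zigzag_map(residual):
    # Map signed prediction residuals to non-negative integers before coding.
    return 2 * residual if residual >= 0 else -2 * residual - 1

# Example: residual -3 maps to 5 -> quotient 1, remainder 01 with k = 2.
print(golomb_rice_encode(zigzag_map(-3), 2))  # '1001'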
---
paper_title: Memory-efficient architecture for JPEG 2000 coprocessor with large tile image
paper_content:
The experimental results show that using a larger tile size to perform JPEG 2000 coding results in better image quality (i.e., greater than or equal to a 256 × 256 tile image). However, processing large tile images also requires relatively high memory for the hardware implementation. For example, it would require tile memory of 256 K words to support the process of a 512 × 512 tile image in the straightforward architecture. To reduce hardware resources, we have proposed the quad code-block (QCB)-based discrete wavelet transform method to reduce the size of tile memory by a factor of 4. In this paper, the remaining 1/4 tile memory can be further reduced through two approaches: the zero-holding extension with slight image degradation and the QCB-block size extension without any image degradation. That is, it only requires 12 K words tile memory to support the process of a 512 × 512 tile image by using zero-holding extension, and 13.58 K words memory through QCB-block size extension. The low memory requirement makes the on-chip memory practicable.
---
paper_title: A VLSI architecture of JPEG2000 encoder
paper_content:
This paper proposes a VLSI architecture for a JPEG2000 encoder, which functionally consists of two parts: discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). For the DWT, a spatial combinative lifting algorithm (SCLA)-based scheme with both 5/3 reversible and 9/7 irreversible filters is adopted to reduce multiplication computations by 50% and 42%, respectively, compared with the conventional lifting-based implementation (LBI). For EBCOT, a dynamic memory control (DMC) strategy for Tier-1 encoding is adopted to reduce the on-chip wavelet coefficient storage by 60%, and a subband parallel-processing method is employed to speed up the EBCOT context formation (CF) process; an architecture for Tier-2 encoding is presented to reduce the on-chip bitstream buffering from full-tile size down to three-code-block size and to largely eliminate the iterations of the rate-distortion (RD) truncation.
---
paper_title: Compressive Acquisition CMOS Image Sensor: From the Algorithm to Hardware Implementation
paper_content:
In this paper, a new design paradigm referred to as compressive acquisition CMOS image sensors is introduced. The idea consists of compressing the data within each pixel prior to storage, and hence, reducing the size of the memory required for digital pixel sensor. The proposed compression algorithm uses a block-based differential coding scheme in which differential values are captured and quantized online. A time-domain encoding scheme is used in our CMOS image sensor in which the brightest pixel within each block fires first and is selected as the reference pixel. The differential values between subsequent pixels and the reference within each block are calculated and quantized, using a reduced number of bits as their dynamic range is compressed. The proposed scheme enables reduced error accumulation as full precision is used at the start of each block, while also enabling reduced memory requirement, and hence, enabling significant silicon area saving. A mathematical model is derived to analyze the performance of the algorithm. Experimental results on a field-programmable gate-array (FPGA) platform illustrate that the proposed algorithm enables more than 50% memory saving at a peak signal-to-noise ratio level of 30 dB with 1.5 bit per pixel.
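A simplified software sketch of the block-based differential idea, assuming digital pixel values: the brightest pixel of a block is kept at full precision as the reference and the remaining pixels are stored as reduced-range differences (here simply clamped rather than quantized as in the actual design; diff_bits is an assumed budget):

def compress_block(block, diff_bits=4):
    # Keep the brightest pixel as the full-precision block reference, then
    # store only clamped (reference - pixel) differences with few bits.
    ref = max(block)
    max_diff = (1 << diff_bits) - 1
    diffs = [min(ref - p, max_diff) for p in block]
    return ref, diffs

def decompress_block(ref, diffs):
    return [ref - d for d in diffs]

ref, diffs = compress_block([200, 197, 190, 188])
print(decompress_block(ref, diffs))  # [200, 197, 190, 188]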
---
paper_title: VLSI Architectures for JPEG 2000 EBCOT
paper_content:
EBCOT Tier-1, the entropy encoder of JPEG 2000, is highly time-consuming (typically more than 50% of the total encoding time) and is considered a bottleneck for the entire system. In this paper, the EBCOT Tier-1 algorithm, including its serial and parallel coding modes, is introduced. The VLSI architectures for EBCOT Tier-1 can be divided into two categories: the parallel bit-plane coding scheme (ParaBCS), where all bit-planes in a code block are coded in parallel, and the serial bit-plane coding scheme (SeriBCS), where all bit-planes in a code block are coded serially. These two schemes are compared in terms of system throughput, PSNR performance, power consumption, and memory size. Finally, two case studies (one based on SeriBCS and the other based on ParaBCS) are presented.
---
paper_title: MPEG (moving pictures expert group) video compression algorithm: a review
paper_content:
A video coding technique aimed at compression of video and its associated audio at about 1.5 Mbits/s has been developed by the Moving Pictures Expert Group (MPEG-Video). The Moving Pictures Expert Group is part of the ISO-IEC JTC1/SC2/WG11, an organization responsible for the standardization of coded representation of video and audio for information systems. The video compression technique developed by MPEG covers many applications, from interactive systems on CD-ROM to delivery of video information over telecommunications networks. The MPEG video compression algorithm relies on two basic techniques: block-based motion compensation for the reduction of temporal redundancy and transform-domain compression for the reduction of spatial redundancy. Motion compensation is applied with both predictive and interpolative techniques. The prediction error signal is further compressed with spatial redundancy reduction (DCT). The quality of the compressed video with the MPEG algorithm at about 1.5 Mbits/s has been compared to that of consumer-grade VCRs. The quality, cost and features of the MPEG video algorithm make it directly applicable to personal computers and workstations, thus allowing the development of many new applications integrating video, sound, images, text and graphics.
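The block-based motion compensation mentioned above rests on block matching; the following Python sketch shows an exhaustive sum-of-absolute-differences (SAD) search for one block (the MPEG standard does not mandate a particular search strategy, so this is only illustrative, and bsize/search are assumed parameters):

def best_motion_vector(cur, ref, bx, by, bsize=8, search=4):
    # Find the displacement (dx, dy) in the reference frame that minimises the
    # SAD for the block whose top-left corner is (bx, by) in the current frame.
    h, w = len(cur), len(cur[0])
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy <= h - bsize and 0 <= bx + dx <= w - bsize):
                continue
            sad = sum(abs(cur[by + r][bx + c] - ref[by + dy + r][bx + dx + c])
                      for r in range(bsize) for c in range(bsize))
            if sad < best[2]:
                best = (dx, dy, sad)
    return best  # (dx, dy, sad) of the best match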
---
paper_title: A CMOS image sensor with analog two-dimensional DCT-based compression circuits for one-chip cameras
paper_content:
This paper presents a CMOS image sensor with on-chip compression using an analog two-dimensional discrete cosine transform (2-D DCT) processor and a variable quantization level analog-to-digital converter (ADC). The analog 2-D DCT processor is essentially suitable for the on-sensor image compression, since the analog image sensor signal can be directly processed. The small and low-power nature of the analog design allows us to achieve low-power, low-cost, one-chip digital video cameras. The 8×8-point analog 2-D DCT processor is designed with fully differential switched-capacitor circuits to obtain sufficient precision for video compression purposes. An imager array has a dedicated eight-channel parallel readout scheme for direct encoding with the analog 2-D DCT processor. The variable level quantization after the 2-D DCT can be performed by the ADC at the same time. A prototype CMOS image sensor integrating these core circuits for compression is implemented based on triple-metal double-polysilicon 0.35-μm CMOS technology. Image encoding using the implemented analog 2-D DCT processor to the image captured by the sensor is successfully performed. The maximum peak signal-to-noise ratio (PSNR) is 36.7 dB.
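For reference, the 8×8 2-D DCT computed by the analog processor corresponds to the separable orthonormal transform sketched below in Python/NumPy (a digital reference model, not the switched-capacitor implementation):

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: entry [k, i] = c_k * cos((2i+1)k*pi/2n).
    c = np.array([np.sqrt(1.0 / n)] + [np.sqrt(2.0 / n)] * (n - 1))
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return c[:, None] * np.cos((2 * i + 1) * k * np.pi / (2 * n))

def dct2(block):
    # Separable 2-D DCT: C @ block @ C.T.
    block = np.asarray(block, dtype=float)
    C = dct_matrix(len(block))
    return C @ block @ C.T

print(np.round(dct2(np.ones((8, 8))), 3))  # only the DC coefficient is non-zero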
---
paper_title: JPEG: Still Image Data Compression Standard
paper_content:
Foreword. Acknowledgments. Trademarks. Introduction. Image Concepts and Vocabulary. Aspects of the Human Visual Systems. The Discrete Cosine Transform (DCT). Image Compression Systems. JPEG Modes of Operation. JPEG Syntax and Data Organization. Entropy Coding Concepts. JPEG Binary Arithmetic Coding. JPEG Coding Models. JPEG Huffman Entropy Coding. Arithmetic Coding Statistical. More on Arithmetic Coding. Probability Estimation. Compression Performance. JPEG Enhancements. JPEG Applications and Vendors. Overview of CCITT, ISO, and IEC. History of JPEG. Other Image Compression Standards. Possible Future JPEG Directions. Appendix A. Appendix B. References. Index.
---
paper_title: On sensor image compression
paper_content:
In this paper, we propose a novel image sensor which compresses image signals on the sensor plane. Since an image signal is compressed on the sensor plane by making use of the parallel nature of image signals, the amount of signal read out from the sensor can be significantly reduced. Thus, the potential applications of the proposed sensor are high pixel rate cameras and processing systems which require very high speed imaging or very high resolution imaging. The very high bandwidth is the fundamental limitation to the feasibility of those high pixel rate sensors and processing systems. Conditional replenishment is employed for the compression algorithm. In each pixel, current pixel value is compared to that in the last replenished frame. The value and the address of the pixel are extracted and coded if the magnitude of the difference is greater than a threshold. Analog circuits have been designed for processing in each pixel. A first prototype of a VLSI chip has been fabricated. Some results of experiments obtained by using the first prototype are shown in this paper.
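A software sketch of the conditional-replenishment rule described above, assuming digital pixel values rather than the per-pixel analog circuits of the chip; only pixels whose change exceeds the threshold are emitted as (address, value) events:

def conditional_replenishment(frame, last_replenished, threshold=16):
    # Emit (row, col, value) only where the current pixel differs from the last
    # replenished frame by more than the threshold, and update the stored frame.
    events = []
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if abs(value - last_replenished[r][c]) > threshold:
                events.append((r, c, value))
                last_replenished[r][c] = value
    return events

prev = [[0, 0], [0, 0]]
print(conditional_replenishment([[5, 40], [0, 0]], prev))  # [(0, 1, 40)]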
---
paper_title: Focal-Plane Algorithmically-Multiplying CMOS Computational Image Sensor
paper_content:
The CMOS image sensor computes two-dimensional convolution of video frames with a programmable digital kernel of up to 8 × 8 pixels in parallel directly on the focal plane. Three operations, a temporal difference, a multiplication and an accumulation are performed for each pixel readout. A dual-memory pixel stores two video frames. Selective pixel output sampling controlled by binary kernel coefficients implements binary-analog multiplication. Cross-pixel column-parallel bit-level accumulation and frame differencing are implemented by switched-capacitor integrators. Binary-weighted summation and concurrent quantization is performed by a bank of column-parallel multiplying analog-to-digital converters (MADCs). A simple digital adder performs row-wise accumulation during ADC readout. A 128 × 128 active pixel array integrated with a bank of 128 MADCs was fabricated in a 0.35 μm standard CMOS technology. The 4.4 mm × 2.9 mm prototype is experimentally validated in discrete wavelet transform (DWT) video compression and frame differencing.
---
paper_title: On the Complexity of Finite Sequences
paper_content:
A new approach to the problem of evaluating the complexity ("randomness") of finite sequences is presented. The proposed complexity measure is related to the number of steps in a self-delimiting production process by which a given sequence is presumed to be generated. It is further related to the number of distinct substrings and the rate of their occurrence along the sequence. The derived properties of the proposed measure are discussed and motivated in conjunction with other well-established complexity criteria.
---
paper_title: PRISM: A Video Coding Paradigm With Motion Estimation at the Decoder
paper_content:
We describe PRISM, a video coding paradigm based on the principles of lossy distributed compression (also called source coding with side information or Wyner-Ziv coding) from multiuser information theory. PRISM represents a major departure from conventional video coding architectures (e.g., the MPEGx, H.26x families) that are based on motion-compensated predictive coding, with the goal of addressing some of their architectural limitations. PRISM allows for two key architectural enhancements: (1) inbuilt robustness to "drift" between encoder and decoder and (2) the feasibility of a flexible distribution of computational complexity between encoder and decoder. Specifically, PRISM enables transfer of the computationally expensive video encoder motion-search module to the video decoder. Based on this capability, we consider an instance of PRISM corresponding to a near reversal in codec complexities with respect to today's codecs (leading to a novel light encoder and heavy decoder paradigm), in this paper. We present encouraging preliminary results on real-world video sequences, particularly in the realm of transmission losses, where PRISM exhibits the characteristic of rapid recovery, in contrast to contemporary codecs. This renders PRISM as an attractive candidate for wireless video applications.
---
paper_title: The rate-distortion function for source coding with side information at the decoder
paper_content:
Let $\{(X_k, Y_k)\}_{k=1}^{\infty}$ be a sequence of independent drawings of a pair of dependent random variables $X, Y$. Let us say that $X$ takes values in the finite set $\mathcal{X}$. It is desired to encode the sequence $\{X_k\}$ in blocks of length $n$ into a binary stream of rate $R$, which can in turn be decoded as a sequence $\{\hat{X}_k\}$, where $\hat{X}_k \in \hat{\mathcal{X}}$, the reproduction alphabet. The average distortion level is $(1/n)\sum_{k=1}^{n} E[D(X_k, \hat{X}_k)]$, where $D(x,\hat{x}) \geq 0$, $x \in \mathcal{X}$, $\hat{x} \in \hat{\mathcal{X}}$, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information $\{Y_k\}$. In this paper we determine the quantity $R^{\ast}(d)$, defined as the infimum of rates $R$ such that (with $\varepsilon > 0$ arbitrarily small and with suitably large $n$) communication is possible in the above setting at an average distortion level (as defined above) not exceeding $d + \varepsilon$. The main result is that $R^{\ast}(d) = \inf[I(X;Z) - I(Y;Z)]$, where the infimum is with respect to all auxiliary random variables $Z$ (which take values in a finite set $\mathcal{Z}$) that satisfy: i) $Y, Z$ conditionally independent given $X$; ii) there exists a function $f: \mathcal{Y} \times \mathcal{Z} \rightarrow \hat{\mathcal{X}}$ such that $E[D(X, f(Y,Z))] \leq d$. Let $R_{X|Y}(d)$ be the rate-distortion function which results when the encoder as well as the decoder has access to the side information $\{Y_k\}$. In nearly all cases it is shown that when $d > 0$ then $R^{\ast}(d) > R_{X|Y}(d)$, so that knowledge of the side information at the encoder permits transmission of the $\{X_k\}$ at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of $\{X_k\}$, i.e., $d = \varepsilon$ for any $\varepsilon > 0$, knowledge of the side information at the encoder does not allow a reduction of the transmission rate.
---
paper_title: Code and parse trees for lossless source encoding
paper_content:
This paper surveys the theoretical literature on fixed-to-variable-length lossless source code trees, called code trees, and on variable-length-to-fixed lossless source code trees, called parse trees. In particular, the following code tree topics are outlined in this survey: characteristics of the Huffman (1952) code tree; Huffman-type coding for infinite source alphabets and universal coding; the Huffman problem subject to a lexicographic constraint, or the Hu-Tucker (1982) problem; the Huffman problem subject to maximum codeword length constraints; code trees which minimize other functions besides average codeword length; coding for unequal cost code symbols, or the Karp problem, and finite state channels; and variants of Huffman coding in which the assignment of 0s and 1s within codewords is significant such as bidirectionality and synchronization. The literature on parse tree topics is less extensive. Treated here are: variants of Tunstall (1968) parsing; dualities between parsing and coding; dual tree coding in which parsing and coding are combined to yield variable-length-to-variable-length codes; and parsing and random number generation. Finally, questions related to counting and representing code and parse trees are also discussed.
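As a concrete example of the fixed-to-variable-length code trees surveyed above, the following Python sketch builds a standard Huffman code from symbol frequencies (the tie-breaking counter is only there to keep heap comparisons well defined):

import heapq
from itertools import count

def huffman_code(freqs):
    # Build a Huffman code from {symbol: frequency}; returns symbol -> codeword.
    tiebreak = count()
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, next(tiebreak), merged))
    return heap[0][2]

print(huffman_code({"a": 5, "b": 2, "c": 1, "d": 1}))
# e.g. {'b': '00', 'c': '010', 'd': '011', 'a': '1'}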
---
paper_title: Subband image coding using watershed and watercourse lines of the wavelet transform.
paper_content:
Reports progress in primitive-based image coding using nonorthogonal dyadic wavelets. A 3D isotropic wavelet is used to approximate the difference-of-Gaussians (D-o-G) operator. Convolution of the image with dilated versions of the wavelet produces three band-pass signals that approximate multiscale smoothed second derivatives. An additional convolution of the image with a Gaussian-shaped low-pass wavelet creates a fourth subband signal that preserves low-frequency information not described by the three band-pass signals. The authors show that the original image can be recovered from the watershed and watercourse lines of the three band-pass signals plus the lowpass subband signal. By thresholding the watershed/watercourse representation, subsampling the low-pass subband, and using edge post emphasis, the authors achieve data reduction with little loss of fidelity. Further compression of the watersheds and watercourses is achieved by chain coding their shapes and predictive coding their amplitudes prior to lossless arithmetic coding. Results are presented for grey-level test images at data rates between 0.1 and 0.3 b/pixel.
---
paper_title: Image Compression Based On Best Wavelet Packet Bases
paper_content:
An adaptive subband decomposition technique for efficient signal representation and compression is introduced and tested in a rate-distortion framework. The best wavelet packet bases exhibit an SFFT (short-time fast Fourier transform) subband decomposition at one source instance, a wavelet decomposition at another instance, or any intermediate wavelet packet decomposition at yet other instances to best match the signal's characteristics. For a given subband decomposition, commonly used information measures such as entropy, distortion (MSE), and rate (number of coefficients) are minimized over all subband decompositions to find the most efficient wavelet packet tree for the signal, and a method to construct such a wavelet packet tree is proposed. Image coding results using the joint rate-distortion cost measure demonstrated superior performance over the entropy-only information cost measure.
---
paper_title: Wyner-Ziv coding of motion video
paper_content:
In current interframe video compression systems, the encoder performs predictive coding to exploit the similarities of successive frames. The Wyner-Ziv theorem on source coding with side information available only at the decoder suggests that an asymmetric video codec, where individual frames are encoded separately, but decoded conditionally (given temporally adjacent frames) could achieve similar efficiency. We report the first results on a Wyner-Ziv coding scheme for motion video that uses intraframe encoding, but interframe decoding.
---
paper_title: A Universal Algorithm for Sequential Data Compression
paper_content:
A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
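A greedy Python sketch in the spirit of the sliding-window parsing analysed in the paper: each token is (offset, length, next_symbol) pointing back into the already-encoded prefix (the window parameter and the greedy longest-match rule are simplifying assumptions, not the exact scheme of the paper):

def lz77_parse(data, window=255):
    # Greedy sliding-window Lempel-Ziv parse into (offset, length, next_symbol).
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for j in range(start, i):
            length = 0
            while (i + length < len(data) - 1 and
                   data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

print(lz77_parse("abababc"))  # [(0, 0, 'a'), (0, 0, 'b'), (2, 4, 'c')]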
---
paper_title: Arithmetic coding revisited
paper_content:
During its long gestation in the 1970s and early 1980s, arithmetic coding was widely regarded more as an academic curiosity than a practical coding technique. One factor that helped it gain the popularity it enjoys today was the publication in 1987 of source code for a multi symbol arithmetic coder in Communications of the ACM. Now (1995), our understanding of arithmetic coding has further matured, and it is timely to review the components of that implementation and summarise the improvements that we and other authors have developed since then. We also describe a novel method for performing the underlying calculation needed for arithmetic coding. Accompanying the paper is a "Mark II" implementation that incorporates the improvements we suggest. The areas examined include: changes to the coding procedure that reduce the number of multiplications and divisions and permit them to be done to low precision; the increased range of probability approximations and alphabet sizes that can be supported using limited precision calculation; data structures for support of arithmetic coding on large alphabets; the interface between the modelling and coding subsystems; the use of enhanced models to allow high performance compression. For each of these areas, we consider how the new implementation differs from the CACM package.
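An idealised floating-point sketch of the interval-narrowing idea behind arithmetic coding; practical coders use the incremental, limited-precision integer arithmetic discussed in the paper, so this is purely illustrative:

def arithmetic_encode(symbols, probs):
    # Map the message to a sub-interval of [0, 1) whose width is the product of
    # the symbol probabilities; any number inside it identifies the message.
    low, high = 0.0, 1.0
    cum, ranges = 0.0, {}
    for s, p in probs.items():
        ranges[s] = (cum, cum + p)
        cum += p
    for s in symbols:
        span = high - low
        lo_s, hi_s = ranges[s]
        low, high = low + span * lo_s, low + span * hi_s
    return (low + high) / 2

print(arithmetic_encode("aab", {"a": 0.6, "b": 0.4}))  # ~0.288, inside [0.216, 0.36)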
---
paper_title: MPEG-4 video and its potential for future multimedia services
paper_content:
The Moving Picture Experts Group (MPEG) committee, which originated the MPEG-1 and MPEG-2 video and audio compression standards, is currently developing MPEG-4 with wide industry participation. MPEG-4 is targeted for interactive multimedia applications and will become an international standard in 1998. It is expected that MPEG-4 will become the enabling technology for multimedia audio-visual communications as much as MPEG-2 has become the enabling technology for digital television. The purpose of the paper is to discuss the scope and potential of the MPEG-4 standardization activities for networked interactive multimedia applications with particular emphasis on the MPEG-4 video standard.
---
paper_title: Distributed Video Coding
paper_content:
Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.
---
paper_title: JPEG: Still Image Data Compression Standard
paper_content:
Foreword. Acknowledgments. Trademarks. Introduction. Image Concepts and Vocabulary. Aspects of the Human Visual Systems. The Discrete Cosine Transform (DCT). Image Compression Systems. JPEG Modes of Operation. JPEG Syntax and Data Organization. Entropy Coding Concepts. JPEG Binary Arithmetic Coding. JPEG Coding Models. JPEG Huffman Entropy Coding. Arithmetic Coding Statistical. More on Arithmetic Coding. Probability Estimation. Compression Performance. JPEG Enhancements. JPEG Applications and Vendors. Overview of CCITT, ISO, and IEC. History of JPEG. Other Image Compression Standards. Possible Future JPEG Directions. Appendix A. Appendix B. References. Index.
---
paper_title: MPEG-4 very low bit rate video
paper_content:
The MPEG video group is currently developing the so-called MPEG-4 video coding standard, targeted for future interactive multimedia video communications calling for content-based functionalities, universal access in error prone environments and high coding efficiency. Besides the provisions for content-based functionalities the MPEG-4 video standard will assist the efficient storage and transmission of very low bit rate video in error prone environments. This paper outlines the techniques that are currently being investigated by MPEG-4 to provide universal accessibility of video at very low bit rates and discusses the scope of some of the promising techniques under discussion.
---
paper_title: Compression of Individual Sequences via Variable-Rate Coding
paper_content:
Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence $x$ a quantity $\rho(x)$ is defined, called the compressibility of $x$, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for $x$ by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of $\rho(x)$ allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.
---
paper_title: Charge-based prediction circuits for focal plane image compression
paper_content:
This work presents the design of a computational charge-based circuit to be part of a focal plane compression chip. The image compression scheme pursued is predictive coding. The proposed circuit computes the prediction error at every pixel. It carries out the computations by integrating the photocurrents of the pixels in a small neighborhood. The prediction weights for every pixel can be changed by changing the switching timing of the circuit making possible the use of adaptive prediction algorithms. The circuit is compact and can be integrated at the pixel level.
---
paper_title: A CMOS imager with focal plane compression
paper_content:
A focal plane video compression integrated circuit is presented. The design consists of a 128 × 128 pixel array and a bank of column-level processors. Each one of the column-level processors performs the tasks of image decorrelation, quantization, and entropy encoding. The chip provides at its output a compressed bit stream. The integration of the quantizer and the entropy encoder at the column level is possible by sharing circuitry between a single-slope analog-to-digital converter and a Golomb-Rice entropy encoder. In addition, the design includes a low-complexity algorithm for the adaptation of the Golomb-Rice coder to the statistics of the video signal. The design has been fully verified through simulations and has been implemented in a 0.35 μm CMOS technology. The chip layout occupies an area of 7 × 5 mm².
---
paper_title: A CMOS Imager With Focal Plane Compression Using Predictive Coding
paper_content:
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
---
paper_title: CMOS wavelet compression imager architecture
paper_content:
The CMOS imager architecture implements ΔΣ-modulated Haar wavelet image compression on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental over-sampling analog-to-digital converters (ADCs). Each ADC performs column-wise distributed focal-plane sampling and concurrent signed weighted average quantization, realizing a one-dimensional spatial Haar wavelet transform. A digital delay and adder loop performs spatial accumulation over multiple adjacent ADC outputs. This amounts to computing a two-dimensional Haar wavelet transform, with no overhead in time and negligible overhead in area compared to a baseline digital imager architecture. The architecture is experimentally validated on a 0.35 micron CMOS prototype containing a bank of first-order incremental oversampling ADCs computing the Haar wavelet transform on an emulated pixel array output. The architecture yields simulated computational throughput of 1.4 GMACS with SVGA imager resolution at 30 frames per second.
---
paper_title: Predictive coding on-sensor compression
paper_content:
This paper presents the design and measurements of a predictive coding on-sensor compression CMOS imager. Predictive coding is employed to decorrelate the image. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The decorrelated image is encoded with a bank of column-parallel entropy encoders. Each encoder is combined with a single-slope analog-to-digital converter (ADC) to reduce area complexity and power consumption. The area savings resulting from such a combination make it possible to integrate an ADC and an entropy encoder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
---
paper_title: A CMOS imager with pixel prediction for image compression
paper_content:
A predictive approach to on-chip focal plane compression is presented. This approach, unlike the transform coding techniques, is based on pixel prediction. In this technique, the value of each pixel is predicted by considering the values of its closest neighbor pixels. An analog circuit has been designed to accomplish a simple prediction function based on the values of two neighbor pixels. To demonstrate the approach, a prototype chip containing an imager and the prediction circuits has been designed, implemented, and tested.
---
paper_title: Focal-Plane Spatially Oversampling CMOS Image Compression Sensor
paper_content:
Image compression algorithms employ computationally expensive spatial convolutional transforms. The CMOS image sensor performs spatially compressing image quantization on the focal plane, yielding digital output at a rate proportional to the mere information rate of the video. A bank of column-parallel first-order incremental ΔΣ-modulated analog-to-digital converters (ADCs) performs column-wise distributed focal-plane oversampling of up to eight adjacent pixels and concurrent weighted average quantization. The number of samples per pixel and the switched-capacitor sampling sequence order set the amplitude and sign of the pixel coefficient, respectively. A simple digital delay and adder loop performs spatial accumulation over up to eight adjacent ADC outputs during readout. This amounts to computing a two-dimensional block matrix transform with a programmable kernel of up to 8×8 pixels in parallel for all columns. Noise shaping reduces power dissipation below that of a conventional digital imager while the need for a peripheral DSP is eliminated. A 128 × 128 active pixel array integrated with a bank of 128 ΔΣ-modulated ADCs was fabricated in a 0.35-μm CMOS technology. The 3.1 mm × 1.9 mm prototype captures 8-bit digital video at 30 frames/s and yields 4 GMACS projected computational throughput when scaled to HDTV 1080i resolution in discrete cosine transform (DCT) compression.
---
paper_title: A novel integration of on-sensor wavelet compression for a CMOS imager
paper_content:
A novel integration of image compression and sensing is proposed to enhance the performance of a CMOS image sensor. By integrating a compression function onto the sensor focal plane, the image signal to be read out can be significantly reduced and consequently the pixel rate can be increased. This can be applied to overcome the communication bottleneck for high-resolution image sensing and high frame-rate image sensing or for power- and bandwidth-constrained devices such as cell phones. A modified Haar wavelet transform is implemented as the compression scheme. A simple but efficient computation design is developed to implement the transform on-chip.
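One level of a separable 2-D Haar decomposition (averages and differences), sketched in Python as a reference model for the kind of transform computed on-chip; the exact modified Haar variant of the paper may differ in normalisation:

def haar_1d(v):
    # One level of the 1-D Haar transform: pairwise averages then differences.
    half = len(v) // 2
    avg = [(v[2 * i] + v[2 * i + 1]) / 2.0 for i in range(half)]
    diff = [(v[2 * i] - v[2 * i + 1]) / 2.0 for i in range(half)]
    return avg + diff

def haar_2d(block):
    # Separable 2-D Haar: rows, then columns, producing LL, LH, HL, HH subbands.
    rows = [haar_1d(r) for r in block]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

print(haar_2d([[1, 2, 3, 4],
               [5, 6, 7, 8],
               [9, 10, 11, 12],
               [13, 14, 15, 16]]))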
---
paper_title: A High-Speed CMOS Image Sensor with On-chip Parallel Image Compression Circuits
paper_content:
This paper presents a high-speed CMOS image sensor with on-chip parallel image compression circuits. The chip consists of a pixel array, an A/D converter array with a noise canceling function, an image compression processing element array, and buffer memories. The image compression processing element is implemented with a 4×4-point discrete cosine transform (DCT) and a modified zigzag scanner with 4 blocks. A prototype high-speed CMOS image sensor integrating the image compression circuits is implemented based on 1-poly 5-metal 0.25-μm CMOS technology. Image encoding of the images captured by the high-speed image sensor using the implemented parallel image compression circuits is successfully performed at 3,000 frames/s.
---
paper_title: A compact-pixel tri-mode vision sensor
paper_content:
We present a custom image sensor capable of reporting intensity, spatial contrast, and temporal difference images at the pixel level. The smart pixel is composed of only 11 transistors, allowing tight integration of different functionalities in a 16×21 µm² pixel area. The sensor array is 128×128 pixels, with a fill factor of 42%, and operates at 800 fps and 13M events/s with a power consumption of 1.2 mW.
---
paper_title: A compressed digital output CMOS image sensor with analog 2-D DCT processors and ADC/quantizer
paper_content:
Progress in CMOS-based image sensors is creating opportunities for a low-cost, low-power one-chip video camera with digitizing, signal processing and image compression. Such a smart camera head acquires compressed digital moving pictures directly into portable multimedia computers. Video encoders using a moving picture coding standard such as MPEG and H.26x are not always suitable for integration of image encoding on the image sensor, because of the complexity and the power dissipation. On-sensor image compression schemes, such as a CCD image sensor for lossless image compression and a CMOS image sensor with pixel-level interframe coding, have been reported. A one-chip digital camera with on-sensor video compression is shown in the block diagram. The chip contains a 128 × 128-pixel sensor, 8-channel parallel read-out circuits, an analog 2-dimensional discrete cosine transform (2D DCT) processor and a variable quantization-level ADC (ADC/Q).
---
paper_title: A CMOS smart image sensor LSI for focal-plane compression
paper_content:
A CMOS smart image sensor LSI with video compression is presented. The proposed on-sensor compression scheme using an analog 2-D DCT processor is particularly useful for achieving a low-power one-chip digital camera. A prototype image sensor LSI has been developed using 0.35 μm double-polysilicon, triple-metal CMOS technology. Image coding using the implemented LSI has been demonstrated.
---
paper_title: CMOS image compression sensor with algorithmically-multiplying ADCs
paper_content:
A 128×128 CMOS image compression sensor fabricated in a 0.35µm CMOS process is reported. It computes block-matrix and convolutional image transforms with digital kernels of up to 8×8 pixels directly on the focal plane. A pixel output is sampled only when the corresponding bit of the kernel coefficient is one. Bit-wise accumulation of adjacent pixel outputs in a column is performed by the switched-capacitor accumulator circuit. A column-parallel algorithmic multiplying ADC performs binary-weighted summation by adding the accumulator circuit outputs with cyclic residues of the same binary weight. The signal range is maintained by generating two bits per cycle. The imager performs three computations per pixel readout. Image compression experimental results at 30fps and 8-bit output resolution are presented.
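The bit-serial multiply-accumulate idea, where pixels are sampled only for kernel bits equal to one and the partial sums are later combined with binary weights, can be sketched in software as follows (the chip does this in the analog domain via the multiplying ADC; nbits is an assumed kernel bit width and the kernel is taken as non-negative for simplicity):

def binary_kernel_mac(pixels, kernel, nbits=4):
    # For each bit plane of the integer kernel, accumulate only the pixels whose
    # coefficient has that bit set, then combine partial sums with binary weights.
    total = 0
    for bit in range(nbits):
        partial = sum(p for p, k in zip(pixels, kernel) if (k >> bit) & 1)
        total += partial << bit
    return total

# Equals the direct dot product for non-negative kernels:
print(binary_kernel_mac([10, 20, 30], [3, 0, 5]))  # 10*3 + 20*0 + 30*5 = 180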
---
paper_title: CMOS Image Sensors with Video Compression
paper_content:
This paper describes CMOS image sensors integrating video compression circuits. The on-sensor compression is particularly useful for the low-power design of moving picture compression hardware, which is demanded especially in the mobile computing and telephony. Recent progress of the CMOS image sensor technology allows us to realise the integration of high-performance image sensor and computational functions on a chip. An example of the CMOS image sensor with compression is described. Prospects of the CMOS image sensors with moving picture compression are discussed.
---
paper_title: A CMOS image sensor with analog two-dimensional DCT-based compression circuits for one-chip cameras
paper_content:
This paper presents a CMOS image sensor with on-chip compression using an analog two-dimensional discrete cosine transform (2-D DCT) processor and a variable quantization level analog-to-digital converter (ADC). The analog 2-D DCT processor is essentially suitable for the on-sensor image compression, since the analog image sensor signal can be directly processed. The small and low-power nature of the analog design allows us to achieve low-power, low-cost, one-chip digital video cameras. The 8×8-point analog 2-D DCT processor is designed with fully differential switched-capacitor circuits to obtain sufficient precision for video compression purposes. An imager array has a dedicated eight-channel parallel readout scheme for direct encoding with the analog 2-D DCT processor. The variable level quantization after the 2-D DCT can be performed by the ADC at the same time. A prototype CMOS image sensor integrating these core circuits for compression is implemented based on triple-metal double-polysilicon 0.35-μm CMOS technology. Image encoding using the implemented analog 2-D DCT processor to the image captured by the sensor is successfully performed. The maximum peak signal-to-noise ratio (PSNR) is 36.7 dB.
---
paper_title: Focal-Plane Algorithmically-Multiplying CMOS Computational Image Sensor
paper_content:
The CMOS image sensor computes two-dimensional convolution of video frames with a programmable digital kernel of up to 8 × 8 pixels in parallel directly on the focal plane. Three operations, a temporal difference, a multiplication and an accumulation are performed for each pixel readout. A dual-memory pixel stores two video frames. Selective pixel output sampling controlled by binary kernel coefficients implements binary-analog multiplication. Cross-pixel column-parallel bit-level accumulation and frame differencing are implemented by switched-capacitor integrators. Binary-weighted summation and concurrent quantization is performed by a bank of column-parallel multiplying analog-to-digital converters (MADCs). A simple digital adder performs row-wise accumulation during ADC readout. A 128 × 128 active pixel array integrated with a bank of 128 MADCs was fabricated in a 0.35 μm standard CMOS technology. The 4.4 mm × 2.9 mm prototype is experimentally validated in discrete wavelet transform (DWT) video compression and frame differencing.
---
paper_title: A compressed digital output CMOS image sensor with analog 2-D DCT processors and ADC/quantizer
paper_content:
Progress in CMOS-based image sensors is creating opportunities for a low-cost, low-power one-chip video camera with digitizing, signal processing and image compression. Such a smart camera head acquires compressed digital moving pictures directly into portable multimedia computers. Video encoders using a moving picture coding standard such as MPEG and H.26x are not always suitable for integration of image encoding on the image sensor, because of the complexity and the power dissipation. On-sensor image compression schemes, such as a CCD image sensor for lossless image compression and a CMOS image sensor with pixel-level interframe coding, have been reported. A one-chip digital camera with on-sensor video compression is shown in the block diagram. The chip contains a 128 × 128-pixel sensor, 8-channel parallel read-out circuits, an analog 2-dimensional discrete cosine transform (2D DCT) processor and a variable quantization-level ADC (ADC/Q).
---
paper_title: Transpose memory for video rate JPEG compression on highly parallel single-chip digital CMOS imager
paper_content:
A transpose switch matrix memory (TSMM) is proposed to enable a highly parallel single-chip CMOS sensor/image processor, Xetal, developed at Philips to perform JPEG compression at video rate (30 frames per second, fps) at an image dimension of 640 × 480 pixels. The integrated solution consists of 320 processing elements and 80 TSMMs, operates at 16 MHz clock rate and 3.3 V supply voltage, and is designed for fabrication at 0.25 micron technology. The processing system can sustain a maximum throughput of 5.12 billion operations per second consuming an estimated 120 mW, providing a processing power efficiency of 7 BOPS/Watt. The Xetal architecture is capable of performing pixel level image processing such as fixed pattern noise (FPN) correction, defective pixel concealment, Bayer pattern filtering, RGB-YUV conversion, auto white balancing, and auto exposure control. The TSMM expands support to block level operations including chrominance subsampling, separable 8×8 recursive DCT, and ZZ scan required for JPEG.
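For reference, the ZZ (zigzag) scan used in the JPEG block pipeline visits the 8×8 DCT coefficients along anti-diagonals of increasing frequency; a small Python sketch that generates this order:

def zigzag_order(n=8):
    # JPEG zigzag scan order for an n x n block: traverse anti-diagonals,
    # alternating direction (even diagonals by column, odd diagonals by row).
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

print(zigzag_order(4)[:8])
# [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2)]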
---
paper_title: A real-time JPEG encoder for 1.3 mega pixel CMOS image sensor SoC
paper_content:
In this paper, we propose a hardware architecture for a low-power, real-time JPEG encoder for a 1.3-megapixel CMOS image sensor SoC that can be applied to mobile communication devices. The proposed architecture has an efficient interface scheme with the CMOS image sensor and other peripherals for real-time encoding. The JPEG encoder supports the baseline JPEG mode and processes motion images with resolutions up to 1280 × 960 (CCIR601 YCrCb 4:2:2, 15 fps) in real time. The encoder supports 8 resolution settings and offers 4 levels of image quality through the quantization matrix. It can transfer encoded motion pictures and raw image data from the CMOS image sensor to an external device through USB 2.0, and a compressed still image is stored in external pseudo-SRAM through an SRAM interface. The proposed core can exchange encoding parameters with a host via I2C. The architecture was implemented in VHDL and functionally verified with Synopsys and ModelSim tools. The encoder proposed in this paper was fabricated in a 0.18-μm process of Hynix Semiconductor Inc.
---
paper_title: A CMOS Front-End for a Lossy Image Compression Sensor
paper_content:
A CMOS image sensor has been designed to perform the front-end image decomposition in a prediction-SPIHT image compression scheme. The prediction circuitry based on charge sharing is integrated inside the sensor array to perform 3-level image decomposition. A CMOS test chip has been prototyped and tested. The test results justify the pixel design and demonstrate that lossy prediction based focal plane image compression can be realized inside the sensor pixel array to achieve a high frame rate but with much less data readout volume. Also, the sensor can be used to achieve comparable compression performance with much lower computational complexity compared to 2D discrete wavelet transform (DWT) based image compression.
---
paper_title: A CMOS image sensor for focal plane decomposition
paper_content:
An alternative image decomposition method that exploits prediction via nearby pixels has been integrated on a CMOS sensor focal plane. The proposed focal plane decomposition is compared to the 2D discrete wavelet transform (DWT) decomposition commonly used in state of the art compression schemes such as EBCOT and SPIHT. A technique for focal plane prediction is described that can be used to decompose the image into pyramid subbands as is done in DWT based approaches. Although its simulated compression performance is somewhat lower than that obtained using wavelet decomposition, prediction decomposition has much lower computational complexity and is suitable for implementation directly on the focal plane in a completely pixel parallel structure. When coupled with SPIHT-like encoding, the focal plane prediction technique obtains all the advantages of the DWT based SPIHT compression algorithm, including both high image quality at low bit rates and optimized data ordering for progressive image transmission. A CMOS test chip to perform single layer image decomposition based on pixel level parallel computation has been prototyped. The design and test results for this chip are discussed.
---
paper_title: A CMOS image sensor with focal plane SPIHT image compression
paper_content:
In this paper, a SPIHT image compression scheme has been integrated on the focal plane of a CMOS image sensor. A test chip is prototyped in 0.35 μm technology to verify the design. In our image compression scheme, focal plane prediction replaces the traditional wavelet transform for image decomposition, and the traditional SPIHT coding is modified to reduce computational complexity and to suit the parallel implementation on the sensor focal plane. Simulated compression performance is still good, especially at low bit rates, compared to traditional SPIHT and other compression schemes. Integrating SPIHT compression on the focal plane not only reduces the sensor readout volume, but also provides all the advantages of SPIHT compression including both high image quality and optimized data ordering for progressive image transmission.
---
paper_title: Charge-based prediction circuits for focal plane image compression
paper_content:
This work presents the design of a computational charge-based circuit to be part of a focal plane compression chip. The image compression scheme pursued is predictive coding. The proposed circuit computes the prediction error at every pixel. It carries out the computations by integrating the photocurrents of the pixels in a small neighborhood. The prediction weights for every pixel can be changed by changing the switching timing of the circuit making possible the use of adaptive prediction algorithms. The circuit is compact and can be integrated at the pixel level.
---
paper_title: CMOS Camera With In-Pixel Temporal Change Detection and ADC
paper_content:
An array of 90 × 90 active pixel sensors (APS) with pixel-level embedded differencing and comparison is presented. The nMOS-only 6T 2C 25 μm × 25 μm pixel provides both analog readout of pixel intensity and a digital flag indicating temporal change at variable thresholds. Computation is performed through a pixel-level capacitively coupled comparator which also functions as an analog-to-digital converter. The chip, fabricated in a 0.5 μm 3M2P CMOS process, consumes 4.2 mW of power while operating at 30 fps. Change sensitivity is 2.1% at an illumination of 1.7 W/cm². Gating of raster-scanned pixel output by change detection typically produces a 20-fold compression in the data stream, depending on image conditions and reconstruction quality set by the change detection threshold.
---
paper_title: A CMOS Image Sensor for Multi-Level Focal Plane Image Decomposition
paper_content:
An alternative image decomposition method that exploits prediction via nearby pixels has been integrated on the CMOS image sensor focal plane. The proposed focal plane decomposition is compared to the 2-D discrete wavelet transform (DWT) decomposition commonly used in state of the art compression schemes such as SPIHT and JPEG2000. The method achieves comparable compression performance with much lower computational complexity and allows image compression to be implemented directly on the sensor focal plane in a completely pixel parallel structure. A CMOS prototype chip has been fabricated and tested. The test results validate the pixel design and demonstrate that lossy prediction based focal plane image compression can be realized inside the sensor pixel array to achieve a high frame rate with much lower data readout volume. The features of the proposed decomposition scheme also benefit real-time, low rate and low power applications.
---
paper_title: Computational image sensor for on sensor compression
paper_content:
We propose a novel integration of image compression and sensing in order to enhance the performance of an image sensor. By integrating a compression function onto the sensor focal plane, the image signal to be read out from the sensor is significantly reduced and the pixel rate of the sensor can consequently be increased. The potential applications of the proposed sensor are in high pixel-rate imaging, such as high frame-rate image sensing and high-resolution image sensing. The compression scheme we employ is conditional replenishment, which detects and encodes moving areas. In this paper, we introduce two architectures for on-sensor compression; one is the pixel parallel approach and the other is the column parallel approach. We prototyped a VLSI chip of the proposed sensor based on the pixel parallel architecture. We show the design and describe the results of the experiments obtained by the prototype chip.
---
paper_title: A 128 × 128, 120 dB, 30 mW asynchronous vision sensor that responds to relative intensity change
paper_content:
A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128 × 128-pixel chip has a 120 dB illumination operating range and consumes 30 mW. Pixels respond in <100 μs at 1 klux scene illumination with <10% contrast-threshold FPN.
---
paper_title: A QVGA 143dB dynamic range asynchronous address-event PWM dynamic image sensor with lossless pixel-level video compression
paper_content:
Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame had been acquired, which is usually not long ago. This method obviously leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with these data.
---
paper_title: On sensor image compression
paper_content:
In this paper, we propose a novel image sensor which compresses image signals on the sensor plane. Since an image signal is compressed on the sensor plane by making use of the parallel nature of image signals, the amount of signal read out from the sensor can be significantly reduced. Thus, the potential applications of the proposed sensor are high pixel rate cameras and processing systems which require very high speed imaging or very high resolution imaging. The very high bandwidth is the fundamental limitation to the feasibility of such high pixel rate sensors and processing systems. Conditional replenishment is employed as the compression algorithm. In each pixel, the current pixel value is compared to that in the last replenished frame. The value and the address of the pixel are extracted and coded if the magnitude of the difference is greater than a threshold. Analog circuits have been designed for the processing in each pixel. A first prototype VLSI chip has been fabricated. Some results of experiments obtained by using the first prototype are shown in this paper.
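The conditional-replenishment rule described in this abstract is simple enough to sketch in software. The following Python fragment is only an illustrative model of the readout logic (the threshold value and array sizes are hypothetical, and nothing here reflects the analog pixel circuits themselves): pixels whose value differs from the last replenished frame by more than a threshold are emitted as (address, value) events and replenished.

```python
import numpy as np

def conditional_replenishment(frame, reference, threshold=16):
    """Sketch of conditional replenishment: emit (row, col, value) events for
    pixels that changed by more than `threshold` since the last replenished
    frame, and update the reference only at those positions.  The threshold
    is an illustrative value, not one taken from the cited design."""
    diff = np.abs(frame.astype(int) - reference.astype(int))
    rows, cols = np.nonzero(diff > threshold)
    events = [(int(r), int(c), int(frame[r, c])) for r, c in zip(rows, cols)]
    for r, c, v in events:
        reference[r, c] = v                     # replenish only the changed pixels
    return events

# Usage: a mostly static scene produces far fewer events than full-frame readout.
ref = np.zeros((64, 64), dtype=np.uint8)
frame = ref.copy()
frame[10:20, 10:20] = 200                       # a small moving object
print(len(conditional_replenishment(frame, ref)))   # 100 events instead of 4096 pixels
```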
---
paper_title: High speed digital smart image sensor with image compression function
paper_content:
We propose a high speed digital smart image sensor on which A/D conversion circuits and digital image compression circuits are integrated. Through on-sensor image compression, the proposed sensor can overcome the communication bottleneck between the sensor and its peripherals, which is one of the most serious problems for achieving high frame rates. The compression method uses the correlation between two consecutive frames and provides several compression modes obtained by controlling the spatial and temporal resolution. In this paper, the on-sensor compression algorithm is explained and some experimental results are shown. Finally, we show the design of the new digital smart sensor.
---
paper_title: Effects of charge-based computation non-idealities on CMOS image compression sensors
paper_content:
We present a CMOS image sensor that performs focal plane image decomposition based on charge sharing computation circuitry. The effect of the parasitic capacitance between the capacitor bottom plate and the substrate on the computational accuracy is discussed, and a new circuit is proposed to characterize the parasitic effects. The test results demonstrate that prediction-based focal plane image compression can be realized inside the sensor array, resulting in high compression performance using frame-rate, pixel-parallel computation. This architecture can subsequently be combined with back-end encoding to form a complete compression sensor.
---
paper_title: A CMOS Image Sensor With On-Chip Image Compression Based on Predictive Boundary Adaptation and Memoryless QTD Algorithm
paper_content:
This paper presents the architecture, algorithm, and VLSI hardware of image acquisition, storage, and compression on a single-chip CMOS image sensor. The image array is based on time domain digital pixel sensor technology equipped with nondestructive storage capability using an 8-bit static RAM device embedded at the pixel level. The pixel-level memory is used to store the uncompressed illumination data during the integration mode as well as the compressed illumination data obtained after the compression stage. An adaptive quantization scheme based on the fast boundary adaptation rule (FBAR) and a differential pulse code modulation (DPCM) procedure, followed by an online, least-storage quadrant tree decomposition (QTD) processing, is proposed, enabling a robust and compact image compression processor. A prototype chip including 64×64 pixels, read-out and control circuitry as well as an on-chip compression processor was implemented in 0.35 μm CMOS technology with a silicon area of 3.2 × 3.0 mm² and an overall power of 17 mW. Simulation and measurement results show compression figures corresponding to 0.6–1 bit per pixel (BPP), while maintaining reasonable peak signal-to-noise ratio levels.
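As a rough, software-level illustration of the DPCM-plus-adaptive-quantization idea (it does not reproduce the chip's exact FBAR rule or the QTD stage), the sketch below encodes a pixel row as coarsely quantized differences from a running prediction, adapting the quantizer step as it goes; all constants are hypothetical.

```python
import numpy as np

def dpcm_adaptive_encode(row, step=8.0, grow=1.25, shrink=0.9):
    """Simplified DPCM with an adaptive quantizer step along one pixel row.
    Only a sketch of the general idea; the FBAR/QTD processing of the cited
    chip differs in detail.  `step`, `grow` and `shrink` are illustrative."""
    pred = 128.0
    codes, recon = [], []
    for x in row.astype(float):
        e = x - pred
        q = int(np.clip(round(e / step), -2, 1))        # 2-bit residual code
        codes.append(q)
        pred = float(np.clip(pred + q * step, 0, 255))  # decoder-matched prediction
        recon.append(pred)
        # crude boundary adaptation: widen the step when the code saturates,
        # shrink it otherwise, so residuals tend to stay inside the coder range
        step = step * grow if q in (-2, 1) else max(1.0, step * shrink)
    return codes, np.array(recon)

row = np.linspace(0, 255, 64).astype(np.uint8)
codes, recon = dpcm_adaptive_encode(row)
print(round(float(np.abs(recon - row).mean()), 2))      # mean reconstruction error
```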
---
paper_title: A digital pixel sensor array with programmable dynamic range
paper_content:
This paper presents a digital pixel sensor (DPS) array employing a time domain analogue-to-digital conversion (ADC) technique featuring adaptive dynamic range and programmable pixel response. The digital pixel comprises a photodiode, a voltage comparator, and an 8-bit static memory. The conversion characteristics of the ADC are determined by an array-based digital control circuit, which linearizes the pixel response, and sets the conversion range. The ADC response is adapted to different lighting conditions by setting a single clock frequency. Dynamic range compression was also experimentally demonstrated. This clearly shows the potential of the proposed technique in overcoming the limited dynamic range typically imposed by the number of bits in a DPS. A 64 × 64 pixel array prototype was manufactured in a 0.35-μm, five-metal, single-poly CMOS process. Measurement results indicate a 100 dB dynamic range, a 41-s mean dark time and an average current of 1.6 μA per DPS.
---
paper_title: PWM digital pixel sensor based on asynchronous self-resetting scheme
paper_content:
In this letter, a pulse-width modulated digital pixel sensor is presented along with its inherent advantages such as low power consumption and wide operating range. The pixel, which comprises an analog processor and an 8-bit memory cell, operates in an asynchronous self-resetting mode. In contrast to most CMOS image sensors, in our approach, the photocurrent signal is encoded as a pulse-width signal, and converted to an 8-bit digital code using a Gray counter. The dynamic range of the pixel can be adapted by simply modulating the clock frequency of the counter. To test the operation of the proposed pixel architecture, an image sensor array has been designed and fabricated in a 0.35-μm CMOS technology, where each pixel occupies an area of 45 × 45 μm². Here, the operation of the sensor is demonstrated through experimental results.
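The pulse-width-to-code path can be illustrated with a small functional model: the time a pixel takes to integrate down by a fixed voltage swing sets its pulse width, and the value of a free-running Gray counter at that instant is what the 8-bit pixel memory latches. The capacitance, voltage swing and clock frequency below are invented for the example and do not come from the cited design.

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def pwm_pixel_code(photocurrent_a, c_f=1e-12, v_swing=1.0, clk_hz=1e5, bits=8):
    """Sketch of a PWM digital pixel: a brighter pixel discharges its node
    faster, fires earlier, and latches a smaller Gray-coded count.  All
    parameter values are hypothetical."""
    t_fire = c_f * v_swing / max(photocurrent_a, 1e-15)   # pulse width in seconds
    count = min(int(t_fire * clk_hz), (1 << bits) - 1)    # counter value at firing
    return to_gray(count)

# Usage: a 4 nA (bright) pixel yields a smaller count than a 1 nA (dark) pixel.
print(pwm_pixel_code(4e-9), pwm_pixel_code(1e-9))
```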
---
paper_title: A CMOS Image Sensor with on Chip Image Compression based on Predictive Boundary Adaptation and QTD Algorithm
paper_content:
This paper presents the architecture, algorithm and VLSI hardware of image acquisition, storage and compression on a single-chip CMOS image sensor. The image array is based on time domain digital pixel sensor technology equipped with nondestructive storage capability using an 8-bit static RAM device embedded at the pixel level. An adaptive quantization scheme based on the Fast Boundary Adaptation Rule (FBAR) and a Differential Pulse Code Modulation (DPCM) procedure followed by an online Quadrant Tree Decomposition (QTD) processing is proposed, enabling a low power, robust and compact image compression processor. A prototype chip including 64 × 64 pixels, read-out and control circuitry as well as the compression processor was implemented in 0.35 μm CMOS technology with a silicon area of 3.2 × 3.0 mm². Simulation results show compression figures corresponding to 0.75 bit per pixel (BPP), while maintaining reasonable PSNR levels.
---
paper_title: A novel DPS integrator for fast CMOS imagers
paper_content:
A novel DPS integrator scheme for fast CMOS imagers is presented, which combines the advantages of analog APS and DPS circuits. The reset-insensitive integrator proposal improves the linearity of the ADC curve, while it allows both low-power consumption for the active blocks and low-voltage operation for the switching devices even at high frame rates. In this sense, a comparative study is presented in 0.18 μm 1-poly 6-metal 1.8 V CMOS technology to demonstrate the advantages of this novel solution.
---
paper_title: Compressive Acquisition CMOS Image Sensor: From the Algorithm to Hardware Implementation
paper_content:
In this paper, a new design paradigm referred to as compressive acquisition CMOS image sensors is introduced. The idea consists of compressing the data within each pixel prior to storage, and hence, reducing the size of the memory required for a digital pixel sensor. The proposed compression algorithm uses a block-based differential coding scheme in which differential values are captured and quantized online. A time-domain encoding scheme is used in our CMOS image sensor in which the brightest pixel within each block fires first and is selected as the reference pixel. The differential values between subsequent pixels and the reference within each block are calculated and quantized, using a reduced number of bits as their dynamic range is compressed. The proposed scheme enables reduced error accumulation as full precision is used at the start of each block, while also enabling reduced memory requirement, and hence, enabling significant silicon area saving. A mathematical model is derived to analyze the performance of the algorithm. Experimental results on a field-programmable gate array (FPGA) platform illustrate that the proposed algorithm enables more than 50% memory saving at a peak signal-to-noise ratio level of 30 dB with 1.5 bits per pixel.
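A software sketch of the block-based differential scheme is given below. In a time-domain pixel array the brightest pixel of a block fires first, so it naturally serves as the full-precision reference; the remaining pixels store only a quantized difference from it. The block size and residual width used here are illustrative, not the values analyzed in the paper.

```python
import numpy as np

def encode_block(block, res_bits=4):
    """Sketch of compressive-acquisition coding for one block: keep the
    brightest pixel at full precision and every other pixel as a quantized
    difference from it.  `res_bits` is an illustrative residual width."""
    ref = int(block.max())                            # brightest pixel "fires first"
    scale = max(ref, 1) / float((1 << res_bits) - 1)  # quantization step for residuals
    residuals = np.round((ref - block) / scale).astype(int)
    return ref, scale, residuals

def decode_block(ref, scale, residuals):
    """Reconstruct the block from the reference value and quantized residuals."""
    return np.clip(ref - residuals * scale, 0, 255)

block = np.random.randint(120, 200, size=(4, 4))
ref, scale, res = encode_block(block)
err = np.abs(decode_block(ref, scale, res) - block).max()
print(ref, float(err))    # full-precision reference and worst-case block error
```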
---
paper_title: Architecture of a digital pixel sensor array using 1-bit Hilbert predictive coding
paper_content:
In this paper, the architecture of a digital pixel sensor (DPS) array with an online 1-bit predictive coding algorithm using a Hilbert scanning scheme is proposed. The architecture reduces the silicon area of the DPS by more than half by sampling and storing the differential values between each pixel and its prediction; these differences feature a compressed dynamic range and hence require limited precision (only a 1-bit signed value in the proposed architecture, compared to an 8-bit unsigned full-precision value). Hilbert scanning is used to read out the pixel values, thus avoiding discontinuities in the read-out path, which is shown to improve the quality of the reconstructed image. The Hilbert scanning path is implemented entirely by hardware wiring without increasing the circuit complexity of the sensor array. Reset pixels are inserted into the scanning path to overcome the error accumulation problem inherent in predictive coding. System-level simulation results show that a PSNR of around 25 dB can be reached with the proposed 1-bit Hilbert predictive coding algorithm. VLSI implementation results illustrate a pixel-level implementation featuring a pixel size reduction of 67% with a fill factor of 40% compared with a standard PWM DPS architecture.
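The two ingredients of the scheme, a Hilbert scan order and a 1-bit differential code with periodic reset pixels, can be modelled in a few lines. The sketch below uses the standard iterative Hilbert index-to-coordinate mapping and a simple sign-only delta coder; the step size, reset interval and the exact update rule are assumptions for illustration rather than the paper's algorithm.

```python
import numpy as np

def hilbert_d2xy(n, d):
    """Standard mapping from index d along a Hilbert curve to (x, y) on an
    n-by-n grid, where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def one_bit_hilbert_code(img, step=4.0, reset_every=64):
    """Sketch of 1-bit predictive coding along a Hilbert scan: each pixel is
    coded by the sign of its difference from a running prediction, which is
    nudged up or down by `step`; a full-precision reset pixel is inserted
    every `reset_every` samples to bound error accumulation.  `step` and
    `reset_every` are illustrative choices."""
    n = img.shape[0]
    pred = float(img[0, 0])
    bits = []
    recon = np.zeros_like(img, dtype=float)
    for d in range(n * n):
        x, y = hilbert_d2xy(n, d)
        if d % reset_every == 0:
            pred = float(img[y, x])      # reset pixel stored at full precision
        b = 1 if img[y, x] >= pred else 0
        bits.append(b)
        pred = float(np.clip(pred + (step if b else -step), 0, 255))
        recon[y, x] = pred
    return bits, recon

img = np.tile(np.linspace(0, 255, 32), (32, 1))      # smooth test image
bits, recon = one_bit_hilbert_code(img)
print(len(bits), round(float(np.abs(recon - img).mean()), 1))
```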
---
paper_title: Digital pixel sensor with on-line spatial and temporal compression scheme
paper_content:
In this paper, the concept of the compressive acquisition image sensor is extended to the temporal domain. The architecture of a compact CMOS image sensor designed around the paradigm capture ↦ compress ↦ store is proposed. The image sensor array is divided into quadrants. A spatial compression algorithm is integrated into the quadrant-level circuitry and carried out online before the storage phase. All quadrants are further classified into background/non-background quadrants by an off-array adaptive judge logic. A reference array is built in the off-array logic to adaptively track movement between frames. Temporal redundancy between frames is removed in the readout phase. Experimental results show that the proposed algorithm enables more than 50% memory saving at a PSNR level of 27 dB with around 0.5 BPP.
---
paper_title: Compressive acquisition CMOS image sensor using on-line sorting scheme
paper_content:
In this paper, we propose a compact block-based compressive acquisition CMOS image sensor. The overall image is decomposed into blocks. The proposed design reorders the raw captured pixel values within each block, recording only the brightest and the darkest pixel value of each block at full precision (typically 8 bits). During reconstruction, block models are built to model the value distribution within each block. As in a digital pixel sensor (DPS) array the pixel data are naturally ordered (from the brightest to the darkest pixel), the block-level hardware overhead of implementing the proposed scheme is very low. The proposed scheme reduces the memory requirement and hence enables significant silicon area saving. The compression performance and image quality of the proposed algorithm are thoroughly analyzed for different block sizes and a mathematical model is subsequently derived. Simulation results show that the proposed algorithm enables more than 80% pixel-level memory reduction at a PSNR level of around 23 dB using 1 BPP.
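A toy version of the sorting-based scheme is sketched below: per block only the brightest and darkest values are stored at full precision, and the decoder spreads the remaining pixels between them according to an assumed distribution model (a linear model here, which is only one possible choice and not necessarily the model of the paper).

```python
import numpy as np

def encode_minmax(block):
    """Keep only the extremes of a block; in a DPS array the firing order of
    the remaining pixels already gives their ordering for free."""
    return int(block.max()), int(block.min())

def decode_minmax(vmax, vmin, shape):
    """Reconstruct a block by spreading values linearly from the brightest to
    the darkest recorded pixel -- an assumed block model for illustration."""
    n = shape[0] * shape[1]
    return np.linspace(vmax, vmin, n).reshape(shape)

block = np.random.randint(80, 160, size=(4, 4))
vmax, vmin = encode_minmax(block)
approx = decode_minmax(vmax, vmin, block.shape)
# compare the ordered (sorted) pixel values, since only the ordering is known
err = np.abs(np.sort(approx, axis=None) - np.sort(block, axis=None)).mean()
print(vmax, vmin, round(float(err), 1))
```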
---
paper_title: CMOS wavelet compression imager architecture
paper_content:
The CMOS imager architecture implements ΔΣ-modulated Haar wavelet image compression on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs column-wise distributed focal-plane sampling and concurrent signed weighted average quantization, realizing a one-dimensional spatial Haar wavelet transform. A digital delay and adder loop performs spatial accumulation over multiple adjacent ADC outputs. This amounts to computing a two-dimensional Haar wavelet transform, with no overhead in time and negligible overhead in area compared to a baseline digital imager architecture. The architecture is experimentally validated on a 0.35 μm CMOS prototype containing a bank of first-order incremental oversampling ADCs computing the Haar wavelet transform on an emulated pixel array output. The architecture yields a simulated computational throughput of 1.4 GMACS with SVGA imager resolution at 30 frames per second.
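Functionally, the signal flow amounts to a separable Haar transform: a 1-D Haar of each column (performed by the ΔΣ ADC bank) followed by sums and differences across adjacent column outputs (performed by the digital delay-and-adder loop). The NumPy sketch below shows one level of such a separable 2-D Haar decomposition; it is a behavioural model only and says nothing about the mixed-signal implementation.

```python
import numpy as np

def haar_1d(x):
    """One level of the 1-D Haar transform along axis 0: pairwise averages
    (lowpass) and pairwise differences (highpass), both scaled by 1/2."""
    return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

def haar_2d_level(img):
    """One level of a separable 2-D Haar decomposition: transform the columns,
    then the rows, yielding the LL, LH, HL and HH subbands."""
    img = img.astype(float)
    lo_c, hi_c = haar_1d(img)                  # along columns
    ll, lh = haar_1d(lo_c.T)                   # along rows of the lowpass part
    hl, hh = haar_1d(hi_c.T)                   # along rows of the highpass part
    return ll.T, lh.T, hl.T, hh.T

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_2d_level(img)
print(ll.shape, float(ll[0, 0]))               # each subband is 4 x 4
```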
---
paper_title: A 10 000 fps CMOS Sensor With Massively Parallel Image Processing
paper_content:
A high-speed analog VLSI image acquisition and pre-processing system has been designed and fabricated in a 0.35 μm standard CMOS process. The chip features a massively parallel architecture enabling the computation of programmable low-level image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel or Laplacian filters are implemented on the circuit. For this purpose, each 35 μm × 35 μm pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. The retina provides address-event coded output on three asynchronous buses: one output dedicated to the gradient and the other two to the pixel values. A 64 × 64 pixel proof-of-concept chip was fabricated. A dedicated embedded platform including FPGA and ADCs has also been designed to evaluate the vision chip. Measured results show that the proposed sensor successfully captures raw images up to 10 000 frames per second and runs low-level image processing at a frame rate of 2000 to 5000 frames per second.
---
paper_title: Focal-Plane Spatially Oversampling CMOS Image Compression Sensor
paper_content:
Image compression algorithms employ computationally expensive spatial convolutional transforms. The CMOS image sensor performs spatially compressing image quantization on the focal plane, yielding digital output at a rate proportional to the mere information rate of the video. A bank of column-parallel first-order incremental ΔΣ-modulated analog-to-digital converters (ADCs) performs column-wise distributed focal-plane oversampling of up to eight adjacent pixels and concurrent weighted average quantization. The number of samples per pixel and the switched-capacitor sampling sequence order set the amplitude and sign of the pixel coefficient, respectively. A simple digital delay and adder loop performs spatial accumulation over up to eight adjacent ADC outputs during readout. This amounts to computing a two-dimensional block matrix transform with an up to 8 × 8-pixel programmable kernel in parallel for all columns. Noise shaping reduces power dissipation below that of a conventional digital imager while the need for a peripheral DSP is eliminated. A 128 × 128 active pixel array integrated with a bank of 128 ΔΣ-modulated ADCs was fabricated in a 0.35-μm CMOS technology. The 3.1 mm × 1.9 mm prototype captures 8-bit digital video at 30 frames/s and yields 4 GMACS projected computational throughput when scaled to HDTV 1080i resolution in discrete cosine transform (DCT) compression.
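In functional terms, the per-column weighted-average quantization followed by the delay-and-adder accumulation computes Y = A·X·Aᵀ for every image block, with a programmable kernel A (a DCT matrix in the compression use case). The sketch below is a plain software model of that block transform with an 8 × 8 DCT kernel; it does not model the ΔΣ noise shaping or the oversampled readout.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix -- one possible programmable kernel."""
    k = np.arange(n)
    a = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    a[0, :] = np.sqrt(1.0 / n)
    return a

def block_transform(img, kernel):
    """Apply Y = A X A^T to every non-overlapping block, mimicking the
    column-wise transform plus row-wise accumulation of the architecture."""
    n = kernel.shape[0]
    out = np.empty(img.shape, dtype=float)
    for i in range(0, img.shape[0], n):
        for j in range(0, img.shape[1], n):
            x = img[i:i + n, j:j + n].astype(float)
            out[i:i + n, j:j + n] = kernel @ x @ kernel.T
    return out

img = np.random.randint(0, 256, size=(16, 16))
coeffs = block_transform(img, dct_matrix(8))
print(round(float(coeffs[0, 0]), 1), round(float(img[0:8, 0:8].mean() * 8), 1))  # DC ~ 8x block mean
```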
---
Title: CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis
Section 1: Introduction
Description 1: Introduction to the importance and applications of image sensors, the need for image compression, various compression algorithms and standards, and the advantages of integrating image compression with CMOS technology.
Section 2: Still Image-Compression Algorithms
Description 2: Overview of image compression algorithms, including lossy and lossless methods, spatial and transform domain techniques, and standardization efforts.
Section 3: On-Chip Image Compression for CMOS Image Sensors
Description 3: Detailed discussion of predictive coding and wavelet-based image compression integrated with CMOS image sensors, including infrastructure, designs, and effectiveness.
Section 4: Efficiency of the Image-Sensor Array Readout
Description 4: Analysis of different system designs for image-sensor readout efficiency and their impacts on image compression, including MATIA architecture and various block-based readout techniques.
Section 5: High-Performance Compression-Processor Design
Description 5: Exploration of the design and performance of high-efficiency off-array processors for image compression in CMOS sensors.
Section 6: Image Sensors with Focal-Plane Image Compression
Description 6: Investigation of focal-plane image compression techniques for image sensors, including the complexities and advantages of parallel in-array processing.
Section 7: APS Array with Focal-Plane Compression Processor
Description 7: Examination of APS arrays with integrated focal-plane processors for real-time image and video compression and processing, and their architectural designs.
Section 8: DPS Array Integrated Compression Processor
Description 8: Discussion about the latest DPS array image sensors, their integration with adaptive quantization schemes, and their advantages for parallel processing and storage requirements.
Section 9: Compressive Acquisition Image Sensor
Description 9: Introduction and evaluation of the compressive acquisition paradigm, which integrates image compression during the capture phase to reduce on-chip storage and enhance processing efficiency.
Section 10: Performance Analysis and Comparison
Description 10: Comparative analysis of various CMOS image sensors with on-chip compression in terms of resolution, throughput, frame rate, and power consumption, along with challenges associated with these designs.
Section 11: Conclusion
Description 11: Summary of the advantages and future potentials of integrating on-chip image compression with CMOS image sensors, highlighting key takeaways and expectations for future advancements. |
A Literature Review on Circle and Sphere Packing Problems: Models and Methodologies | 11 | ---
paper_title: Maximin Latin Hypercube Designs in Two Dimensions
paper_content:
The problem of finding a maximin Latin hypercube design in two dimensions can be described as positioning n non-attacking rooks on an n × n chessboard such that the minimal distance between pairs of rooks is maximized. Maximin Latin hypercube designs are important for the approximation and optimization of black box functions. In this paper general formulas are derived for maximin Latin hypercube designs for general n, when the distance measure is ℓ∞ or ℓ1. Furthermore, for the distance measure ℓ2 we obtain maximin Latin hypercube designs for n ≤ 70 and approximate maximin Latin hypercube designs for other values of n. We show that the reduction in the maximin distance caused by imposing the Latin hypercube design structure is small. This justifies the use of maximin Latin hypercube designs instead of unrestricted designs.
---
paper_title: A simulated annealing approach for the circular cutting problem
paper_content:
We propose a heuristic for the constrained and the unconstrained circular cutting problem based upon simulated annealing. We define an energy function whose small values provide a good concentration of the circular pieces in the bottom-left corner of the initial rectangle. Such values of the energy correspond to configurations where pieces are placed in the rectangle without overlapping. Appropriate software has been devised, and computational results and comparisons with some other algorithms are also provided and discussed. © 2003 Elsevier B.V. All rights reserved.
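The abstract does not spell out the energy function, so the sketch below only illustrates one plausible form with the stated properties: it penalizes pairwise overlap and protrusion outside the rectangle, and rewards concentration of the pieces toward the bottom-left corner. The weights are hypothetical, and a standard annealing loop (perturb one centre, accept with probability exp(−ΔE/T), cool T) would drive it.

```python
import numpy as np

def packing_energy(centers, radii, width, height, w_overlap=10.0, w_corner=1.0):
    """Illustrative energy for circular cutting by simulated annealing: low
    values mean little overlap and pieces gathered near the bottom-left
    corner.  This is not the exact energy of the cited paper; the weights
    are hypothetical."""
    e = 0.0
    for i, ((x, y), r) in enumerate(zip(centers, radii)):
        # protrusion of piece i outside the containing rectangle
        e += w_overlap * (max(0.0, r - x) + max(0.0, x + r - width)
                          + max(0.0, r - y) + max(0.0, y + r - height))
        # pull pieces toward the bottom-left corner of the rectangle
        e += w_corner * float(np.hypot(x, y))
        for j in range(i + 1, len(radii)):
            d = float(np.hypot(x - centers[j][0], y - centers[j][1]))
            e += w_overlap * max(0.0, r + radii[j] - d)   # pairwise overlap depth
    return e

# Two unit circles placed 0.5 apart overlap, so the energy is high.
print(round(packing_energy([(1.0, 1.0), (1.5, 1.0)], [1.0, 1.0], 10.0, 5.0), 2))
```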
---
paper_title: Minimum perimeter rectangles that enclose congruent non-overlapping circles
paper_content:
We use computational experiments to find the rectangles of minimum perimeter into which a given number n of non-overlapping congruent circles can be packed. No assumption is made on the shape of the rectangles. In many of the packings found, the circles form the usual regular square-grid or hexagonal patterns or their hybrids. However, for most values of n in the tested range n ≤ 5000, e.g., for n = 7, 13, 17, 21, 22, 26, 31, 37, 38, 41, 43, ..., 4997, 4998, 4999, 5000, we prove that the optimum cannot possibly be achieved by such regular arrangements. Usually, the irregularities in the best packings found for such n are small, localized modifications to regular patterns; those irregularities are usually easy to predict. Yet for some such irregular n, the best packings found show substantial, extended irregularities which we did not anticipate. In the range we explored carefully, the optimal packings were substantially irregular only for n of the form n = k(k+1)+1, k = 3, 4, 5, 6, 7, i.e., for n = 13, 21, 31, 43, and 57. Also, we prove that the height-to-width ratio of rectangles of minimum perimeter containing packings of n congruent circles tends to 1 as n tends to infinity.
---
paper_title: Dense packings of congruent circles in rectangles with a variable aspect ratio
paper_content:
We use computational experiments to find the rectangles of minimum area into which a given number n of non-overlapping congruent circles can be packed. No assumption is made on the shape of the rectangles. Most of the packings found have the usual regular square or hexagonal pattern. However, for 1495 values of n in the tested range n ≤ 5000, specifically, for n = 49, 61, 79, 97, 107, ..., 4999, we prove that the optimum cannot possibly be achieved by such regular arrangements. The evidence suggests that the limiting height-to-width ratio of rectangles containing an optimal hexagonal packing of circles tends to 2 − √3 as n → ∞, if the limit exists.
---
paper_title: Improving Dense Packings of Equal Disks in a Square
paper_content:
We describe a new numerical procedure for generating dense packings of disks and spheres inside various geometric shapes. We believe that in some of the smaller cases, these packings are in fact optimal. When applied to the previously studied cases of packing n equal disks in a square, the procedure confirms all the previous record packings except for n = 32, 37, 48, and 50 disks, where better packings than those previously recorded are found. For n = 32 and 48, the new packings are minor variations of the previous record packings. However, for n = 37 and 50, the new patterns differ substantially. For example, they are mirror-symmetric, while the previous record packings are not.
---
paper_title: An improved typology of cutting and packing problems
paper_content:
The number of publications in the area of Cutting and Packing (C&P) has increased considerably over the last two decades. The typology of C&P problems introduced by Dyckhoff [Dyckhoff, H., 1990. A typology of cutting and packing problems. European Journal of Operational Research 44, 145–159] initially provided an excellent instrument for the organisation and categorisation of existing and new literature. However, over the years also some deficiencies of this typology became evident, which created problems in dealing with recent developments and prevented it from being accepted more generally. In this paper, the authors present an improved typology, which is partially based on Dyckhoff’s original ideas, but introduces new categorisation criteria, which define problem categories different from those of Dyckhoff. Furthermore, a new, consistent system of names is suggested for these problem categories. Finally, the practicability of the new scheme is demonstrated by using it as a basis for a categorisation of the C&P literature from the years between 1995 and 2004. © 2006 Elsevier B.V. All rights reserved.
---
paper_title: Packing up to 50 Equal Circles in a Square
paper_content:
The problem of maximizing the radius of n equal circles that can be packed into a given square is a well-known geometrical problem. An equivalent problem is to find the largest distance d such that n points can be placed into the square with all mutual distances at least d. Recently, all optimal packings of at most 20 circles in a square were exactly determined. In this paper, computational methods to find good packings of more than 20 circles are discussed. The best packings found with up to 50 circles are displayed. A new packing of 49 circles settles the proof that when n is a square number, the best packing is the square lattice exactly when n ≤ 36.
---
paper_title: Extremal problems for convex polygons
paper_content:
Consider a convex polygon V_n with n sides, perimeter P_n, diameter D_n, area A_n, sum of distances between vertices S_n, and width W_n. Minimizing or maximizing any of these quantities while fixing another defines 10 pairs of extremal polygon problems (one of which usually has a trivial solution or no solution at all). We survey research on these problems, which uses geometrical reasoning increasingly complemented by global optimization methods. Numerous open problems are mentioned, as well as series of test problems for global optimization and non-linear programming codes.
---
paper_title: A New Verified Optimization Technique for the "Packing Circles in a Unit Square" Problems
paper_content:
This paper presents a new verified optimization method for the problem of finding the densest packings of nonoverlapping equal circles in a square. In order to provide reliable numerical results, the developed algorithm is based on interval analysis. As one of the most efficient parts of the algorithm, an interval-based version of a previous elimination procedure is introduced. This method represents the remaining areas still of interest as polygons fully calculated in a reliable way. Currently the most promising strategy of finding optimal circle packing configurations is to partition the original problem into subproblems. Still, as a result of the highly increasing number of subproblems, earlier computer-aided methods were not able to solve problem instances where the number of circles was greater than 27. The present paper provides a carefully developed technique resolving this difficulty by eliminating large groups of subproblems together. As a demonstration of the capabilities of the new algorithm the problems of packing 28, 29, and 30 circles were solved within very tight tolerance values. Our verified procedure decreased the uncertainty in the location of the optimal packings by more than 700 orders of magnitude in all cases.
---
paper_title: New results in the packing of equal circles in a square
paper_content:
The problem of finding the maximum diameter of n equal mutually disjoint circles inside a unit square is addressed in this paper. Exact solutions exist only for n = 1, …, 9, 10, 16, 25, 36, while for other n only conjectural solutions have been reported. In this work a max-min optimization approach is introduced which matches the best reported solutions in the literature for all n ≤ 30, yields a better configuration for n = 15, and provides new results for n = 28 and 29.
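The max-min model referred to above is usually written in its equivalent point-spreading form: spread n points in the unit square so that the smallest pairwise distance d is maximized, and then recover the circle radius from d. The formulation below is the standard way of writing this and is given only as an illustration of the model class, not necessarily the exact program solved in the paper.

```latex
% Point-spreading form of packing n equal circles in the unit square
\begin{aligned}
\max_{d,\; x_1,\dots,x_n} \quad & d \\
\text{s.t.} \quad & \lVert x_i - x_j \rVert_2 \;\ge\; d, \qquad 1 \le i < j \le n, \\
                  & x_i \in [0,1]^2, \qquad i = 1,\dots,n.
\end{aligned}
% The maximal circle radius then follows from r = d / (2(1+d)).
```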
---
paper_title: Dense packings of equal disks in an equilateral triangle: from 22 to 34 and beyond
paper_content:
Previously published packings of equal disks in an equilateral triangle have dealt with up to 21 disks. We use a new discrete-event simulation algorithm to produce packings for up to 34 disks. For each n in the range 22 ≤ n ≤ 34 we present what we believe to be the densest possible packing of n equal disks in an equilateral triangle. For these n we also list the second, often the third and sometimes the fourth best packings among those that we found. In each case, the structure of the packing implies that the minimum distance d(n) between disk centers is the root of a polynomial P_n with integer coefficients. In most cases we do not explicitly compute P_n but in all cases we do compute and report d(n) to 15 significant decimal digits. Disk packings in equilateral triangles differ from those in squares or circles in that for triangles there are an infinite number of values of n for which the exact value of d(n) is known, namely, when n is of the form Δ(k) := k(k+1)/2. It has also been conjectured that d(n−1) = d(n) in this case. Based on our computations, we present conjectured optimal packings for seven other infinite classes of n.
---
paper_title: An improved algorithm for the packing of unequal circles within a larger containing circle
paper_content:
This paper describes an improved algorithm for the problems of unequal circle packing – the quasi-physical quasi-human algorithm. First, the quasi-physical approach for general packing problems is described and applied to the pure problem of unequal circle packing. The method is an analogy to a physical model in which a number of smooth cylinders are packed inside a container. A quasi-human strategy is then proposed to trigger a jump for a stuck object in order to get out of local minima. Our method has been tested in numerical experiments. The computational results are presented, showing the merits of the proposed method. Our algorithm can be viewed as an adaptive variant of tabu search.
---
paper_title: A beam search algorithm for the circular packing problem
paper_content:
In this paper, we propose to solve the circular packing problem (CPP), whose objective is to pack n different circles C_i of known radius r_i, i ∈ N = {1, ..., n}, into the smallest containing circle C. The objective is to determine the radius r of C as well as the coordinates (x_i, y_i) of the centers of the packed circles C_i, i ∈ N. CPP is solved by using an adaptive beam search algorithm that combines beam search, the local position distance and a dichotomous search strategy. Decisions at each node of the developed tree are based on the well-known maximum hole degree, which uses the local minimum distance. The computational results, on a set of instances taken from the literature, show the effectiveness of the proposed algorithm.
---
paper_title: Solving circle packing problems by global optimization: Numerical results and industrial applications
paper_content:
A (general) circle packing is an optimized arrangement of N arbitrary sized circles inside a container (e.g., a rectangle or a circle) such that no two circles overlap. In this paper, we present several circle packing problems, review their industrial applications, and some exact and heuristic strategies for their solution. We also present illustrative numerical results using 'generic' global optimization software packages. Our work highlights the relevance of global optimization in solving circle packing problems, and points towards the necessary advancements in both theory and numerical practice.
---
Title: A Literature Review on Circle and Sphere Packing Problems: Models and Methodologies
Section 1: Introduction
Description 1: Introduce cutting and packing problems, their applications, and the specific focus on circular objects in two and three dimensions.
Section 2: Polygonal Region Ω
Description 2: Discuss problems related to packing circles within polygonal regions, differentiating between two-dimensional and three-dimensional cases.
Section 3: Two-Dimensional Case
Description 3: Explore methods and models for packing circles in two-dimensional shapes such as squares, rectangles, and various polygons.
Section 4: Packing Circles into a Square
Description 4: Review methods and results for packing identical circles within a unit square to maximize minimal distances or other objectives.
Section 5: Packing Circles into a Rectangle
Description 5: Examine problems and solutions related to packing circles into a fixed-size or minimal area rectangle, with variations for identical and nonidentical circles.
Section 6: Packing Circles in a Compact Polygon Set
Description 6: Discuss the problem of covering a polygonal region with identical circles of minimal radius and methods used.
Section 7: Three-Dimensional Case
Description 7: Cover problems and methodologies for packing spheres in three-dimensional regions such as cubes, parallelepipeds, cylinders, and polytopes.
Section 8: Circular Region Ω
Description 8: Focus on circle packing problems (CPP) in circular regions, including applications, challenges, and distinctions between packing identical and nonidentical circles.
Section 9: Packing Identical Circles
Description 9: Summarize approaches for packing identical circles into a unit circle, including specific models and heuristic methods.
Section 10: Packing Nonidentical Circles
Description 10: Explore the methodologies for packing nonidentical circles into the smallest containing circle, emphasizing combinatorial and continuous optimization challenges.
Section 11: Conclusion
Description 11: Summarize the findings from the literature review and discuss the advancements in the field and potential future research directions. |
A Review of Recent Papers on Online Discussion in Teaching and Learning in Higher Education
paper_title: A Critical Review of the Literature on Electronic Networks as Reflective Discourse Communities for Inservice Teachers
paper_content:
Over the past two decades, computer-mediated communication (CMC) technologies have been used in a variety of efforts aimed at fostering teacher learning and teacher collaboration. This study is an attempt to closely examine the literature on the use of electronic networks for creating reflective teacher communities. We found that many teacher networks pursued the goal of building learning and reflective communities for teachers. Much was expected of electronic teacher networks, both as a practical solution to stubborn problems in teacher professional development and as a new agent to create what is difficult to realize in face-to-face situations. However, we found a general lack of rigorous research on these networks. Little is known about the effectiveness of these networks on teacher learning. Few studies seriously examined to what degree the networks indeed were “communities” that promote “reflective discourses.”
---
paper_title: Educational Performance of ALN via Content Analysis
paper_content:
Learning in an ALN mode is modeled by a set of educational processes. The group is modeled by an abstract entity that provides services to the learners via its group educational processes. The learners reciprocate by their corresponding educational processes. Following findings of the Social Interdependence Theory of Cooperative Learning, we conjecture that the ALN is Cooperative Learning enhanced by extended think time. If ALN is structured for effective cooperation then the group dynamics will regulate the high level reasoning and the interpersonal relationships of the learners towards their highest levels. If this conjecture is found to be true, it identifies the maximization of reasoning and interpersonal relationships as one of the educational benefits of an ALN. To test the conjecture, we developed a methodology for the evaluation of the performance profiles of the ALN educational processes. Performance profiles are calculated via content analysis of the information flows exchanged between the participants, and the results are tested for reproducibility. We use this methodology to analyze three weeks of asynchronous discussions embedded in an ALN course of the Open University of Israel (OUI). The results of this analysis indicate the plausibility of our conjecture.
---
paper_title: Online Learning in Higher Education: a review of research on interactions among teachers and students
paper_content:
Online learning has become a widespread method for providing education at the graduate and undergraduate level. Although it is an extension of distance learning, the medium requires new modes of presentation and interaction. The purpose of this article is to provide an overview of the existing literature in communications, distance education, educational technology, and other education-related fields, to articulate what is currently known about online teaching and learning, how the field has been conceptualized in the various research communities, and what might be useful areas for future research. The review indicates that, although there has been extensive work to conceptualize and understand the social interactions and constructs entailed by online education, there has been little work that connects these concepts to subject-specific interactions and learning. That is, the literature provides insights into social aspects of online teaching and learning such as the development of community, the social r...
---
paper_title: Communities of Practice: guidelines for the design of online seminars in higher education
paper_content:
This article focuses on the Community of Practice (CoP) concept and its implications for designing online seminars in the university context. Student learning in seminars at universities is seen as peripheral participation in a particular scientific community—one of the many knowledge-creating CoPs that constitute a university. Introducing information technology into university education thus should be measured by the degree to which these new ways of teaching enhance students' access to scientific communities. This framing view of university education is connected to a social theory of learning where learning is seen as an essentially social, situated phenomenon. The concept of ‘legitimate peripheral participation’ in a community of practice is used to derive a design framework for online seminars. Using this framework, the authors implemented an online seminar on the topic of organisational knowledge management at the Johannes Kepler University Linz, Austria. The GroupWare platform BSCW (Basic Support f...
---
paper_title: The anatomy of a distance education course: a case study analysis
paper_content:
This case study of a distance education course in children’s literature focuses on the creation of an interpretive community and the importance of that community in online learning. It also refines Michael G. Moore’s work on transactional distance to include the concept of a faculty member’s “restrained presence” in an effort to facilitate students’ personal responsibility for their own learning and for community building in an online learning environment.
---
paper_title: Asynchronous discussion in support of medical education
paper_content:
Although the potential of asynchronous discussion to support learning is widely recognized, student engagement remains problematic. Often, for example, students simply refuse to participate. Consequently the rich promise of asynchronous learning networks for supporting students’ learning can prove hard to achieve. After reviewing strategies for encouraging student participation in discussions in Asynchronous Learning Networks (ALN), we present a study that investigates how these strategies influenced students’ perceptions and use of the discussion area. We identify and explore factors that encouraged and inhibited student participation in asynchronous discussion, and evaluate student postings to an asynchronous discussion group by content analysis.The results question received wisdom about some of the pedagogic techniques advocated in the literature. Instead, results support the view that the major factors for stimulating student participation in asynchronous discussion are tutor enthusiasm and expertise. It appears that the tutor may be the root cause of engagement in discussions, an important conclusion, given that to date, the tutor’s role has remained relatively unexamined. We also note that participation in asynchronous discussion is inhibited when students allocate a low priority to participation, as may occur when participation is not assessed. Content analysis of an asynchronous discussion in this study reveals that contributions were not strongly interactive and that students were simply ‘playing the game’ of assessment, making postings that earned marks but rarely contributing otherwise. Thus the use of assessment to encourage students’ contributions appears to be only a superficial success; it seems likely that giving credit for postings changes behavior without necessarily improving learning. This finding has significant implications for curriculum design.
---
paper_title: Sharing designer and user perspectives of web site evaluation: a cross-campus collaborative learning experience
paper_content:
In this paper we present an online, collaborative process that facilitates usability evaluation of web sites. The online workspace consists of simple and effective proformas and computer-mediated discussion space to support usability evaluation. The system was designed and used by staff and students at two universities. Students, working in small teams, at each university, developed web sites and then evaluated the usability of web sites developed at the other university, using the results to improve their own sites. Our project evaluations show that the process provides valuable feedback on web site usability and provides students with the experience of usability evaluation from two important perspectives: those of a user and of a developer. Further, students develop important generic skills: the ability to participate in and critique computer supported cooperative work environments.
---
paper_title: Distance education via the Internet: the student experience
paper_content:
This is the second in a series of papers that describes the use of the Internet on a distance-taught undergraduate Computer Science course (Thomas et al., 1998). This paper examines students' experience of a large-scale trial in which students were taught using electronic communication exclusively. The paper compares the constitution and experiences of a group of Internet students to those of conventional distance learning students on the same course. Learning styles, background questionnaires, and learning outcomes were used in the comparison of the two groups. The study reveals comparable learning outcomes with no discrimination in grade as the result of using different communication media. The student experience is reported, highlighting the main gains and issues of using the Internet as a communication medium in distance education. This paper also shows that using the Internet in this context can provide students with a worthwhile experience.
---
paper_title: Teacher Participation in Computer Conferencing: socio-psychological dimensions
paper_content:
It has been pointed out in the literature on computer-mediated communication that the social dimensions of network design, user behaviour and user participation on computer networks are very important to the building of an electronic community. So far, no study has examined these dimensions systematically and the ways in which they mediate user participation. This article reports on a study conducted on a computer network for English teachers in Hong Kong schools, TeleNex, to investigate the teachers' participation on the network, and the technical and social psychological dimensions that mediated their participation. Data were obtained by questionnaires and by interviews with a small sample of teachers. The questionnaire results show significant differences between teachers who participated actively in conferencing and those who did not. The interview data provided further insights into the social psychological dimensions. Implications to be considered when building an electronic community of te...
---
paper_title: Network Analysis Of Knowledge Construction In Asynchronous Learning Networks
paper_content:
Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and the cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-month-long ALN academic university courses: a formal, structured, closed forum and an informal, non-structured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, role and power structures that lead the knowledge construction process to high phases of critical thinking.
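The Social Network Analysis step described here starts from a directed graph of who responded to whom in the forum; cohesion, role and power indicators are then read off that graph. The fragment below is only a minimal illustration with an invented reply log, using the networkx library (density as a cohesion measure, in-degree and betweenness centrality as rough power/bridging indicators); it is not the authors' instrument.

```python
import networkx as nx

# Hypothetical reply relations: (author of reply, author being responded to)
replies = [("ana", "tutor"), ("ben", "ana"), ("carla", "ana"),
           ("tutor", "ben"), ("ben", "carla"), ("ana", "ben")]

g = nx.DiGraph()
g.add_edges_from(replies)

density = nx.density(g)                        # overall cohesion of the discussion
in_centrality = nx.in_degree_centrality(g)     # who attracts responses ("power")
betweenness = nx.betweenness_centrality(g)     # who bridges sub-conversations

print(f"density = {density:.2f}")
print("most responded-to:", max(in_centrality, key=in_centrality.get))
print("strongest bridge:", max(betweenness, key=betweenness.get))
```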
---
paper_title: Using unmediated computer conferencing to promote reflective practice and confidence-building in initial teacher education
paper_content:
Abstract The use of computer conferencing in initial teacher education (ITE) has been well documented, along with the barriers to its implementation. Existing research includes investigation into ways in which computer conferencing can be used as a medium to support reflective thinking and professional discourse between university tutors, teachers, student teachers and their peers during school placement. This article takes a different approach, examining instead whether computer conferencing can be successful between ITE peer groups, from different educational systems, without tutor moderation. It considers to what extent computer conferencing can raise students' confidence in the use of information and communication technologies and can encourage ‘reflective practice’ among student teachers. The report also considers the extent to which on-line discussion among student teachers can provide emotional support and stress relief during the course's most intensive period of teaching practice. Finally, it des...
---
paper_title: Following the thread in computer conferences
paper_content:
Abstract Computer conferencing systems allow students to discuss their ideas and learn from each other. However, the asynchronous nature of these discussions can result in large and complex collections of messages. Threading facilities help students to cope with this by structuring their discussions into parallel ‘conversations’. This paper discusses an investigation of students’ use of threading in two different conferencing systems. The context for the study was a small-group collaborative assignment in an Open University course. Conference transcripts were studied, and ‘message maps’ were created, in order to investigate the threading links made by students, in relation to the semantic links between the messages. The results show that the way in which threads are represented in a conferencing system can have a significant effect on how students use the system, and on the character of the resulting discussions.
---
paper_title: Design Elements for a CSCL Environment in a Teacher Training Programme
paper_content:
In the design of a Telematic Learning Environment (TLE) in which student teachers learn collaboratively, we consider three clusters of design elements as important: the Telematic Work Environment, the guidance of the instructor and the task instruction. We examine the way group and task behaviour, triggered by these design elements, influences the collaborative outcomes. Experiments have revealed that the technical environment is not as important as we had expected beforehand. This research shows that the task instruction (pre-imposed structure, role taking and intrinsic motivation for the task) and the group process itself have far more impact on the online collaborative work of the student teachers.
---
paper_title: Combining Qualitative Evaluation and Social Network Analysis for the Study of Classroom Social Interactions
paper_content:
Studying and evaluating real experiences that promote active and collaborative learning is a crucial field in CSCL. Major issues that remain unsolved deal with the merging of qualitative and quantitative methods and data, especially in educational settings that involve both physical and computer-supported collaboration. In this paper we present a mixed evaluation method that combines traditional sources of data with computer logs, and integrates quantitative statistics, qualitative data analysis and social network analysis in an overall interpretative approach. Several computer tools have been developed to assist in this process, integrated with generic software for qualitative analysis. The evaluation method and tools have been incrementally applied and validated in the context of an educational and research project that has been going on during the last three years. The use of the method is illustrated in this paper by an example consisting of the evaluation of a particular category within this project. The proposed method and tools aim at giving an answer to the need of innovative techniques for the study of new forms of interaction emerging in CSCL; at increasing the efficiency of the traditionally demanding qualitative methods, so that they can be used by teachers in curriculum-based experiences; and at the definition of a set of guidelines for bridging different data sources and analysis perspectives.
---
paper_title: The Use of Asynchronous Learning Networks in Nutrition Education: Student Attitude, Experiences and Performance
paper_content:
In this study a change in teaching strategy to involve a greater emphasis on asynchronous learning networks (ALNs) was implemented and the views of students (n=51) on this change were evaluated through responses to an online questionnaire. In response to Likert-type questions the majority of students demonstrated a positive view of this new model. Sixty-one percent of students felt that other types of online material would benefit the learning process and 80% would recommend this module to a friend. Students acknowledged that the use of ALN-supported learning made the material easier to understand (52%), the lecturer more accessible (66%) and enabled them to take a more active role in the learning process (55%). Though only 10% of students utilized the asynchronous newsgroup more than 5 times, 77% found reading the contributions of others useful. Contrary to this, 76% preferred the more familiar lecture-based environment for subject delivery. In response to open-ended questions students’ views were more reserved and highlighted a range of problems such as inadequate infrastructure, unreliable computers, and poor access to the online material as well as resistance to a new teaching paradigm. Student performance was influenced by age and contribution to the newsgroup. Those who were younger had a lower grade (47.8 ± 15.8) than those who were older (52.0 ± 11.4). Students with higher grades (56.2 ± 10.3) contributed to the newsgroup while students with lower grades (45.7 ± 12.5) did not. Based on these observations, it is apparent that students do appreciate the advantages of ALN-supported learning, though for a shift toward this model to be effective, problems of access and system failure must be resolved. Implications for future ALN-based modules are discussed.
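The abstract reports only group means and standard deviations for these comparisons; the sketch below shows, under stated assumptions, how such a comparison could be checked with scipy. The split of the 51 students between contributors and non-contributors is not given, so the group sizes used are hypothetical.

```python
# Illustrative sketch only: the abstract reports means and standard deviations
# but not the split of the 51 students, so the group sizes below are assumed.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=56.2, std1=10.3, nobs1=25,   # contributed to the newsgroup (assumed n)
    mean2=45.7, std2=12.5, nobs2=26,   # did not contribute (assumed n)
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```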
---
paper_title: Beyond knowledge transmission? computer-supported learning in teacher education: some benefits in terms of stress, control and self-belief
paper_content:
This article outlines some benefits of a computer-supported interactive learning environment in terms of students' attitudes and factors associated with stress. In the 3rd year of a 4-year undergraduate course for primary teachers (B.Ed.), students were required to undertake a research module, which was seen in part as a preparation for a thesis dissertation in the final year. The subject matter was contained in several computer conferences; other learning experiences included some face-to-face contact and optional technical support sessions. Initial feedback from students indicated that there are benefits in terms of greater flexibility of work patterns, increased sense of control and enhanced self-esteem. Other issues are raised, including the role of the tutor, cost benefits and problems of access. More research is called for into the non-cognitive aspects of computer use.
---
paper_title: A Method to Increase Student Interaction Using Student Groups and Peer Review over the Internet
paper_content:
A method of peer review for student groups is proposed. In this method, groups of students publish their assignment results over the Internet. A fellow student group reviews their work and publishes its findings on the Internet. Finally, the two groups debate their points of view in front of the class. The debate and healthy competition among groups give the students a chance to learn how to give and receive criticism in a constructive way. This should increase the students' ability to interact and work in groups, an important skill for computer science professionals.
---
paper_title: Implementing a CMC tutor group for an existing distance education course
paper_content:
‘Artificial Intelligence for Technology’ (T396) is a distance learning course provided by the Open University of the UK using face-to-face tutorials. In 1997 a pilot study was undertaken of a computer-mediated communication (CMC) tutor group which consisted of volunteers from around the UK. The student feedback raised a number of issues including: the need for a distinct function for the tutor group conference, the role of and demands on the tutor, and the benefits perceived by students. It is suggested that some issues arise from a conflict of cultures, each with its own implicit assumptions. The traditional face-to-face tutorial model is sometimes at variance with the demands of the new CMC-based tuition.
---
paper_title: Inter-Rater Reliability of an Electronic Discussion Coding System.
paper_content:
Abstract A ‘cognote’ system has been developed for coding electronic discussion groups and promoting critical thinking. Previous literature has provided an account of the strategy as applied to several academic settings. This article addresses the research around establishing the inter-rater reliability of the cognote system. The findings suggest three indicators of reliability, namely: 1. that raters assign similar grades to students' discussion group contributions; 2. that raters predominantly assign the same cognotes to students' discussion group contributions and 3. that raters are selecting in excess of 50% of the same text in assigning the same cognotes.
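The abstract does not name the agreement statistics used; as a minimal sketch under stated assumptions, the example below quantifies the second reliability indicator with Cohen's kappa from scikit-learn (given two raters' cognote labels for the same messages) and the third indicator with a simple overlap measure of the text spans each rater selected. The labels and spans are hypothetical.

```python
# Illustrative sketch only: the paper's reliability statistics are not named in
# the abstract. Assumes two raters' cognote labels for the same set of messages.
from sklearn.metrics import cohen_kappa_score

rater_a = ["analysis", "claim", "evidence", "claim", "analysis", "question"]
rater_b = ["analysis", "claim", "claim",    "claim", "analysis", "question"]

# Chance-corrected agreement on which cognote each message receives.
print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))

# Third indicator from the abstract: overlap of the text spans the raters
# selected when assigning the same cognote (here, hypothetical character indices).
span_a, span_b = set(range(0, 120)), set(range(40, 150))
overlap = len(span_a & span_b) / len(span_a | span_b)
print("shared selected text:", f"{overlap:.0%}")
```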
---
paper_title: Experiences of assessment: using phenomenography for evaluation
paper_content:
The aim of this paper is to explore the use of assessment as a tool for structuring students' experiences within a networked learning environment. It is suggested that this investigation may have a wider bearing on issues raised by the idea of aligning teaching approach with the students' approach to learning. The methods used are broadly phenomenographic and the use of this approach is proposed for the evaluation of networked learning in higher education. The work is drawn from the initial phase of a two-year study being undertaken at Lancaster University. The choice of phenomenography as the preferred methodological approach is explained, along with how it is appropriate for evaluation. An emphasis is placed upon the evaluative aspects of phenomenography, its focus on varieties of experience and the relationship between approaches adopted to learning and the outcomes of learning. The example, drawn from the research, examines student approaches in relation to the declared intentions of the course designers.
---
paper_title: Communication in a web-based conferencing system: the quality of computer-mediated interactions
paper_content:
Time constraints and teaching in crowded classrooms restrict in-depth dialogical interaction in teaching and learning. Electronic conferencing systems, however, have the potential to foster online discussions beyond class time. Case-based instruction also constitutes a promising approach in fostering learners’ participation and reflection. The purpose of this study was to investigate (a) the extent to which an electronic conferencing system, named COW (“Conferencing on the Web”), facilitates pre-service teachers’ communication outside their classroom, when discussing teaching cases from their field experiences, and (b) the potential of COW and case-based instruction to foster quality discourse and promote students’ critical-thinking skills. The results showed that students’ online discourse was mostly an exchange of personal experiences and did not reflect well-supported reasoning. Future research on the issue of interactivity should address motivational and affective variables related to the implementation of distance-education methods, variations in pedagogical activity and task structure, and the readiness of mentors and learners.
---
paper_title: DOES ONE SIZE FIT ALL? EXPLORING ASYNCHRONOUS LEARNING IN A MULTICULTURAL ENVIRONMENT
paper_content:
Computer-mediated classrooms coupled with heightened emphasis on removing geographic limitations have led to growing dependence on asynchronous learning networks as a delivery medium. An increasingly robust body of literature suggests both positive and negative implications of knowledge delivery using this medium. However, much less is known about the implications of this delivery method relative to the cultural differences which exist in a geographically limitless environment. Exploratory research from a graduate-level course was used to ascertain some of the basic cross-cultural issues which may be relevant in this environment. Using cultural context as a separator, twenty-four participants, evenly split between low-context and high-context participants, were polled regarding their experience in the course. The poll addressed a number of key issues found with increasing frequency in the asynchronous learning network literature. Results confirm some of the benefits touted in the literature, but identify an additional set of issues for further research and evaluation.
---
paper_title: Communities of Practice: guidelines for the design of online seminars in higher education
paper_content:
This article focuses on the Community of Practice (CoP) concept and its implications for designing online seminars in the university context. Student learning in seminars at universities is seen as peripheral participation in a particular scientific community—one of the many knowledge-creating CoPs that constitute a university. Introducing information technology into university education thus should be measured by the degree to which these new ways of teaching enhance students' access to scientific communities. This framing view of university education is connected to a social theory of learning where learning is seen as an essentially social, situated phenomenon. The concept of ‘legitimate peripheral participation’ in a community of practice is used to derive a design framework for online seminars. Using this framework, the authors implemented an online seminar on the topic of organisational knowledge management at the Johannes Kepler University Linz, Austria. The GroupWare platform BSCW (Basic Support f...
---
paper_title: The anatomy of a distance education course: a case study analysis
paper_content:
This case study of a distance education course in children’s literature focuses on the creation of an interpretive community and the importance of that community in online learning. It also refines Michael G. Moore’s work on transactional distance to include the concept of a faculty member’s “restrained presence” in an effort to facilitate students’ personal responsibility for their own learning and for community building in an online learning environment.
---
paper_title: Teacher Participation in Computer Conferencing: socio-psychological dimensions
paper_content:
Abstract It has been pointed out in the literature on computer-mediated communication that the social dimensions of network design, user behaviour and user participation on computer networks are very important to the building of an electronic community. So far, no study has examined these dimensions systematically and the ways in which they mediate user participation. This article reports on a study conducted on a computer network for English teachers in Hong Kong schools, TeleNex, to investigate the teachers' participation on the network, and the technical and social psychological dimensions that mediated their participation. Data were obtained by questionnaires and by interviews with a small sample of teachers. The questionnaire results show significant differences between teachers who participated actively in conferencing and those who did not. The interview data provided further insights into the social psychological dimensions. Implications to be considered when building an electronic community of te...
---
paper_title: Sharing designer and user perspectives of web site evaluation: a cross-campus collaborative learning experience
paper_content:
In this paper we present an online, collaborative process that facilitates usability evaluation of web sites. The online workspace consists of simple and effective proformas and computer-mediated discussion space to support usability evaluation. The system was designed and used by staff and students at two universities. Students working in small teams at each university developed web sites and then evaluated the usability of web sites developed at the other university, using the results to improve their own sites. Our project evaluations show that the process provides valuable feedback on web site usability and provides students with the experience of usability evaluation from two important perspectives: those of a user and of a developer. Further, students develop important generic skills: the ability to participate in and critique computer-supported cooperative work environments.
---
paper_title: Collaboration and virtual mentoring: building relationships between pre-service and in-service special education teachers
paper_content:
Abstract This study explored the benefits and limitations of mentoring through the Internet in two special education preparation programs in the United States. Nicenet Internet Classroom Assistant was used to facilitate this virtual mentoring process. Seventeen undergraduate and 13 graduate students from two universities participated. The graduate students were mentors to undergraduates who were in the beginning of their senior year of their special education program. Pre- and post-surveys, Internet interactions, and video conferencing were used to gather participants' perceptions of their skill development in teaching, mentoring and collaboration and the effectiveness of virtual mentoring. Content analyses and descriptive statistics were employed to evaluate their responses. Results revealed that participants agreed that virtual mentoring was a positive experience, it supported their intervention and teaming skills and had some positive effects on their communication and teaching skills. However, virtual...
---
paper_title: Can a Collaborative Network Environment Enhance Essay-Writing Processes?
paper_content:
The aim of this study is to examine whether a computer-supported learning environment enhances essay writing by providing an opportunity to share drafts with fellow students and receive feedback on a draft version. Data for this study were provided by 25 law students who were enrolled in a course in legal history at the University of Helsinki in February 2001. Both the students and the teacher were interviewed. The interviews showed that the students' experiences of the essay-writing process were very positive. The teacher's experiences were in line with the students'. The results showed that the students seemed to divide into two groups concerning their attitudes towards sharing written drafts with peers: those who were very enthusiastic and enjoyed the possibility of sharing drafts and those who, on the other hand, felt that the idea of sharing unfinished essays was too threatening for them and required too much openness. The results further showed that the active use of a computer-supported learning environment was related to good essay grades.
---
paper_title: Assessing Activity-Based Learning for a Networked Course.
paper_content:
Networked environments offer new scope for presenting activity-based courses, in which activities and reflection form the central backbone of course pedagogy. Such courses promise an enriching approach to study, but there are also challenges for the design of assessment. This paper describes a qualitative study of student and tutor perspectives on the assessment of an innovative undergraduate course at the UK Open University which has employed an activity-based approach. It discusses the relationship between assessment, student participation, and the development of skills, and then outlines the priorities for the design of assessment for such courses.
---
paper_title: Teacher Participation in Computer Conferencing: socio-psychological dimensions
paper_content:
Abstract It has been pointed out in the literature on computer-mediated communication that the social dimensions of network design, user behaviour and user participation on computer networks are very important to the building of an electronic community. So far, no study has examined these dimensions systematically and the ways in which they mediate user participation. This article reports on a study conducted on a computer network for English teachers in Hong Kong schools, TeleNex, to investigate the teachers' participation on the network, and the technical and social psychological dimensions that mediated their participation. Data were obtained by questionnaires and by interviews with a small sample of teachers. The questionnaire results show significant differences between teachers who participated actively in conferencing and those who did not. The interview data provided further insights into the social psychological dimensions. Implications to be considered when building an electronic community of te...
---
paper_title: Mirror, mirror, on my screen ... exploring online reflections
paper_content:
This paper suggests that, through the provision of opportunities for reflection-in-action at critical learning stages and with the support of a trained e-moderator, the participants in computer-mediated conferencing (CMC) can be encouraged to engage in reflecting about their on-screen experiences. Such reflection aids the building of a productive online community of practice. In addition, by encouraging participants to reflect on later stages of their online training experiences, a reflection-on-action record can be built up. Participants' reflective processes can be captured through analysis of their on-screen text messages and so be available for research purposes. Examples of conference text message reflections are given throughout the paper, drawn from the on-screen reflections of Open University Business School (OUBS) Associate Lecturers who were working online through the medium of computer-mediated conferencing for the first time. The conclusion is that reflection-on-practice in the online environment is beneficial for helping the participants to learn from online conferencing and can provide an excellent tool for qualitative research. Opportunities for reflection need to be built into the design of online conferences and facilitated by a trained e-moderator.
---
paper_title: Network Analysis Of Knowledge Construction In Asynchronous Learning Networks
paper_content:
Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.
---
paper_title: Following the thread in computer conferences
paper_content:
Abstract Computer conferencing systems allow students to discuss their ideas and learn from each other. However, the asynchronous nature of these discussions can result in large and complex collections of messages. Threading facilities help students to cope with this by structuring their discussions into parallel ‘conversations’. This paper discusses an investigation of students’ use of threading in two different conferencing systems. The context for the study was a small-group collaborative assignment in an Open University course. Conference transcripts were studied, and ‘message maps’ were created, in order to investigate the threading links made by students, in relation to the semantic links between the messages. The results show that the way in which threads are represented in a conferencing system can have a significant effect on how students use the system, and on the character of the resulting discussions.
---
paper_title: Putting the ‘C’ in ICT: using computer conferencing to foster a community of practice among student teachers
paper_content:
Abstract The expansion of communications technology has created countless new possibilities for using on-line communication in many areas of education. The focus of this article is on the use of on-line conferencing by a cohort of students studying for the Postgraduate Certificate in Education at the University of Ulster in Northern Ireland. This on-line discussion was initially developed with the primary aims of fostering skills in information and communications technology, providing a venue for reflective practice and reducing the sense of isolation often felt by students when they are dispersed throughout Northern Ireland on teaching practice. Involvement with the discussions convinced the author that the conferencing was also helping to build a community of practice among student teachers. Indeed, the content of participant dialogues suggests that this is the case, with the salient elements of such communities, namely, mutual engagement, joint enterprise and shared repertoire, being readily apparent t...
---
paper_title: Using narratives in conferences to improve the CMC learning environment
paper_content:
This paper reports on the use of short stories in Internet discussions to promote student learning. It describes off-campus teacher education students' CMC discussions of short stories concerning issues in human development. The content of students' discussions is analysed, as are their perceptions of the value of the discussion stories. The results indicate that the use of narratives can improve the social environment of online conferences and contribute to collaborative student learning.
---
paper_title: Inter-Rater Reliability of an Electronic Discussion Coding System.
paper_content:
Abstract A ‘cognote’ system has been developed for coding electronic discussion groups and promoting critical thinking. Previous literature has provided an account of the strategy as applied to several academic settings. This article addresses the research around establishing the inter-rater reliability of the cognote system. The findings suggest three indicators of reliability, namely: 1. that raters assign similar grades to students' discussion group contributions; 2. that raters predominantly assign the same cognotes to students' discussion group contributions and 3. that raters are selecting in excess of 50% of the same text in assigning the same cognotes.
---
paper_title: Learning together and alone: Cooperative, competitive, and individualistic learning
paper_content:
Preface. 1. Cooperative, Competitive, and Individualistic Learning. 2. Cooperative Learning. 3. Informal Cooperative Learning. 4. Cooperative Base Groups. 5. Basic Elements of Cooperative Learning. 6. Integrated Use of All Types of Cooperative Learning. 7. Assessment and Evaluation. 8. Structuring Competitive Learning. 9. Structuring Individualistic Learning. 10. Integrated Use of Cooperative, Competitive, and Individualistic Learning. 11. Reflections. Glossary. References. Index.
---
paper_title: Recognising and promoting collaboration in an online asynchronous discussion
paper_content:
This paper reports on a study involving the identification and measurement of collaboration in an online asynchronous discussion (OAD). A conceptual framework served for the development of a model which conceptualises collaboration on a continuum of processes that move from social presence to production of an artefact. From this model, a preliminary instrument with six processes was developed. Through application of the instrument to an OAD, the instrument was further developed with indicators added for each process. Use of the instrument to analyse an OAD showed that it is effective for gaining insight into collaborative processes in which discussants in an OAD do or do not engage. Use of the instrument in other contexts would test and potentially strengthen its reliability and provide further insight into the collaborative processes in which individuals engage in OADs. Analysis of an OAD using the instrument revealed that participants engaged primarily in processes related to social presence and articulating individual perspectives, and did not reach a stage of sharing goals and producing shared artefacts. The results suggest that the higher-level processes related to collaboration in an OAD may need to be more explicitly and effectively promoted in order to counteract a tendency on the part of participants to remain at the level of individual rather than group or collaborative effort.
---
paper_title: Communication in a web-based conferencing system: the quality of computer-mediated interactions
paper_content:
Time constraints and teaching in crowded classrooms restrict in-depth dialogical interaction in teaching and learning. Electronic conferencing systems, however, have the potential to foster online discussions beyond class time. Case-based instruction also constitutes a promising approach in fostering learners’ participation and reflection. The purpose of this study was to investigate (a) the extent to which an electronic conferencing system, named COW (“Conferencing on the Web”), facilitates pre-service teachers’ communication outside their classroom, when discussing teaching cases from their field experiences, and (b) the potential of COW and case-based instruction to foster quality discourse and promote students’ critical-thinking skills. The results showed that students’ online discourse was mostly an exchange of personal experiences and did not reflect well-supported reasoning. Future research on the issue of interactivity should address motivational and affective variables related to the implementation of distance-education methods, variations in pedagogical activity and task structure, and the readiness of mentors and learners.
---
paper_title: Rethinking University Teaching: A Framework for the Effective Use of Educational Technology
paper_content:
Part 1: What students need from educational technology - teaching as mediating learning; what students bring to learning; the complexity of coming to know; generating a teaching strategy. Part 2: Analyzing teaching methods and media - introduction: categories of media forms; audio-visual media; hypermedia; interactive media; adaptive media; discursive media. Part 3: The design methodology - designing teaching materials; setting up the learning context; effective teaching with multi-media methods.
---
paper_title: Communities of Practice: guidelines for the design of online seminars in higher education
paper_content:
This article focuses on the Community of Practice (CoP) concept and its implications for designing online seminars in the university context. Student learning in seminars at universities is seen as peripheral participation in a particular scientific community—one of the many knowledge-creating CoPs that constitute a university. Introducing information technology into university education thus should be measured by the degree to which these new ways of teaching enhance students' access to scientific communities. This framing view of university education is connected to a social theory of learning where learning is seen as an essentially social, situated phenomenon. The concept of ‘legitimate peripheral participation’ in a community of practice is used to derive a design framework for online seminars. Using this framework, the authors implemented an online seminar on the topic of organisational knowledge management at the Johannes Kepler University Linz, Austria. The GroupWare platform BSCW (Basic Support f...
---
paper_title: The anatomy of a distance education course: a case study analysis
paper_content:
This case study of a distance education course in children’s literature focuses on the creation of an interpretive community and the importance of that community in online learning. It also refines Michael G. Moore’s work on transactional distance to include the concept of a faculty member’s “restrained presence” in an effort to facilitate students’ personal responsibility for their own learning and for community building in an online learning environment.
---
paper_title: Taking E-Moderating skills to the next level: Reflecting on the design of conferencing environments
paper_content:
This paper reports an analysis of computer conference structures set up for a distance education course in which major components of the teaching and learning involve group discussions and collaboration via asynchronous text-based conferencing. As well as adopting traditional e-moderator roles, tutors were required to design appropriate online spaces and navigation routes for students. Tutors’ views concerning conference structures focussed on tensions between enabling easy access to conference areas, facilitating the successful running of activities, and addressing students’ subsequent needs for retrieval of conference material for assessment tasks. The geographically dispersed course tutors initially explored these issues in reflective online conversations. Comparisons were made between structures that were set up differently but all used for essentially the same tasks and purposes. Evidence from conference messages, from student feedback given in questionnaire and interview responses, as well as from students’ written assignments, provided insights into the impact such structures may have on the student learning experience. Students found conference areas for their own group easy to navigate, but they had concerns about managing the large number of messages; these concerns centred on the volume, threading, linking, length, and language of messages.
---
paper_title: Asynchronous discussion groups in teacher training classes: Perceptions of native and non-native students
paper_content:
This paper discusses students’ perceptions of an asynchronous electronic discussion assignment implemented shortly after the technology had been introduced to the university. In addition to the weekly face-to-face class meetings, students in two graduate-level teacher training courses were assigned to small groups for an entire semester and made weekly contributions to their group’s course web discussion forum in which they discussed course content. Students were to make explicit references to course readings and postings by their group members. The instructor evaluated students' postings on a weekly basis. At the end of the course, students completed a survey assessing their satisfaction and asking for their suggestions for modification of the particular assignment type and format. For all students, the extension of course-related discussions outside the regular face-to-face class meetings offered benefits in the form of greater social interaction with other class members; for the non-native speakers among the students, the asynchronous discussions facilitated assimilation of course content, but they were not perceived as providing additional language practice. For all students, the main issues perceived as negative related to their perceptions of forced, unnatural interaction promoted by the asynchronous discussions, the lack of topic prompts, the requirement to make connections to prior postings, and the frequency of required contributions to discussions. Possible reasons for students’ perceptions are explored and suggestions for further research are provided.
---
paper_title: Distance education via the Internet: the student experience
paper_content:
This is the second in a series of papers that describes the use of the Internet on a distance-taught undergraduate Computer Science course (Thomas et al., 1998). This paper examines students' experience of a large-scale trial in which students were taught using electronic communication exclusively. The paper compares the constitution and experiences of a group of Internet students to those of conventional distance learning students on the same course. Learning styles, background questionnaires, and learning outcomes were used in the comparison of the two groups. The study reveals comparable learning outcomes with no discrimination in grade as the result of using different communication media. The student experience is reported, highlighting the main gains and issues of using the Internet as a communication medium in distance education. This paper also shows that using the Internet in this context can provide students with a worthwhile experience.
---
paper_title: Using group communication to monitor web-based group learning
paper_content:
In a web group-learning environment, students must communicate with other group members on the Internet to accomplish group projects and share knowledge. Communication is likely to affect performance, and so analysing the relationship between communication relationships and group performance may help teachers to monitor groups effectively. Certain tasks are necessary to perform such an analysis: recording group communication, extracting communication relationships, and determining the relationship between group communication and group performance. This study developed a method for determining relationships and rules for predicting performance, enabling teachers to act appropriately according to the predicted performance of the group. Four group performance indicators are considered: average grades within a group, project grade, frequency of resource sharing, and drop-out rate. Experimental results are presented concerning the application of the methodology to a web class of 706 students, divided into 70 groups. The experimental results show that group communication patterns significantly affect group performance.
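The paper's actual extraction and prediction rules are not detailed in the abstract; as a rough sketch of the general approach, the example below (with hypothetical message logs, hypothetical group grades, and a plain correlation standing in for the reported rules) relates a simple communication feature to a group performance indicator.

```python
# Illustrative sketch only: the paper's extraction and prediction rules are not
# detailed in the abstract. Assumes per-group message logs and project grades.
from collections import Counter
from scipy.stats import pearsonr

# Hypothetical logs: (group_id, sender, receiver) communication events.
log = [
    (1, "a", "b"), (1, "b", "a"), (1, "c", "a"), (1, "a", "c"),
    (2, "d", "e"), (2, "d", "f"),
    (3, "g", "h"), (3, "h", "g"), (3, "g", "i"),
]
grades = {1: 88, 2: 61, 3: 75}  # hypothetical project grades per group

msgs_per_group = Counter(g for g, _, _ in log)
groups = sorted(grades)
volume = [msgs_per_group[g] for g in groups]
score = [grades[g] for g in groups]

# Does communication volume track project grade across groups?
r, p = pearsonr(volume, score)
print(f"r = {r:.2f}, p = {p:.3f}")
```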
---
paper_title: Network Analysis Of Knowledge Construction In Asynchronous Learning Networks
paper_content:
Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.
---
paper_title: Design Elements for a CSCL Environment in a Teacher Training Programme
paper_content:
In the design of a Telematic Learning Environment (TLE) in which student teachers learn collaboratively, we consider three clusters of design elements as important: the Telematic Work Environment, the guidance of the instructor and the task instruction. We examine the way group and task behaviour, triggered by these design elements, influences the collaborative outcomes. Experiments have revealed that the technical environment is not as important as we had expected beforehand. This research shows that the task instruction (pre-imposed structure, role taking and intrinsic motivation for the task) and the group process itself have far more impact on the online collaborative work of the student teachers.
---
paper_title: DOES ONE SIZE FIT ALL? EXPLORING ASYNCHRONOUS LEARNING IN A MULTICULTURAL ENVIRONMENT
paper_content:
Computer-mediated classrooms coupled with heightened emphasis on removing geographic limitations have led to growing dependence on asynchronous learning networks as a delivery medium. An increasingly robust body of literature suggests both positive and negative implications of knowledge delivery using this medium. However, much less is known about the implications of this delivery method relative to the cultural differences which exist in a geographically limitless environment. Exploratory research from a graduate-level course was used to ascertain some of the basic cross-cultural issues which may be relevant in this environment. Using cultural context as a separator, twenty-four participants, evenly split between low-context and high-context participants, were polled regarding their experience in the course. The poll addressed a number of key issues found with increasing frequency in the asynchronous learning network literature. Results confirm some of the benefits touted in the literature, but identify an additional set of issues for further research and evaluation.
---
paper_title: Learning within incoherent structures: the space of online discussion forums
paper_content:
Online discussion forums are an increasingly common use of new information and communication technologies in education. As a tool for promoting conversational modes of learning, it has been suggested that online discussion forums can lead to enhanced learning outcomes for students. However, there is a need to explore further the implications of the highly mediated nature of computer-based interaction on student learning within these virtual learning environments. This paper presents results from a detailed study of students' learning outcomes and patterns of interaction within an online discussion forum. The findings suggest that the typical nonlinear branching structure of online discussion may be insufficient for the realisation of truly conversational modes of learning. The paper discusses the implications of these findings in relation to students' learning.
---
paper_title: Sharing designer and user perspectives of web site evaluation: a cross-campus collaborative learning experience
paper_content:
In this paper we present an online, collaborative process that facilitates usability evaluation of web sites. The online workspace consists of simple and effective proformas and computer-mediated discussion space to support usability evaluation. The system was designed and used by staff and students at two universities. Students working in small teams at each university developed web sites and then evaluated the usability of web sites developed at the other university, using the results to improve their own sites. Our project evaluations show that the process provides valuable feedback on web site usability and provides students with the experience of usability evaluation from two important perspectives: those of a user and of a developer. Further, students develop important generic skills: the ability to participate in and critique computer-supported cooperative work environments.
---
paper_title: Asynchronous discussion groups in teacher training classes: Perceptions of native and non-native students
paper_content:
This paper discusses students’ perceptions of an asynchronous electronic discussion assignment implemented shortly after the technology had been introduced to the university. In addition to the weekly face-to-face class meetings, students in two graduate-level teacher training courses were assigned to small groups for an entire semester and made weekly contributions to their group’s course web discussion forum in which they discussed course content. Students were to make explicit references to course readings and postings by their group members. The instructor evaluated students' postings on a weekly basis. At the end of the course, students completed a survey assessing their satisfaction and asking for their suggestions for modification of the particular assignment type and format. For all students, the extension of course-related discussions outside the regular face-to-face class meetings offered benefits in the form of greater social interaction with other class members; for the non-native speakers among the students, the asynchronous discussions facilitated assimilation of course content, but they were not perceived as providing additional language practice. For all students, the main issues perceived as negative related to their perceptions of forced, unnatural interaction promoted by the asynchronous discussions, the lack of topic prompts, the requirement to make connections to prior postings, and the frequency of required contributions to discussions. Possible reasons for students’ perceptions are explored and suggestions for further research are provided.
---
paper_title: Can a Collaborative Network Environment Enhance Essay-Writing Processes?
paper_content:
The aim of this study is to examine whether a computer-supported learning environment enhances essay writing by providing an opportunity to share drafts with fellow students and receive feedback on a draft version. Data for this study were provided by 25 law students who were enrolled in a course in legal history at the University of Helsinki in February 2001. Both the students and the teacher were interviewed. The interviews showed that the students' experiences of the essay-writing process were very positive. The teacher's experiences were in line with the students'. The results showed that the students seemed to divide into two groups concerning their attitudes towards sharing written drafts with peers: those who were very enthusiastic and enjoyed the possibility of sharing drafts and those who, on the other hand, felt that the idea of sharing unfinished essays was too threatening for them and required too much openness. The results further showed that the active use of a computer-supported learning environment was related to good essay grades.
---
paper_title: Assessing Activity-Based Learning for a Networked Course.
paper_content:
Networked environments offer new scope for presenting activity-based courses, in which activities and reflection form the central backbone of course pedagogy. Such courses promise an enriching approach to study, but there are also challenges for the design of assessment. This paper describes a qualitative study of student and tutor perspectives on the assessment of an innovative undergraduate course at the UK Open University which has employed an activity-based approach. It discusses the relationship between assessment, student participation, and the development of skills, and then outlines the priorities for the design of assessment for such courses.
---
paper_title: Assessing Online Discussions Working ‘Along the Grain’ of Current Technology and Educational Culture
paper_content:
The paper reports a case study assessing asynchronous text-based online discussion amongst trainee teachers in Britain. It describes the project as working ‘along the grain’ of current technology and educational culture since it aims to exploit the capabilities of the ICT used (e.blackboard) in ways which could be enacted in schools today, whilst at the same time giving due attention to modifying the constraints of time, place and hierarchy that the ICT revolution threatens in schools. Quantitative and qualitative results are discussed. The relationship between ICT and the wider social, political and cultural context is also discussed. The paper concludes with areas for further research and points to the need to review current assessment cultures in schools.
---
paper_title: Teacher Participation in Computer Conferencing: socio-psychological dimensions
paper_content:
Abstract It has been pointed out in the literature on computer-mediated communication that the social dimensions of network design, user behaviour and user participation on computer networks are very important to the building of an electronic community. So far, no study has examined these dimensions systematically and the ways in which they mediate user participation. This article reports on a study conducted on a computer network for English teachers in Hong Kong schools, TeleNex, to investigate the teachers' participation on the network, and the technical and social psychological dimensions that mediated their participation. Data were obtained by questionnaires and by interviews with a small sample of teachers. The questionnaire results show significant differences between teachers who participated actively in conferencing and those who did not. The interview data provided further insights into the social psychological dimensions. Implications to be considered when building an electronic community of te...
---
paper_title: Collaborative knowledge building to promote in-service teacher training in environmental education
paper_content:
Abstract Environmental education (EE) is a problematic field in teacher education for many reasons. First, there is no consensus about its central concepts. Second, environmental education emerged as a response to environmental problems. Environmental educators do not agree on what are real environmental problems and what are exaggerated fears. For many educators, global warming is a serious environmental problem, but for those who view it in a geological perspective of long-term climatic change, it is not such a problem. When teachers are provided with the possibility of sharing problems of EE and building knowledge collaboratively with university experts, what do they do? What kind of problems do teachers regard as important? What kinds of problems do the university experts regard as important? These questions were investigated through the use of a database program called Knowledge Forum®. Knowledge Forum® is a shared virtual environment for collaborative knowledge building. This article analyses the use...
---
paper_title: Mirror, mirror, on my screen ... exploring online reflections
paper_content:
This paper suggests that, through the provision of opportunities for reflection-in-action at critical learning stages and with the support of a trained e-moderator, the participants in computer mediated conferencing (CMC) can be encouraged to engage in reflecting about their on-screen experiences. Such reflection aids the building of a productive online community of practice. In addition, by encouraging participants to reflect on later stages of their online training experiences, a reflection-on-action record can be built up. Participants' reflective processes can be captured through analysis of their on-screen text messages and so be available for research purposes. Examples of conference text message reflections are given throughout the paper, drawn from the on-screen reflections of Open University Business School (OUBS) Associate Lecturers who were working online through the medium of computer-mediated conferencing for the first time. The conclusion is that reflection-on-practice in the online environment is beneficial for helping the participants to learn from online conferencing and can provide an excellent tool for qualitative research. Opportunities for reflection need to be built into the design of online conferences and facilitated by a trained e-moderator.
---
paper_title: Network Analysis Of Knowledge Construction In Asynchronous Learning Networks
paper_content:
Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.
---
paper_title: Using unmediated computer conferencing to promote reflective practice and confidence-building in initial teacher education
paper_content:
Abstract The use of computer conferencing in initial teacher education (ITE) has been well documented, along with the barriers to its implementation. Existing research includes investigation into ways in which computer conferencing can be used as a medium to support reflective thinking and professional discourse between university tutors, teachers, student teachers and their peers during school placement. This article takes a different approach, examining instead whether computer conferencing can be successful between ITE peer groups, from different educational systems, without tutor moderation. It considers to what extent computer conferencing can raise students' confidence in the use of information and communication technologies and can encourage ‘reflective practice’ among student teachers. The report also considers the extent to which on-line discussion among student teachers can provide emotional support and stress relief during the course's most intensive period of teaching practice. Finally, it des...
---
paper_title: Variety is the spice of life: student use of CMC in the context of campus based study
paper_content:
At present, courses within British higher education institutions offer a somewhat haphazard patchwork of IT-based learning resources. Through university intranets it is now possible for many students to follow at least parts of their courses online. However, the provision available is highly dependent on local resources and individual tutors. This paper focuses on student discussion supported via computer mediated communication, but not in the context of distance learning. Rather the focus is upon campus-based study, where students are working with one another in a sustained mode over a period of time. In the context of an ESRC 'Virtual Society?' research project we have been using the online dialogues together with interviews and questionnaires to examine two third year level psychology courses at different universities. In both cases tutors used web resources to facilitate computer-mediated communication as an integral part of the course. Different contexts for learning were created by the differing stances of the tutors. One tutor took an active, participatory role whereas the other tutor remained a non-participant. Both, however, wanted to create wide-ranging discussion amongst the learners. The differing roles of tutors were associated with a marked difference in communication styles and perceived learning outcomes.
---
paper_title: FACE-TO-FACE VERSUS THREADED DISCUSSIONS: THE ROLE OF TIME AND HIGHER-ORDER THINKING
paper_content:
This study compares the experiences of students in face-to-face (in class) discussions with threaded discussions and also evaluates the threaded discussions for evidence of higher-order thinking. Students were enrolled in graduate-level classes that used both modes (face-to-face and online) for course-related discussions; their end-of-course evaluations of both experiences were grouped for analysis and themes constructed based on their comments. Themes included the “expansion of time,” “experience of time,” “quality of the discussion,” “needs of the student,” and “faculty expertise.” While there are advantages to holding discussions in either setting, students most frequently noted that using threaded discussions increased the amount of time they spent on class objectives and that they appreciated the extra time for reflection on course issues. The face-to-face format also had value as a result of its immediacy and energy, and some students found one mode a better “fit” with their preferred learning mode. The analysis of higher-order thinking was based on a content analysis of the threaded discussions only. Each posting was coded as one of the four cognitive-processing categories described by Garrison and colleagues [1]: 18% were triggering questions, 51% were exploration, 22% were integration, and 7% resolution. A fifth category – social – was appropriate for 3% of the responses and only 12% of the postings included a writing error. This framework provides some support for the assertion that higher-order thinking can and does occur in online discussions; strategies for increasing the number of responses in the integration and resolution categories are discussed.
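As a hedged illustration of how the category percentages reported above can be derived from coded transcripts, the short Python sketch below tallies per-posting codes and prints the share of each cognitive-processing category; the labels and counts are invented stand-ins, not the study's data.

from collections import Counter

# Hypothetical per-posting codes assigned during content analysis.
codes = (["triggering"] * 18 + ["exploration"] * 51 +
         ["integration"] * 22 + ["resolution"] * 7 + ["social"] * 3)

counts = Counter(codes)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n / total:.0%}")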
---
paper_title: Computer conferencing with access to a 'guest expert' in the professional development of special educational needs coordinators
paper_content:
This article describes and outlines the implications of a one-year case study of students’ use of the computer conferencing facility of a postgraduate module for special educational needs coordinators (SENCOs) at a distance-learning institution. This facility incorporates a virtual space for a ‘guest expert’. The aim of the study was to inform future development of courses at a time when computer conferencing was just becoming widespread in the university concerned. Quantitative data associated with the volume and patterns of individual participation in the computer conference were collected as well as interview material from students, tutors and the ‘guest expert’. Findings from the study indicate that computer conferencing has the potential to facilitate the professional development of teachers as reflective practitioners and researchers. However, they also point to a number of barriers to student participation that must be addressed. These include access issues related to time constraints, unfamiliarity with the medium, and lack of confidence in expressing personal views in a public arena. A major conclusion drawn from this study is that it may be appropriate to consider future developments which incorporate the assumption that, in computer conferences of large professional development courses, students are much more likely to participate through reading rather than making personal contributions to conference discussions. This opens the possibility of reconceptualising the role of the ‘guest expert’ as two or more discussants with relevant expertise dialoguing with each other while students follow a threaded discussion and/or make personal contributions.
---
paper_title: Putting the ‘C’ in ICT: using computer conferencing to foster a community of practice among student teachers
paper_content:
Abstract The expansion of communications technology has created countless new possibilities for using on-line communication in many areas of education. The focus of this article is on the use of on-line conferencing by a cohort of students studying for the Postgraduate Certificate in Education at the University of Ulster in Northern Ireland. This on-line discussion was initially developed with the primary aims of fostering skills in information and communications technology, providing a venue for reflective practice and reducing the sense of isolation often felt by students when they are dispersed throughout Northern Ireland on teaching practice. Involvement with the discussions convinced the author that the conferencing was also helping to build a community of practice among student teachers. Indeed, the content of participant dialogues suggests that this is the case, with the salient elements of such communities, namely, mutual engagement, joint enterprise and shared repertoire, being readily apparent t...
---
paper_title: Beyond knowledge transmission? computer-supported learning in teacher education: some benefits in terms of stress, control and self-belief
paper_content:
This article outlines some benefits of a computer-supported interactive learning environment in terms of students' attitudes and factors associated with stress. In the 3rd year of a 4-year undergraduate course for primary teachers (B.Ed.), students were required to undertake a research module, which was seen in part as a preparation for a thesis dissertation in the final year. The subject matter was contained in several computer conferences; other learning experiences included some face-to-face contact and optional technical support sessions. Initial feedback from students indicated that there are benefits in terms of greater flexibility of work patterns, increased sense of control and enhanced self-esteem. Other issues are raised, including the role of the tutor, cost benefits and problems of access. More research is called for into the non-cognitive aspects of computer use.
---
paper_title: Communities of Practice: guidelines for the design of online seminars in higher education
paper_content:
This article focuses on the Community of Practice (CoP) concept and its implications for designing online seminars in the university context. Student learning in seminars at universities is seen as peripheral participation in a particular scientific community—one of the many knowledge-creating CoPs that constitute a university. Introducing information technology into university education thus should be measured by the degree to which these new ways of teaching enhance students' access to scientific communities. This framing view of university education is connected to a social theory of learning where learning is seen as an essentially social, situated phenomenon. The concept of ‘legitimate peripheral participation’ in a community of practice is used to derive a design framework for online seminars. Using this framework, the authors implemented an online seminar on the topic of organisational knowledge management at the Johannes Kepler University Linz, Austria. The GroupWare platform BSCW (Basic Support f...
---
paper_title: The anatomy of a distance education course: a case study analysis
paper_content:
This case study of a distance education course in children’s literature focuses on the creation of an interpretive community and the importance of that community in online learning. It also refines Michael G. Moore’s work on transactional distance to include the concept of a faculty member’s “restrained presence” in an effort to facilitate students’ personal responsibility for their own learning and for community building in an online learning environment.
---
paper_title: Asynchronous discussion in support of medical education
paper_content:
Although the potential of asynchronous discussion to support learning is widely recognized, student engagement remains problematic. Often, for example, students simply refuse to participate. Consequently the rich promise of asynchronous learning networks for supporting students’ learning can prove hard to achieve. After reviewing strategies for encouraging student participation in discussions in Asynchronous Learning Networks (ALN), we present a study that investigates how these strategies influenced students’ perceptions and use of the discussion area. We identify and explore factors that encouraged and inhibited student participation in asynchronous discussion, and evaluate student postings to an asynchronous discussion group by content analysis.The results question received wisdom about some of the pedagogic techniques advocated in the literature. Instead, results support the view that the major factors for stimulating student participation in asynchronous discussion are tutor enthusiasm and expertise. It appears that the tutor may be the root cause of engagement in discussions, an important conclusion, given that to date, the tutor’s role has remained relatively unexamined. We also note that participation in asynchronous discussion is inhibited when students allocate a low priority to participation, as may occur when participation is not assessed. Content analysis of an asynchronous discussion in this study reveals that contributions were not strongly interactive and that students were simply ‘playing the game’ of assessment, making postings that earned marks but rarely contributing otherwise. Thus the use of assessment to encourage students’ contributions appears to be only a superficial success; it seems likely that giving credit for postings changes behavior without necessarily improving learning. This finding has significant implications for curriculum design.
---
paper_title: Sharing designer and user perspectives of web site evaluation: a cross-campus collaborative learning experience
paper_content:
In this paper we present an online, collaborative process that facilitates usability evaluation of web sites. The online workspace consists of simple and effective proformas and computer-mediated discussion space to support usability evaluation. The system was designed and used by staff and students at two universities. Students, working in small teams, at each university, developed web sites and then evaluated the usability of web sites developed at the other university, using the results to improve their own sites. Our project evaluations show that the process provides valuable feedback on web site usability and provides students with the experience of usability evaluation from two important perspectives: those of a user and of a developer. Further, students develop important generic skills: the ability to participate in and critique computer supported cooperative work environments.
---
paper_title: Assessing online collaborative learning: Process and product
paper_content:
The assessment of online collaborative study presents new opportunities and challenges, both in terms of separating the process and product of collaboration, and in the support of skills development. The purpose of this paper is to explore the role of assessment with respect to the processes and products of online collaborative study. It describes a qualitative case study of staff and students perspectives on two UK Open University courses which have used a variety of models of online collaborative assessment. The findings underline the importance of assessment in ensuring online participation, and in supporting the practice and development of online collaborative learning. They have led to a number of recommendations for the assessment of online collaborative learning.
---
paper_title: Asynchronous discussion groups in teacher training classes: Perceptions of native and non-native students
paper_content:
This paper discusses students’ perceptions of an asynchronous electronic discussion assignment implemented shortly after the technology had been introduced to the university. In addition to the weekly face-to-face class meetings, students in two graduate level teacher training courses were assigned to small groups for an entire semester and made weekly contributions to their group’s course web discussion forum in which they discussed course content. Students were to make explicit references to course readings and postings by their group members. The instructor evaluated students' postings on a weekly basis. At the end of the course, students completed a survey assessing their satisfaction and asking for their suggestions for modification of the particular assignment type and format. For all students, the extension of course-related discussions outside the regular face-to-face class meetings offered benefits in the form of greater social interaction with other class members; for the non-native speakers among the students, the asynchronous discussions facilitated assimilation of course content, but it was not perceived as providing additional language practice. For all students, the two main issues perceived as negative related to their perceptions of forced, unnatural interaction promoted by the asynchronous discussions and lack of topic prompts, the requirement to make connections to prior postings, and the frequency of required contributions to discussions. Possible reasons for students’ perceptions are explored and suggestions for further research are provided.
---
paper_title: Assessing Activity-Based Learning for a Networked Course.
paper_content:
Networked environments offer new scope for presenting activity-based courses, in which activities and reflection form the central backbone of course pedagogy. Such courses promise an enriching approach to study, but there are also challenges for the design of assessment. This paper describes a qualitative study of student and tutor perspectives on the assessment of an innovative undergraduate course at the UK Open University which has employed an activity-based approach. It discusses the relationship between assessment, student participation, and the development of skills, and then outlines the priorities for the design of assessment for such courses.
---
paper_title: Mirror, mirror, on my screen ... exploring online reflections
paper_content:
This paper suggests that, through the provision of opportunities for reflection-in-action at critical learning stages and with the support of a trained e-moderator, the participants in computer mediated conferencing (CMC) can be encouraged to engage in reflecting about their onscreen experiences. Such reflection aids the building of a productive online community of practice. In addition, by encouraging participants to reflect on later stages of their online training experiences, a reflection-on-action record can be built up. Participants' reflective processes can be captured through analysis of their on screen text messages and so be available for research purposes. Examples of conference text message reflections are given throughout the paper, drawn from the on screen reflections of Open University Business School (OUBS) Associate Lecturers who were working online through the medium of computer mediated conferencing for the first time. The conclusion is that reflection-on-practice in the online environment is beneficial for helping the participants to learn from online conferencing and can provide an excellent tool for qualitative research. Opportunities for reflection need to be built into the design of online conferences and facilitated by a trained e-moderator.
---
paper_title: Network Analysis Of Knowledge Construction In Asynchronous Learning Networks
paper_content:
Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-month-long ALN academic university courses: a formal, structured, closed forum and an informal, non-structured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, role and power structures that lead the knowledge construction process to high phases of critical thinking.
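To make the Social Network Analysis step above concrete, the Python sketch below (using the networkx library) computes cohesion, clique and centrality measures from a reply network; the participants and reply edges are invented placeholders, not data from the cited study.

import networkx as nx

# Each edge (a, b) means participant a responded to a message by participant b.
# The participants and edges below are hypothetical illustration data.
replies = [
    ("student1", "student2"), ("student2", "student1"),
    ("student3", "student1"), ("student1", "student3"),
    ("tutor", "student2"), ("student2", "student3"),
]

G = nx.DiGraph()
G.add_edges_from(replies)

# Cohesion: overall density of the response network.
print("density:", nx.density(G))

# Cohesive subgroups: cliques are found on the undirected projection.
print("cliques:", list(nx.find_cliques(G.to_undirected())))

# Role/power indicators: degree and betweenness centrality per participant.
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))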
---
paper_title: FACE-TO-FACE VERSUS THREADED DISCUSSIONS: THE ROLE OF TIME AND HIGHER-ORDER THINKING
paper_content:
This study compares the experiences of students in face-to-face (in class) discussions with threaded discussions and also evaluates the threaded discussions for evidence of higher-order thinking. Students were enrolled in graduate-level classes that used both modes (face-to-face and online) for course-related discussions; their end-of-course evaluations of both experiences were grouped for analysis and themes constructed based on their comments. Themes included the “expansion of time,” “experience of time,” “quality of the discussion,” “needs of the student,” and “faculty expertise.” While there are advantages to holding discussions in either setting, students most frequently noted that using threaded discussions increased the amount of time they spent on class objectives and that they appreciated the extra time for reflection on course issues. The face-to-face format also had value as a result of its immediacy and energy, and some students found one mode a better “fit” with their preferred learning mode. The analysis of higher-order thinking was based on a content analysis of the threaded discussions only. Each posting was coded as one of the four cognitive-processing categories described by Garrison and colleagues [1]: 18% were triggering questions, 51% were exploration, 22% were integration, and 7% resolution. A fifth category – social – was appropriate for 3% of the responses and only 12% of the postings included a writing error. This framework provides some support for the assertion that higher-order thinking can and does occur in online discussions; strategies for increasing the number of responses in the integration and resolution categories are discussed.
---
paper_title: Inside online learning: Comparing conceptual and technique learning performance in place-based and ALN formats
paper_content:
Online learning is coming of age. ‘Traditional’ universities are embracing online components to courses, online courses, and even complete online programs. With the advantage of distance and time insensitivity for the learning process, there appears to be a growing sense that this form of teaching and learning has strong pedagogical merit. Research has shown that students do comparatively well in this new format. There is, however, a lack of evidence illustrating particular strengths and weaknesses of online teaching and learning. This paper discusses experiences with a single course taught using two forms: (1) traditional place-based, and (2) a form of asynchronous learning network (ALN) defined as interactive virtual seminars. Differences in learning performance are tested using longitudinal observations. In a course comprised of both conceptual material and the application of techniques, the students performed overall equally well in either place-based or virtual format. Their degree of learning, however, differed significantly between conceptual and technique-based material. Implications are promising, showing that there are relative strengths to be exploited in both place-based and virtual formats.
---
paper_title: Communities of Practice: guidelines for the design of online seminars in higher education
paper_content:
This article focuses on the Community of Practice (CoP) concept and its implications for designing online seminars in the university context. Student learning in seminars at universities is seen as peripheral participation in a particular scientific community—one of the many knowledge-creating CoPs that constitute a university. Introducing information technology into university education thus should be measured by the degree to which these new ways of teaching enhance students' access to scientific communities. This framing view of university education is connected to a social theory of learning where learning is seen as an essentially social, situated phenomenon. The concept of ‘legitimate peripheral participation’ in a community of practice is used to derive a design framework for online seminars. Using this framework, the authors implemented an online seminar on the topic of organisational knowledge management at the Johannes Kepler University Linz, Austria. The GroupWare platform BSCW (Basic Support f...
---
paper_title: Asynchronous discussion in support of medical education
paper_content:
Although the potential of asynchronous discussion to support learning is widely recognized, student engagement remains problematic. Often, for example, students simply refuse to participate. Consequently the rich promise of asynchronous learning networks for supporting students’ learning can prove hard to achieve. After reviewing strategies for encouraging student participation in discussions in Asynchronous Learning Networks (ALN), we present a study that investigates how these strategies influenced students’ perceptions and use of the discussion area. We identify and explore factors that encouraged and inhibited student participation in asynchronous discussion, and evaluate student postings to an asynchronous discussion group by content analysis.The results question received wisdom about some of the pedagogic techniques advocated in the literature. Instead, results support the view that the major factors for stimulating student participation in asynchronous discussion are tutor enthusiasm and expertise. It appears that the tutor may be the root cause of engagement in discussions, an important conclusion, given that to date, the tutor’s role has remained relatively unexamined. We also note that participation in asynchronous discussion is inhibited when students allocate a low priority to participation, as may occur when participation is not assessed. Content analysis of an asynchronous discussion in this study reveals that contributions were not strongly interactive and that students were simply ‘playing the game’ of assessment, making postings that earned marks but rarely contributing otherwise. Thus the use of assessment to encourage students’ contributions appears to be only a superficial success; it seems likely that giving credit for postings changes behavior without necessarily improving learning. This finding has significant implications for curriculum design.
---
paper_title: Teacher's and students' perspectives on on-line learning in a social constructivist learning environment
paper_content:
This article summarises the teaching and learning experiences of students in higher education with regard to the use of computer conferencing. The on-line unit was designed and implemented within a social constructivist framework, which promoted interaction, collaboration and experiential learning. Students who undertook this unit were practising science and mathematics teachers. They ranged in their experience of on-line learning from inexperienced users to seasoned, on-line learners. However, common among the participants is their novel exposure to being a part of a learning community where interaction and communication took precedence over individual learning. This article looks at the learning experiences from the perspectives of the facilitator and three on-line learners, based on the participants' personal stories specifically in terms of: (a) concerns associated with the implementation of the on-line learning; (b) the differences encountered when using technology for the first time to learn; (c) the difference in perspective, before and after the on-line learning experience; and (d) the differences in the learning experience in light of exposure to a new theoretical framework and a new mode of delivery. The conclusions from the stories suggest that the interaction and the social presence of others grew from the use of a social constructivist approach to teaching and learning, within the context of computer-mediated communication technologies. The role of the facilitator in this setting was to provide a context for social learning and to maintain a student-centred approach. Alternatively, the role of the students was to proactively engage in peer learning.
---
paper_title: Using group communication to monitor web-based group learning
paper_content:
In a web group-learning environment, students must communicate with other group members on the Internet to accomplish group projects and share knowledge. Communication is likely to affect performance and so analysing the relationship between communicative relationships and group performance may help teachers to monitor groups effectively. Certain tasks are necessary to perform such an analysis — recording group communication, extracting communication relationships and determining the relationship between group communication and group performance. This study developed a method for determining relationships and rules for predicting performance to enable teachers to act appropriately according to the predicted performance of the group. Four group performance indicators are considered — average grades within a group, project grade, frequency of resource-sharing and drop-out rate. Experimental results are presented, concerning the application of the methodology to a web class of 706 students, divided into 70 groups. The experimental results show that group communication patterns significantly affect group performance.
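A minimal sketch of the kind of analysis described above, relating a group's communication volume to a performance indicator, is given below in Python; the message counts and grades are invented example values, and a simple Pearson correlation stands in for the paper's own rule-derivation method.

import math

# Hypothetical per-group data: messages exchanged and average group grade.
messages = [120, 45, 300, 80, 150, 60]
avg_grade = [72.0, 55.0, 88.0, 61.0, 75.0, 58.0]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strong positive value would suggest communication volume tracks performance.
print("correlation:", round(pearson(messages, avg_grade), 3))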
---
paper_title: Mirror, mirror, on my screen ... exploring online reflections
paper_content:
This paper suggests that, through the provision of opportunities for reflection-in-action at critical learning stages and with the support of a trained e-moderator, the participants in computer mediated conferencing (CMC) can be encouraged to engage in reflecting about their onscreen experiences. Such reflection aids the building of a productive online community of practice. In addition, by encouraging participants to reflect on later stages of their online training experiences, a reflection-on-action record can be built up. Participants' reflective processes can be captured through analysis of their on screen text messages and so be available for research purposes. Examples of conference text message reflections are given throughout the paper, drawn from the on screen reflections of Open University Business School (OUBS) Associate Lecturers who were working online through the medium of computer mediated conferencing for the first time. The conclusion is that reflection-on-practice in the online environment is beneficial for helping the participants to learn from online conferencing and can provide an excellent tool for qualitative research. Opportunities for reflection need to be built into the design of online conferences and facilitated by a trained e-moderator.
---
paper_title: Using unmediated computer conferencing to promote reflective practice and confidence-building in initial teacher education
paper_content:
Abstract The use of computer conferencing in initial teacher education (ITE) has been well documented, along with the barriers to its implementation. Existing research includes investigation into ways in which computer conferencing can be used as a medium to support reflective thinking and professional discourse between university tutors, teachers, student teachers and their peers during school placement. This article takes a different approach, examining instead whether computer conferencing can be successful between ITE peer groups, from different educational systems, without tutor moderation. It considers to what extent computer conferencing can raise students' confidence in the use of information and communication technologies and can encourage ‘reflective practice’ among student teachers. The report also considers the extent to which on-line discussion among student teachers can provide emotional support and stress relief during the course's most intensive period of teaching practice. Finally, it des...
---
paper_title: Variety is the spice of life: student use of CMC in the context of campus based study
paper_content:
At present, courses within British higher education institutions offer a somewhat haphazard patchwork of IT-based learning resources. Through university intranets it is now possible for many students to follow at least parts of their courses online. However, the provision available is highly dependent on local resources and individual tutors. This paper focuses on student discussion supported via computer mediated communication, but not in the context of distance learning. Rather the focus is upon campus-based study, where students are working with one another in a sustained mode over a period of time. In the context of an ESRC 'Virtual Society?' research project we have been using the online dialogues together with interviews and questionnaires to examine two third year level psychology courses at different universities. In both cases tutors used web resources to facilitate computer-mediated communication as an integral part of the course. Different contexts for learning were created by the differing stances of the tutors. One tutor took an active, participatory role whereas the other tutor remained a non-participant. Both, however, wanted to create wide-ranging discussion amongst the learners. The differing roles of tutors were associated with a marked difference in communication styles and perceived learning outcomes.
---
paper_title: Beyond knowledge transmission? computer-supported learning in teacher education: some benefits in terms of stress, control and self-belief
paper_content:
This article outlines some benefits of a computer-supported interactive learning environment in terms of students' attitudes and factors associated with stress. In the 3rd year of a 4-year undergraduate course for primary teachers (B.Ed.), students were required to undertake a research module, which was seen in part as a preparation for a thesis dissertation in the final year. The subject matter was contained in several computer conferences; other learning experiences included some face-to-face contact and optional technical support sessions. Initial feedback from students indicated that there are benefits in terms of greater flexibility of work patterns, increased sense of control and enhanced self-esteem. Other issues are raised, including the role of the tutor, cost benefits and problems of access. More research is called for into the non-cognitive aspects of computer use.
---
paper_title: A Method to Increase Student Interaction Using Student Groups and Peer Review over the Internet
paper_content:
A method of peer review for student groups is proposed. In this method, groups of students publish their assignments results over the Internet. A fellow student group reviews their work and publishes their findings (on the Internet). Finally, the two groups debate their points of view in front of the class. The debate and healthy competition among groups give the students a chance to learn how to give and receive criticism in a constructive way. This should increase the students' ability to interact and work in groups, an important skill for computer science professionals.
---
paper_title: Inter-Rater Reliability of an Electronic Discussion Coding System.
paper_content:
Abstract A ‘cognote’ system has been developed for coding electronic discussion groups and promoting critical thinking. Previous literature has provided an account of the strategy as applied to several academic settings. This article addresses the research around establishing the inter-rater reliability of the cognote system. The findings suggest three indicators of reliability, namely: 1. that raters assign similar grades to students' discussion group contributions; 2. that raters predominantly assign the same cognotes to students' discussion group contributions and 3. that raters are selecting in excess of 50% of the same text in assigning the same cognotes.
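The inter-rater reliability checks described above can be quantified with simple agreement statistics; the Python sketch below computes raw percent agreement and Cohen's kappa for two raters' cognote assignments, using made-up labels rather than the study's transcripts.

from collections import Counter

# Hypothetical cognote labels assigned by two raters to the same ten postings.
rater_a = ["analysis", "evidence", "analysis", "question", "evidence",
           "analysis", "question", "evidence", "analysis", "question"]
rater_b = ["analysis", "evidence", "question", "question", "evidence",
           "analysis", "question", "analysis", "analysis", "question"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement by chance, from each rater's marginal label frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")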
---
paper_title: Can a Collaborative Network Environment Enhance Essay-Writing Processes?.
paper_content:
The aim of this study is to examine whether a computer-supported learning environment enhances essay writing by providing an opportunity to share drafts with fellow students and receive feedback from a draft version. Data for this study were provided by 25 law students who were enrolled in a course in legal history at the University of Helsinki in February 2001. Both the students and the teacher were interviewed. The interviews showed that the students' experiences of the essay-writing process were very positive. The teacher's experiences were in line with the students'. The results showed that the students seemed to divide into two groups concerning their experiences towards sharing written drafts with peers: those who were very enthusiastic and enjoyed the possibility to share drafts and those who, on the other hand, felt that the idea of sharing unfinished essays was too threatening for them and required too much openness. The results further showed that the active use of a computer-supported learning environment was related to good essay grades.
---
paper_title: Distance education via the Internet: the student experience
paper_content:
This is the second in a series of papers that describes the use of the Internet on a distance-taught undergraduate Computer Science course (Thomas et al., 1998). This paper examines students' experience of a large-scale trial in which students were taught using electronic communication exclusively. The paper compares the constitution and experiences of a group of Internet students to those of conventional distance learning students on the same course. Learning styles, background questionnaires, and learning outcomes were used in the comparison of the two groups. The study reveals comparable learning outcomes with no discrimination in grade as the result of using different communication media. The student experience is reported, highlighting the main gains and issues of using the Internet as a communication medium in distance education. This paper also shows that using the Internet in this context can provide students with a worthwhile experience.
---
paper_title: Teacher Participation in Computer Conferencing: socio-psychological dimensions
paper_content:
Abstract It has been pointed out in the literature on computer-mediated communication that the social dimensions of network design, user behaviour and user participation on computer networks are very important to the building of an electronic community. So far, no study has examined these dimensions systematically and the ways in which they mediate user participation. This article reports on a study conducted on a computer network for English teachers in Hong Kong schools, TeleNex, to investigate the teachers' participation on the network, and the technical and social psychological dimensions that mediated their participation. Data were obtained by questionnaires and by interviews with a small sample of teachers. The questionnaire results show significant differences between teachers who participated actively in conferencing and those who did not. The interview data provided further insights into the social psychological dimensions. Implications to be considered when building an electronic community of te...
---
paper_title: GENDER AND ONLINE DISCOURSE IN THE PRINCIPLES OF ECONOMICS
paper_content:
Collaboration is the heart of online learning. Interaction among course participants brings excitement to the online environment and creates knowledge as a group activity. Impediments to active collaboration reduce group, as well as individual, potentialities. Past studies of online discussions have found differences in the style of female and male conversations that could impede the learning process. The conversational styles of female and male students in two online principles of economics classes were analyzed in the present study. The null hypothesis posited no difference in the styles of online discourse between female and male students. The null hypothesis was rejected, implying gender differences in conversational styles. The tone of male postings was more optimistic than the tone of female postings. Female conversations used words revealing social isolation and the rejection of social norms. The paper also discussed the issue of the male X-factor in the principles of economics from a sociolinguistic perspective.
---
paper_title: Using unmediated computer conferencing to promote reflective practice and confidence-building in initial teacher education
paper_content:
Abstract The use of computer conferencing in initial teacher education (ITE) has been well documented, along with the barriers to its implementation. Existing research includes investigation into ways in which computer conferencing can be used as a medium to support reflective thinking and professional discourse between university tutors, teachers, student teachers and their peers during school placement. This article takes a different approach, examining instead whether computer conferencing can be successful between ITE peer groups, from different educational systems, without tutor moderation. It considers to what extent computer conferencing can raise students' confidence in the use of information and communication technologies and can encourage ‘reflective practice’ among student teachers. The report also considers the extent to which on-line discussion among student teachers can provide emotional support and stress relief during the course's most intensive period of teaching practice. Finally, it des...
---
paper_title: Off-Line Factors Contributing to Online Engagement
paper_content:
Abstract Online discourse environments are increasingly popular both in distance education contexts and as adjuncts to face-to-face learning. For many participants such contexts are experienced as positive, community-supported learning opportunities, but this is not the case for everyone. Understanding more about the online and off-line factors that contribute to the online experience is important in order to support equitable online learning. This study has analysed patterns of engagement and disengagement in one particular learning context; that of pre-service, math-anxious elementary candidates enrolled in a two-year pre-service program. Program supports for the self-declared math-anxious participants (n = 20 from a total cohort of 57) included small-group math investigations and participation in an online learning environment. Results show tremendous variability in levels of contribution and that the online context provided most learning support for participants who had had successful social and subje...
---
paper_title: FACE-TO-FACE VERSUS THREADED DISCUSSIONS: THE ROLE OF TIME AND HIGHER-ORDER THINKING
paper_content:
This study compares the experiences of students in face-to-face (in class) discussions with threaded discussions and also evaluates the threaded discussions for evidence of higher-order thinking. Students were enrolled in graduate-level classes that used both modes (face-to-face and online) for course-related discussions; their end-of-course evaluations of both experiences were grouped for analysis and themes constructed based on their comments. Themes included the “expansion of time,” “experience of time,” “quality of the discussion,” “needs of the student,” and “faculty expertise.” While there are advantages to holding discussions in either setting, students most frequently noted that using threaded discussions increased the amount of time they spent on class objectives and that they appreciated the extra time for reflection on course issues. The face-to-face format also had value as a result of its immediacy and energy, and some students found one mode a better “fit” with their preferred learning mode. The analysis of higher-order thinking was based on a content analysis of the threaded discussions only. Each posting was coded as one of the four cognitive-processing categories described by Garrison and colleagues [1]: 18% were triggering questions, 51% were exploration, 22% were integration, and 7% resolution. A fifth category – social – was appropriate for 3% of the responses and only 12% of the postings included a writing error. This framework provides some support for the assertion that higher-order thinking can and does occur in online discussions; strategies for increasing the number of responses in the integration and resolution categories are discussed.
---
paper_title: Computer conferencing with access to a 'guest expert' in the professional development of special educational needs coordinators
paper_content:
This article describes and outlines the implications of a one-year case study of students’ use of the computer conferencing facility of a postgraduate module for special educational needs coordinators (SENCOs) at a distance-learning institution. This facility incorporates a virtual space for a ‘guest expert’. The aim of the study was to inform future development of courses at a time when computer conferencing was just becoming widespread in the university concerned. Quantitative data associated with the volume and patterns of individual participation in the computer conference were collected as well as interview material from students, tutors and the ‘guest expert’. Findings from the study indicate that computer conferencing has the potential to facilitate the professional development of teachers as reflective practitioners and researchers. However, they also point to a number of barriers to student participation that must be addressed. These include access issues related to time constraints, unfamiliarity with the medium, and lack of confidence in expressing personal views in a public arena. A major conclusion drawn from this study is that it may be appropriate to consider future developments which incorporate the assumption that, in computer conferences of large professional development courses, students are much more likely to participate through reading rather than making personal contributions to conference discussions. This opens the possibility of reconceptualising the role of the ‘guest expert’ as two or more discussants with relevant expertise dialoguing with each other while students follow a threaded discussion and/or make personal contributions.
---
paper_title: Student characteristics and computer-mediated communication
paper_content:
Abstract Use of computer-mediated communication systems (CMCS) to support coursework is increasing, both as a means for students to prepare for using CMCS in their careers and as a mechanism for delivering distance education. But it is not clear whether the same student characteristics lead to academic success using CMCS as with traditional face-to-face (FTF) communication. This paper reports the results of a correlational study of the relationship between individual characteristics and use of CMCS in a team project situation. On most measures the results suggest CMCS will be adopted and used successfully by the same types of students who do well in courses conducted via FTF communication, e.g., students with high-achievement or high-aptitude characteristics. However, personality type was linked to substantial deviations in CMCS usage, suggesting that personality may influence academic success in unanticipated ways.
---
paper_title: The role of cognitive style in educational computer conferencing
paper_content:
This paper reports an investigation of the impact of students’ cognitive style on their effective use of educational text-based computer-mediated conferences. The research centres on an empirical study involving students from three courses run by the British Open University. Statistical analysis of the data does not suggest that cognitive style has a strong influence on student participation in the conference, but does suggest that, contrary to expectations, ‘imagers’ may send more messages to conferences than ‘verbalisers’. The data also suggest a possible link between certain cognitive styles and course completion, and that the interaction of different styles within a group, as described by ) team roles, may have an indirect influence on task completion.
---
paper_title: Can a Collaborative Network Environment Enhance Essay-Writing Processes?.
paper_content:
The aim of this study is to examine whether a computer-supported learning environment enhances essay writing by providing an opportunity to share drafts with fellow students and receive feedback from a draft version. Data for this study were provided by 25 law students who were enrolled in a course in legal history at the University of Helsinki in February 2001. Both the students and the teacher were interviewed. The interviews showed that the students' experiences of the essay-writing process were very positive. The teacher's experiences were in line with the students'. The results showed that the students seemed to divide into two groups concerning their experiences towards sharing written drafts with peers: those who were very enthusiastic and enjoyed the possibility to share drafts and those who, on the other hand, felt that the idea of sharing unfinished essays was too threatening for them and required too much openness. The results further showed that the active use of a computer-supported learning environment was related to good essay grades.
---
paper_title: Collaborative knowledge building to promote in-service teacher training in environmental education
paper_content:
Abstract Environmental education (EE) is a problematic field in teacher education for many reasons. First, there is no consensus about its central concepts. Second, environmental education emerged as a response to environmental problems. Environmental educators do not agree on what are real environmental problems and what are exaggerated fears. For many educators, global warming is a serious environmental problem, but for those who view it in a geological perspective of long-term climatic change, it is not such a problem. When teachers are provided with the possibility of sharing problems of EE and building knowledge collaboratively with university experts, what do they do? What kind of problems do teachers regard as important? What kinds of problems do the university experts regard as important? These questions were investigated through the use of a database program called Knowledge Forum®. Knowledge Forum® is a shared virtual environment for collaborative knowledge building. This article analyses the use...
---
paper_title: Following the thread in computer conferences
paper_content:
Abstract Computer conferencing systems allow students to discuss their ideas and learn from each other. However, the asynchronous nature of these discussions can result in large and complex collections of messages. Threading facilities help students to cope with this by structuring their discussions into parallel ‘conversations’. This paper discusses an investigation of students’ use of threading in two different conferencing systems. The context for the study was a small-group collaborative assignment in an Open University course. Conference transcripts were studied, and ‘message maps’ were created, in order to investigate the threading links made by students, in relation to the semantic links between the messages. The results show that the way in which threads are represented in a conferencing system can have a significant effect on how students use the system, and on the character of the resulting discussions.
---
paper_title: The Use of Asynchronous Learning Networks in Nutrition Education: Student Attitude, Experiences and Performance
paper_content:
In this study a change in teaching strategy to involve a greater emphasis on asynchronous learning networks (ALNs) was implemented and the views of students (n=51) to this change were evaluated through responses to an online questionnaire. In response to Likert-type questions the majority of students demonstrated a positive view of this new model. Sixty-one percent of students felt that other types of online material would benefit the learning process and 80% would recommend this module to a friend. Students acknowledged that the use of ALN-supported learning made the material easier to understand (52%), the lecturer more accessible (66%) and enabled them to take a more active role in the learning process (55%). Though only 10% of students utilized the asynchronous newsgroup more than 5 times, 77% found reading the contributions of others useful. Contrary to this, 76% preferred the more familiar lecture-based environment for subject delivery. In response to open-ended questions students’ views were more reserved and highlighted a range of problems such as inadequate infrastructure, unreliable computers, and poor access to the online material as well as resistance to a new teaching paradigm. Student performance was influenced by age and contribution to the newsgroup. Those who were younger had a lower grade (47.8 ± 15.8) than those who were older (52.0 ± 11.4). Students with higher grades (56.2 ± 10.3) contributed to the newsgroup while students with lower grades (45.7 ± 12.5) did not. Based on these observations, it is apparent that students do appreciate the advantages of ALN-supported learning though for a shift toward this model to be effective problems of access and system failure must be resolved. Implications for future ALN-based modules are discussed.
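To illustrate how far apart the reported grade groups above are, the Python sketch below turns the quoted means and standard deviations into a standardized effect size; the equal-group-size assumption behind the pooled SD is mine, since the abstract does not report group sizes.

import math

# Reported group statistics (mean, SD) for newsgroup contributors vs. non-contributors.
contributors = (56.2, 10.3)
non_contributors = (45.7, 12.5)

def cohens_d(group1, group2):
    (m1, s1), (m2, s2) = group1, group2
    # Simple pooled SD assuming equal group sizes (an assumption; sizes are not reported).
    pooled_sd = math.sqrt((s1 ** 2 + s2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

print("Cohen's d:", round(cohens_d(contributors, non_contributors), 2))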
---
paper_title: Taking E-Moderating skills to the next level: Reflecting on the design of conferencing environments
paper_content:
This paper reports an analysis of computer conference structures set up for a distance education course in which major components of the teaching and learning involve group discussions and collaboration via asynchronous text-based conferencing. As well as adopting traditional e-moderator roles, tutors were required to design appropriate online spaces and navigation routes for students. Tutors’ views concerning conference structures focussed on tensions between enabling easy access to conference areas, facilitating the successful running of activities, and addressing students’ subsequent needs for retrieval of conference material for assessment tasks. The geographically dispersed course tutors initially explored these issues in reflective online conversations. Comparisons were made between structures that were set up differently but all used for essentially the same tasks and purposes. Evidence from conference messages, from student feedback given in questionnaire and interview responses, as well as from students’ written assignments, provided insights into the impact such structures may have on the student learning experience. Students found conference areas for their own group easy to navigate, but they had concerns about managing the large number of messages; these concerns centred on the volume, threading, linking, length, and language of messages.
---
| Title: A Review of Recent Papers on Online Discussion in Teaching and Learning in Higher Education
Section 1: INTRODUCTION
Description 1: Introduce the theme of the paper, provide background on asynchronous online discussions, and discuss the aim and scope of the review.
Section 2: Sample and Selection Criteria
Description 2: Describe the categorization and selection process for the sample of reviewed papers, including journal, discipline, country, and software used.
Section 3: Themes and Focus Questions
Description 3: Detail the categorization of papers by themes with associated focus questions, including Curriculum Design, Theoretical Assumptions, Claims Made, and Conditions.
Section 4: Curriculum Design
Description 4: Discuss issues in categorizing curriculum design and describe types of online discussion activities identified in the literature.
Section 5: Theoretical Assumptions about Teaching and Learning
Description 5: Explain the theories of teaching and learning that underpin the reviewed work, with a focus on social constructivism, media theory, and social psychology.
Section 6: Claims Made for Asynchronous Online Discussion within the Case Studies
Description 6: Provide an overview of the claims made regarding the benefits of asynchronous online discussion and highlight the constraints and opportunities presented in the studies.
Section 7: Optimal Conditions for Asynchronous Online Discussion
Description 7: Describe the key conditions associated with successful asynchronous online discussions, including curriculum design, instructor support, learners' behavior and attitudes, and software considerations.
Section 8: CONCLUSION
Description 8: Summarize insights from the reviewed papers, highlighting best practices, consensus on the benefits of asynchronous online discussion, and conditions promoting learner engagement.
Section 9: Broad Consensus on Best Practices
Description 9: Outline the broad consensus on best practices for asynchronous online discussion, including curriculum, instructor, learner, and software recommendations.
Section 10: Directions for Future Research
Description 10: Suggest areas for future research, such as developing curriculum models, clarifying the role of interaction, transferability to other settings, and addressing the responsibilities of learners and teachers. |
Load Balancing Optimization in LTE/LTE-A Cellular Networks: A Review | 8 | ---
paper_title: Adaptive Neuro-Fuzzy Inference System for Dynamic Load Balancing in 3GPP LTE
paper_content:
ANFIS is applicable in modeling key parameters when investigating the performance and functionality of wireless networks. The need to save both capital and operational expenditure in the management of wireless networks cannot be over-emphasized. Automation of network operations is a veritable means of achieving the necessary reduction in CAPEX and OPEX. To this end, next-generation networks such as WiMAX, 3GPP LTE and LTE-Advanced provide support for self-optimization, self-configuration and self-healing to minimize human-to-system interaction and hence reap the attendant benefits of automation. One of the most important optimization tasks is load balancing as it affects network operation right from planning through the lifespan of the network. Several methods for load balancing have been proposed. While some of them have a very buoyant theoretical basis, they are not practically implementable at the current state of technology. Furthermore, most of the techniques proposed employ iterative algorithms, which are in themselves not computationally efficient. This paper proposes the use of soft computing, precisely an adaptive neuro-fuzzy inference system, for dynamic QoS-aware load balancing in 3GPP LTE. Three key performance indicators (i.e. number of satisfied users, virtual load and fairness distribution index) are used to adjust the hysteresis for the load balancing task.
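As a rough, non-authoritative sketch of the idea, the Python code below maps the three key performance indicators to a handover hysteresis adjustment using hand-written fuzzy-style membership functions and rules; the KPI ranges, membership shapes, rule base and sign convention are illustrative assumptions, not the trained ANFIS model from the paper.

def tri(x, a, b, c):
    """Triangular membership degree of x for the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hysteresis_adjustment(satisfied_ratio, virtual_load, fairness_index):
    """Return a hysteresis change in dB from three normalized KPIs (0..1)."""
    overloaded = tri(virtual_load, 0.5, 1.0, 1.5)       # high virtual load
    underloaded = tri(virtual_load, -0.5, 0.0, 0.5)     # low virtual load
    unhappy = tri(satisfied_ratio, -0.5, 0.0, 0.6)      # few satisfied users
    unfair = tri(fairness_index, -0.5, 0.0, 0.6)        # poor fairness distribution

    # Assumed rule base: push users toward neighbours when overloaded/unhappy/unfair,
    # pull them back when underloaded. Weighted-average defuzzification.
    rules = [(overloaded, -1.0), (unhappy, -0.5), (unfair, -0.5), (underloaded, +1.0)]
    strength = sum(w for w, _ in rules)
    if strength == 0:
        return 0.0
    return sum(w * dB for w, dB in rules) / strength

print(hysteresis_adjustment(satisfied_ratio=0.4, virtual_load=0.9, fairness_index=0.5))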
---
| Title: Load Balancing Optimization in LTE/LTE-A Cellular Networks: A Review
Section 1: Introduction
Description 1: Introduce the increasing demand for bandwidth and the resulting load imbalance in cellular networks, and the role of load balancing techniques.
Section 2: Overview of LTE/LTE -Advanced
Description 2: Provide background information on LTE and LTE-A technologies, including their architecture, features, and components.
Section 3: Load Balancing Mechanism
Description 3: Discuss various load balancing techniques, emphasizing active mode and idle mode load balancing in LTE/LTE-A networks.
Section 4: Active Mode Load Balancing
Description 4: Explain the active mode load balancing process and its advantages in managing user traffic and radio conditions for load balancing.
Section 5: Idle Mode Load Balancing
Description 5: Describe the challenges and mechanisms of achieving idle mode load balancing, including the adjustment of cell reselection parameters.
Section 6: Handover in LTE-Advanced
Description 6: Detail the importance of handovers in load balancing schemes, including intra-LTE and inter-RAT handovers.
Section 7: Classical and Advanced Load Balancing Approaches
Description 7: Review existing load balancing methods, such as dynamic channel assignment, coverage area-based techniques, and advanced approaches like game-theoretic and adaptive neuro-fuzzy methods.
Section 8: Conclusion
Description 8: Summarize the importance of load balancing in cellular networks and the need for self-optimizing mechanisms for future generation networks. |
Low power processor architectures and contemporary techniques for power optimization—a review | 12 | ---
paper_title: Pipeline gating: speculation control for energy reduction
paper_content:
Branch prediction has enabled microprocessors to increase instruction level parallelism (ILP) by allowing programs to speculatively execute beyond control boundaries. Although speculative execution is essential for increasing the instructions per cycle (IPC), it does come at a cost. A large amount of unnecessary work results from wrong-path instructions entering the pipeline due to branch misprediction. Results generated with the SimpleScalar tool set using a 4-way issue pipeline and various branch predictors show an instruction overhead of 16% to 105% for every instruction committed. The instruction overhead will increase in the future as processors use more aggressive speculation and wider issue widths. In this paper we present an innovative method for power reduction which, unlike previous work that sacrificed flexibility or performance, reduces power in high-performance microprocessors without impacting performance. In particular we introduce a hardware mechanism called pipeline gating to control rampant speculation in the pipeline. We present inexpensive mechanisms for determining when a branch is likely to mispredict, and for stopping wrong-path instructions from entering the pipeline. Results show up to a 38% reduction in wrong-path instructions with a negligible performance loss (approximately 1%). Best of all, even in programs with high branch prediction accuracy, performance does not noticeably degrade. Our analysis indicates that there is little risk in implementing this method in existing processors since it does not impact performance and can benefit energy reduction.
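As a rough illustration of the gating decision described above (not the paper's actual confidence estimator), the C sketch below stalls instruction fetch whenever the number of unresolved low-confidence branches in flight reaches a threshold; the threshold value, the per-cycle trace, and the function names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

#define GATE_THRESHOLD 2   /* illustrative: max low-confidence branches in flight */

/* Per-cycle gating decision: stall fetch while too many unresolved
 * low-confidence branches could flood the pipeline with wrong-path work. */
static bool gate_fetch(int low_conf_in_flight)
{
    return low_conf_in_flight >= GATE_THRESHOLD;
}

int main(void)
{
    /* Toy trace: number of unresolved low-confidence branches each cycle. */
    int trace[] = { 0, 1, 2, 3, 2, 1, 0, 1 };
    int n = sizeof trace / sizeof trace[0];

    for (int cycle = 0; cycle < n; cycle++) {
        printf("cycle %d: %s\n", cycle,
               gate_fetch(trace[cycle]) ? "gate fetch (save wrong-path energy)"
                                        : "fetch normally");
    }
    return 0;
}
```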
---
paper_title: Energy Dissipation In General Purpose Microprocessors
paper_content:
In this paper we investigate possible ways to improve the energy efficiency of a general purpose microprocessor. We show that the energy of a processor depends on its performance, so we chose the energy-delay product to compare different processors. To improve the energy-delay product we explore methods of reducing energy consumption that do not lead to performance loss (i.e. wasted energy), and explore methods to reduce delay by exploiting instruction level parallelism. We found that careful design reduced the energy dissipation by almost 25%. Pipelining can give approximately a 2x improvement in energy-delay product. Superscalar issue, however, does not improve the energy-delay product any further since the overhead required offsets the gains in performance. Further improvements will be hard to come by since a large fraction of the energy (50-80%) is dissipated in the clock network and the on-chip memories. Thus, the efficiency of processors will depend more on the technology being used and the algorithm chosen by the programmer than on the micro-architecture.
---
paper_title: Predicting short circuit power from timing models
paper_content:
Power dissipation is becoming a major show-stopper for integrated circuit design, especially in server and pervasive computing technologies. Careful consideration of power requirements is expected to bring major changes in the way we design and analyze integrated circuit performance. This paper proposes a practical methodology to evaluate the short-circuit power of static CMOS gates via effective use of timing information from timing analysis. We introduce three methods to estimate the short-circuit power of a static CMOS circuit without requiring explicit circuit simulation. The proposed methodology offers practical advantages over previous approaches, which rely heavily on simple, special-purpose device models. The proposed approach is evaluated with an extensive set of benchmark examples and several device models and is found to be very accurate.
---
paper_title: Rate Monotonic vs. EDF: Judgment Day
paper_content:
Since the first results published in 1973 by Liu and Layland on the Rate Monotonic (RM) and Earliest Deadline First (EDF) algorithms, a lot of progress has been made in the schedulability analysis of periodic task sets. Unfortunately, many misconceptions still exist about the properties of these two scheduling methods, which usually tend to favor RM more than EDF. Typical wrong statements often heard in technical conferences and even in research papers claim that RM is easier to analyze than EDF, that it introduces less runtime overhead, that it is more predictable in overload conditions, and that it causes less jitter in task execution. Since the above statements are either wrong or imprecise, it is time to clarify these issues in a systematic fashion, because the use of EDF allows better exploitation of the available resources and significantly improves the system's performance. This paper compares RM against EDF under several aspects, using existing theoretical results, specific simulation experiments, or simple counterexamples to show that many common beliefs are either false or only restricted to specific situations.
---
paper_title: An intra-task dvfs technique based on statistical analysis of hardware events
paper_content:
The importance of and demand for various types of optimization techniques for program execution are growing rapidly. In particular, dynamic optimization techniques are regarded as important. Although conventional techniques usually generate an execution model for dynamic optimization by qualitatively analyzing the behaviors of computer systems in a knowledge-based manner, the proposed technique generates models by statistically analyzing the behaviors from quantitative data of hardware events. In the present paper, a novel dynamic voltage and frequency scaling (DVFS) method based on statistical analysis is proposed. The proposed technique is a hybrid technique in which static information, such as the breakpoints of program phases, and dynamic information, such as the number of cache misses given by the performance counter, are used together. Relationships between the performance and the values of performance counters are learned statistically in advance. The compiler then inserts run-time code for predicting the performance and setting the appropriate frequency/voltage depending on the predicted performance. The proposed technique can greatly reduce energy consumption while satisfying soft timing constraints.
---
paper_title: Scheduling for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming more important, especially for battery operated systems. Displays, disks, and CPUs, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the CPU. We introduce a new metric for CPU energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994]. What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance.
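To make the voltage/frequency trade-off concrete, the following C sketch (an illustration under assumed numbers, not taken from the paper) applies the standard dynamic-energy relation E = C·V²·cycles to a fixed amount of work: halving the clock while also lowering the supply voltage doubles the run time but cuts the energy to roughly 30% in this example, which is exactly the kind of trade-off a joules-per-instruction metric such as MIPJ is meant to capture.

```c
#include <stdio.h>

/* Dynamic switching energy for a fixed amount of work (cycle count):
 * E = C_eff * Vdd^2 * cycles. Capacitance and voltages are illustrative. */
static double dynamic_energy(double c_eff, double vdd, double cycles)
{
    return c_eff * vdd * vdd * cycles;
}

int main(void)
{
    const double c_eff  = 1e-9;   /* effective switched capacitance, F (illustrative) */
    const double cycles = 1e9;    /* work to be done, in clock cycles */

    /* Full speed: 100 MHz at 3.3 V vs. half speed: 50 MHz at 1.8 V (hypothetical). */
    double e_fast = dynamic_energy(c_eff, 3.3, cycles);
    double e_slow = dynamic_energy(c_eff, 1.8, cycles);

    printf("full speed : %.3f J, finishes in %.1f s\n", e_fast, cycles / 100e6);
    printf("half speed : %.3f J, finishes in %.1f s\n", e_slow, cycles / 50e6);
    printf("energy ratio (slow/fast): %.2f\n", e_slow / e_fast);
    return 0;
}
```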
---
paper_title: Dynamic voltage and frequency scaling under a precise energy model considering variable and fixed components of the system power dissipation
paper_content:
This work presents a dynamic voltage and frequency scaling (DVFS) technique that minimizes the total system energy consumption for performing a task while satisfying a given execution time constraint. We first show that in order to guarantee minimum energy for task execution by using DVFS it is essential to divide the system power into active and standby power components. Next, we present a new DVFS technique, which considers not only the active power, but also the standby component of the system power. This is in sharp contrast with previous DVFS techniques, which only consider the active power component. We have implemented the proposed DVFS technique on the BitsyX platform - an Intel PXA255-based platform manufactured by ADS Inc., and report detailed power measurements on this platform. These measurements show that, compared to conventional DVFS techniques, an additional system energy saving of up to 18% can be achieved while satisfying the user-specified timing constraints.
---
paper_title: Power Attack Resistant Cryptosystem Design: A Dynamic Voltage and Frequency Switching Approach
paper_content:
A novel power attack resistant cryptosystem is presented in this paper. Security in digital computing and communication is becoming increasingly important. Design techniques that can protect cryptosystems from leaking information have been studied by several groups. Power attacks, which infer program behavior from observing the power supply current drawn by a processor core, are an important form of attack. Various methods have been proposed to counter the popular and efficient power attacks. However, these methods do not adequately protect against power attacks and may introduce new vulnerabilities. In this work, we address a novel countermeasure to power attacks, namely Dynamic Voltage and Frequency Switching (DVFS). Three designs, naive, improved and advanced implementations, have been studied to test the efficiency of DVFS against power attacks. A final advanced realization of our novel cryptosystem is presented, which achieves sufficiently high power-trace entropy and time-trace entropy to block all kinds of power attacks, with a 27% energy reduction and a 16% time overhead for the DES encryption and decryption algorithms.
---
paper_title: Improving the Efficiency of Power Management Techniques by Using Bayesian Classification
paper_content:
This paper presents a supervised learning based dynamic power management (DPM) framework for a multicore processor, where a power manager (PM) learns to predict the system performance state from some readily available input features (such as the state of service queue occupancy and the task arrival rate) and then uses this predicted state to look up the optimal power management action from a pre-computed policy lookup table. The motivation for utilizing supervised learning in the form of a Bayesian classifier is to reduce the overhead of the PM, which has to recurrently determine and issue voltage-frequency setting commands to each processor core in the system. Experimental results reveal that the proposed Bayesian classification based DPM technique ensures system-wide energy savings under rapidly and widely varying workloads.
---
paper_title: Designing Embedded Processors A Low Power Perspective
paper_content:
As we embrace the world of personal, portable, and perplexingly complex digital systems, it has fallen upon the bewildered designer to take advantage of the available transistors to produce a system which is small, fast, cheap and correct, yet possesses increased functionality. Increasingly, these systems have to consume little energy. Designers are increasingly turning towards small, low-power processors and customizing these processors both in software and hardware to achieve their objectives of a low power system which is verified and has short design turnaround times. Designing Embedded Processors examines the many ways in which processor based systems are designed to allow low power devices. It looks at processor design methods, memory optimization, dynamic voltage scaling methods, compiler methods, and multi processor methods. Each section has an introductory chapter to give a breadth view, and a few specialist chapters in the area to give a deeper perspective. The book provides a good starting point for engineers in the area, and for research students embarking upon the exciting area of embedded systems and architectures.
---
paper_title: Real-time dynamic voltage scaling for low-power embedded operating systems
paper_content:
In recent years, there has been a rapid and widespread adoption of non-traditional computing platforms, especially mobile and portable computing devices. As applications become increasingly sophisticated and processing power increases, the most serious limitation on these devices is the available battery life. Dynamic Voltage Scaling (DVS) has been a key technique in exploiting the hardware characteristics of processors to reduce energy dissipation by lowering the supply voltage and operating frequency. DVS algorithms are shown to be able to make dramatic energy savings while providing the necessary peak computation power in general-purpose systems. However, for a large class of applications in embedded real-time systems like cellular phones and camcorders, the variable operating frequency interferes with their deadline guarantee mechanisms, and DVS in this context, despite its growing importance, is largely overlooked/under-developed. To provide real-time guarantees, DVS must consider deadlines and periodicity of real-time tasks, requiring integration with the real-time scheduler. In this paper, we present a class of novel algorithms called real-time DVS (RT-DVS) that modify the OS's real-time scheduler and task management service to provide significant energy savings while maintaining real-time deadline guarantees. We show through simulations and a working prototype implementation that these RT-DVS algorithms closely approach the theoretical lower bound on energy consumption, and can easily reduce energy consumption 20% to 40% in an embedded real-time system.
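The C sketch below shows only the simplest member of this class of algorithms, static voltage scaling: the frequency is set to the lowest available step whose scaled EDF utilization stays at or below 1. The task set and the frequency ladder are hypothetical, and the paper's cycle-conserving and look-ahead variants are not reproduced here.

```c
#include <stdio.h>

struct task { double wcet_at_fmax; double period; };  /* seconds */

/* Static RT-DVS: pick the lowest frequency step that still keeps the
 * scaled EDF utilization at or below 1.0. */
static double pick_frequency(const struct task *ts, int n,
                             const double *steps, int nsteps, double fmax)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += ts[i].wcet_at_fmax / ts[i].period;      /* utilization at fmax */

    for (int s = 0; s < nsteps; s++)                  /* steps sorted ascending */
        if (u <= steps[s] / fmax)
            return steps[s];
    return fmax;                                      /* no slack: run at full speed */
}

int main(void)
{
    struct task set[] = { { 0.003, 0.010 }, { 0.002, 0.020 }, { 0.001, 0.025 } };
    double steps[] = { 150e6, 300e6, 450e6, 600e6 };  /* hypothetical frequency ladder */

    double f = pick_frequency(set, 3, steps, 4, 600e6);
    printf("selected frequency: %.0f MHz\n", f / 1e6);
    return 0;
}
```

For this task set the utilization at full speed is 0.44, so the 300 MHz step (half of the maximum) already keeps the scaled utilization below 1.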
---
paper_title: The Scheduling to Achieve Optimized Performance of Randomly Addressed Polling Protocol
paper_content:
A collision occurs in real network access if two or more packets are transmitted simultaneously. Hence, contention collisions must be resolved when applying a protocol in a wireless data network. In this paper, we adopt the concepts of elimination and dynamic tree expansion in the Randomly Addressed Polling (RAP) protocol to reduce the delay time and enhance the throughput. Analysis results indicate that the throughput performance of this algorithm reaches up to about 0.9 and the delay time decays rapidly.
---
paper_title: An Optimal Algorithm for Scheduling Soft-Aperiodic Tasks in Fixed-Priority Preemptive Systems
paper_content:
A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented. This algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. Simulation studies show that it offers substantial performance improvements over current approaches, including the sporadic server algorithm. Moreover, standard queuing formulas can be used to predict aperiodic response times over a wide range of conditions. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.
---
paper_title: Low Power Synthesis of Dynamic Logic Circuits Using Fine-Grained Clock Gating
paper_content:
Clock power consumes a significant fraction of total power dissipation in high speed precharge/evaluate logic styles. In this paper, we present a novel low-cost design methodology for reducing clock power in the active mode for dynamic circuits with fine-grained clock gating. The proposed technique also improves switching power by preventing redundant computations. A logic synthesis approach for domino/skewed logic styles based on Shannon expansion is proposed, that dynamically identifies idle parts of logic and applies clock gating to them to reduce power in the active mode of operation. Results on a set of MCNC benchmark circuits in predictive 70nm process exhibit improvements of 15% to 64% in total power with minimal overhead in terms of delay and area compared to conventionally synthesized domino/skewed logic.
---
paper_title: Keeping hot chips cool
paper_content:
With 90nm CMOS in production and 65nm testing in progress, power has been pushed to the forefront of design metrics. This paper will outline practical techniques that are used to reduce both leakage and active power in a standard-cell library based high-performance design flow. We will discuss the design and cost issues of using different power saving techniques such as power gating to reduce leakage, multiple and hybrid threshold libraries for leakage reduction, and multiple supply voltage based design. In addition, techniques to reduce clock tree power will be presented, as power consumed in clocks accounts for a significant portion of total chip power. Practical aspects of implementing these techniques will also be discussed.
---
paper_title: Design of a family of sleep transistor cells for a clustered power-gating flow in 65nm technology
paper_content:
Clustered sleep transistor insertion is an effective leakage power reduction technique that is well-suited for integration in an automated design flow and offers a flexible tradeoff between area, delay overhead and turn-on transition time. In this work, we focus on the design of a family of sleep transistor cells, fully compatible with the physical design rules of a commercial 65nm CMOS library. We describe circuit-level and layout optimizations, as well as the cell characterization procedure required to support automated sleep transistor cell selection and instantiation in a clustered power-gating insertion flow.
---
paper_title: Physical design methodology of power gating circuits for standard-cell-based design
paper_content:
The application of power gating circuits to semicustom design based on standard-cell elements is limited due to the requirement of customizing cells that are tailored for power gating or the requirement of customizing physical design methodologies for placement and power network. We propose a new power network architecture that enables use of conventional standard-cell elements. A few custom library elements are developed wherever needed, including output interface circuits and data retention storage elements. A novel method of current switch design is also described. The proposed methodology is applied to ISCAS benchmark circuits, and also to a commercial Viterbi decoder with 0.18 µm CMOS technology.
---
paper_title: Low-power circuits and technology for wireless digital systems
paper_content:
As CMOS technology scales to deep-submicron dimensions, designers face new challenges in determining the proper balance between aggressive high-performance transistors and lower-performance transistors to optimize system power and performance for a given application. Determining this balance is crucial for battery-powered handheld devices in which transistor leakage and active power limit the available system performance. This paper explores these questions and describes circuit techniques for low-power communication systems which exploit the capabilities of advanced CMOS technology.
---
paper_title: Using dynamic cache management techniques to reduce energy in a high-performance processor
paper_content:
In this paper, we propose a technique that uses an additional mini cache, the L0-Cache, located between the instruction cache (I-Cache) and the CPU core. This mechanism can provide the instruction stream to the data path and, when managed properly, it can effectively eliminate the need for high utilization of the more expensive I-Cache. In this work, we propose, implement, and evaluate a series of run-time techniques for dynamic analysis of the program instruction access behavior, which are then used to proactively guide the access of the L0-Cache. The basic idea is that only the most frequently executed portions of the code should be stored in the L0-Cache since this is where the program spends most of its time. We present experimental results to evaluate the effectiveness of our scheme in terms of performance and energy dissipation for a series of SPEC95 benchmarks. We also discuss the performance and energy tradeoffs that are involved in these dynamic schemes.
---
paper_title: A 160-MHz, 32-b, 0.5-W CMOS RISC Microprocessor
paper_content:
This paper describes a 160 MHz 500 mW 32 b StrongARM(R) microprocessor designed for low-power, low-cost applications. The chip implements the ARM(R) V4 instruction set and is bus compatible with earlier implementations. The pin interface runs at 3.3 V but the internal power supplies can vary from 1.5 to 2.2 V, providing various options to balance performance and power dissipation. At 160 MHz internal clock speed with a nominal Vdd of 1.65 V, it delivers 185 Dhrystone 2.1 MIPS while dissipating less than 450 mW. The range of operating points runs from 100 MHz at 1.65 V dissipating less than 300 mW to 200 MHz at 2.0 V for less than 900 mW. An on-chip PLL provides the internal clock based on a 3.68 MHz clock input. The chip contains 2.5 million transistors, 90% of which are in the two 16 kB caches. It is fabricated in a 0.35 µm three-metal CMOS process with 0.35 V thresholds and 0.25 µm effective channel lengths. The chip measures 7.8 mm × 6.4 mm and is packaged in a 144-pin plastic thin quad flat pack (TQFP) package.
---
paper_title: Reducing power in superscalar processor caches using subbanking, multiple line buffers and bit-line segmentation
paper_content:
Modern microprocessors employ one or two levels of on-chip caches to bridge the burgeoning speed disparities between the processor and the RAM. These SRAM caches are a major source of power dissipation. We investigate architectural techniques, which do not compromise the processor cycle time, for reducing the power dissipation within the on-chip cache hierarchy in superscalar microprocessors. We use a detailed register-level simulator of a superscalar microprocessor that simulates the execution of the SPEC benchmarks, and SPICE measurements for the actual layout of a 0.5 micron, 4-metal layer cache optimized for a 300 MHz clock. We show that a combination of subbanking, multiple line buffers and bit-line segmentation can reduce the on-chip cache power dissipation by as much as 75% in a technology-independent manner.
---
paper_title: The filter cache: an energy efficient memory structure
paper_content:
Most modern microprocessors employ one or two levels of on-chip caches in order to improve performance. These caches are typically implemented with static RAM cells and often occupy a large portion of the chip area. Not surprisingly, these caches often consume a significant amount of power. In many applications, such as portable devices, low power is more important than performance. We propose to trade performance for power consumption by filtering cache references through an unusually small L1 cache. An L2 cache, which is similar in size and structure to a typical L1 cache, is positioned behind the filter cache and serves to reduce the performance loss. Experimental results across a wide range of embedded applications show that the filter cache results in improved memory system energy efficiency. For example, a direct mapped 256-byte filter cache achieves a 58% power reduction while reducing performance by 21%, corresponding to a 51% reduction in the energy-delay product over conventional design.
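A behavioral sketch of the filter-cache idea, under assumed sizes: a tiny direct-mapped array is probed first, and the larger L1 is consulted only on a miss. The line count, line size, and function names below are illustrative, not the paper's configuration.

```c
#include <stdint.h>
#include <stdio.h>

#define FILTER_LINES 16        /* e.g. a 256-byte filter cache with 16-byte lines */
#define LINE_BYTES   16

static uint32_t filter_tag[FILTER_LINES];
static int      filter_valid[FILTER_LINES];

/* Returns 1 on a filter hit (cheap access), 0 on a miss (fall through to L1
 * and install the line in the filter). */
static int filter_access(uint32_t addr)
{
    uint32_t line = addr / LINE_BYTES;
    uint32_t idx  = line % FILTER_LINES;
    uint32_t tag  = line / FILTER_LINES;

    if (filter_valid[idx] && filter_tag[idx] == tag)
        return 1;                           /* served by the small array */

    filter_valid[idx] = 1;                  /* miss: fetch from L1, then install */
    filter_tag[idx]   = tag;
    return 0;
}

int main(void)
{
    int hits = 0, total = 0;
    /* Toy loop-like access pattern: a small hot region touched repeatedly. */
    for (int rep = 0; rep < 100; rep++)
        for (uint32_t a = 0x1000; a < 0x1000 + 8 * LINE_BYTES; a += 4, total++)
            hits += filter_access(a);

    printf("filter hit rate: %.1f%% of %d accesses\n", 100.0 * hits / total, total);
    return 0;
}
```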
---
paper_title: Power savings in embedded processors through decode filter cache
paper_content:
In embedded processors, instruction fetch and decode can consume more than 40% of processor power. An instruction filter cache can be placed between the CPU core and the instruction cache to service the instruction stream. Power savings in instruction fetch result from accesses to a small cache. In this paper, we introduce a decode filter cache to provide a decoded instruction stream. On a hit in the decode filter cache, fetching from the instruction cache and the subsequent decoding are eliminated, which results in power savings in both instruction fetch and instruction decode. We propose to classify instructions as cacheable or uncacheable depending on the decoded width. A sectored cache design is then used in the decode filter cache so that cacheable and uncacheable instructions can coexist in a decode filter cache sector. Finally, a prediction mechanism is presented to reduce the decode filter cache miss penalty. Experimental results show an average 34% processor power reduction and less than 1% performance degradation.
---
paper_title: Designing Embedded Processors A Low Power Perspective
paper_content:
As we embrace the world of personal, portable, and perplexingly complex digital systems, it has fallen upon the bewildered designer to take advantage of the available transistors to produce a system which is small, fast, cheap and correct, yet possesses increased functionality. Increasingly, these systems have to consume little energy. Designers are increasingly turning towards small, low-power processors and customizing these processors both in software and hardware to achieve their objectives of a low power system which is verified and has short design turnaround times. Designing Embedded Processors examines the many ways in which processor based systems are designed to allow low power devices. It looks at processor design methods, memory optimization, dynamic voltage scaling methods, compiler methods, and multi processor methods. Each section has an introductory chapter to give a breadth view, and a few specialist chapters in the area to give a deeper perspective. The book provides a good starting point for engineers in the area, and for research students embarking upon the exciting area of embedded systems and architectures.
---
paper_title: Drowsy caches: simple techniques for reducing leakage power
paper_content:
On-chip caches represent a sizable fraction of the total power consumption of microprocessors. Although large caches can significantly improve performance, they have the potential to increase power consumption. As feature sizes shrink, the dominant component of this power loss will be leakage. However, during a fixed period of time the activity in a cache is only centered on a small subset of the lines. This behavior can be exploited to cut the leakage power of large caches by putting the cold cache lines into a state preserving, low-power drowsy mode. Moving lines into and out of drowsy state incurs a slight performance loss. In this paper we investigate policies and circuit techniques for implementing drowsy caches. We show that with simple architectural techniques, about 80%-90% of the cache lines can be maintained in a drowsy state without affecting performance by more than 1%. According to our projections, in a 0.07 µm CMOS process, drowsy caches will be able to reduce the total energy (static and dynamic) consumed in the caches by 50%-75%. We also argue that the use of drowsy caches can simplify the design and control of low-leakage caches, and avoid the need to completely turn off selected cache lines and lose their state.
---
paper_title: Let caches decay: reducing leakage energy via exploitation of cache generational behavior
paper_content:
Power dissipation is increasingly important in CPUs ranging from those intended for mobile use, all the way up to high-performance processors for high-end servers. Although the bulk of the power dissipated is dynamic switching power, leakage power is also beginning to be a concern. Chipmakers expect that in future chip generations, leakage's proportion of total chip power will increase significantly. This article examines methods for reducing leakage power within the cache memories of the CPU. Because caches comprise much of a CPU chip's area and transistor counts, they are reasonable targets for attacking leakage. We discuss policies and implementations for reducing cache leakage by invalidating and "turning off" cache lines when they hold data not likely to be reused. In particular, our approach is targeted at the generational nature of cache line usage. That is, cache lines typically have a flurry of frequent use when first brought into the cache, and then have a period of "dead time" before they are evicted. By devising effective, low-power ways of deducing dead time, our results show that in many cases we can reduce L1 cache leakage energy by 4x in SPEC2000 applications without having an impact on performance. Because our decay-based techniques have notions of competitive online algorithms at their roots, their energy usage can be theoretically bounded at within a factor of two of the optimal oracle-based policy. We also examine adaptive decay-based policies that make energy-minimizing policy choices on a per-application basis by choosing appropriate decay intervals individually for each cache line. Our proposed adaptive policies effectively reduce L1 cache leakage energy by 5x for the SPEC2000 with only negligible degradations in performance.
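A minimal model of the decay policy described above, with assumed parameters: each line carries a small counter that is cleared on access and incremented on a coarse-grained tick; once the counter saturates, the line is considered dead and is switched off to save leakage. The tick granularity and counter limit below are illustrative.

```c
#include <stdio.h>

#define LINES       8
#define DECAY_LIMIT 4    /* ticks of idleness before a line is turned off */

struct line { int counter; int powered; };

static struct line cache[LINES];

static void on_access(int idx)          /* hit or fill: line is alive again */
{
    cache[idx].counter = 0;
    cache[idx].powered = 1;
}

static void decay_tick(void)            /* invoked every few thousand cycles */
{
    for (int i = 0; i < LINES; i++) {
        if (!cache[i].powered) continue;
        if (++cache[i].counter >= DECAY_LIMIT) {
            cache[i].powered = 0;       /* gate Vdd: state lost, leakage saved */
            cache[i].counter = 0;
        }
    }
}

int main(void)
{
    for (int i = 0; i < LINES; i++) on_access(i);   /* warm up all lines */

    for (int t = 0; t < 10; t++) {
        on_access(t % 2);               /* only lines 0 and 1 stay hot */
        decay_tick();
    }
    for (int i = 0; i < LINES; i++)
        printf("line %d: %s\n", i, cache[i].powered ? "on" : "decayed (off)");
    return 0;
}
```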
---
paper_title: IATAC: a smart predictor to turn-off L2 cache lines
paper_content:
As technology evolves, power dissipation increases and cooling systems become more complex and expensive. There are two main sources of power dissipation in a processor: dynamic power and leakage. Dynamic power has been the most significant factor, but leakage will become increasingly significant in the future. It is predicted that leakage will shortly be the most significant cost, as it grows at about a 5× rate per generation. Thus, reducing leakage is essential for future processor design. Since large caches occupy most of the area, they are one of the leakiest structures in the chip and hence a main source of energy consumption for future processors. This paper introduces IATAC (inter-access time per access count), a new hardware technique to reduce cache leakage for L2 caches. IATAC dynamically adapts the cache size to the program requirements, turning off cache lines whose content is not likely to be reused. Our evaluation shows that this approach outperforms all previous state-of-the-art techniques. IATAC turns off 65% of the cache lines across different L2 cache configurations with a very small performance degradation of around 2%.
---
paper_title: Cache decay: exploiting generational behavior to reduce cache leakage power
paper_content:
Power dissipation is increasingly important in CPUs ranging from those intended for mobile use, all the way up to high performance processors for high-end servers. While the bulk of the power dissipated is dynamic switching power, leakage power is also beginning to be a concern. Chipmakers expect that in future chip generations, leakage's proportion of total chip power will increase significantly. This paper examines methods for reducing leakage power within the cache memories of the CPU. Because caches comprise much of a CPU chip's area and transistor counts, they are reasonable targets for attacking leakage. We discuss policies and implementations for reducing cache leakage by invalidating and "turning off" cache lines when they hold data not likely to be reused. In particular our approach is targeted at the generational nature of cache line usage. That is, cache lines typically have a flurry of frequent use when first brought into the cache, and then have a period of "dead time" before they are evicted. By devising effective, low-power ways of deducing dead time, our results show that in many cases we can reduce L1 cache leakage energy by 4x in SPEC2000 applications without impacting performance. Because our decay-based techniques have notions of competitive on-line algorithms at their roots, their energy usage can be theoretically bounded at within a factor of two of the optimal oracle-based policy. We also examine adaptive decay-based policies that make energy-minimizing policy choices on a per-application basis by choosing appropriate decay intervals individually for each cache line. Our proposed adaptive policies effectively reduce L1 cache leakage energy by 5x for the SPEC2000 with only negligible degradations in performance.
---
paper_title: Cache design trade-offs for power and performance optimization: a case study
paper_content:
Caches consume a significant amount of energy in modern microprocessors. To design an energy-efficient microprocessor, it is important to optimize cache energy consumption. This paper examines performance and power trade-offs in cache designs and the effectiveness of energy reduction for several novel cache design techniques targeted for low power.
---
paper_title: Drowsy region-based caches: minimizing both dynamic and static power dissipation
paper_content:
Power consumption within the memory hierarchy grows in importance as on-chip data caches occupy increasingly greater die area. Among dynamic power conservation schemes, horizontal partitioning reduces average power per data access by employing multiple smaller structures or using cache subbanks. For instance, region-based caching places small caches dedicated to stack and global accesses next to the L1 data cache. With respect to static power dissipation, leakage power may be addressed at both circuit and architectural levels. Drowsy caches reduce leakage power by keeping inactive lines in a low-power mode. Here we merge drowsy and region-based caching to reduce overall cache power consumption, showing that the combination yields more benefits than either alone. Applications from the MiBench suite exhibit power reductions in the cache system of up to 68-71%, depending on memory configuration, with a small increase in execution time
---
paper_title: Low-power cache organization through selective tag translation for embedded processors with virtual memory support
paper_content:
In this paper we present a novel cache architecture for energy-efficient data caches in embedded processors with virtual memory. Application knowledge regarding the nature of memory references is used to eliminate tag address translations for most of the cache accesses. We introduce a novel cache tagging scheme, where both virtual and physical tags co-exist in the cache tag arrays. Physical tags and special handling for the super-set cache index bits are used for references to shared data regions in order to avoid cache consistency problems. By eliminating the need for address translation on cache access for the majority of references, a significant power reduction is achieved. We outline an efficient hardware architecture for the proposed approach, where the application information is captured in a reprogrammable way and the cache architecture is minimally modified. Our experimental results show energy reductions for the address translation hardware in the range of 90%, while the reduction for the entire cache architecture is within the range of 25%-30%.
---
paper_title: Low cost instruction cache designs for tag comparison elimination
paper_content:
Tag comparison elimination (TCE) is an effective approach to reduce I-cache energy. Current research focuses on finding good tradeoffs between hardware cost and the percentage of comparisons that can be removed. For this purpose, two low cost innovations are proposed in this paper. We design a small dedicated TCE table whose size is flexible both horizontally (entry size) and vertically (number of entries). The design also minimizes interactions with the I-cache. For a 64-way 16K cache, the new design reduces the tag comparisons to 4.0% with only 20% of the hardware cost of the way memoization technique [5]. The result is 40% better compared to a recently proposed low cost design [2] of comparable hardware cost.
---
paper_title: Designing Embedded Processors A Low Power Perspective
paper_content:
As we embrace the world of personal, portable, and perplexingly complex digital systems, it has fallen upon the bewildered designer to take advantage of the available transistors to produce a system which is small, fast, cheap and correct, yet possesses increased functionality. Increasingly, these systems have to consume little energy. Designers are increasingly turning towards small, low-power processors and customizing these processors both in software and hardware to achieve their objectives of a low power system which is verified and has short design turnaround times. Designing Embedded Processors examines the many ways in which processor based systems are designed to allow low power devices. It looks at processor design methods, memory optimization, dynamic voltage scaling methods, compiler methods, and multi processor methods. Each section has an introductory chapter to give a breadth view, and a few specialist chapters in the area to give a deeper perspective. The book provides a good starting point for engineers in the area, and for research students embarking upon the exciting area of embedded systems and architectures.
---
paper_title: Reducing the frequency of tag compares for low power I-cache design
paper_content:
In current processors, the cache controller, which contains the cache directory and other logic such as tag comparators, is active for each instruction fetch and is responsible for 20-25% of the power consumed in the I-cache. Reducing the power consumed by the cache controller is important for low power I-cache design. We present three architectural modifications which, in concert, allow us to reduce the cache controller activity to less than 2% for most applications. The first modification involves comparing cache tags for only those instructions that result in fetches from a new cache block. The second modification involves the tagging of those branches that cause instructions to be fetched from a new cache block. The third modification involves augmenting the I-cache with a small on-chip memory called the S-cache. The most frequently executed basic blocks of code are statically allocated to the S-cache before program execution. We present empirical data to show the effect that these modifications have on the cache con-
---
paper_title: Locality-driven architectural cache sub-banking for leakage energy reduction
paper_content:
In most processors, caches account for the largest fraction of on-chip transistors, thus being a primary candidate for tackling the leakage problem. Existing architectural solutions usually rely on customized cache structures, which are needed to implement some kind of power management policy. Memory arrays, however, are carefully developed and finely tuned by foundries, and their internal structure is typically not accessible to system designers. In this work, we focus on the reduction of leakage energy in caches without interfering with their internal design. We propose a truly architectural solution that is based on cache sub-banking and on the detection and mapping of application localities, identified from a profiling of the cache access patterns. By customizing the mapping between the application address space and the cache, we can expose as much address space idleness as possible, thus creating shutdown potential which allows significant leakage savings. Results show leakage energy reductions of up to 48% (about 30% on average), with marginal impact on miss rate or execution time.
---
paper_title: Way-predicting set-associative cache for high performance and low energy consumption
paper_content:
This paper proposes a new approach using way prediction for achieving high performance and low energy consumption in set-associative caches. By accessing only the single predicted cache way, instead of accessing all the ways in a set, the energy consumption can be reduced. This paper shows that the way-predicting set-associative cache improves the ED (energy-delay) product by 60-70% compared to a conventional set-associative cache.
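A behavioral sketch of way prediction with a simple MRU predictor (an assumption for illustration; the paper's predictor may differ): only the predicted way is read first, and the remaining ways are probed only when the first probe misses, so repeated hits to the same way cost a single way read.

```c
#include <stdint.h>
#include <stdio.h>

#define SETS 4
#define WAYS 4

static uint32_t tags[SETS][WAYS];
static int      valid[SETS][WAYS];
static int      mru_way[SETS];           /* per-set way predictor */

static long ways_probed;                 /* proxy for data-array read energy */

/* Returns the hit way, or -1 on a miss (the fill path is omitted for brevity). */
static int access_cache(uint32_t set, uint32_t tag)
{
    int p = mru_way[set];
    ways_probed++;
    if (valid[set][p] && tags[set][p] == tag)
        return p;                         /* first-hit: a single way read */

    for (int w = 0; w < WAYS; w++) {      /* mispredict: probe the rest */
        if (w == p) continue;
        ways_probed++;
        if (valid[set][w] && tags[set][w] == tag) {
            mru_way[set] = w;             /* retrain the predictor */
            return w;
        }
    }
    return -1;
}

int main(void)
{
    /* Fill one set with four tags, then re-reference the same tag repeatedly. */
    for (int w = 0; w < WAYS; w++) { valid[0][w] = 1; tags[0][w] = 100 + w; }

    for (int i = 0; i < 10; i++)
        access_cache(0, 103);

    printf("ways probed for 10 hits: %ld (vs. %d for a conventional parallel read)\n",
           ways_probed, 10 * WAYS);
    return 0;
}
```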
---
paper_title: A non-uniform cache architecture for low power system design
paper_content:
This paper proposes a non-uniform cache architecture for reducing the power consumption of memory systems. The non-uniform cache allows having different associativity values (i.e., the number of cache-ways) for different cache-sets. An algorithm determines the optimum number of cache-ways for each cache-set and generates object code suitable for the non-uniform cache memory. The paper also proposes a compiler technique for reducing redundant cache-way accesses and cache-tag accesses. Experiments demonstrate that the technique can reduce the power consumption of memory systems by up to 76% compared to the best result achieved by the conventional method.
---
paper_title: A way-halting cache for low-energy high-performance systems
paper_content:
Caches contribute to much of a microprocessor system's power and energy consumption. We have developed a new cache architecture, called a way-halting cache, that reduces energy while imposing no performance overhead. Our way-halting cache is a four-way set-associative cache that stores the four lowest-order bits of all ways' tags into a fully associative memory, which we call the halt tag array. The lookup in the halt tag array is done in parallel with, and is no slower than, the set-index decoding. The halt tag array pre-determines which tags cannot match due to their low-order four bits mismatching. Further accesses to ways with known mismatching tags are then halted, thus saving power. Our halt tag array has an additional feature of using static logic only, rather than dynamic logic used in highly associative caches. We provide data from experiments on 17 benchmarks drawn from MediaBench and Spec 2000, based on our layouts in 0.18 micron CMOS technology. On average, 55% savings of memory-access related energy were obtained over a conventional four-way set-associative cache. We show that energy savings are greater than previous methods, and nearly twice that of highly-associative caches, while imposing no performance overhead and only 2% cache area overhead.
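An illustrative software model of the halt-tag check, not the paper's circuit: the four low-order tag bits of each way are compared first, and ways whose partial tag already mismatches are skipped entirely, so the expensive full tag and data reads happen for at most the few ways that pass the filter. The data layout and the energy proxy below are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define WAYS 4

struct way { uint32_t full_tag; uint8_t halt_tag; int valid; }; /* halt_tag = low 4 bits */

static long full_compares;   /* proxy for the energy spent on full tag/data reads */

static int lookup(const struct way *set, uint32_t tag)
{
    uint8_t partial = tag & 0xF;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid || set[w].halt_tag != partial)
            continue;                      /* way halted: no full compare, no data read */
        full_compares++;
        if (set[w].full_tag == tag)
            return w;
    }
    return -1;
}

int main(void)
{
    struct way set[WAYS] = {
        { 0x120, 0x0, 1 }, { 0x231, 0x1, 1 }, { 0x342, 0x2, 1 }, { 0x453, 0x3, 1 },
    };

    for (int i = 0; i < 8; i++)
        lookup(set, 0x342);               /* repeatedly hit way 2 */

    printf("full compares: %ld (a conventional 4-way cache would do %d)\n",
           full_compares, 8 * WAYS);
    return 0;
}
```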
---
paper_title: Partitioned instruction cache architecture for energy efficiency
paper_content:
The demand for high-performance architectures and powerful battery-operated mobile devices has accentuated the need for low-power systems. In many media and embedded applications, the memory system can consume more than 50% of the overall system energy, making it a ripe candidate for optimization. To address this increasingly important problem, this article studies energy-efficient cache architectures in the memory hierarchy that can have a significant impact on the overall system energy consumption. Existing cache optimization approaches have looked at partitioning the caches at the circuit level and enabling/disabling these cache partitions (subbanks) at the architectural level for both performance and energy. In contrast, this article focuses on partitioning the cache resources architecturally for energy and energy-delay optimizations. Specifically, we investigate ways of splitting the cache into several smaller units, each of which is a cache by itself (called a subcache). Subcache architectures not only reduce the per-access energy costs, but can potentially improve the locality behavior as well. The proposed subcache architecture employs a page-based placement strategy, a dynamic page remapping policy, and a subcache prediction policy in order to improve the memory system energy behavior, especially on-chip cache energy. Using applications from the SPECjvm98 and SPEC CPU2000 benchmarks, the proposed subcache architecture is shown to be very effective in improving both the energy and energy-delay metrics. It is more beneficial in larger caches as well.
---
paper_title: Synthesis and optimization of digital circuits
paper_content:
From the Publisher: Synthesis and Optimization of Digital Circuits offers a modern, up-to-date look at computer-aided design (CAD) of very large-scale integration (VLSI) circuits. In particular, this book covers techniques for synthesis and optimization of digital circuits at the architectural and logic levels, i.e., the generation of performance- and/or area-optimal circuit representations from models in hardware description languages. The book provides a thorough explanation of synthesis and optimization algorithms accompanied by a sound mathematical formulation and a unified notation. The text covers the following topics: modern hardware description languages (e.g., VHDL, Verilog); architectural-level synthesis of data flow and control units, including algorithms for scheduling and resource binding; combinational logic optimization algorithms for two-level and multiple-level circuits; sequential logic optimization methods; and library binding techniques, including those applicable to FPGAs.
---
paper_title: The optimum pipeline depth considering both power and performance
paper_content:
The impact of pipeline length on both the power and performance of a microprocessor is explored both by theory and by simulation. A theory is presented for a range of power/performance metrics, BIPS^m/W. The theory shows that the more important power is to the metric, the shorter the optimum pipeline length that results. For typical parameters neither BIPS/W nor BIPS^2/W yield an optimum, i.e., a non-pipelined design is optimal. For BIPS^3/W the optimum, averaged over all 55 workloads studied, occurs at a 22.5 FO4 design point, a 7 stage pipeline, but this value is highly dependent on the assumed growth in latch count with pipeline depth. As dynamic power grows, the optimal design point shifts to shorter pipelines. Clock gating pushes the optimum to deeper pipelines. Surprisingly, as leakage power grows, the optimum is also found to shift to deeper pipelines. The optimum pipeline depth varies for different classes of workloads: SPEC95 and SPEC2000 integer applications, traditional (legacy) database and on-line transaction processing applications, modern (e.g. web) applications, and floating point applications.
---
paper_title: Pipeline gating: speculation control for energy reduction
paper_content:
Branch prediction has enabled microprocessors to increase instruction level parallelism (ILP) by allowing programs to speculatively execute beyond control boundaries. Although speculative execution is essential for increasing the instructions per cycle (IPC), it does come at a cost. A large amount of unnecessary work results from wrong-path instructions entering the pipeline due to branch misprediction. Results generated with the SimpleScalar tool set using a 4-way issue pipeline and various branch predictors show an instruction overhead of 16% to 105% for every instruction committed. The instruction overhead will increase in the future as processors use more aggressive speculation and wider issue widths. In this paper we present an innovative method for power reduction which, unlike previous work that sacrificed flexibility or performance, reduces power in high-performance microprocessors without impacting performance. In particular we introduce a hardware mechanism called pipeline gating to control rampant speculation in the pipeline. We present inexpensive mechanisms for determining when a branch is likely to mispredict, and for stopping wrong-path instructions from entering the pipeline. Results show up to a 38% reduction in wrong-path instructions with a negligible performance loss (approximately 1%). Best of all, even in programs with high branch prediction accuracy, performance does not noticeably degrade. Our analysis indicates that there is little risk in implementing this method in existing processors since it does not impact performance and can benefit energy reduction.
---
paper_title: Pipeline stage unification: a low-energy consumption technique for future mobile processors
paper_content:
Recent mobile processors are required to exhibit both low energy consumption and high performance. To satisfy these requirements, dynamic voltage scaling (DVS) is currently employed. However, its effectiveness will be limited in the future because of the shrinking variable supply voltage range. As an alternative, we previously proposed pipeline stage unification (PSU), which unifies multiple pipeline stages without reducing the supply voltage in a power-saving mode. This paper compares the effectiveness of PSU with that of DVS in current and future process generations. Our evaluation results show that PSU will reduce energy consumption by 27-34% more than DVS after about 10 years.
---
paper_title: Bipartitioning and encoding in low-power pipelined circuits
paper_content:
In this article, we present a bipartition dual-encoding architecture for low-power pipelined circuits. We exploit the bipartition approach as well as encoding techniques to reduce power dissipation not only of combinational logic blocks but also of the pipeline registers. Based on Shannon expansion, we partition a given circuit into two subcircuits such that the number of different outputs of both subcircuits is reduced, and then encode the output of both subcircuits to minimize the Hamming distance for transitions with a high switching probability. We measure the benefits of four different combinational bipartitioning and encoding architectures for comparison. The transistor-level simulation results show that bipartition dual-encoding can effectively reduce power by 72.7% for the pipeline registers and 27.1% for the total power consumption on average. To the best of our knowledge, it is the first work that presents an in-depth study on bipartition and encoding techniques to optimize power for pipelined circuits.
---
paper_title: Architecture Level Power-Performance Tradeoffs for Pipelined Designs
paper_content:
This paper presents a method to investigate power-performance tradeoffs in digital pipelined designs. The method is applied at the architectural level of the design. It is shown that addressing the tradeoffs at this level results in significant savings in power consumption without impacting the performance. The reduction in power is obtained through reducing the number of registers used in implementing the pipeline stages. The method has been validated by synthesizing a floating-point unit with different pipeline stages and power consumption of the designs were obtained using industry standard tools. It is shown that it is possible to obtain up to 18% reduction in power without affecting the clock period and with less area.
---
paper_title: Power and energy reduction via pipeline balancing
paper_content:
Minimizing power dissipation is an important design requirement for both portable and non-portable systems. In this work, we propose an architectural solution to the power problem that retains performance while reducing power. The technique, known as Pipeline Balancing (PLB), dynamically tunes the resources of a general purpose processor to the needs of the program by monitoring performance within each program. We analyze metrics for triggering PLB, and detail instruction queue design and energy savings based on an extension of the Alpha 21264 processor. Using a detailed simulator, we present component and full chip power and energy savings for single and multi-threaded execution. Results show an issue queue and execution unit power reduction of up to 23% and 13%, respectively, with an average performance loss of 1% to 2%.
---
paper_title: Adaptive pipeline depth control for processor power-management
paper_content:
A method of managing the power consumption of an embedded, single-issue processor by controlling its pipeline depth is proposed. The execution time will be increased but, if the method is applied to applications with slack time, the user-perceived performance may not be degraded. Two techniques are shown using an existing asynchronous processor as a starting point. The first method controls the pipeline occupancy using a token mechanism; the second enables adjacent pipeline stages to be merged, by making the latches between them 'permanently' transparent. An energy reduction of up to 16% is measured, using a collection of five benchmarks.
---
paper_title: A low-power bus design using joint repeater insertion and coding
paper_content:
In this paper, we propose joint repeater insertion and crosstalk avoidance coding as a low-power alternative to repeater insertion for global bus design in nanometer technologies. We develop a methodology to calculate the repeater size and separation that minimize the total power dissipation for joint repeater insertion and coding for a specific delay target. This methodology is employed to obtain power vs. delay trade-offs for 130-nm, 90-nm, 65-nm, and 45-nm technology nodes. Using ITRS technology scaling data, we show that proposed technique provides 54%, 67%, and 69% power savings over optimally repeater-inserted 10-mm 32-bit bus at 90-nm, 65-nm, and 45-nm technology nodes, respectively, while achieving the same delay.
---
paper_title: Low-swing interconnect interface circuits
paper_content:
This paper reviews a number of low-swing on-chip interconnect schemes, and presents a thorough analysis of their effectiveness and limitations. In addition, several new interface circuits, presenting even more energy savings, are proposed. Some of these circuits not only reduce the interconnect swing, but also use very-low supply voltages, so as to obtain quadratic energy savings. The performance of each of the presented circuits is thoroughly examined using simulation on a benchmark interconnect circuit. Energy savings with a factor of seven have been observed for some of the schemes.
---
paper_title: A survey of techniques for energy efficient on-chip communication
paper_content:
Interconnects have been shown to be a dominant source of energy consumption in modern day System-on-Chip (SoC) designs. With a large (and growing) number of electronic systems being designed with battery considerations in mind, minimizing the energy consumed in on-chip interconnects becomes crucial. Further, the use of nanometer technologies is making it increasingly important to consider reliability issues during the design of SoC communication architectures. Continued supply voltage scaling has led to decreased noise margins, making interconnects more susceptible to noise sources such as crosstalk, power supply noise, radiation induced defects, etc. The resulting transient faults cause the interconnect to behave as an unreliable transport medium for data signals. Therefore, fault tolerant communication mechanism, such as Automatic Repeat Request (ARQ), Forward Error Correction (FEC), etc., which have been widely used in the networking community, are likely to percolate to the SoC domain. This paper presents a survey of techniques for energy efficient on-chip communication. Techniques operating at different levels of the communication design hierarchy are described, including circuit-level techniques, such as low voltage signaling, architecture-level techniques, such as communication architecture selection and bus isolation, system-level techniques, such as communication based power management and dynamic voltage scaling for interconnects, and network-level techniques, such as error resilient encoding for packetized on-chip communication. Emerging technologies, such as Code Division Multiple Access (CDMA) based buses, and wireless interconnects are also surveyed.
---
paper_title: Self-heating-aware optimal wire sizing under Elmore delay model
paper_content:
Global interconnect temperature keeps rising in current and future technologies due to self-heating and the adiabatic property of the top metal layers. The thermal effects adversely impact both the reliability and the performance of the interconnect wire, shortening the interconnect lifetime and increasing the interconnect delay. Such effects must be considered during the process of interconnect design. In this paper, one important argument is that the traditional linear dependence between wire resistance and wire width is no longer adequate for high-layer interconnects due to the adiabatic property of these wires. By using a curve fitting technique, we propose a quadratic model to represent the resistance of the interconnect, which is aware of the thermal effects. Based on this model and the Elmore delay model, we derive a linear optimal wire sizing formula of the form f(x) = ax + b. Compared to the non-thermal-aware exponential wire sizing formula of the form f(x) = a·e^(-bx), we observed a 49.7% average delay gain with different choices of physical parameters.
---
paper_title: Low Power CMOS Bi-Directional Voltage Converter
paper_content:
A bi-directional CMOS voltage interface circuit is proposed for applications that require an interface between two circuits operating at different voltage levels. The circuit can also be used as a level converter at the driver and receiver ends of long interconnect lines for low swing applications. The proposed interface circuit operates at high speed while consuming very little power. Operation of the interface circuit is verified by both simulation and experimental test circuits.
---
paper_title: Power consumption estimation in CMOS VLSI chips
paper_content:
Power consumption from logic circuits, interconnections, clock distribution, on-chip memories, and off-chip driving in CMOS VLSI is estimated. Estimation methods are demonstrated and verified, and an estimation tool is created. The distribution of power consumption among interconnections, clock distribution, logic gates, memories, and off-chip driving is analyzed by examples. Comparisons are made between cell-library, gate-array, and full-custom design, as well as between static and dynamic logic. Results show that the power consumption of all interconnections and of off-chip driving can be up to 20% and 65% of the total power consumption, respectively. Compared to cell-library design, gate-array designed chips consume about 10% more power, while power reductions of about 15% are possible in full-custom designs.
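A minimal sketch of the kind of component-level estimate such a tool produces, using the standard dynamic-power expression P = α · C · V² · f summed over the chip's major contributors. The supply voltage, clock frequency, capacitances, and activity factors below are assumed placeholder values, not figures from the paper.

```python
# Illustrative dynamic-power breakdown, P = alpha * C * Vdd^2 * f per component.
# All component capacitances and activity factors are assumed values.

VDD = 1.2      # supply voltage (V), assumed
FREQ = 500e6   # clock frequency (Hz), assumed

components = {
    #                 switched capacitance (F), activity factor
    "logic":          (2.0e-9, 0.15),
    "interconnect":   (3.5e-9, 0.15),
    "clock tree":     (1.0e-9, 1.00),   # the clock toggles every cycle
    "on-chip memory": (1.5e-9, 0.10),
    "off-chip I/O":   (4.0e-9, 0.05),   # large pad/board capacitance
}

def dynamic_power(cap, activity, vdd=VDD, f=FREQ):
    return activity * cap * vdd ** 2 * f

total = sum(dynamic_power(c, a) for c, a in components.values())
for name, (c, a) in components.items():
    p = dynamic_power(c, a)
    print(f"{name:15s}: {p*1e3:6.1f} mW ({100*p/total:4.1f}% of total)")
print(f"{'total':15s}: {total*1e3:6.1f} mW")
```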
---
paper_title: Design theory and implementation for low-power segmented bus systems
paper_content:
The concept of bus segmentation has been proposed to minimize power consumption by reducing the switched capacitance on each bus [Chen et al. 1999]. This paper details the design theory and implementation issues of segmented bus systems. Based on a graph model and the Gomory-Hu cut-equivalent tree algorithm, a bus can be partitioned into several bus segments separated by pass transistors. Highly communicating devices are placed on adjacent bus segments, so most data communication can be achieved by switching a small portion of the bus segments. Thus, a significant amount of power consumption can be saved. It can be proved that the proposed bus partitioning method achieves an optimal solution. The concept of tree clustering is also proposed to merge bus segments for further power reduction. The design flow, which includes bus tree construction in the register-transfer level and bus segmentation cell placement and routing in the physical level, is discussed for design implementation. The technology has been applied to a μ-controller design, and simulation results by PowerMill show significant improvement in power consumption.
---
paper_title: Saving power in the control path of embedded processors
paper_content:
CMOS circuits consume power during the charging and discharging of capacitances; reducing switching activity therefore saves power in embedded processors. The authors' two-pronged attack uses Gray code addressing and cold scheduling to eliminate bit switches.
---
paper_title: Power-optimal encoding for DRAM address bus (poster session)
paper_content:
This paper presents Pyramid code, an optimal code for transmitting sequential addresses over a DRAM bus. Constructed by finding an Eulerian cycle on a complete graph, this code is optimal for conventional DRAM in the sense that it minimizes the switching activity on the time-multiplexed address bus from CPU to DRAM. Experimental results on a large number of testbenches with different characteristics (i.e. sequential vs. random memory access behaviors) are reported and demonstrate a reduction of bus activity by as much as 50%.
---
paper_title: Some issues in gray code addressing
paper_content:
Gray code addressing is one of the techniques previously proposed to reduce switching activity on high-capacitance address bus lines. However, in order to convert a system to Gray address encoding, there are several issues a designer needs to consider. This paper analyzes two of them: Gray code encodings for counter increments other than one, and tradeoffs in power consumption incurred by code conversions (binary to Gray, Gray to binary) when considering address increments and adders. Results are shown for different encodings and different configurations.
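To make the binary-to-Gray conversions and the switching-activity argument concrete, the sketch below (an illustrative reconstruction, not the paper's implementation) converts a sequential address stream to Gray code and counts bit toggles on the bus under both encodings.

```python
# Binary <-> Gray conversion and address-bus toggle counting (illustrative).

def bin_to_gray(b: int) -> int:
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def toggles(seq):
    """Total number of bit transitions between consecutive bus values."""
    return sum(bin(a ^ b).count("1") for a, b in zip(seq, seq[1:]))

addresses = list(range(0, 256))              # a purely sequential access stream
binary_bus = addresses
gray_bus   = [bin_to_gray(a) for a in addresses]

assert all(gray_to_bin(bin_to_gray(a)) == a for a in addresses)  # round trip

print("binary bus toggles:", toggles(binary_bus))   # roughly 2 per access on average
print("gray   bus toggles:", toggles(gray_bus))     # exactly 1 per access
```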
---
paper_title: Bus-Invert Coding for Low-Power I/O
paper_content:
Technology trends, and especially portable applications, drive the quest for low-power VLSI design. Solutions that involve algorithmic, structural, or physical transformations are sought, with the focus on developing low-power circuits without overly affecting performance (area, latency, period). In CMOS circuits most power is dissipated as dynamic power for charging and discharging node capacitances, which is why many promising results in low-power design are obtained by minimizing the number of transitions inside the circuit. While it is generally accepted that, because of the large capacitances involved, much of the power dissipated by an IC is at the I/O, little has been specifically done to decrease the I/O power dissipation. We propose the bus-invert method of coding the I/O, which lowers the bus activity and thus decreases the I/O peak power dissipation by 50% and the I/O average power dissipation by up to 25%. The method is general but applies best to buses. This is fortunate because buses are most likely to have very large capacitances associated with them and consequently dissipate a lot of power.
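The bus-invert rule itself can be stated in a few lines: if transmitting the new word would toggle more than half of the bus lines relative to the value currently on the bus, transmit its complement instead and raise an extra invert line. The sketch below is an illustrative encoder/decoder pair for an assumed 8-bit bus, not the authors' implementation.

```python
# Illustrative bus-invert coding for an n-bit bus plus one 'invert' line.
import random

N = 8                       # bus width (assumed)
MASK = (1 << N) - 1

def hamming(a, b):
    return bin(a ^ b).count("1")

def encode(words):
    """Yield (bus_value, invert_bit) pairs, limiting transitions per transfer."""
    prev_bus = 0
    for w in words:
        if hamming(w, prev_bus) > N // 2:
            bus, inv = (~w) & MASK, 1   # send the complement
        else:
            bus, inv = w, 0
        yield bus, inv
        prev_bus = bus

def decode(encoded):
    return [bus ^ MASK if inv else bus for bus, inv in encoded]

random.seed(0)
data = [random.randrange(1 << N) for _ in range(1000)]

encoded = list(encode(data))
assert decode(encoded) == data

raw_toggles = sum(hamming(a, b) for a, b in zip([0] + data, data))
coded_toggles = sum(hamming(a, b) + abs(ia - ib)   # data lines plus the invert line
                    for (a, ia), (b, ib) in zip([(0, 0)] + encoded, encoded))
print("toggles without coding :", raw_toggles)
print("toggles with bus-invert:", coded_toggles)
```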
---
paper_title: Power optimization and management in embedded systems
paper_content:
Power-efficient design requires reducing power dissipation in all parts of the design and during all stages of the design process subject to constraints on the system performance and quality of service (QoS). Power-aware high-level language compilers, dynamic power management policies, memory management schemes, bus encoding techniques, and hardware design tools are needed to meet these often-conflicting design requirements. This paper reviews techniques and tools for power-efficient embedded system design, considering the hardware platform, the application software, and the system software. Design examples from an Intel StrongARM based system are provided to illustrate the concepts and the techniques. This paper is not intended as a comprehensive review, rather as a starting point for understanding power-aware design methodologies and techniques targeted toward embedded systems.
---
paper_title: Asymptotic zero-transition activity encoding for address busses in low-power microprocessor-based systems
paper_content:
In microprocessor-based systems, large power savings can be achieved through reduction of the transition activity of the on- and off-chip buses. This is because the total capacitance being switched when a voltage change occurs on a bus line is usually considerably larger than the capacitive load that must be charged/discharged when internal nodes toggle. In this paper, we propose an encoding scheme which is suitable for reducing the switching activity on the lines of an address bus. The technique relies on the observation that, in a remarkable number of cases, patterns traveling on address buses are consecutive. Under this condition it may therefore be possible, for the devices located at the receiving end of the bus, to automatically calculate the address to be received at the next clock cycle; consequently, the transmission of the new pattern can be avoided, resulting in an overall switching activity decrease. We present analytical and experimental analyses showing the improved performance of our encoding scheme when compared to both binary and Gray addressing schemes, the latter being widely accepted as the most efficient method for address bus encoding. We also propose power and timing efficient implementations of the encoding and the decoding logic, and we discuss the applicability of the technique to real microprocessor-based designs.
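The scheme sketched above can be illustrated with a toy encoder: a redundant INC line tells the receiver to compute the next address itself, so runs of consecutive addresses cause no transitions on the address lines at all. This is an illustrative reconstruction assuming a unit stride; transitions on the INC line itself are ignored for brevity.

```python
# Illustrative zero-transition (T0-style) address encoding with a redundant INC line.

STRIDE = 1  # assumed in-order access stride

def encode(addresses):
    """Yield (bus_value, inc_bit); the bus is frozen while addresses are consecutive."""
    bus, prev_addr = None, None
    for a in addresses:
        if prev_addr is not None and a == prev_addr + STRIDE:
            yield bus, 1              # receiver increments on its own
        else:
            bus = a
            yield bus, 0
        prev_addr = a

def decode(encoded):
    out, prev = [], None
    for bus, inc in encoded:
        prev = prev + STRIDE if inc else bus
        out.append(prev)
    return out

def toggles(values):
    return sum(bin(a ^ b).count("1") for a, b in zip(values, values[1:]))

addresses = list(range(0x1000, 0x1040)) + list(range(0x8000, 0x8020))
encoded = list(encode(addresses))
assert decode(encoded) == addresses

plain_bus = addresses
t0_bus = [bus for bus, _ in encoded]
print("toggles, plain binary bus:", toggles(plain_bus))
print("toggles, encoded bus     :", toggles(t0_bus))
```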
---
paper_title: Exploiting the locality of memory references to reduce the address bus energy
paper_content:
The energy consumption at the I/O pins is a significant part of the overall chip consumption. This paper presents a method for encoding an external address bus which lowers its activity and, thus, decreases the energy. This method relies on the locality of memory references. Since applications favor a few working zones of their address space at each instant, for an address to one of these zones only the offset of this reference with respect to the previous reference to that zone needs to be sent over the bus, along with an identifier of the current working zone. This is combined with a modified one-hot encoding for the offset. An estimate of the area and energy overhead of the encoder/decoder is given; their effect is small. The approach has been applied to two memory-intensive examples, obtaining a bus-activity reduction of about 2/3 in both of them. Comparisons are given with previous methods for bus encoding, showing significant improvement.
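A much-simplified sketch of the working-zone idea: sender and receiver keep a small table of recently used zones, and when a new reference falls near a known zone only the zone index and the small offset from the previous reference to that zone need to be sent; otherwise the full address is transmitted and the table is updated. The table size, offset range, and round-robin replacement below are assumptions made for illustration, and the one-hot offset encoding and matching decoder are omitted.

```python
# Simplified working-zone address encoding (illustrative only).

N_ZONES = 4          # entries in the working-zone table (assumed)
OFFSET_BITS = 4      # offsets in [-8, 7] fit in the narrow payload (assumed)

class WorkingZoneCoder:
    def __init__(self):
        self.prev_ref = [None] * N_ZONES   # last reference seen in each zone
        self.victim = 0                    # round-robin replacement pointer

    def encode(self, addr):
        for z, prev in enumerate(self.prev_ref):
            if prev is not None and abs(addr - prev) < (1 << (OFFSET_BITS - 1)):
                offset = addr - prev
                self.prev_ref[z] = addr
                return ("hit", z, offset)          # small payload on the bus
        z = self.victim
        self.victim = (self.victim + 1) % N_ZONES
        self.prev_ref[z] = addr
        return ("miss", z, addr)                   # full address must be sent

    # A matching decoder would mirror this table to reconstruct addresses.

coder = WorkingZoneCoder()
trace = [0x1000, 0x1004, 0x1008, 0x4000, 0x4004, 0x100c, 0x4008]
for a in trace:
    print(hex(a), "->", coder.encode(a))
```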
---
paper_title: An Evolutionary Approach for Reducing the Energy in Address Buses
paper_content:
In this paper we present a genetic approach for the efficient generation of an encoder to minimize switching activity on the high-capacity lines of a communication bus. The approach is a static one in the sense that the encoder is realized ad hoc according to the traffic on the bus. This is not, however, a limiting hypothesis if the application scenario considered is that of embedded systems. An embedded system, in fact, executes the same application throughout its lifetime and so it is possible to have detailed knowledge of the trace of the patterns transmitted on a bus following execution of a specific application. The approach is compared with the most efficient encoding schemes proposed in the literature on both multiplexed and separate buses. The results obtained demonstrate the validity of the approach, which on average saves up to 50% of the transitions normally required.
---
paper_title: Address bus encoding techniques for system-level power optimization
paper_content:
The power dissipated by system-level buses is the largest contribution to the global power of complex VLSI circuits. Therefore, the minimization of the switching activity at the I/O interfaces can provide significant savings on the overall power budget. This paper presents innovative encoding techniques suitable for minimizing the switching activity of system-level address buses. In particular, the schemes illustrated here target the reduction of the average number of bus line transitions per clock cycle. Experimental results, conducted on address streams generated by a real microprocessor, have demonstrated the effectiveness of the proposed methods.
---
paper_title: System-level power optimization of special purpose applications: the Beach Solution
paper_content:
This paper describes a new approach to low-power bus encoding, called "The Beach Solution", which is intended for power optimization of digital systems containing an embedded processor or a microcontroller executing a special-purpose software routine. The main difference between the proposed method and existing bus encoding techniques is that it is strongly application-dependent, in the sense that it is based on the analysis of the execution stream of a given program. This allows an accurate computation of the correlations that may exist between blocks of bits in consecutive patterns, which can be successfully exploited to determine an encoding that minimizes the bus transition activity. Experimental results, obtained on a set of special-purpose applications, are very promising; reductions of the bus activity of up to 64.8% (41.9% on average) have been achieved over the original address streams.
---
paper_title: Reducing address bus transition for low power memory mapping
paper_content:
We present low power techniques for mapping arrays in behavioral specifications to physical memory, specifically for memory-intensive behaviors that exhibit regularity in their memory access patterns. Our approach exploits this regularity in memory accesses by reducing the number of transitions on the memory address bus. We study the impact of different strategies for mapping arrays in behaviors to physical memory on power dissipation during memory accesses. We describe a heuristic for selecting a memory mapping strategy to achieve low power, and present an evaluation of the architecture that implements the mapping techniques to study the transition count overhead. Experiments on several image processing benchmarks indicate power savings of up to 63% through reduced transition activity on the memory address bus.
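The effect of a memory mapping on address-bus activity is easy to quantify: given an access pattern, count the Hamming distance between consecutive addresses under each candidate mapping. The sketch below compares row-major and column-major placement of a 2-D array that the behavior scans column by column; the array shape, element size, and base address are assumptions.

```python
# Counting address-bus transitions for two array-to-memory mappings (illustrative).

ROWS, COLS, ELEM = 64, 64, 4   # array shape and element size in bytes (assumed)
BASE = 0x8000_0000             # assumed base address

def row_major(r, c):
    return BASE + (r * COLS + c) * ELEM

def col_major(r, c):
    return BASE + (c * ROWS + r) * ELEM

def toggles(addresses):
    return sum(bin(a ^ b).count("1") for a, b in zip(addresses, addresses[1:]))

# The behavior walks the array column by column (e.g. a transposed access pattern).
access_order = [(r, c) for c in range(COLS) for r in range(ROWS)]

print("row-major mapping :", toggles([row_major(r, c) for r, c in access_order]))
print("col-major mapping :", toggles([col_major(r, c) for r, c in access_order]))
```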
---
paper_title: Exploiting the routing flexibility for energy/performance aware mapping of regular NoC architectures
paper_content:
In this paper we present an algorithm which automatically maps the IPs onto a generic regular Network on Chip (NoC) architecture and constructs a deadlock-free deterministic routing function such that the total communication energy is minimized. At the same time, the performance of the resulting communication system is guaranteed to satisfy the specified constraints through bandwidth reservation. As the main contribution, we first formulate the problem of energy/performance aware mapping, in a topological sense, and show how the routing flexibility can be exploited to expand the solution space and improve the solution quality An efficient branch-and-bound algorithm is then described to solve this problem. Experimental results show that the proposed algorithm is very fast, and significant energy savings can be achieved. For instance, for a complex video/audio application, 51.7% energy savings have been observed, on average, compared to an ad-hoc implementation.
---
paper_title: Coupling-driven bus design for low-power application-specific systems
paper_content:
In modern embedded systems, including communication and multimedia applications, a large fraction of power is consumed during memory access and data transfer. Thus, buses should be designed and optimized to consume reasonable power while delivering sufficient performance. In this paper, we address a bus ordering problem for low-power application-specific systems. A heuristic algorithm is proposed to determine the order in a way that the effective lateral component of capacitance is reduced, thereby reducing the power consumed by buses. Experimental results for various examples indicate that an average power saving of 30% to 46.7%, depending on capacitance components, can be obtained without any circuit overhead.
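A toy version of the wire-ordering idea: estimate coupling activity as the number of times two physically adjacent bus lines switch in opposite directions (the worst case for the lateral capacitance), and search over orderings of a small bus for the one that minimizes this count. The activity model, the counter-like traffic, and the exhaustive search are simplifications for illustration; the paper uses a heuristic that scales to wider buses.

```python
# Illustrative coupling-aware ordering of a small bus.
from itertools import permutations

N = 6                                   # bus width kept tiny for exhaustive search
MASK = (1 << N) - 1
trace = [i & MASK for i in range(500)]  # assumed traffic: a simple address counter

def bit(v, i):
    return (v >> i) & 1

def coupling_activity(order, trace):
    """Count opposite-direction toggles on physically adjacent line pairs,
    the worst case for the lateral (coupling) capacitance."""
    act = 0
    for prev, cur in zip(trace, trace[1:]):
        delta = [bit(cur, i) - bit(prev, i) for i in order]   # -1, 0 or +1 per line
        act += sum(1 for a, b in zip(delta, delta[1:]) if a * b == -1)
    return act

natural = tuple(range(N))
best = min(permutations(range(N)), key=lambda o: coupling_activity(o, trace))
print("natural order", natural, "activity:", coupling_activity(natural, trace))
print("best order   ", best, "activity:", coupling_activity(best, trace))
```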
---
paper_title: Architectural power optimization by bus splitting
paper_content:
A split-bus architecture is proposed to reduce the power dissipation of global data exchange among a set of modules. The resulting bus splitting problem is formulated and solved combinatorially. Experimental results show that the power saving of the split-bus architecture compared to the monolithic-bus architecture varies from 16% to 50%, depending on the characteristics of the data transfer among the modules and the configuration of the split bus. The proposed split-bus architecture can be extended to a multi-way split bus when a large number of modules are to be connected.
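A back-of-the-envelope model of the saving from splitting a shared bus: with a monolithic bus every transfer switches the full bus capacitance, whereas with a two-segment split only one segment's capacitance is switched unless the transfer crosses the pass transistor. The traffic matrix, segment capacitance, and chosen partition below are all assumed values for illustration.

```python
# Illustrative switched-capacitance comparison: monolithic vs. split bus.

# traffic[i][j] = number of transfers from module i to module j (assumed)
traffic = [
    [0, 80, 5, 5],
    [70, 0, 5, 5],
    [5, 5, 0, 90],
    [5, 5, 60, 0],
]
SEGMENT_CAP = 2.0e-12   # capacitance of each bus segment in F (assumed)
partition = {0: "A", 1: "A", 2: "B", 3: "B"}   # modules 0,1 on segment A; 2,3 on B

def switched_cap_monolithic():
    total_transfers = sum(map(sum, traffic))
    return total_transfers * 2 * SEGMENT_CAP        # whole bus = both segments

def switched_cap_split():
    cap = 0.0
    for i, row in enumerate(traffic):
        for j, n in enumerate(row):
            if partition[i] == partition[j]:
                cap += n * SEGMENT_CAP               # only one segment toggles
            else:
                cap += n * 2 * SEGMENT_CAP           # crosses the pass transistor
    return cap

mono, split = switched_cap_monolithic(), switched_cap_split()
print(f"total switched capacitance, monolithic bus: {mono*1e9:.2f} nF")
print(f"total switched capacitance, split bus     : {split*1e9:.2f} nF "
      f"({100*(1-split/mono):.0f}% saving)")
```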
---
paper_title: A Technology-Aware and Energy-Oriented Topology Exploration for On-Chip Networks
paper_content:
As packet-switching interconnection networks replace buses and dedicated wires to become the standard on-chip interconnection fabric, reducing their power consumption has been identified to be a major design challenge. Network topologies have high impact on network power consumption. Technology scaling is another important factor that affects network power since each new technology changes semiconductor physical properties. As shown in this paper, these two aspects need to be considered synergistically. In this paper, we characterize the impact of process technologies on network energy for a range of topologies, starting from 2-dimensional meshes/tori, to variants of meshes/tori that incorporate higher dimensions, multiple hierarchies and express channels. We present a method which uses an analytical model to predict the most energy-efficient topology based on network size and architecture parameters for future technologies. Our model is validated against cycle-accurate network power simulation and shown to arrive at the same predictions. We also show how our method can be applied to actual parallel benchmarks with a case study. We see this work as a starting point for defining a roadmap of future on-chip networks.
---
paper_title: Code compression for embedded systems
paper_content:
Memory is one of the most restricted resources in many modern embedded systems. Code compression can provide substantial savings in terms of size. In a compressed code CPU, a cache miss triggers the decompression of a main memory block, before it gets transferred to the cache. Because the code must be decompressible starting from any point (or at least at cache block boundaries), most file-oriented compression techniques cannot be used. We propose two algorithms to compress code in a space-efficient and simple to decompress way, one which is independent of the instruction set and another which depends on the instruction set. We perform experiments on true instruction sets, a typical RISC (MIPS) and a typical CISC (x86) and compare our results to existing file-oriented compression algorithms.
---
paper_title: The emerging power crisis in embedded processors: what can a poor compiler do?
paper_content:
It is widely acknowledged that even as VLSI technology advances, there is a looming crisis that is an important obstacle to the widespread deployment of mobile embedded devices, namely that of power. This problem can be tackled at many levels like devices, logic, operating systems, micro-architecture and compiler. While there have been various proposals for specific compiler optimizations for power, there has not been any attempt to systematically map out the space for possible improvements. In this paper, we quantitatively characterize the limits of what a compiler can do in optimizing for power using precise modeling of a state-of-the-art embedded processor in conjunction with a robust compiler. We provide insights to how compiler optimizations interact with the internal workings of a processor from the perspective of power consumption. The goal is to point out the promising and not so promising directions of work in this area, to guide the future compiler designer.
---
paper_title: A Survey and Taxonomy of GALS Design Styles
paper_content:
Single-clocked digital systems are largely a thing of the past. Although most digital circuits remain synchronous, many designs feature multiple clock domains, often running at different frequencies. Using an asynchronous interconnect decouples the timing issues for the separate blocks. Systems employing such schemes are called globally asynchronous, locally synchronous (GALS). To minimize time to market, large SoC designs must integrate many functional blocks with minimal design effort. These blocks are usually designed using standard synchronous methods and often have different clocking requirements. A GALS approach can facilitate fast block reuse by providing wrapper circuits to handle interblock communication across clock domain boundaries. SoCs may also achieve power savings by clocking different blocks at their minimum speeds. For example, Scott et al. describe the advantages of GALS design for an embedded-processor peripheral bus.
---
paper_title: A Low-Power Application Specific Instruction Set Processor Using Asynchronous Function Units
paper_content:
Low-power design has become crucial with the widespread use of embedded systems. Embedded processors need to be efficient in order to meet real-time requirements with low power consumption for specific algorithms. As a superset of the traditional very long instruction word (VLIW) architecture, the transport triggered architecture (TTA) has simplicity and flexibility as its main advantages. In TTA processors, special function units can be utilized to increase performance or reduce power dissipation. Asynchronous circuits are characterized by low power consumption, and this characteristic can be exploited in the design of embedded processors. In this article, a low-power embedded processor is designed using asynchronous function units. The processor core is a globally synchronous, locally asynchronous implementation that uses both synchronous and asynchronous function units. An efficient design flow is also presented for using asynchronous circuits in the TTA framework, which is otherwise a purely synchronous design environment. Test results show that this processor has lower power dissipation than a pure synchronous version that uses only synchronous function units.
---
paper_title: Interfacing synchronous and asynchronous modules within a high-speed pipeline
paper_content:
This paper describes a new technique for integrating asynchronous modules within a high-speed synchronous pipeline. Our design eliminates potential metastability problems by using a clock generated by a stoppable ring oscillator, which is capable of driving the large clock load found in present day microprocessors. Using the ATACS design tool, we designed highly optimized transistor-level circuits to control the ring oscillator and generate the clock and handshake signals with minimal overhead. Our interface architecture requires no redesign of the synchronous circuitry. Incorporating asynchronous modules in a high-speed pipeline improves performance by exploiting data-dependent delay variations. Since the speed of the synchronous circuitry tracks the speed of the ring oscillator under different processes, temperatures, and voltages, the entire chip operates at the speed dictated by the current operating conditions, rather than being governed by the worst-case conditions. These two factors together can lead to a significant improvement in average-case performance. The interface design is tested using the 0.6 /spl mu/m HP CMOS14B process in HSPICE.
---
paper_title: GALS at ETH Zurich: success or failure?
paper_content:
The Integrated Systems Laboratory (IIS) of ETH Zurich (Swiss Federal Institute of Technology) has been active in globally-asynchronous locally-synchronous (GALS) research since 1998. During this time, a number of GALS circuits have been fabricated and tested successfully on silicon. From a hardware designer's point of view, this article summarizes the evolution from proof-of-concept designs, through multi-point interconnects, to applications that specifically take advantage of GALS operation to improve cryptographic security. In spite of the fact that they fail to address numerous idiosyncrasies of GALS (such as good partitioning into synchronous islands, port controller design, pausable clock generators, design for test, etc.), hierarchical design flows have been found to form a workable basis. What mainly prevents GALS from gaining wider acceptance is the initial effort required to come up with a design flow that is efficient and dependable.
---
paper_title: Design of on-chip and off-chip interfaces for a GALS NoC architecture
paper_content:
In this paper, we propose the design of on-chip and off-chip interfaces adapted to a globally asynchronous locally synchronous (GALS) network-on-chip (NoC) architecture. The proposed on-chip interface not only handles the resynchronization between the synchronous and asynchronous NoC domains, but also implements NoC communication priorities. This design is based on existing Gray-code-based multi-clock synchronization FIFOs, and is adapted to standard implementation tools. Concerning off-chip communications, a new concept of a mixed synchronous/asynchronous dual-mode NoC port is proposed as an efficient off-chip NoC interface for NoC-based open-platform prototyping. These interfaces have been successfully implemented in a 0.13 µm CMOS technology.
---
paper_title: An FPGA for implementing asynchronous circuits
paper_content:
Field-programmable gate arrays are a dominant implementation medium for digital circuits, especially for glue logic. Unfortunately, they do not support asynchronous circuits. This is a significant problem because many aspects of glue logic and communication interfaces involve asynchronous elements, or require the interconnection of synchronous components operating under independent clocks. We describe Montage, the first FPGA to explicitly support asynchronous circuit implementation, and its mapping software. Montage can be used to realize asynchronous interface circuits or to prototype complete asynchronous systems, thus bringing the benefits of rapid prototyping to asynchronous design. Unfortunately, implementation media for asynchronous circuits and systems have not kept up with those for the synchronous world. Programmable logic devices do not include the special non-digital circuits required by asynchronous design methodologies (e.g., arbiters and synchronizers) nor do they facilitate hazard-free logic implementations. This leads to huge inefficiencies in the implementation of asynchronous designs as circuits require a variety of separate devices. This has caused most asynchronous designers to focus on custom or semi-custom integrated circuits, thus incurring greater expense in time and money. The net effect has been that optimized and robust asynchronous circuits have not become a part of typical system designs. The asynchronous circuits that must be included are usually designed in an ad-hoc manner with many underlying assumptions. This is a highly error-prone process, and causes implementations to be unnecessarily delicate to delay variations. Field-programmable gate arrays, one of today's dominant media for prototyping and implementing digital circuits, are also inappropriate for constructing more than the simplest asynchronous interfaces. They lack the critical elements at the heart of today's asynchronous designs. Unfortunately, resolving this problem is not just a simple matter of adding these elements to the programmable array. The FPGA must also have predictable routing delay and must not introduce hazards in either the logic or routing. Furthermore, the mapping tools must also be modified to handle asynchronous concerns, especially the proper decomposition of logic to fit into the programmable logic blocks and the proper routing of signals to ensure that required timing relationships are met. Ideally, we need an FPGA that can support both synchronous and asynchronous circuits with comparable efficiency. As a step in this direction we present Montage, an integrated system of FPGA architecture and mapping software designed to support both asynchronous circuits and synchronous interfaces. The architecture provides circuits with hazard-free logic and routing, mutual exclusion elements to handle metastability, and methods for initializing unclocked elements. The mapping software generates placement and signal routing sensitive to the timing demands of asynchronous methods. With these features, the Montage system forms a prototyping and implementation medium for asynchronous designs, providing asynchronous circuits with a powerful tool from the synchronous designer's toolbox.
---
paper_title: An ultra low power system architecture for sensor network applications
paper_content:
Recent years have seen a burgeoning interest in embedded wireless sensor networks with applications ranging from habitat monitoring to medical applications. Wireless sensor networks have several important attributes that require special attention to device design. These include the need for inexpensive, long-lasting, highly reliable devices coupled with very low performance requirements. Ultimately, the "holy grail" of this design space is a truly untethered device that operates off of energy scavenged from the ambient environment. In this paper, we describe an application-driven approach to the architectural design and implementation of a wireless sensor device that recognizes the event-driven nature of many sensor-network workloads. We have developed a full-system simulator for our sensor node design to verify and explore our architecture. Our simulation results suggest one to two orders of magnitude reduction in power dissipation over existing commodity-based systems for an important class of sensor network applications. We are currently in the implementation stage of design, and plan to tape out the first version of our system within the next year.
---
paper_title: Compiling the language Balsa to delay insensitive hardware
paper_content:
A silicon compiler, Balsa-c, has been developed for the automatic synthesis of asynchronous, delay-insensitive circuits from the language Balsa. Balsa is derived from CSP with similar language constructs and a single-bit granularity type system.
---
paper_title: Bi-Synchronous FIFO for Synchronous Circuit Communication Well Suited for Network-on-Chip in GALS Architectures
paper_content:
The distribution of a synchronous clock in a system-on-chip (SoC) has become a problem because of wire length and process variation. Novel approaches such as the globally asynchronous, locally synchronous (GALS) paradigm try to solve this issue by partitioning the SoC into isolated synchronous islands. This paper describes the bi-synchronous FIFO used in the DSPIN network-on-chip, which is capable of interfacing systems working with different clock signals (frequency and/or phase). Its interfaces are synchronous, and its architecture is scalable and synthesizable in synchronous standard cells. Metastability situations and latency are analyzed, and throughput, maximum frequency, and area are evaluated as a function of the FIFO depth.
---
paper_title: BitSNAP: dynamic significance compression for a low-energy sensor network asynchronous processor
paper_content:
We present a novel asynchronous processor architecture called BitSNAP that utilizes bit-serial datapaths with dynamic significance compression to yield extremely low-energy consumption. Based on the sensor network asynchronous processor (SNAP) ISA, BitSNAP can reduce datapath energy consumption by 50% over a comparable parallel-word processor, while still providing performance suited for powering low-energy sensor network nodes. In 180 nm CMOS, the processor is expected to run at between 6 and 54 MIPS while consuming 152 pJ/ins at 1.8 V and just 17 pJ/ins at 0.6 V.
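Dynamic significance compression exploits the fact that small two's-complement values carry many redundant sign-extension bits, so a bit-serial datapath can stop shifting once only sign extension remains. The sketch below computes that significant width for a few sample operands; it illustrates the principle only and is not the BitSNAP microarchitecture.

```python
# Illustrative dynamic significance compression for 16-bit two's-complement data.

WIDTH = 16  # assumed datapath width

def significant_bits(value, width=WIDTH):
    """Smallest number of low-order bits from which the value can be
    sign-extended back to the full width."""
    for n in range(1, width + 1):
        chunk = value & ((1 << n) - 1)
        if chunk & (1 << (n - 1)):        # sign-extend the n-bit chunk
            chunk -= 1 << n
        if chunk == value:
            return n
    return width

samples = [0, 1, -1, 5, -6, 130, -500, 32767, -32768]
for v in samples:
    n = significant_bits(v)
    print(f"{v:6d}: send {n:2d} of {WIDTH} bits "
          f"({100 * (WIDTH - n) / WIDTH:.0f}% of the serial cycles saved)")
```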
---
paper_title: ARM996HS™ the first licensable, clockless 32-bit processor core
paper_content:
This article consists of a collection of slides from the author's conference presentation on the ARM-Handshake Solutions partnership. Some of the specific topics discussed include: the specifics of the partnership which involved the joint development of the ARM core implementations; potential application domains; ARM embedded processors and measured power efficiency; a description of Handshake technology; and an overview of joint development technologies; and power processing capabilities.
---
paper_title: Globally Asynchronous, Locally Synchronous Circuits: Overview and Outlook
paper_content:
This article provides a pragmatic survey on the state of the art in GALS architectural techniques, design flows, and applications. The authors also prescribe several industrial inventions and changes in methodology, tools, and design flow that would improve GALS-based integration of IP blocks.
---
paper_title: A Study of the Speedups and Competitiveness of FPGA Soft Processor Cores using Dynamic Hardware/Software Partitioning
paper_content:
Field programmable gate arrays (FPGAs) provide designers with the ability to quickly create hardware circuits. Increases in FPGA configurable logic capacity and decreasing FPGA costs have enabled designers to more readily incorporate FPGAs in their designs. FPGA vendors have begun providing configurable soft processor cores that can be synthesized onto their FPGA products. While FPGAs with soft processor cores provide designers with increased flexibility, such processors typically have degraded performance and energy consumption compared to hard-core processors. Previously, we proposed warp processing, a technique capable of optimizing a software application by dynamically and transparently re-implementing critical software kernels as custom circuits in on-chip configurable logic. In this paper, we study the potential of a MicroBlaze soft-core based warp processing system to eliminate the performance and energy overhead of a soft-core processor compared to a hard-core processor. We demonstrate that the soft-core based warp processor achieves average speedups of 5.8 and energy reductions of 57% compared to the soft core alone. Our data shows that a soft-core based warp processor yields performance and energy consumption competitive with existing hard-core processors, thus expanding the usefulness of soft processor cores on FPGAs to a broader range of applications.
---
paper_title: Reconfigurable computing: architectures and design methods
paper_content:
Reconfigurable computing is becoming increasingly attractive for many applications. This survey covers two aspects of reconfigurable computing: architectures and design methods. The paper includes recent advances in reconfigurable architectures, such as the Altera Stratix II and Xilinx Virtex 4 FPGA devices. The authors identify major trends in general-purpose and special-purpose design methods. It is shown that reconfigurable computing designs are capable of achieving up to 500 times speedup and 70% energy savings over microprocessor implementations for specific applications.
---
paper_title: Dynamo: a transparent dynamic optimization system
paper_content:
We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated by Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HPUX 10.20 operating system.
---
paper_title: Introduction to the cell multiprocessor
paper_content:
This paper provides an introductory overview of the Cell multiprocessor. Cell represents a revolutionary extension of conventional microprocessor architecture and organization. The paper discusses the history of the project, the program objectives and challenges, the design concept, the architecture and programming models, and the implementation.
---
paper_title: Dynamic and Transparent Binary Translation
paper_content:
High-frequency design and instruction-level parallelism (ILP) are important for high-performance microprocessor implementations. The Binary-translation Optimized Architecture (BOA), an implementation of the IBM PowerPC family, combines binary translation with dynamic optimization. The authors use these techniques to simplify the hardware by bridging a semantic gap between the PowerPC's reduced instruction set and even simpler hardware primitives. Processors like the Pentium Pro and Power4 have tried to achieve high frequency and ILP by implementing a cracking scheme in hardware: an instruction decoder in the pipeline generates multiple micro-operations that can then be scheduled out of order. BOA relies on an alternative software approach to decompose complex operations and to generate schedules, and thus offers significant advantages over purely static compilation approaches. This article explains BOA's translation strategy, detailing system issues and architecture implementation.
---
paper_title: Power efficient processor architecture and the cell processor
paper_content:
This paper provides a background and rationale for some of the architecture and design decisions in the Cell processor, a processor optimized for compute-intensive and broadband rich media applications, jointly developed by Sony Group, Toshiba, and IBM. The paper discusses some of the challenges microprocessor designers face and provides motivation for performance per transistor as a reasonable first-order metric for design efficiency. Common microarchitectural enhancements are evaluated relative to this metric. Alternative architectural choices and some of their limitations are also discussed, and a non-homogeneous SMP is proposed as a means to overcome these limitations.
---
paper_title: A Configurable Logic Architecture for Dynamic Hardware/Software Partitioning
paper_content:
In previous work, we showed the benefits and feasibility of having a processor dynamically partition its executing software such that critical software kernels are transparently partitioned to execute as a hardware coprocessor on configurable logic - an approach we call warp processing. The configurable logic place and route step is the most computationally intensive part of such hardware/software partitioning, normally running for many minutes or hours on powerful desktop processors. In contrast, dynamic partitioning requires place and route to execute in just seconds and on a lean embedded processor. We have therefore designed a configurable logic architecture specifically for dynamic hardware/software partitioning. Through experiments with popular benchmarks, we show that by specifically focusing on the goal of software kernel speedup when designing the FPGA architecture, rather than on the more general goal of ASIC prototyping, we can perform place and route for our architecture 50 times faster, using 10,000 times less data memory, and 1,000 times less code memory, than popular commercial tools mapping to commercial configurable logic. Yet, we show that we obtain speedups (2x on average, and as much as 4x) and energy savings (33% on average, and up to 74%) when partitioning even just one loop, which are comparable to commercial tools and fabrics. Thus, our configurable logic architecture represents a good candidate for platforms that will support dynamic hardware/software partitioning, and enables ultra-fast desktop tools for hardware/software partitioning, and even for fast configurable logic design in general.
---
paper_title: A decade of reconfigurable computing: a visionary retrospective
paper_content:
The paper surveys a decade of R&D on coarse grain reconfigurable hardware and related CAD, points out why this emerging discipline is heading toward a dichotomy of computing science, and advocates the introduction of a new soft machine paradigm to replace CAD by compilation.
---
paper_title: A dynamic instruction set computer
paper_content:
A dynamic instruction set computer (DISC) has been developed that supports demand-driven modification of its instruction set. Implemented with partially reconfigurable FPGAs, DISC treats instructions as removable modules paged in and out through partial reconfiguration as demanded by the executing program. Instructions occupy FPGA resources only when needed and FPGA resources can be reused to implement an arbitrary number of performance-enhancing application-specific instructions. DISC further enhances the functional density of FPGAs by physically relocating instruction modules to available FPGA space.
---
paper_title: Organization of computer systems: the fixed plus variable structure computer
paper_content:
The past decade has seen the development of productive fast electronic digital computers. Significant problems have been solved and significant numerical experiments have been executed. Moreover, as expected, a growing number of important problems have been recorded which are not practicably computable by existing systems. These latter problems have provided the incentive for the present development of several large scale digital computers with the goal of one or two orders of magnitude increase in overall computational speed.
---
paper_title: Montium - Balancing between Energy-Efficiency, Flexibility and Performance
paper_content:
Architectures for mobile multimedia devices need to find a balance between energy-efficiency, flexibility and performance. In this paper it is reasoned that this can be accomplished by way of a System-on-Chip (SoC) that comprises heterogeneous processing tiles. This heterogeneous SoC calls for domain-specific coarse-grained reconfigurable processing tiles. A design of a domain-specific coarse-grained reconfigurable architecture, the Montium, is discussed in detail.
---
paper_title: Dynamic hardware/software partitioning: a first approach
paper_content:
Partitioning an application among software running on a microprocessor and hardware co-processors in on-chip configurable logic has been shown to improve performance and energy consumption in embedded systems. Meanwhile, dynamic software optimization methods have shown the usefulness and feasibility of runtime program optimization, but those optimizations do not achieve as much as partitioning. We introduce a first approach to dynamic hardware/software partitioning. We describe our system architecture and initial on-chip tools, including profiler, decompiler, synthesis, and placement and routing tools for a simplified configurable logic fabric, able to perform dynamic partitioning of real benchmarks. We show speedups averaging 2.6 for five benchmarks taken from Powerstone, NetBench, and our own benchmarks.
---
paper_title: MATRIX: a reconfigurable computing architecture with configurable instruction distribution and deployable resources
paper_content:
MATRIX is a novel, coarse-grain, reconfigurable computing architecture which supports configurable instruction distribution. Device resources are allocated to controlling and describing the computation on a per task basis. Application-specific regularity allows us to compress the resources allocated to instruction control and distribution, in many situations yielding more resources for datapaths and computations. The adaptability is made possible by a multi-level configuration scheme, a unified configurable network supporting both datapaths and instruction distribution, and a coarse-grained building block which can serve as an instruction store, a memory element, or a computational element. In a 0.5 µm CMOS process, the 8-bit functional unit at the heart of the MATRIX architecture has a footprint of roughly 1.5 mm × 1.2 mm, making single dies with over a hundred function units practical today. At this process point, 100 MHz operation is easily achievable, allowing MATRIX components to deliver on the order of 10 Gop/s (8-bit ops).
---
paper_title: Warp Processors
paper_content:
We describe a new processing architecture, known as a warp processor, that utilizes a field-programmable gate array (FPGA) to improve the speed and energy consumption of a software binary executing on a microprocessor. Unlike previous approaches that also improve software using an FPGA but do so using a special compiler, a warp processor achieves these improvements completely transparently and operates from a standard binary. A warp processor dynamically detects the binary's critical regions, reimplements those regions as a custom hardware circuit in the FPGA, and replaces the software region by a call to the new hardware implementation of that region. While not all benchmarks can be improved using warp processing, many can, and the improvements are dramatically better than those achievable by more traditional architecture improvements. The hardest part of warp processing is that of dynamically reimplementing code regions on an FPGA, requiring partitioning, decompilation, synthesis, placement, and routing tools, all having to execute with minimal computation time and data memory so as to coexist on chip with the main processor. We describe the results of developing our warp processor. We developed a custom FPGA fabric specifically designed to enable lean place and route tools, and we developed extremely fast and efficient versions of partitioning, decompilation, synthesis, technology mapping, placement, and routing. Warp processors achieve overall application speedups of 6.3X with energy savings of 66% across a set of embedded benchmark applications. We further show that our tools utilize acceptably small amounts of computation and memory which are far less than those of traditional tools. Our work illustrates the feasibility and potential of warp processing, and we can foresee the possibility of warp processing becoming a feature in a variety of computing domains, including desktop, server, and embedded applications.
---
paper_title: Reconfigurable computer origins: the UCLA fixed-plus-variable (F+V) structure computer
paper_content:
Gerald Estrin and his group at the University of California at Los Angeles did the earliest work on reconfigurable computer architectures. The early research, described here, provides pointers to work on models and tools for reconfigurable systems design and analysis.
---
paper_title: Low Power Coarse-Grained Reconfigurable Instruction Set Processor
paper_content:
Current embedded multimedia applications have stringent time and power constraints. Coarse-grained reconfigurable processors have been shown to achieve the required performance. However, there is not much research regarding the power consumption of such processors. In this paper, we present a novel coarse-grained reconfigurable processor and study its power consumption using a power model derived from Wattch. Several processor configurations are evaluated using a set of multimedia applications. Results show that the presented coarse-grained processor can achieve on average 2.5x the performance of a RISC processor with an 18% increase in energy consumption.
---
paper_title: Designing Embedded Processors A Low Power Perspective
paper_content:
As we embrace the world of personal, portable, and perplexingly complex digital systems, it has befallen upon the bewildered designer to take advantage of the available transistors to produce a system which is small, fast, cheap and correct, yet possesses increased functionality. Increasingly, these systems have to consume little energy. Designers are increasingly turning towards small processors, which are low power, and customize these processors both in software and hardware to achieve their objectives of a low power system, which is verified, and has short design turnaround times. Designing Embedded Processors examines the many ways in which processor-based systems are designed to allow low power devices. It looks at processor design methods, memory optimization, dynamic voltage scaling methods, compiler methods, and multiprocessor methods. Each section has an introductory chapter to give a breadth view, and a few specialist chapters in the area to give a deeper perspective. The book provides a good starting point to engineers in the area, and to research students embarking upon the exciting area of embedded systems and architectures.
---
paper_title: Application-specific instruction generation for configurable processor architectures
paper_content:
Designing an application-specific embedded system in nanometer technologies has become more difficult than ever due to the rapid increase in design complexity and manufacturing cost. Efficiency and flexibility must be carefully balanced to meet different application requirements. The recently emerged configurable and extensible processor architectures offer a favorable tradeoff between efficiency and flexibility, and a promising way to minimize certain important metrics (e.g., execution time, code size, etc.) of the embedded processors. This paper addresses the problem of generating the application-specific instructions to improve the execution speed for configurable processors. A set of algorithms, including pattern generation, pattern selection, and application mapping, are proposed to efficiently utilize the instruction set extensibility of the target configurable processor. Applications of our approach to several real-life benchmarks on the Altera Nios processor show encouraging performance speedup (2.75X on average and up to 3.73X in some cases).
---
paper_title: Automatic Data Path Generation from C code for Custom Processors
paper_content:
The stringent performance constraints and short time to market of modern digital systems require automatic methods for the design of high-performance application-specific architectures. This paper presents a novel algorithm for automatic generation of a custom pipelined data path for a given application from its C code. The data path optimization targets both resource utilization and performance. The input to this architecture generator includes the application C code, operation execution frequencies obtained by the profile run, and a component library consisting of functional units, busses, multiplexers, etc. The output is a data path specified as a net-list of resource instances and their connections. The algorithm starts with an architecture that supports maximum parallelism for implementation of the input C code and iteratively refines it until an efficient resource utilization is obtained while maintaining the performance constraint. This paper also presents an algorithm to choose the priority of application basic blocks for optimization. Our experimental results show that automatically generated data paths satisfy given performance criteria and can be obtained in a matter of minutes, leading to significant productivity gains.
---
paper_title: NISC: The Ultimate Reconfigurable Component
paper_content:
With complexities of Systems-on-Chip rising almost daily, the design community has been searching for new methodology that can handle given complexities with increased productivity and decreased times-to-market. The obvious solution that comes to mind is increasing levels of abstraction, or in other words, increasing the size of the basic building blocks. However, it is not clear how many of these building blocks we need and what these basic blocks should be. Obviously, the necessary building blocks are processors and memories. One interesting question is: “Are they sufficient?”. The other interesting question is: “How many types of processors and memories do we really need?”. In this report we try to answer both of these questions and argue that the No-instruction-set computer (NISC) is a single, necessary and sufficient processor component for the design of any digital system.
---
paper_title: A cycle-accurate compilation algorithm for custom pipelined datapaths
paper_content:
Traditional high level synthesis (HLS) techniques generate a datapath and controller for a given behavioral description. The growing wiring cost and delay of today's technologies require aggressive optimizations, such as interconnect pipelining, that cannot be done after generating the datapath and without invalidating the schedule. On the other hand, the increasing manufacturing complexities demand approaches that favor design for manufacturability (DFM). To address these problems we propose an approach in which the datapath of the architecture is fully allocated before scheduling and binding. We compile a C program directly to the datapath and generate the controller. We can support the entire ANSI C syntax because the datapath can be as complex as the datapath of a processor. Since there is no instruction abstraction in this architecture we call it No-Instruction-Set-Computer (NISC). As the first step towards realization of a NISC-based design flow, we present an algorithm that maps an application onto a given datapath by performing scheduling and binding simultaneously. With this algorithm, we achieved up to 70% speedup on a NISC with a datapath similar to that of MIPS, compared to a MIPS gcc compiler. It also efficiently handles different datapath features such as pipelining, forwarding and multi-cycle units.
---
| Title: Low power processor architectures and contemporary techniques for power optimization—a review
Section 1: INTRODUCTION
Description 1: Provide an overview of the increasing need for low power system design due to the growth of battery-operated devices and outline the paper's structure.
Section 2: Relation between energy and power
Description 2: Discuss the relationship between energy, power, and performance, and how power consumption can be categorized and optimized.
Section 3: Dynamic Voltage-Frequency Scaling (DVFS)
Description 3: Explain the DVFS technique and its classifications, including scheduling policies and real-time applications, with examples of proposed methods to save energy.
Section 4: Clock Gating
Description 4: Describe how clock gating can reduce dynamic power consumption by minimizing gate toggling and controlling when parts of the processor are active.
Section 5: Power Gating
Description 5: Outline the power gating technique to reduce leakage power by disconnecting inactive logic blocks from the power supply.
Section 6: COMPONENT LEVEL POWER REDUCTION TECHNIQUES
Description 6: Explore power optimization techniques for major power-consuming components like cache, pipeline, and buses. Detail specific methods to reduce power in these components.
Section 7: Cache
Description 7: Discuss various techniques to reduce power consumption in cache memory, such as buffers, turning off cache lines, sub-banking, and efficient tagging schemes.
Section 8: Pipelining
Description 8: Examine methods to reduce power consumption in pipelined processors, including controlling speculation, pipeline gating, and reducing pipeline depth.
Section 9: Low power Buses
Description 9: Highlight approaches to optimize power usage in bus systems, addressing bus swing reduction, encoding techniques, and bus structure redesigns.
Section 10: ADDITIONAL APPROACHES
Description 10: Introduce other methodologies for power optimization applied at different design stages, including compiler optimizations and code compression.
Section 11: LOW POWER ARCHITECTURES
Description 11: Provide an overview of novel low power architectures, including asynchronous processors, Reconfigurable Instruction Set Processors (RISP), Application Specific Instruction Set Processors (ASIP), extensible processors, and No Instruction Set Computer (NISC) architectures.
Section 12: SUMMARY & CONCLUSION
Description 12: Summarize the discussed techniques and architectures, emphasizing the importance of power efficiency in future embedded systems and potential combinations of the reviewed approaches. |
A survey on universal approximation and its limits in soft computing techniques | 13 | ---
paper_title: Implications and applications of Kolmogorov's superposition theorem
paper_content:
Applications of Kolmogorov's superposition theorem to non-linear circuit and system theory, statistical pattern recognition, and image and multidimensional signal processing are presented and discussed.
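For readers unfamiliar with the theorem being applied here, a standard statement of Kolmogorov's superposition theorem (formulations vary slightly between sources) is the following: every continuous function of n variables on the unit cube has an exact representation built only from univariate continuous functions and addition,

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\left( \sum_{p=1}^{n} \psi_{p,q}(x_p) \right),

where the inner functions \psi_{p,q} are fixed (independent of f) and only the outer functions \Phi_q depend on f. It is this exact representation that the cited applications to circuits, pattern recognition, and signal processing build on.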
---
paper_title: Fuzzy systems are universal approximators
paper_content:
The author proves that fuzzy systems are universal approximators. The Stone-Weierstrass theorem is used to prove that fuzzy systems with product inference, centroid defuzzification, and a Gaussian membership function are capable of approximating any real continuous function on a compact set to arbitrary accuracy. This result can be viewed as an existence theorem of an optimal fuzzy system for a wide variety of problems.
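The class of systems covered by this result (product inference, Gaussian membership functions, centroid defuzzification) has a simple closed form; the following minimal single-input Python sketch, with illustrative rule parameters that are not taken from the paper, shows the kind of approximator the theorem is about:

import numpy as np

# Illustrative rule parameters (assumed values, not from the paper):
# Gaussian antecedent centres/widths and singleton consequents.
centres = np.array([0.0, 0.5, 1.0])
widths  = np.array([0.2, 0.2, 0.2])
y_bar   = np.array([0.0, 1.0, 0.0])

def fuzzy_system(x):
    """Product inference with Gaussian sets and centroid defuzzification."""
    firing = np.exp(-((x - centres) / widths) ** 2)  # rule firing strengths
    return np.sum(firing * y_bar) / np.sum(firing)   # centroid (weighted average)

print(fuzzy_system(0.4))

Tuning the centres, widths, and consequents (and adding rules) is what allows such a system to approximate a given continuous function on a compact set to any prescribed accuracy.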
---
paper_title: Multilayer feedforward networks are universal approximators
paper_content:
Abstract This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.
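As a concrete illustration of the network class this result covers, the following Python sketch (a toy, not the paper's construction) fits a single-hidden-layer network with a sigmoid squashing function to a one-dimensional target, drawing the hidden-layer parameters at random and solving for the output weights by least squares:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)                     # function to approximate

n_hidden = 50                                      # number of hidden units
w = rng.normal(scale=10.0, size=n_hidden)          # random hidden weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)        # random hidden biases

H = sigmoid(np.outer(x, w) + b)                    # hidden-layer outputs
c, *_ = np.linalg.lstsq(H, target, rcond=None)     # output weights by least squares

print("max abs error:", np.max(np.abs(H @ c - target)))

Increasing n_hidden typically reduces the error, which is what the density result guarantees is always possible; the theorem itself, however, says nothing about how many hidden units a given accuracy requires.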
---
paper_title: Kolmogorov's theorem and multilayer neural networks
paper_content:
Abstract Taking advantage of techniques developed by Kolmogorov, we give a direct proof of the universal approximation capabilities of perceptron type networks with two hidden layers. From our proof, we derive estimates of numbers of hidden units based on properties of the function being approximated and the accuracy of its approximation.
---
paper_title: Sufficient conditions on uniform approximation of multivariate functions by general Takagi-Sugeno fuzzy systems with linear rule consequent
paper_content:
We have constructively proved a general class of multi-input single-output Takagi-Sugeno (TS) fuzzy systems to be universal approximators. The systems use any type of continuous fuzzy sets, fuzzy logic AND, fuzzy rules with linear rule consequent and the generalized defuzzifier. We first prove that the TS fuzzy systems can uniformly approximate any multivariate polynomial arbitrarily well, and then prove they can also uniformly approximate any multivariate continuous function arbitrarily well. We have derived a formula for computing the minimal upper bounds on the number of fuzzy sets and fuzzy rules necessary to achieve the prespecified approximation accuracy for any given bivariate function. A numerical example is furnished. Our results provide a solid theoretical basis for fuzzy system applications, particularly as fuzzy controllers and models.
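A Takagi-Sugeno system of the kind analysed here differs from the singleton-consequent case in that each rule's consequent is a linear function of the inputs; a minimal two-input Python sketch (all parameter values are illustrative assumptions) is:

import numpy as np

# Two rules over two inputs; Gaussian antecedents, linear consequents (illustrative).
centres = np.array([[0.0, 0.0], [1.0, 1.0]])   # antecedent centres, one row per rule
width   = 0.5
A = np.array([[1.0, -0.5], [0.2, 0.8]])        # consequent slopes, one row per rule
b = np.array([0.1, -0.3])                      # consequent offsets

def ts_output(x):
    """Product fuzzy AND, linear rule consequents, weighted-average defuzzifier."""
    memb = np.exp(-((x - centres) / width) ** 2)   # per-input memberships
    firing = np.prod(memb, axis=1)                 # fuzzy AND over the inputs
    consequents = A @ x + b                        # y_i = a_i . x + b_i
    return np.sum(firing * consequents) / np.sum(firing)

print(ts_output(np.array([0.3, 0.7])))

The sufficient conditions discussed in the paper bound how many such rules (and input fuzzy sets) are enough to reach a prescribed approximation accuracy.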
---
paper_title: Approximation theory and feedforward networks
paper_content:
Approximation of real functions by feedforward networks of the usual kind is shown to be based on the fundamental principle of approximation by piecewise-constant functions. This principle underlies a simple construction given for three-layer networks and suggests possible difficulties in determining two-layer networks.
---
paper_title: Necessary conditions on minimal system configuration for general MISO Mamdani fuzzy systems as universal approximators
paper_content:
Recent studies have shown that both Mamdani-type and Takagi-Sugeno-type fuzzy systems are universal approximators in that they can uniformly approximate continuous functions defined on compact domains with arbitrarily high approximation accuracy. In this paper, we investigate necessary conditions for general multiple-input single-output (MISO) Mamdani fuzzy systems as universal approximators with as minimal a system configuration as possible. The general MISO fuzzy systems employ almost arbitrary continuous input fuzzy sets, arbitrary singleton output fuzzy sets, arbitrary fuzzy rules, product fuzzy logic AND, and the generalized defuzzifier containing the popular centroid defuzzifier as a special case. Our necessary conditions are developed under the practically sensible assumption that only a finite set of extrema of the multivariate continuous function to be approximated is available. We have first revealed a decomposition property of the general fuzzy systems: an r-input fuzzy system can always be decomposed into the sum of r simpler fuzzy systems, where the first system has only one input variable, the second one two input variables, and the last one r input variables. Utilizing this property, we have derived some necessary conditions for the fuzzy systems to be universal approximators with minimal system configuration. The conditions expose the strength as well as the limitations of fuzzy approximation: (1) only a small number of fuzzy rules may be needed to uniformly approximate multivariate continuous functions that have a complicated formulation but a relatively small number of extrema; and (2) the number of fuzzy rules must be large in order to approximate highly oscillatory continuous functions. A numerical example is given to demonstrate our new results.
---
paper_title: Fuzzy logic controllers are universal approximators
paper_content:
In this paper, we consider the fundamental theoretical question of why fuzzy control has such good performance for a wide variety of practical problems. We try to answer this fundamental question by proving that for each fixed fuzzy logic belonging to a wide class of fuzzy logics, and for each fixed type of membership function belonging to a wide class of membership functions, the fuzzy logic control systems using these two and any method of defuzzification are capable of approximating any real continuous function on a compact set to arbitrary accuracy. On the other hand, this result can be viewed as an existence theorem of an optimal fuzzy logic control system for a wide variety of problems.
---
paper_title: A comparative study on sufficient conditions for Takagi-Sugeno fuzzy systems as universal approximators
paper_content:
Universal approximation is the basis of theoretical research and practical applications of fuzzy systems. Studies on the universal approximation capability of fuzzy systems have achieved great progress in recent years. In this paper, linear Takagi-Sugeno (TS) fuzzy systems that use linear functions of input variables as rule consequent and their special case, named simplified fuzzy systems that use fuzzy singletons as rule consequent, are investigated. On condition that overlapped fuzzy sets are employed, new sufficient conditions for simplified fuzzy systems and linear TS fuzzy systems as universal approximators are given, respectively. Then, a comparative study on existing sufficient conditions is carried out with numeric examples.
---
paper_title: Fuzzy systems as universal approximators
paper_content:
An additive fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. An additive fuzzy system approximates the function by covering its graph with fuzzy patches in the input-output state space and averaging patches that overlap. The fuzzy system computes a conditional expectation E[Y|X] if we view the fuzzy sets as random sets. Each fuzzy rule defines a fuzzy patch and connects commonsense knowledge with state-space geometry. Neural or statistical clustering systems can approximate the unknown fuzzy patches from training data. These adaptive fuzzy systems approximate a function at two levels. At the local level the neural system approximates and tunes the fuzzy rules. At the global level the rules or patches approximate the function.
---
paper_title: ARE FUZZY SYSTEMS UNIVERSAL APPROXIMATORS?
paper_content:
Abstract This paper is a critical reflection on various results in the literature claiming that fuzzy systems are universal approximators. For this purpose the most specific features of fuzzy systems are outlined and it is discussed to what extent they are incorporated in the formal definition of a fuzzy system in the literature. It is argued that fuzzy systems can only be universal approximators in a rather reduced sense where some crucial features are neglected. The goal is to give an impulse to investigate more adequate mathematical concepts of a fuzzy system that also take into account such features as transparency and linguistic interpretability.
---
paper_title: On the approximate realization of continuous mappings by neural networks
paper_content:
Abstract In this paper, we prove that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions. The starting point of the proof for the one hidden layer case is an integral formula recently proposed by Irie-Miyake and from this, the general case (for any number of hidden layers) can be proved by induction. The two hidden layers case is proved also by using the Kolmogorov-Arnold-Sprecher theorem and this proof also gives non-trivial realizations.
---
paper_title: Approximation capabilities of multilayer feedforward networks
paper_content:
Abstract We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.
---
paper_title: Approximation by superpositions of sigmoidal functions
paper_content:
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
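The "finite linear combinations of compositions" in question are sums of the following form, and the density claim is that such sums can approximate any continuous function on the unit cube arbitrarily well (a standard paraphrase of the result):

G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + \theta_j\right), \qquad \alpha_j, \theta_j \in \mathbb{R},\; w_j \in \mathbb{R}^{n},

with \sigma a fixed continuous sigmoidal nonlinearity; this is exactly the input-output map of a single-hidden-layer feedforward network with N hidden units and a linear output.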
---
paper_title: Neural networks with a continuous squashing function in the output are universal approximators
paper_content:
In 1989 Hornik as well as Funahashi established that multilayer feedforward networks without the squashing function in the output layer are universal approximators. This result has often been used improperly because it has been applied to multilayer feedforward networks with the squashing function in the output layer. In this paper, we will prove that this kind of neural network is also a universal approximator, i.e. such networks are capable of approximating any Borel measurable function from one finite dimensional space into (0,1)^n to any desired degree of accuracy, provided sufficiently many hidden units are available.
---
paper_title: Hierarchical fuzzy control
paper_content:
In this paper, a simplified method for developing Takagi-Sugeno-Kang (TSK) style fuzzy controllers is described. The method takes advantage of developer's knowledge and understanding of the system being implemented. System attributes such as variable and nonlinearity independence are utilized to significantly reduce the required rule-base size when describing the system, without compromising robustness and performance. An example of a tracking system is used to convey the process of implementing a hierarchical fuzzy controller.
---
paper_title: Stability of a new interpolation method
paper_content:
Aims to complete the analysis of an α-cut based interpolation technique originating from the KH interpolation. Our goal is to investigate its stability behaviour. As was shown in Joo et al. (1997) and Tikk et al. (1999), the original KH interpolation is stable in the sense that if the inputs change slightly, the output does not change much either. The main result of this paper shows that this significant feature of the KH interpolation can be carried over to the proposed method. A possible generalization of the proposed method is also presented.
---
paper_title: A note on universal approximation by hierarchical fuzzy systems
paper_content:
This paper proves, in a constructive manner, that the general n-dimensional hierarchical fuzzy systems are universal approximators. It is an extension of the results in [L.X. Wang, Fuzzy Sets and Systems 93 (1998) 223–230]. An upper bound of approximation error is also given.
---
paper_title: General SISO Takagi-Sugeno fuzzy systems with linear rule consequent are universal approximators
paper_content:
Takagi-Sugeno (TS) fuzzy systems have been employed as fuzzy controllers and fuzzy models in successfully solving difficult control and modeling problems in practice. Virtually all the TS fuzzy systems use linear rule consequent. At present, there exist no results (qualitative or quantitative) to answer the fundamentally important question that is especially critical to TS fuzzy systems as fuzzy controllers and models, "Are TS fuzzy systems with linear rule consequent universal approximators?" If the answer is yes, then how can they be constructed to achieve prespecified approximation accuracy and what are the sufficient conditions on system configuration? In this paper, we provide answers to these questions for a general class of single-input single-output (SISO) fuzzy systems that use any type of continuous input fuzzy sets, TS fuzzy rules with linear consequent and a generalized defuzzifier containing the widely used centroid defuzzifier as a special case. We first constructively prove that this general class of SISO TS fuzzy systems can uniformly approximate any polynomial arbitrarily well and then prove, by utilizing the Weierstrass approximation theorem, that the general TS fuzzy systems can uniformly approximate any continuous function with arbitrarily high precision. Furthermore, we have derived a formula as part of sufficient conditions for the fuzzy approximation that can compute the minimal upper bound on the number of input fuzzy sets and rules needed for any given continuous function and prespecified approximation error bound. An illustrative numerical example is provided.
---
paper_title: Universal approximation by hierarchical fuzzy systems
paper_content:
Abstract A serious problem limiting the applicability of standard fuzzy controllers is the rule-explosion problem; that is, the number of rules increases exponentially with the number of input variables to the fuzzy controller. A way to deal with this “curse of dimensionality” is to use the hierarchical fuzzy systems. A hierarchical fuzzy system consists of a number of hierarchically connected low-dimensional fuzzy systems. It can be shown that the number of rules in the hierarchical fuzzy system increases linearly with the number of input variables. In this paper, we prove that the hierarchical fuzzy systems are universal approximators; that is, they can approximate any nonlinear function on a compact set to arbitrary accuracy. Our proof is constructive, that is, we first construct a hierarchical fuzzy system in a step-by-step manner, then prove that the constructed fuzzy system satisfies an error bound, and finally show that the error bound can be made arbitrarily small.
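The scale of the rule-explosion problem that motivates the hierarchical construction is easy to make concrete; the numbers in the short computation below are illustrative, not taken from the paper:

n_inputs = 6         # input variables (illustrative)
m_sets   = 5         # fuzzy sets per input (illustrative)

# Flat (standard) fuzzy system: one rule per combination of input fuzzy sets.
flat_rules = m_sets ** n_inputs                    # 15625

# Hierarchical system chaining two-input subsystems: rule count grows linearly.
hierarchical_rules = (n_inputs - 1) * m_sets ** 2  # 125

print(flat_rules, hierarchical_rules)

This is the sense in which the hierarchical arrangement trades exponential for linear growth in the number of rules while, as the paper shows, still retaining the universal approximation property.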
---
paper_title: General Takagi-Sugeno fuzzy systems with simplified linear rule consequent are universal controllers, models and filters
paper_content:
Abstract Takagi-Sugeno (TS) fuzzy systems have successfully been employed, mainly in a trial-and-error manner, to solve many control and modeling problems; but their applications as signal filters remain to be fully explored. Compared to their nonfuzzy counterparts, TS fuzzy controllers and models are difficult to construct efficiently because there is a large number of design parameters in the rule consequent. The number grows dramatically with the increase of the number of input fuzzy sets and input variables. Furthermore, there exist few published results on the relationship between TS fuzzy controllers/models/filters and their nonfuzzy counterparts. In this paper, we investigate, in relation to some popular nonfuzzy controllers, models and filters, the analytical structure of a general class of multi-input single-output (MISO) TS fuzzy systems that use arbitrary fuzzy rules with our recently introduced simplified linear rule consequent. Other components of the fuzzy systems in this study are general: arbitrary continuous input fuzzy sets, any type of fuzzy logic AND and the generalized defuzzifier containing the widely used centroid defuzzifier as a special case. We prove that the general MISO TS fuzzy systems are: (1) nonlinear variable gain controllers when implemented as controllers, or (2) nonlinear time-varying auto-regressive with extra input (ARX) models when implemented as models, or (3) nonlinear infinite impulse response (IIR) or finite impulse response (FIR) filters when implemented as filters. Furthermore, we constructively prove that the general TS fuzzy systems with the simplified linear rule consequent are universal approximators and can approximate any continuous function on a closed domain arbitrarily well. The practical implication of these results is that these fuzzy systems, with far fewer design parameters, are always able to produce solutions to various control, modeling and filtering problems. We also establish sufficient conditions that can be used to calculate the number of input fuzzy sets and rules needed for achieving prespecified approximation accuracy.
---
paper_title: Universal approximation by hierarchical fuzzy system with constraints on the fuzzy rule
paper_content:
This paper presents a special hierarchical fuzzy system where the outputs of the previous layer are not used in the IF-parts, but used only in the THEN-parts of the fuzzy rules of the current layer. The proposed scheme can be shown to be a universal approximator to any continuous function on a compact set if complete fuzzy sets are used in the IF-parts of the fuzzy rules with singleton fuzzifier and center average defuzzifier. From the simulation of the ball-and-beam control system, it is demonstrated that the proposed scheme approximates with good accuracy the model nonlinear controller with fewer fuzzy rules than the centralized fuzzy system, and its control performance is comparable to that of the nonlinear controller.
---
paper_title: Approximation theory of fuzzy systems based upon genuine many-valued implications - SISO cases
paper_content:
It is proved that the single input and single output (SISO) fuzzy systems based upon genuine many-valued implications are universal approximators. It is shown theoretically that fuzzy control systems based upon genuine many-valued implications are equivalent to those based upon t-norm implications, and a general approach to constructing fuzzy systems is given. It is also shown that the defuzzifier based upon center of areas is not appropriate for fuzzy systems based upon genuine many-valued implications.
---
paper_title: Approximation by neural networks is not continuous
paper_content:
Abstract It is shown that in a Banach space X satisfying mild conditions, for its infinite, linearly independent subset G, there is no continuous best approximation map from X to the n-span, span_n G. The hypotheses are satisfied when X is an L_p-space, 1 < p < ∞, and G is the set of functions computed by the hidden units of a typical neural network (e.g., Gaussian, Heaviside or hyperbolic tangent). If G is finite and span_n G is not a subspace of X, it is also shown that there is no continuous map from X to span_n G within any positive constant of a best approximation.
---
paper_title: Approximation of functions by perceptron networks with bounded number of hidden units
paper_content:
Abstract We examine the effect of constraining the number of hidden units. For one-hidden-layer networks with a fairly general type of units (including perceptrons with any bounded activation function and radial-basis-function units), we show that when the size of the parameters is also bounded, the best approximation property is satisfied, which means that there always exists a parametrization achieving the global minimum of any error function generated by a supremum or L_p-norm. We also show that the only functions that can be approximated with arbitrary accuracy by increasing parameters in networks with a fixed number of Heaviside perceptrons are functions equal almost everywhere to functions that can be exactly computed by such networks. We give a necessary condition on values that such piecewise constant functions must achieve.
---
paper_title: Polytopic and TS models are nowhere dense in the approximation model space
paper_content:
We show in this paper that the set of functions, consisting of polytopic or TS models constructed from a finite number of components, is nowhere dense in the approximation model space, if that is defined as a subset of continuous functions. This topological notion means that the given set of functions lies "almost discretely" in the space of approximated functions. As a consequence, by means of the mentioned models we cannot, in general, approximate continuous functions arbitrarily well if the number of components is restricted. Thus, only functions satisfying certain conditions can be approximated by such models, or alternatively, we need an unbounded number of components. The possible solutions are outlined in the paper.
---
paper_title: Universal approximation bounds for superpositions of a sigmoidal function
paper_content:
Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings.
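The contrast the abstract draws can be summarised in two rates (stated informally here; the precise statement restricts f to functions whose Fourier transform has a bounded first absolute moment, and the constants are given in the paper):

\int \bigl(f(x) - f_n(x)\bigr)^2 \, \mu(dx) = O(1/n) \quad \text{(n sigmoidal units, inner parameters also tuned)},

\int \bigl(f(x) - g_n(x)\bigr)^2 \, \mu(dx) \ge c / n^{2/d} \quad \text{(any fixed n-term linear expansion)},

where d is the input dimension; the gap between the two rates is what makes the nonlinear parameterization attractive in high dimensions.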
---
paper_title: A Simple Lemma on Greedy Approximation in Hilbert Space and Convergence Rates for Projection Pursuit Regression and Neural Network Training
paper_content:
A general convergence criterion for certain iterative sequences in Hilbert space is presented. For an important subclass of these sequences, estimates of the rate of convergence are given. Under very mild assumptions these results establish an O(1/√n) nonsampling convergence rate for projection pursuit regression and neural network training, where n represents the number of ridge functions, neurons or coefficients in a greedy basis expansion.
---
paper_title: Convergence rates for single hidden layer feedforward networks
paper_content:
Abstract By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth-order partial derivatives then the L_2-norm estimation error approaches O_p(n^(-1/2)) for large m. Two steps are required for determining these bounds. A bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on approximation properties of Fourier series yields a bound on L_2-norm approximation error. This bound is less than O(q^(-1/2)) for approximating a smooth function by networks with q hidden units. A modification of existing results for bounding estimation error provides a general theorem for calculating estimation error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.
---
| Title: A survey on universal approximation and its limits in soft computing techniques
Section 1: Introduction
Description 1: Provide background information and the historical context related to universal approximation, particularly focusing on its initial hypothesis and subsequent mathematical proofs.
Section 2: Mathematical Notions
Description 2: Summarize the mathematical concepts and notations that are utilized throughout the paper, ensuring readers have a clear understanding of the fundamental terms.
Section 3: Positive results on universal approximation
Description 3: Discuss the successful instances and demonstrations of universal approximation across various soft computing techniques.
Section 3.1: Universal approximation in neural networks
Description 3.1: Detail the positive results specific to neural networks, including key proofs and theorems that establish their universal approximation properties.
Section 3.2: Universal approximation in fuzzy systems
Description 3.2: Outline the findings and key results that showcase the universal approximation capabilities of fuzzy systems.
Section 4: Negative results on universal approximation
Description 4: Present the limitations and challenges of universal approximation, discussing discontinuity issues and nowhere denseness theorems.
Section 4.1: Discontinuity of the best approximation
Description 4.1: Explain the discontinuity problem in obtaining the best approximation and its practical implications.
Section 4.2: Nowhere denseness theorems
Description 4.2: Discuss the nowhere denseness theorems in the context of fuzzy systems and the implications for practical applications.
Section 5: The rate of approximation
Description 5: Analyze the rate at which the approximation converges to the actual function and discuss factors that influence this rate.
Section 6: Constructive results
Description 6: Provide methods and constructive approaches to achieve universal approximation in both neural networks and fuzzy systems.
Section 6.1: Results for neural networks
Description 6.1: Summarize constructive results and approaches specifically related to neural networks, highlighting methods for determining necessary number of hidden units.
Section 6.2: Results for fuzzy systems
Description 6.2: Detail the constructive results for fuzzy systems, including the methodologies for determining the optimal number of rules.
Section 7: Conclusions
Description 7: Summarize the key findings and insights from the survey, emphasizing the state-of-the-art in universal approximation and its practical implications in soft computing. |
A survey of power management techniques in mobile computing operating systems | 8 | ---
paper_title: The challenges of mobile computing
paper_content:
The technical challenges that mobile computing must surmount to achieve its potential are hardly trivial. Some of the challenges in designing software for mobile computing systems are quite different from those involved in the design of software for today's stationary networked systems. The authors focus on the issues pertinent to software designers without delving into the lower level details of the hardware realization of mobile computers. They look at some promising approaches under investigation and also consider their limitations. The many issues to be dealt with stem from three essential properties of mobile computing: communication, mobility, and portability. Of course, special-purpose systems may avoid some design pressures by doing without certain desirable properties. For instance, portability would be less of a concern for mobile computers installed in the dashboards of cars than for hand-held mobile computers. However, the authors concentrate on the goal of large-scale, hand-held mobile computing as a way to reveal a wide assortment of issues.
---
paper_title: Thwarting the Power Hungry Disk
paper_content:
Minimizing power consumption is important for mobile computers, and disks consume a significant portion of system-wide power. There is a large difference in power consumption between a disk that is spinning and one that is not, so systems try to keep the disk spinning only when it must. The system must trade off between the power that can be saved by spinning the disk down quickly after each access and the impact on response time from spinning it up again too often. We use trace-driven simulation to examine these trade-offs, and compare a number of different algorithms for controlling disk spin-down. We simulate disk accesses from a mobile computer (a Macintosh Powerbook Duo 230) and also from a desktop workstation (a Hewlett-Packard 9000/845 personal workstation running HP-UX), running on two disks used on mobile computers, the Hewlett-Packard Kittyhawk C3014A and the Quantum Go•Drive 120. We show that the "perfect" off-line algorithm--one that consumes minimum power without increasing response time relative to a disk that never spins down--can reduce disk power consumption by 35--50%, compared to the fixed threshold suggested by manufacturers. An on-line algorithm with a threshold of 10 seconds, running on the Powerbook trace and Go•Drive disk, reduces energy consumption by about 40% compared to the 5-minute threshold recommended by manufacturers of comparable disks; however, over a 4-hour trace period it results in 140 additional delays due to disk spin-ups.
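A fixed-threshold spin-down policy of the kind compared here can be prototyped with a very small trace-driven simulator; the power and spin-up figures below are placeholder assumptions, not the measured values for the Kittyhawk or Go-Drive drives:

def simulate(access_times, threshold, p_idle=1.0, p_sleep=0.1, e_spinup=6.0):
    """Energy (joules) and spin-up count for a fixed idle-threshold policy.

    access_times: sorted access timestamps in seconds.
    threshold:    seconds of idleness to wait before spinning the disk down.
    Power/energy parameters are illustrative assumptions.
    """
    energy, spinups = 0.0, 0
    for prev, nxt in zip(access_times, access_times[1:]):
        gap = nxt - prev
        if gap > threshold:
            # Idle until the threshold, sleep for the rest, pay the spin-up cost.
            energy += threshold * p_idle + (gap - threshold) * p_sleep + e_spinup
            spinups += 1
        else:
            energy += gap * p_idle
    return energy, spinups

trace = [0, 3, 5, 40, 41, 300, 302]        # toy access trace (seconds)
for t in (2, 10, 300):
    print(t, simulate(trace, t))

Sweeping the threshold over a real access trace is essentially how the trade-off between saved energy and added spin-up delays is evaluated in studies like this one.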
---
paper_title: A Quantitative Analysis of Disk Drive Power Management in Portable Computers
paper_content:
With the advent and subsequent popularity of portable computers, power management of system components has become an important issue. Current portable computers implement a number of power reduction techniques to achieve a longer battery life. Included among these is spinning down a disk during long periods of inactivity. In this paper, we perform a quantitative analysis of the potential costs and benefits of spinning down the disk drive as a power reduction technique. Our conclusion is that almost all the energy consumed by a disk drive can be eliminated with little loss in performance. Although on current hardware, reliability can be impacted by our policies, the next generation of disk drives will use technology (such as dynamic head loading) which is virtually unaffected by repeated spinups. We found that the optimal spindown delay time, the amount of time the disk idles before it is spun down, is 2 seconds. This differs significantly from the 3-5 minutes in current practice by industry. We will show in this paper the effect of varying the spindown delay on power consumption; one conclusion is that a 3-5 minute delay results in only half of the potential benefit of spinning down a disk.
---
paper_title: Predictive power conservation
paper_content:
In the case of a hard disk in a laptop, the restart cost is considerable: the disk has to be spun up to speed again, recalibrated, and made ready to execute commands. This takes between 2.5 and 6 seconds for typical 2.5" hard disk drives. During this time, the system is locked up waiting for the I/O to complete. The power costs are also high: on a representative Quantum 2.5" drive, spinning up the drive requires 5 W for 1 second, followed by 1.5 W for a 1-second calibration pass.
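Using the figures quoted above, the energy of one restart and the idle period it must offset follow from simple arithmetic; the idle and standby power levels below are assumptions added for the comparison, not values from the paper:

# Restart cost from the abstract: 5 W for 1 s spin-up + 1.5 W for 1 s calibration.
spinup_energy = 5.0 * 1.0 + 1.5 * 1.0      # 6.5 joules

# Assumed drive power levels (illustrative only).
p_idle, p_standby = 1.0, 0.025             # watts

# Spinning down only pays off if the idle period exceeds this break-even time.
break_even = spinup_energy / (p_idle - p_standby)
print(f"{spinup_energy} J per restart, break-even idle time ~{break_even:.1f} s")

With these assumed numbers the break-even idle time is on the order of a few seconds, which is why aggressive spin-down thresholds can be worthwhile despite the seemingly large restart cost.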
---
paper_title: Energy efficient indexing on air
paper_content:
We consider wireless broadcasting of data as a way of disseminating information to a massive number of users. Organizing and accessing information on wireless communication channels is different from the problem of organizing and accessing data on the disk. We describe two methods, (1, m) Indexing and Distributed Indexing, for organizing and accessing broadcast data. We demonstrate that the proposed algorithms lead to significant improvement of battery life, while retaining a low access time.
---
paper_title: Dealing with Mobility: Issues and Research Challenges
paper_content:
Recent advances in hardware and communication technology have made mobile computing possible. It is expected, [BIV92], that in the near future, tens of millions of users will carry a portable computer with a wireless connection to a worldwide information network. This rapidly expanding technology poses new challenging problems. The mobile computing environment is an environment characterized by frequent disconnections, significant limitations of bandwidth and power, resource restrictions and fast-changing locations. The peculiarities of the new environment make old software systems inadequate and raise new challenging research questions. In this report we attempt to investigate the impact of mobility on today's software systems, report on how research is starting to deal with mobility, and state some problems that remain open.
---
| Title: A survey of power management techniques in mobile computing operating systems
Section 1: Introduction
Description 1: Introduce the motivation and scope of power management techniques in mobile computing.
Section 2: CPU Power Consumption
Description 2: Discuss techniques and algorithms for managing and reducing CPU power consumption.
Section 3: Hard Drive Power Consumption
Description 3: Analyze methods for controlling and reducing power consumption of hard drives in mobile devices.
Section 4: Power and Wireless Communication
Description 4: Explore techniques for managing power consumption related to wireless communication and data broadcast.
Section 5: Scheduling for Reduced CPU Energy
Description 5: Examine the specifics of Weiser et al.'s work on algorithms for adjusting CPU clock speed to save power.
Section 6: Disk Drive Power Management
Description 6: Review different strategies and analyses for reducing power consumption through hard drive spindown and spinup mechanisms.
Section 7: Energy Efficient Indexing on Air
Description 7: Investigate approaches for organizing and accessing broadcast data to optimize power usage in wireless communication.
Section 8: Conclusion
Description 8: Summarize the key findings and implications of using various power management techniques in mobile computing operating systems. |
Mary Ann Liebert, Inc. Research on Presence in Virtual Reality: A Survey | 15 | ---
paper_title: A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments
paper_content:
This paper reviews the concepts of immersion and presence in virtual environments (VEs). We propose that the degree of immersion can be objectively assessed as the characteristics of a technology, and has dimensions such as the extent to which a display system can deliver an inclusive, extensive, surrounding, and vivid illusion of a virtual environment to a participant. Other dimensions of immersion are concerned with the extent of body matching, and the extent to which there is a self-contained plot in which the participant can act and in which there is an autonomous response. Presence is a state of consciousness that may be concomitant with immersion, and is related to a sense of being in a place. Presence governs aspects of autonomic responses and higher-level behaviors of a participant in a VE. The paper considers single and multiparticipant shared environments, and draws on the experience of Computer-Supported Cooperative Working (CSCW) research as a guide to understanding presence in shared environments. The paper finally outlines the aims of the FIVE Working Group, and the 1995 FIVE Conference in London, UK.
---
paper_title: A Virtual Presence Counter
paper_content:
This paper describes a new measure for presence in immersive virtual environments (VEs) that is based on data that can be unobtrusively obtained during the course of a VE experience. At different times during an experience, a participant will occasionally switch between interpreting the totality of sensory inputs as forming the VE or the real world. The number of transitions from virtual to real is counted, and, using some simplifying assumptions, a probabilistic Markov chain model can be constructed to model these transitions. This model can be used to estimate the equilibrium probability of being “present” in the VE. This technique was applied in the context of an experiment to assess the relationship between presence and body movement in an immersive VE. The movement was that required by subjects to reach out and touch successive pieces on a three-dimensional chess board. The experiment included twenty subjects, ten of whom had to reach out to touch the chess pieces (the active group) and ten of whom only had to click a handheld mouse button (the control group). The results revealed a significant positive association in the active group between body movement and presence. The results lend support to interaction paradigms that are based on maximizing the match between sensory data and proprioception.
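The equilibrium estimate described here can be reproduced with a two-state chain; the sketch below uses made-up transition counts (the paper's estimator also relies on additional simplifying assumptions about the underlying process):

import numpy as np

# Counted transitions between interpreting the environment as Virtual (V) or
# Real (R) during a session -- illustrative numbers, not the study's data.
counts = {("V", "V"): 180, ("V", "R"): 4, ("R", "V"): 4, ("R", "R"): 12}

states = ["V", "R"]
P = np.array([[counts[(i, j)] for j in states] for i in states], dtype=float)
P /= P.sum(axis=1, keepdims=True)          # row-normalise to transition probabilities

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

print("estimated presence probability:", dict(zip(states, pi))["V"])

The reported measure is essentially this stationary probability of the "virtual" state, which is why it can be computed unobtrusively from observed breaks in presence rather than from questionnaire items alone.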
---
paper_title: Embodied Presence in Virtual Environments
paper_content:
Presence, the sense of being in a virtual environment (VE), is analysed in an embodied cognition framework. We propose that VEs are mentally represented as meshed sets of patterns of actions and that presence is experienced when these actions include the perceived possibility to navigate and move one's own body in the VE. A factor analysis of survey data shows 3 different presence components: spatial presence, involvement, and judgement of realness. A path analysis shows that spatial presence is mostly determined by sources of meshed patterns of actions: interaction with the VE, understanding of dynamics, and perception of dramatic meaning.
---
paper_title: Measuring Presence in Virtual Environments: A Presence Questionnaire
paper_content:
The effectiveness of virtual environments (VEs) has often been linked to the sense of presence reported by users of those VEs. (Presence is defined as the subjective experience of being in one place or environment, even when one is physically situated in another.) We believe that presence is a normal awareness phenomenon that requires directed attention and is based in the interaction between sensory stimulation, environmental factors that encourage involvement and enable immersion, and internal tendencies to become involved. Factors believed to underlie presence were described in the premier issue of Presence: Teleoperators and Virtual Environments. We used these factors and others as the basis for a presence questionnaire (PQ) to measure presence in VEs. In addition we developed an immersive tendencies questionnaire (ITQ) to measure differences in the tendencies of individuals to experience presence. These questionnaires are being used to evaluate relationships among reported presence and other research variables. Combined results from four experiments lead to the following conclusions: (1) the PQ and ITQ are internally consistent measures with high reliability; (2) there is a weak but consistent positive relation between presence and task performance in VEs; (3) individual tendencies as measured by the ITQ predict presence as measured by the PQ; and (4) individuals who report more simulator sickness symptoms in VEs report less presence than those who report fewer symptoms.
---
paper_title: Depth of Presence in Virtual Environments
paper_content:
This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of “stacking depth,” that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of “being there,” the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.
---
paper_title: Real Presence: How Different Ontologies Generate Different Criteria for Presence, Telepresence, and Virtual Presence
paper_content:
This article claims that the meaning of presence is closely linked to the concept we have of reality, i.e., to the ontology that we more or less explicitly adopt. Different ontological stances support different criteria for presence, telepresence, and virtual presence. We propose a cultural conception of presence that challenges the current idea that experiencing a real or simulated environment deals essentially with perceiving its “objective” physical features. We reject commonsense ingenuous realism and its dualism opposing external reality and internal ideas. In our perspective, presence in an environment, real or simulated, means that individuals can perceive themselves, objects, and other people not only as situated in an external space but also as immersed in a sociocultural web connecting objects, people, and their interactions. This cultural web---structured by artifacts both physical (e.g., the physical components of the computer networks) and ideal (e.g., the social norms that shape the organizational use of the computer networks)---makes possible communication and cooperation among different social actors by granting them a common reference grid. Environments, real and virtual, are not private recesses but public places for meaningful social interaction mediated by artifacts. Experiencing presence in a social environment such as a shared virtual office requires more than the reproduction of the physical features of external reality; it requires awareness of the cultural web that makes meaningful---and therefore visible---both people and objects populating the environment.
---
paper_title: The Reality of Experience: Gibson's Way
paper_content:
This paper considers some first principles that might provide a basis for an objective science of experience (presence or immersion). Dimensions that are considered include classical Newtonian measures of the distal stimulus, changes in neural mechanisms reflecting the proximal stimulus, information theoretic measures of the statistical properties of events, and functional properties related to intentions and abilities. Gibson's ecological framework is suggested as a promising functional approach for defining the reality of experience in relation to the problem of designing virtual environments. This approach emphasizes the tight coordination between perception and action and fixes the measurement coordinate system relative to the capacity for action.
---
paper_title: Leadership and collaboration in shared virtual environments
paper_content:
We present an experiment that investigates the behaviour of small groups of participants in a wide-area distributed collaborative virtual environment (CVE). This is the third and largest study in a series of experiments that have examined trios of participants carrying out a highly collaborative puzzle-solving task. The results, reproducing those of earlier studies, suggest a positive relationship between place-presence and co-presence, and between co-presence and group accord, with evidence supporting the notion that immersion confers a leadership advantage.
---
paper_title: Telepresence via Television: Two Dimensions of Telepresence May Have Different Connections to Memory and Persuasion.
paper_content:
To be truly useful for media theory, the concept of presence should be applicable to all forms of virtual environments including those of traditional media like television and traditional content such as advertising. This study reports the results of an experiment on the effects of the visual angle of the display (sensory saturation) and room illumination (sensory suppression) on the sensation of telepresence during normal television viewing. A self-report measure of presence yielded two factors. Using Gerrig's (1993) terminology for the sense of being transported to a mediated environment, we labeled the two factors "arrival," for the feeling of being there in the virtual environment, and "departure," for the feeling of not being there in the physical environment. It appears that being in the virtual environment is not equivalent to not being in the physical environment. A path analysis found that these two factors have very different relationships to viewer memory for the experience and for attitude change (i.e., buying intention and confidence in product decision). We theorize that the departure factor may be measuring the feeling that the medium has disappeared and may constitute a deeper absorption into the virtual environment. The study did not find evidence that visual angle and room illumination affected the sensation of telepresence.
---
paper_title: Presence of Mind: A Reaction to Thomas Sheridan's Further Musings on the Psychophysics of Presence
paper_content:
An operator's sense of remote presence during teleoperation or use of virtual environment interfaces is analyzed as to what characteristics it should have to qualify it as an explanatory scientific construct. But the implicit goal of designing virtual environment interfaces to maximize presence is itself questioned in a second section, in which examples of human-machine interfaces beneficially designed to avoid a strong sense of egocentric presence are cited. In conclusion, it is argued that the design of a teleoperation or virtual environment system should generally focus on the efficient communication of causal interaction. In this view the sense of presence, that is of actually being at the simulated or remote workplace, is an epiphenomenon of secondary importance for design.
---
paper_title: Using Behavioral Realism to Estimate Presence: A Study of the Utility of Postural Responses to Motion Stimuli
paper_content:
We recently reported that direct subjective ratings of the sense of presence are potentially unstable and can be biased by previous judgments of the same stimuli (Freeman et al., 1999). Objective measures of the behavioral realism elicited by a display offer an alternative to subjective ratings. Behavioral measures and presence are linked by the premise that, when observers experience a mediated environment (VE or broadcast) that makes them feel present, they will respond to stimuli within the environment as they would to stimuli in the real world. The experiment presented here measured postural responses to a video sequence filmed from the hood of a car traversing a rally track, using stereoscopic and monoscopic presentation. Results demonstrated a positive effect of stereoscopic presentation on the magnitude of postural responses elicited. Posttest subjective ratings of presence, vection, and involvement were also higher for stereoscopically presented stimuli. The postural and subjective measures were not significantly correlated, indicating that nonproprioceptive postural responses are unlikely to provide accurate estimates of presence. Such postural responses may prove useful for the evaluation of displays for specific applications and in the corroboration of group subjective ratings of presence, but cannot be taken in place of subjective ratings.
---
paper_title: Measuring the Sense of Presence and its Relations to Fear of Heights in Virtual Environments
paper_content:
This article describes a study in which a genuine effect of presence--the development of fear of virtual stimuli--was provoked. Using a self-report questionnaire, the sense of presence within this situation was measured. It was shown that fear increased with higher presence. The method, which involved 37 participants, was tested and validated with user tests at the Bauhaus University. A growing body of research in human-computer interface design for virtual environments (VE) concentrates on the problem of how to involve the user in the VE. This effect, usually called immersion or the sense of presence, has been the subject of much research activity. This research focuses on the influence of technical and technological parameters on the sense of presence. However, little work has been done on the effects of experienced sense of presence. One field in which a sense of presence is necessary for the successful application of VEs is the treatment of acrophobic patients. Our goals are to (a) create a theory-bas...
---
paper_title: Virtual reality therapy: an effective treatment for phobias.
paper_content:
Behavioral therapy techniques for treating phobias often include graded exposure of the patient to anxiety-producing stimuli (systematic desensitization). However, in utilizing systematic desensitization, research reviews demonstrate that many patients appear to have difficulty in applying imaginative techniques. This chapter describes Virtual Reality Therapy (VRT), a new therapeutic approach that can be used to overcome some of the difficulties inherent in the traditional treatment of phobias. VRT, like current imaginal and in vivo modalities, can generate stimuli that could be utilized in desensitization therapy. Like systematic desensitization therapy, VRT can provide stimuli for patients who have difficulty in imagining scenes and/or are too phobic to experience real situations. As far as we know, the idea of using virtual reality technology to combat psychological disorders was first conceived within the Human-Computer Interaction Group at Clark Atlanta University in November 1992. Since then, we have successfully conducted the first known pilot experiments in the use of virtual reality technologies in the treatment of specific phobias: fear of flying, fear of heights, fear of being in certain situations (such as a dark barn, an enclosed bridge over a river, and in the presence of an animal [a black cat] in a dark room), and fear of public speaking. The results of these experiments are described.
---
paper_title: The virtual treadmill: a naturalistic metaphor for navigation in immersive virtual environments
paper_content:
This paper describes a metaphor that allows people to move around an immersive virtual environment by “walking in place”. Positional data of participants’ head movements are obtained from a tracking sensor on a head-mounted display during a training session, where they alternate between walking in place and a range of other activities. The data is fed to a neural net pattern recogniser that learns to recognise the person’s walking in place behaviour. This is used in a virtual reality system to allow people to move through the virtual environment by simulating the kinds of kinesthetic actions and sensory perceptions involved in walking. An experiment was carried out to compare this method of navigation with the familiar alternative that involves using a hand-held pointing device, such as a 3D mouse. The experiment suggests that the walking in place method may enhance the participant’s sense of presence, but that it is not advantageous with respect to the efficiency of navigation.
---
paper_title: Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments
paper_content:
322 subjects participated in an experimental study to investigate the effects of tactile, olfactory, audio and visual sensory cues on a participant's sense of presence in a virtual environment and on their memory for the environment and the objects in that environment. Results strongly indicate that increasing the modalities of sensory input in a virtual environment can increase both the sense of presence and memory for objects in the environment. In particular, the addition of tactile, olfactory and auditory cues to a virtual environment increased the user's sense of presence and memory of the environment. Surprisingly, increasing the level of visual detail did not result in an increase in the user's sense of presence or memory of the environment.
---
paper_title: Presence in Text-Based Networked Virtual Environments or MUDS
paper_content:
A text-based networked virtual environment represents to a user a system of rooms joined by exits and entrances. When navigating this system of rooms, a user can communicate in real time with other connected users occupying the same room. Hence, these virtual environments are aptly suited for networked conferencing and teaching. Anecdotal information suggested that some people feel a sense of “being there” or presence when connected to one of these environments. To determine how many people feel this sense of presence, we surveyed 207 people from 6 different groups of users of text-based networked virtual environments. The results indicated that 69% of these subjects felt a sense of presence. Experiments with people in text-based networked virtual environments may be helpful in understanding the contribution to presence by social interaction in other virtual environments.
---
paper_title: The Influence of Dynamic Shadows on Presence in Immersive Virtual Environments
paper_content:
This paper describes an experiment where the effect of dynamic shadows in an immersive virtual environment is measured with respect to spatial perception and presence. Eight subjects were given tasks to do in a virtual environment. Each subject carried out five experimental trials, and the extent of dynamic shadow phenomena varied between the trials. Two measurements of presence were used — a subjective one based on a questionnaire, and a more objective behavioural measure. The experiment was inconclusive with respect to the effect of shadows on depth perception. However, the experiment suggests that for visually dominant subjects, the greater the extent of shadow phenomena in the virtual environment, the greater the sense of presence.
---
paper_title: Measuring Presence: A Response to the Witmer and Singer Presence Questionnaire
paper_content:
Witmer and Singer recently published a questionnaire for eliciting presence in virtual environments together with a questionnaire for measuring a person’s immersive tendencies (Witmer & Singer, 1998). The authors mentioned that they did not agree with my notion of immersion: ‘Though the VE equipment is instrumental in enabling immersion, we do not agree with Slater’s view that immersion is an objective description of the VE technology’. On first reading I was happy to take this as simply a difference of terminology which is what it is. I had defined the term immersion to mean the extent to which the actual system delivers a surrounding environment, one which shuts out sensations from the ‘real world’, which accommodates many sensory modalities, has rich representational capability, and so on (described, for example, in Slater & Wilbur, 1997). These are obviously measurable aspects of a VE system. For example, given two VE systems, and other things being equal, if one allows the participant to turn their head in any direction at all and still receive visual information only from within the VE then this is called (in my definition) a more ‘immersive’ system than one where the participant can only see VE visual signals along one fixed direction. Given two systems, if one has a larger field of view than the other, then the first is (in my definition) more immersive than the second. As a last example, if one generates shadows in realtime and the other does not, then again, the first is called (by me) more immersive. These are examples of what I mean by more or less ‘immersion’. Clearly for all of these types of things metrics can be established which are descriptions of the system, and not descriptions of people’s responses to the system. Witmer and Singer, however, define immersion as the person’s response to the VE system. This difference in terminology is unfortunate, but not a matter of any great concern. In order to
---
paper_title: Presence within Virtual Environments as a Function of Visual Display Parameters
paper_content:
This paper reports the results of three studies, each of which investigated the sense of presence within virtual environments as a function of visual display parameters. These factors included the presence or absence of head tracking, the presence or absence of stereoscopic cues, and the geometric field of view used to create the visual image projected on the visual display. In each study, subjects navigated a virtual environment and completed a questionnaire designed to ascertain the level of presence experienced by the participant within the virtual world. Specifically, two aspects of presence were evaluated: (1) the sense of "being there" and (2) the fidelity of the interaction between the virtual environment participant and the virtual world. Not surprisingly, the results of the first and second study indicated that the reported level of presence was significantly higher when head tracking and stereoscopic cues were provided. The results from the third study showed that the geometric field of view used to design the visual display highly influenced the reported level of presence, with more presence associated with a 50° and 90° geometric field of view when compared to a narrower 10° geometric field of view. The results also indicated a significant positive correlation between the reported level of presence and the fidelity of the interaction between the virtual environment participant and the virtual world. Finally, it was shown that the survey questions evaluating several aspects of presence produced reliable responses across questions and studies, indicating that the questionnaire is a useful tool when evaluating presence in virtual environments.
---
paper_title: Using Presence Questionnaires in Reality
paper_content:
A between-group experiment was carried out to assess whether two different presence questionnaires can distinguish between real and virtual experiences. One group of ten subjects searched for a box in a real office environment. A second group of ten subjects carried out the same task in a virtual environment that simulated the same office. Immediately after their experience, subjects were given two different presence questionnaires in randomized order: the Witmer and Singer Presence (WS), and the questionnaire developed by Slater, Usoh, and Steed (SUS). The paper argues that questionnaires should be able to pass a “reality test” whereby under current conditions the presence scores should be higher for real experiences than for virtual ones. Nevertheless, only the SUS had a marginally higher mean score for the real compared to the virtual, and there was no significant difference at all between the WS mean scores. It is concluded that, although such questionnaires may be useful when all subjects experience the same type of environment, their utility is doubtful for the comparison of experiences across environments, such as immersive virtual compared to real, or desktop compared to immersive virtual.
---
paper_title: Development of a New Cross-Media Presence Questionnaire: The ITC-Sense of Presence Inventory
paper_content:
Summary • Previous studies have attempted to measure presence using simple post-test rating scales (e.g., Slater, Usoh & Steed, 1994; Barfield & Hendrix, 1996). • The stability of simple post-test ratings has been questioned (Freeman, Avons, Pearson & IJsselsteijn, 1999). • More detailed, carefully piloted and psychometrically-sound questionnaires offer a solution to potential instabilities in simple post-test ratings. This approach has been adopted by Schubert, Friedmann and Regenbrecht (1999), Witmer and Singer (1998), and Kim and Biocca (1997). • These attempts have limitations, such as restricted media applications. • We present research documenting the development of a new cross-media presence questionnaire - the ITC-Sense of Presence Inventory (ITC-SOPI). • Preliminary results indicate four components: Physical Space, Engagement, Naturalness and Negative Effects. 1 Introduction Until recently, the subjective state of presence has been measured using between one and three simple post-test rating scales that require judgements comparing either: (i) the mediated experience to real life (Hendrix & Barfield, 1996; Slater & Usoh, 1994; Slater, Usoh & Steed, 1994), or (ii) one mediated environment to another (e.g., Welch, Blackmon, Liu, Mellers & Stark, 1996). Typically, the rating scales have been comprised of statements relating to the extent to which an individual: (i) feels physically located in a given mediated space, (ii) senses that a mediated environment "becomes more real, or present, compared to the real [
---
paper_title: Effects of Sensory Information and Prior Experience on Direct Subjective Ratings of Presence
paper_content:
We report three experiments using a new form of direct subjective presence evaluation that was developed from the method of continuous assessment used to assess television picture quality. Observers were required to provide a continuous rating of their sense of presence using a handheld slider. The first experiment investigated the effects of manipulating stereoscopic and motion parallax cues within video sequences presented on a 20 in. stereoscopic CRT display. The results showed that the presentation of both stereoscopic and motion parallax cues was associated with higher presence ratings. One possible interpretation of Experiment 1 is that CRT displays that contain the spatial cues of stereoscopic disparity and motion parallax are more interesting or engaging. To test this, observers in Experiment 2 rated the same stimuli first for interest and then for presence. The results showed that variations in interest did not predict the presence ratings obtained in Experiment 1. However, the subsequent ratings of presence differed significantly from those obtained in Experiment 1, suggesting that prior experience with interest ratings affected subsequent judgments of presence. To test this, Experiment 3 investigated the effects of prior experience on presence ratings. Three groups of observers rated a training sequence for interest, presence, and 3-Dness before rating the same stimuli as used for Experiments 1 and 2 for presence. The results demonstrated that prior ratings sensitize observers to different features of a display resulting in different presence ratings. The implications of these results for presence evaluation are discussed, and a combination of more-refined subjective measures and a battery of objective measures is recommended.
---
paper_title: The Effects of Pictorial Realism, Delay of Visual Feedback, and Observer Interactivity on the Subjective Sense of Presence
paper_content:
Two experiments examined the effects of pictorial realism, observer interactivity, and delay of visual feedback on the sense of "presence." Subjects were presented pairs of virtual environments (a simulated driving task) that differed in one or more ways from each other. After subjects had completed the second member of each pair they reported which of the two had produced the greater amount of presence and indicated the size of this difference by means of a 1-100 scale. As predicted, realism and interactivity increased presence while delay of visual feedback diminished it. According to subjects' verbal responses to a postexperiment interview, pictorial realism was the least influential of the three variables examined. Further, although some subjects reported an increase in the sense of presence over the course of the experiment, most said that it had remained unchanged or become weaker.
---
paper_title: Further musings on the psychophysics of presence
paper_content:
This is an extension of the author's earlier paper (1992) which considered alternative meanings and significance of "presence", the experience of "being there", commonly called "telepresence" in the case of remote control or teleoperation, and called "virtual presence" in the case of computer-generated simulation. In both cases presence can include feedback to the human senses of vision, hearing and haptics, both kinesthetic and cutaneous. Presence is discussed here in terms of alternative subjective meanings, operational measurements, and meaningful experimental comparisons. Three practical approaches to measurement of presence are discussed, including elicitation of "natural" neuromuscular or vocal responses, single or multidimensional subjective scaling, and ability to discriminate the real and immediate environment from that which is recorded, transmitted or synthesized, under varying levels of constraint. The author also opines on the stimulus magnitude, space and time attributes of human interactions with a tele- or virtual environment.
---
paper_title: Foreground/Background Manipulations Affect Presence:
paper_content:
A possible relation between vection and presence is discussed. Two experiments examined the hypothesis that “presence” is enhanced by manipulations which facilitate interpreting visual scenes as “background.” A total of 39 participants in two experiments engaged in a pursuit game while in a virtual visual environment generated by an HMD and rated their experience of “presence” on 5 questions. Experiment 1 compared two viewing conditions: visual scene masking at the eye and a paper mask mounted on the screen with the same 60° FOV, and showed that presence was enhanced by eye masking relative to screen masking. Experiment 2 replicated these findings with a double-blind experimental design.
---
paper_title: Presence in virtual environments as a function of type of input device and display update rate
paper_content:
Presence in virtual environments can be defined as the participant's feeling or sense of "being there" in the virtual environment. Two factors which may influence the level of presence experienced by a participant within a virtual environment are the display update rate and the type of input device used for navigating within the virtual environment. This paper presents the results of a study examining the relationship between two types of input device and three display update rates on the user's sense of presence within a virtual environment. In the experiment, eight subjects used either a joystick or a SpaceBall to navigate through a virtual representation of Stonehenge at update rates of 10, 15, and 20 Hz. The task was to search for an object hidden within the virtual environment. It was found that although the type of input device had no effect on the user's sense of presence, an update rate of at least 15 Hz was the critical value for the user feeling present in the virtual environment. Implications of the results for the design of virtual environments and for creating a sense of presence within virtual environments are discussed.
---
paper_title: The Sense of Presence within Auditory Virtual Environments
paper_content:
Two studies were performed to investigate the sense of presence within stereoscopic virtual environments as a function of the addition or absence of auditory cues. The first study examined the presence or absence of spatialized sound, while the second study compared the use of nonspatialized sound to spatialized sound. Sixteen subjects were allowed to navigate freely throughout several virtual environments and for each virtual environment, their level of presence, the virtual world realism, and interactivity between the participant and virtual environment were evaluated using survey questions. The results indicated that the addition of spatialized sound significantly increased the sense of presence but not the realism of the virtual environment. Despite this outcome, the addition of a spatialized sound source significantly increased the realism with which the subjects interacted with the sound source, and significantly increased the sense that sounds emanated from specific locations within the virtual environment. The results suggest that, in the context of a navigation task, while presence in virtual environments can be improved by the addition of auditory cues, the perceived realism of a virtual environment may be influenced more by changes in the visual rather than auditory display media. Implications of these results for presence within auditory virtual environments are discussed.
---
paper_title: Virtual Chess: Meaning Enhances Users' Sense of Presence in Virtual Environments
paper_content:
Presence refers to the sensation of going into a computer-simulated environment. We investigated whether presence and memory accuracy are affected by the meaningfulness of the information encountered in the virtual environment (VE). Non-chess players and three levels of chess players studied meaningful and meaningless chess positions in VEs. They rated the level of presence experienced in each and took an old-new recognition memory test. Non-chess players reported no difference in presence for meaningful compared with meaningless positions, yet even weak chess players reported feeling more present with meaningful compared with meaningless positions. Thus, only modest levels of expertise were needed to enhance presence. In contrast, tournament-level chess-playing ability was required before meaningful chess positions were remembered significantly more accurately than meaningless chess positions. Tournament players' memory accuracy was very high for meaningful positions but was the same as non-chess players...
---
paper_title: Representations Systems, Perceptual Position, and Presence in Immersive Virtual Environments
paper_content:
This paper discusses factors that may contribute to the participant's sense of presence in immersive virtual environments. We distinguish between external factors, that is those wholly determined by the hardware and software technology employed to generate the environment, and subjective factors, that is how sensory inputs to the human participant are processed internally. The therapeutic technique known as neurolinguistic programming (NLP) is used as a basis for measuring such internal factors. NLP uses the idea of representation systems (visual, auditory, and kinesthetic) and perceptual position (egocentric or exocentric) to code subjective experience. The paper also considers one external factor, that is how the virtual environment represents a participant: either as a complete body, or just an arrow cursor that responds to hand movements. A case-control pilot experiment is described, where the controls have self-representation as an arrow cursor, and the experimental group subjects as a simple virtual body. Measurements of subjects' preferred representation systems and perceptual positions are obtained based on counts of types of predicates and references used in essays written after the experiment. These, together with the control variable possession/absence of a virtual body, are used as explanatory variables in a regression analysis, with reported sense of presence as the dependent variable. Although tentative and exploratory in nature, the data analysis does suggest a relationship between reported sense of presence, preferred representation system, perceptual position, and an interaction effect between these and the virtual body factor.
---
| Title: Research on Presence in Virtual Reality: A Survey
Section 1: INTRODUCTION
Description 1: Introduce the significance of virtual reality (VR) in psychiatry and therapy, and explain the paper's main goal to investigate the concept of presence.
Section 2: ON THE NATURE OF PRESENCE
Description 2: Describe the various definitions and theories of presence in immersive VR as proposed by different researchers.
Section 3: Definitions
Description 3: Provide an overview of the different definitions of presence, including terms like social richness, realism, transportation, immersion, and others as explained by Lombard and Ditton, Sheridan, Heeter, Slater and Wilbur, and Zeltzer.
Section 4: Theories
Description 4: Detail the various theories on the nature of presence such as presence as non-mediation, exclusive presence, presence by involvement, ecological view, social/cultural view, estimation theory, and embodied presence.
Section 5: RESULTS OF PRESENCE
Description 5: Review the theories and empirical studies on the consequences and usefulness of presence, including subjective sensation, task performance, responses and emotions, and simulator sickness.
Section 6: MEASURING PRESENCE
Description 6: Discuss the different methods for measuring presence, distinguishing between subjective and objective measures, and provide examples of questionnaires and other subjective and objective methods.
Section 7: Subjective measures: Questionnaires
Description 7: Describe the most commonly used subjective measures through questionnaires, including those developed by Slater and colleagues, Witmer and Singer, Igroup Presence Questionnaire (IPQ), Kim and Biocca, ITC Sense of Presence Inventory (ITC-SOPI), and Lombard and Ditton.
Section 8: Other subjective measures
Description 8: Present other subjective measurement techniques such as continuous measure, presence counter, and focus group exploration.
Section 9: Objective measures: Behavioral
Description 9: Explain the objective behavioral measures of presence by examining people's reactions to mediated stimuli.
Section 10: Objective measures: Physiological
Description 10: Elaborate on physiological measures of presence, including heart rate, skin temperature, and skin conductance.
Section 11: CAUSES OF PRESENCE
Description 11: Summarize research on factors contributing to presence, categorized into system characteristics, interaction elements, and user characteristics.
Section 12: Vividness
Description 12: Provide an overview of empirical studies examining the relationship between VE vividness factors (such as FOV, stereoscopy, sound, tactile cues, etc.) and presence.
Section 13: Interactivity
Description 13: Examine the role of interactivity, including control factors, body movement, and interaction between users on experienced presence in VR.
Section 14: User characteristics
Description 14: Discuss individual differences in experiencing presence, including preferences for sensory modalities, age, and other psychological perceptual systems.
Section 15: DISCUSSION
Description 15: Highlight the gaps in current research on presence in VR, especially its relationship to emotional responses and task performance in therapeutic applications. Suggest areas for future research. |
Laser Induced Breakdown Spectroscopy for Elemental Analysis in Environmental, Cultural Heritage and Space Applications: A Review of Methods and Results | 12 | ---
paper_title: Laser induced breakdown spectroscopy of soils, rocks and ice at subzero temperatures in simulated martian conditions
paper_content:
Abstract We applied Laser Induced Breakdown Spectroscopy (LIBS) on moist soil/rock samples in simulated Martian conditions. The signal behavior as a function of the surface temperature in the range from + 25 °C to − 60 °C was studied at a pressure of 7 mbar. We observed strong signal oscillations below 0 °C with different negative peaks, whose position, width and magnitude depend on the surface roughness. In some cases, the signal was reduced by one order of magnitude, with consequences for the LIBS analytical capability. We attribute such signal behavior to the presence of supercooled water inside the surface pores, whose freezing point depends on the pore size. On the same rock sample with different grades of surface polishing, the signal has a different temperature dependence. Its decrease was always registered close to 0 °C, corresponding to the freezing/melting of normal disordered ice, which can be present inside larger pores and scratches. The amount of signal reduction at the phase transition temperatures does not seem to change with the laser energy density in the examined range. Comparative measurements were performed on a frozen water solution. A large depression of the LIBS intensity, by two orders of magnitude, was observed close to − 50 °C. The same negative peak, but with a smaller magnitude, was also registered on some rock/soil samples. Ablation rates and plasma parameters as a function of the sample temperature are also discussed, as are their consequences for in-situ analyses.
---
paper_title: Study of sub-mJ-excited laser-induced plasma combined with Raman spectroscopy under Mars atmosphere-simulated conditions
paper_content:
Laser-Induced Breakdown Spectroscopy (LIBS) and Raman spectroscopy are complementary techniques. LIBS yields elemental information while Raman spectroscopy yields molecular information about a sample, and both share similar instrumentation configurations. The combination of LIBS and Raman spectroscopy in a single instrument for planetary surface exploration has been proposed, however challenges exist for developing a combined instrument. We present LIBS and Raman spectroscopy results obtained using a diode pumped, intracavity doubled, Q-switched, Nd:YLF laser operating at 523 nm, which overcomes some of the difficulties associated with a combined instrument. LIBS spectra were obtained with 170 μJ per pulse at a 4 Hz repetition rate in a low pressure Mars-simulated atmosphere and Raman spectra were produced with 200 mW at 100 kHz. The Nd:YLF laser is switchable between LIBS and Raman spectroscopy modes only by a change in Q-switch repetition rate. Emissions from Ca, Ca II, Fe, Fe II, Mg, Na, and atomic O were identified in the μ-LIBS spectrum of oolitic hematite. Evidence was found for a change in plasma dynamics between 7 and 5 Torr that could be explained as a decrease in plasma temperature and electron density below 5 Torr. This is relevant to future Mars exploration using LIBS, as the mean surface pressure on Mars varies from 3.75 to 6 Torr. LIBS plasma dynamics should be carefully evaluated at the pressures that will be encountered at the specific Mars landing site.
---
paper_title: Laser-induced breakdown spectroscopy for space exploration applications: Influence of the ambient pressure on the calibration curves prepared from soil and clay samples
paper_content:
Abstract Recently, there has been an increasing interest in the laser-induced breakdown spectroscopy (LIBS) technique for stand-off detection of geological samples for use on landers and rovers to Mars, and for other space applications. For space missions, LIBS analysis capabilities must be investigated and instrumental development is required to take into account constraints such as size, weight, power and the effect of environmental atmosphere (pressure and ambient gas) on flight instrument performance. In this paper, we study the in-situ LIBS method at reduced pressure (7 Torr CO2 to simulate the Martian atmosphere) and near vacuum (50 mTorr in air to begin to simulate the pressure on the Moon or asteroids) as well as at atmospheric pressure in air (for Earth conditions and comparison). Here in-situ corresponds to distances on the order of 150 mm, in contrast to stand-off analysis at distances of many meters. We show the influence of the ambient pressure on the calibration curves prepared from certified soil and clay pellets. In order to detect simultaneously all the elements commonly observed in terrestrial soils, we used an Echelle spectrograph. The results are discussed in terms of calibration curves, measurement precision, plasma light collection system efficiency and matrix effects.
---
paper_title: Laser induced breakdown spectroscopy on soils and rocks: Influence of the sample temperature, moisture and roughness
paper_content:
Abstract ExoMars, ESA's next mission to Mars, will include a combined Raman/LIBS instrument for the comprehensive in-situ mineralogical and elemental analyses of Martian rocks and soils. It is inferred that water exists in the upper Martian surface as ice layers, “crystal” water or adsorbed pore water. Thus, we studied Laser Induced Breakdown Spectroscopy (LIBS) on wet and dry rocks under Martian environmental conditions in the temperature range − 60 °C to + 20 °C and in two pressure regimes, above and below the water triple point. Above this point, the LIBS signals from the rock forming elements have local minima that are accompanied by hydrogen (water) emission maxima at certain temperatures that we associate with phase transitions of free or confined water/ice. At these sample temperatures, the plasma electron density and its temperature are slightly lowered. In contrast to powder samples, a general increase of the electron density upon cooling was observed on rock samples. By comparing the LIBS signal behavior from the same rock with different grades of polishing, and different rocks with the same surface treatment, it was possible to distinguish between the influence of surface roughness and the bulk material structure (pores and grains). Below the triple point of water, the LIBS signal from the major sample elements is almost independent of the sample temperature. However, at both considered pressures we observed a hydrogen emission peak close to − 50 °C, which is attributed to a phase transition of supercooled water trapped inside bulk pores.
---
paper_title: Detection of carbon content in a high-temperature and high-pressure environment using laser-induced breakdown spectroscopy
paper_content:
Abstract A laser-induced breakdown spectroscopy (LIBS) technique has been applied to detect the carbon content in fly ash, char and pulverized coal under high-pressure and high-temperature conditions. An automated LIBS unit has been developed and applied in this experiment to demonstrate its capability in actual power plant monitoring. Gas composition effects were examined to obtain the best operating parameters under actual plant conditions. The results were compared to those obtained using the conventional method, showing satisfactory agreement. LIBS can detect carbon content even under the high-pressure conditions typical of gasification thermal power plants. LIBS is capable of a detection time of 1 min, as compared to over 30 min of sampling and analysis time required by the conventional methods (JIS-M8814 and JIS-M8815), and offers various merits as a tool for actual power-plant monitoring.
---
paper_title: Rapid in-situ analysis of liquid steel by laser-induced breakdown spectroscopy☆
paper_content:
Abstract Laser-induced breakdown spectroscopy (LIBS) denotes a technique where a pulsed laser beam is used to ablate small amounts of the target material. The characteristic optical emission line intensities of the excited species in the laser-generated plasma allow a quantitative chemical analysis of the target material. LIBS is a fast, non-contact method allowing large working distances between the sample under investigation and the detection system. These properties make LIBS applicable to process control in metallurgy. We describe an apparatus designed for rapid in-situ analysis of solid and molten metals at variable distances of up to 1.5 m. A variable lens system allows compensation for varying positions of the liquid steel surface. The LIBS signal is guided by a fiber optic bundle of 12-m length to the spectrometer. Analysis of an element's concentration takes 7 s. Laboratory experiments using an induction furnace showed that the addition of admixtures to liquid steel results in rapid response of the system. Results including the in-situ monitoring of Cr, Cu, Mn and Ni within certain concentration ranges are presented (Cr: 0.11–13.8 wt.%; Cu: 0.044–0.54 wt.%; Mn: 1.38–2.5 wt.%; Ni: 0.049–5.92 wt.%).
---
paper_title: Real time and in situ determination of lead in road sediments using a man-portable laser-induced breakdown spectroscopy analyzer
paper_content:
Abstract In situ, real time levels of lead in road sediments have been measured using a man-portable laser-induced breakdown spectroscopy analyzer. The instrument consists of a backpack and a probe housing a Q-switched Nd:YAG laser head delivering 50 mJ per pulse at 1064 nm. Plasma emission was collected and transmitted via fiber optic to a compact cross Czerny-Turner spectrometer equipped with a linear CCD array allocated in the backpack together with a personal computer. The limit of detection (LOD) for lead and the precision measured in the laboratory were 190 μg g −1 (calculated by the 3σ method) and 9% R.S.D. (relative standard deviation), respectively. During the field campaign, averaged Pb concentration in the sediments were ranging from 480 μg g −1 to 660 μg g −1 depending on the inspected area, i.e. the entrance, the central part and the exit of the tunnel. These results were compared with those obtained with flame-atomic absorption spectrometry (flame-AAS). The relative error, expressed as [100(LIBS result − flame AAS result)/(LIBS result)], was approximately 14%.
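The 3σ criterion mentioned above is the standard convention for reporting LIBS limits of detection; as a reminder of the underlying relation (standard practice, not a formula reproduced from this paper), the LOD follows from the blank noise and the slope of the calibration curve:

```latex
% 3-sigma limit of detection from a linear calibration I = s\,C + I_0
\mathrm{LOD} = \frac{3\,\sigma_{B}}{s}
% \sigma_B : standard deviation of the background (blank) signal
% s        : slope of the calibration curve (signal per unit concentration)
```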
---
paper_title: Double-pulse LIBS in bulk water and on submerged bronze samples
paper_content:
In this work laser-induced breakdown spectroscopy (LIBS) has been applied in bulk water using a double-pulse laser source. As in the case of former experiments in air, the use of the double-pulse technique allows for enhancing line emission intensity and reducing the duration of the continuum spectrum, thus increasing the overall analytical performances of the technique. Tap water analysis of Na and Mg dissolved cations has been performed to investigate the capability of the technique, but the most significant results have been obtained in determining the composition of submerged bronze targets by laser ablation of their surface in seawater. When the plasma is generated by a double-pulse laser, the ablated matter is strongly confined by the water vapor inside the cavitation bubble. The confinement of the plasma leads to higher values of excitation temperature and holds the conditions suitable for chemical analysis (homogeneity and LTE) longer than in gaseous media. The double-pulse experiments performed directly in bulk water point out the suitability of the LIBS technique for real in situ analytical applications, such as water quality assessment and the investigation of irremovable submerged objects.
---
paper_title: On-line analysis of ambient air aerosols using laser-induced breakdown spectroscopy
paper_content:
Abstract Laser-induced breakdown spectroscopy is developed for the detection of aerosols in ambient air, including quantitative mass concentration measurements and size/composition measurements of individual aerosol particles. Data are reported for ambient air aerosols containing aluminum, calcium, magnesium and sodium for a 6-week sampling period spanning the Fourth of July holiday period. Measured mass concentrations for these four elements ranged from 1.7 parts per trillion (by mass) to 1.7 parts per billion. Ambient air concentrations of magnesium and aluminum revealed significant increases during the holiday period, which are concluded to arise from the discharge of fireworks in the lower atmosphere. Real-time conditional data analysis yielded increases in analyte spectral intensity approaching 3 orders of magnitude. Analysis of single particles yielded composition-based aerosol size distributions, with measured aerosol diameters ranging from 100 nm to 2 μm. The absolute mass detection limits for single particle analysis reached sub-femtogram values for calcium-containing particles, and were on the order of 2–3 femtograms for magnesium and sodium-based particles. Overall, LIBS-based analysis of ambient air aerosols is a promising technique for the challenging issues associated with the real-time collection and analysis of ambient air particulate matter data.
---
paper_title: Nanosecond and femtosecond Laser Induced Breakdown Spectroscopic analysis of bronze alloys
paper_content:
Abstract In the present work we are studying the influence of pulse duration (nanosecond (ns) and femtosecond (fs)) at λ = 248 nm on the laser-induced plasma parameters and the quantitative analysis results for elements such as Sn, Zn and Pb, in different types of bronze alloys adopting LIBS in ambient atmosphere. Binary (Sn–Cu), ternary (Sn–Zn–Cu or Sn–Pb–Cu) and quaternary (Sn–Zn–Pb–Cu) reference alloys characterized by a chemical composition and metallurgical features similar to those used in Roman times, were employed in the study. Calibration curves, featuring linear regression coefficients over 98%, were obtained for tin, lead and zinc, the minor elements in the bronze alloys (using the internal standardization method) as well as for copper, the major element. The effects of laser pulse duration and energy on laser-induced plasma parameters, namely the excitation temperature and the electron density have been studied in our effort to optimize the analysis. Finally, LIBS analysis was carried on three real metal objects and the spectra obtained have been used to estimate the type and elemental composition of the alloys based on the calibration curves produced with the reference alloys. The results obtained are very useful in the future use of portable LIBS systems for in situ qualitative and quantitative elemental analysis of bronze artifacts in museums and archaeological sites.
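To illustrate the internal-standardization step described above, a minimal sketch follows: the Sn/Cu line-intensity ratio is fitted linearly against the certified Sn/Cu concentration ratio of the reference alloys, and the fit is inverted for an unknown. All numerical values, array contents and the helper function name are hypothetical placeholders, not data or code from the paper.

```python
# Minimal sketch of a LIBS calibration curve with internal standardization
# (analyte line intensity normalized to a line of the major element, Cu).
import numpy as np

# Certified Sn/Cu concentration ratios of hypothetical reference bronzes
conc_ratio = np.array([0.02, 0.05, 0.10, 0.15, 0.20])
# Measured Sn(I)/Cu(I) line intensity ratios for the same standards (placeholders)
int_ratio = np.array([0.018, 0.047, 0.095, 0.148, 0.205])

# Least-squares linear fit: intensity ratio = slope * concentration ratio + intercept
slope, intercept = np.polyfit(conc_ratio, int_ratio, 1)

def predict_conc_ratio(measured_int_ratio: float) -> float:
    """Invert the calibration to estimate the Sn/Cu concentration ratio."""
    return (measured_int_ratio - intercept) / slope

# Example: an unknown bronze giving an Sn/Cu intensity ratio of 0.12
print(f"Estimated Sn/Cu concentration ratio: {predict_conc_ratio(0.12):.3f}")
```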
---
paper_title: Laser-induced breakdown spectroscopy for detection of explosives residues: a review of recent advances, challenges, and future prospects
paper_content:
In this review we discuss the application of laser-induced breakdown spectroscopy (LIBS) to the problem of detection of residues of explosives. Research in this area presented in open literature is reviewed. Both laboratory and field-tested standoff LIBS instruments have been used to detect explosive materials. Recent advances in instrumentation and data analysis techniques are discussed, including the use of double-pulse LIBS to reduce air entrainment in the analytical plasma and the application of advanced chemometric techniques such as partial least-squares discriminant analysis to discriminate between residues of explosives and non-explosives on various surfaces. A number of challenges associated with detection of explosives residues using LIBS have been identified, along with their possible solutions. Several groups have investigated methods for improving the sensitivity and selectivity of LIBS for detection of explosives, including the use of femtosecond-pulse lasers, supplemental enhancement of the laser-induced plasma emission, and complementary orthogonal techniques. Despite the associated challenges, researchers have demonstrated the tremendous potential of LIBS for real-time detection of explosives residues at standoff distances.
---
paper_title: Development of a mobile system based on laser-induced breakdown spectroscopy and dedicated to in situ analysis of polluted soils ☆
paper_content:
Principal Components Analysis (PCA) is successfully applied to the full laser-induced breakdown spectroscopy (LIBS) spectra of soil samples, defining classes according to the concentrations of the major elements. The large variability of the LIBS data is related to the heterogeneity of the samples, and the representativeness of the data is finally discussed. Then, the development of a mobile LIBS system dedicated to the in-situ analysis of soils polluted by heavy metals is described. Based on the use of ten-meter long optical fibers, the mobile system allows remote measurements. Finally, a laser-assisted drying process, studied by the use of a customized laser, was not retained as a way to overcome the problem of moisture.
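The paper does not spell out its PCA implementation; the sketch below only illustrates the generic workflow of projecting full LIBS spectra of soils onto a few principal components so that sample classes can be identified. The spectral matrix is randomly generated and the normalization choice is an assumption, not the authors' procedure.

```python
# Illustrative PCA of full LIBS spectra (rows = spectra, columns = spectrometer pixels).
# Synthetic data stands in for real soil spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_spectra, n_pixels = 200, 2048
spectra = rng.random((n_spectra, n_pixels))      # placeholder spectra
spectra /= spectra.sum(axis=1, keepdims=True)    # total-intensity normalization (assumed)

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(spectra))
# 'scores' (n_spectra x 3) can then be plotted or clustered to define soil classes
print(scores.shape)
```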
---
paper_title: Laser-Induced Breakdown Spectroscopy in open-path configuration for the analysis of distant objects
paper_content:
A review of recent results on stand-off Laser-Induced Breakdown Spectroscopy (LIBS) analysis and applications is presented. Stand-off LIBS was suggested for elemental analysis of materials located in environments where any physical access was not possible but optical access could be envisaged. This review only refers to the use of the open-path LIBS configuration in which the laser beam and the returning plasma light are transmitted through the atmosphere. It does not present the results obtained with a transportation of the laser pulses to the target through an optical fiber. Open-path stand-off LIBS has mainly been used with nanosecond laser pulses for solid sample analysis at distances of tens of meters. Liquid samples have also been analyzed at distances of a few meters. The distances achievable depend on many parameters including the laser characteristics (pulse energy and power, beam divergence, spatial profile) and the optical system used to focus the pulses at a distance. A large variety of laser focusing systems have been employed for stand-off analysis comprising refracting or reflecting telescope. Efficient collection of the plasma light is also needed to obtain analytically useful signals. For stand-off LIBS analysis, a lens or a mirror is required to increase the solid angle over which the plasma light can be collected. The light collection device can be either at an angle from the laser beam path or collinear with the optical axis of the system used to focus the laser pulses on the target surface. These different configurations have been used depending on the application such as rapid sorting of metal samples, identification of material in nuclear industry, process control and monitoring in metallurgical industry, applications in future planetary missions, detection of environmental contamination or cleaning of objects of cultural heritage. Recent stand-off analyses of metal samples have been reported using femtosecond laser pulses to extend LIBS capabilities to very long distances. The high-power densities achievable with these laser pulses can also induce self-guided filaments in the atmosphere which produce LIBS excitation of a sample. The first results obtained with remote filament-induced breakdown spectroscopy predict sample analysis at kilometer ranges.
---
paper_title: Characterization of jewellery products by laser-induced breakdown spectroscopy
paper_content:
Abstract The suitability of laser-induced breakdown spectroscopy (LIBS) for the characterization of jewellery products is demonstrated by the development of a method based on the use of an Nd-YAG laser (operating at 532 nm) which induces ablation of the material and the production of a plasma whose emission reaches a 1/8 m spectrograph (connected to a charge-coupled device (CCD)) through an optical fiber. The treatment of the instrumental signal provides enough analytical information, both for identifying and quantifying the major metals present in this type of material. The method proposed has been developed both by multivariate optimization and calibration procedures with application of the appropriate quality criteria. The chemometric analysis of the data and the use of PLS regression for calibration guarantee the ruggedness of the proposed method. The study of the emission spectra allows characterization of the most common noble metals (gold and silver) as well as other metals present in jewellery pieces.
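To make the PLS calibration step concrete, a generic scikit-learn sketch is given below; the spectra, the two reference concentrations and the number of latent variables are synthetic assumptions for illustration, not the optimized settings of the paper.

```python
# Generic PLS calibration of LIBS spectra against reference metal concentrations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((60, 1024))                       # 60 spectra x 1024 pixels (placeholder)
weights = rng.random((1024, 2))
y = X @ weights + 0.01 * rng.random((60, 2))     # synthetic "Au" and "Ag" contents

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```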
---
paper_title: Single Pulse-Laser Induced Breakdown Spectroscopy in aqueous solution
paper_content:
In this paper the flexibility of Laser Induced Breakdown Spectroscopy (LIBS) has been proved for the analysis of water solutions. The plasma is generated directly in the bulk of a water solution by a Q-switched Nd:YAG laser (1064 nm). The emission signal of four different solutions has been studied: AlCl3, NaCl, CaCO3 and LiF. The basic mechanisms influencing the emission signal and the experimental tricks for the optimization of the detection mode have been pointed out.
---
paper_title: From LASER to LIBS, the path of technology development☆
paper_content:
Abstract Laser-induced breakdown spectroscopy has made significant progress towards becoming a commercial, deployed technology. Its historical development will be reviewed, using the transformation of the laser into commercial technology as a parallel.
---
paper_title: Methodologies for laboratory Laser Induced Breakdown Spectroscopy semi-quantitative and quantitative analysis—A review ☆
paper_content:
Abstract Since its early applications, Laser Induced Breakdown Spectroscopy has been recognized as a useful tool for solid state chemical analysis. However, the quantitative accuracy of the technique depends on the complex processes involved in laser induced plasma formation, ablation, atomization, excitation and ion recombination. Problems arising from laser target coupling, matrix effect, fractionation in target vaporization, the local thermodynamic equilibrium assumption and interferences from additional air ionization should be properly addressed in order to obtain reliable quantitative results in the laboratory, to be used as a starting point during field campaigns. As selected case studies carried out within the authors' research team show, a proper selection of laser parameters and, in general, of experimental conditions for laboratory data acquisition is required in order to minimize the mentioned problems both in the case of calibration curve and calibration-free approaches. In particular, the choice of reference samples for measuring calibration curves is of crucial importance in laboratory experiments, in relation both to matrix effect and local thermodynamic equilibrium, to be carried out at comparable conditions in terms of temperature and electron density. A model for the ablation process aimed at the optimization of experimental conditions in some case studies (copper alloys) has been specifically developed in order to account for the target stoichiometry in the plasma. Problems related to the limit of detection for quantitative trace analysis have been considered in analyzing data collected both inside and outside the local thermodynamic equilibrium window, in cases characterized by a fixed contamination threshold.
---
paper_title: Determination of heavy metals in soils by Laser Induced Breakdown Spectroscopy
paper_content:
Laser Induced Breakdown Spectroscopy (LIBS) is a recent analytical technique that is based upon the measurement of emission lines generated by atomic species close to the surface of the sample, thus allowing their chemical identification. In this work, the LIBS technique has been applied to the determination of total contents of heavy metals in a number of reference soil samples. In order to validate the technique, LIBS data were compared with data obtained on the same soil samples by application of conventional Inductively Coupled Plasma (ICP) spectroscopy. The partial agreement obtained between the two sets of data suggested the potential applicability of the LIBS technique to the measurement of heavy metals in soils.
---
paper_title: Accurate quantitative analysis of gold alloys using multi-pulse laser induced breakdown spectroscopy and a correlation-based calibration method
paper_content:
Abstract Multi-pulse laser induced breakdown spectroscopy (LIBS), in combination with the generalized linear correlation calibration method (GLCM), was applied to the quantitative analysis (fineness determination) of quaternary gold alloys. Accuracy and precision on the order of a few thousandths (‰) were achieved. The analytical performance is directly comparable to that of the standard cupellation method (fire assay), but provides results within minutes and is virtually non-destructive, as it consumes only a few micrograms of the sample.
---
paper_title: From single pulse to double pulse ns-Laser Induced Breakdown Spectroscopy under water : Elemental analysis of aqueous solutions and submerged solid samples
paper_content:
Abstract In this paper the developments of Laser Induced Breakdown Spectroscopy (LIBS) underwater have been reviewed to clarify the basic aspects of this technique as well as the main peculiarities of the analytical approach. The strong limitations of Single-Pulse (SP) LIBS are discussed on the basis of plasma emission spectroscopy observations, while the fundamental improvements obtained by means of the Double-Pulse (DP) technique are reported from both the experimental and theoretical point of view in order to give a complete description of DP-LIBS in bulk water and on submerged solid targets. Finally, a detailed description of laser–water interaction and laser-induced bubble evolution is reported to point out the effect of the internal conditions (radius, pressure and temperature) of the bubble induced by the first pulse on the plasma produced by the second pulse. The optimization of the DP-LIBS emission signal and the determination of the lower detection limit, in a set of experiments reported in the current scientific literature, clearly demonstrate the feasibility and the advantages of this technique for underwater applications.
---
paper_title: A compact and portable laser-induced breakdown spectroscopy instrument for single and double pulse applications
paper_content:
Abstract We present LIBS experimental results that demonstrate the use of a newly developed, compact, versatile pulsed laser source in material analysis related to art and archaeological applications in view of research aiming at the development of portable LIBS instrumentation. LIBS qualitative analysis measurements were performed on various samples and objects, and the spectra were recorded in gated and non-gated modes. The latter is important because of advantages arising from size and cost reduction when using simple, compact spectrograph-CCD detection systems over the standard ICCD-based configurations. The new laser source exhibited a very reliable performance in terms of laser pulse repeatability, autonomy and interface. Having the ability to work in double pulse mode it provided versatility in the measurements leading to increased LIBS signal intensities, improved the signal-to-noise ratio and the RSD of the spectra. The first test results are encouraging and demonstrate that this new laser is suitable for integration in compact, portable LIBS sensors with a wide spectrum of materials analysis applications.
---
paper_title: Analysis of heavy metals in soils using laser-induced breakdown spectrometry combined with laser-induced fluorescence
paper_content:
Abstract The investigation of a hyphenated technique combining laser-induced breakdown spectrometry (LIBS) with laser-induced fluorescence (LIF) for the analysis of heavy metals in soils is described. In order to evaluate the applicability of the technique for fast in-situ analytical purposes, measurements were performed at atmospheric pressure. The plasma radiation was detected using a Paschen–Runge spectrometer equipped with photomultipliers for the simultaneous analysis of 22 different elements. The photomultiplier signals were processed by a fast gateable multichannel integrator. Calibration curves were recorded using a set of spiked soil samples. Limits of detection were derived from these curves for As (3.3 μg/g), Cd (6 μg/g), Cr (2.5 μg/g), Cu (3.3 μg/g), Hg (84 μg/g), Ni (6.8 μg/g), Pb (17 μg/g), Tl (48 μg/g) and Zn (98 μg/g) using the LIBS signals. LIBS-LIF measurements were performed for Cd and Tl. The excitation wavelength as well as the detected fluorescence wavelength for Cd was 228.8 nm. Alternatively, Tl was excited at 276.8 nm, where the observed fluorescence wavelength was 351.9 nm. The calibration curves based on the LIF signals showed significantly improved limits of detection of 0.3 and 0.5 μg/g for Cd and Tl, respectively.
---
paper_title: Rank correlation of laser-induced breakdown spectroscopic data for the identification of alloys used in jewelry manufacture
paper_content:
Abstract The aim of the present study was the rapid identification of alloys used in the manufacture of jewelry pieces with the help of a spectral library. The laser-induced breakdown spectra of 32 alloys were stored, with 25 of them chosen as library standards; the remaining seven spectra were used as samples. The composition of the alloys was obtained by flame atomic absorption spectrometry. A rank correlation method was applied for comparison between spectra, providing good correlation coefficients for the alloys studied. The composition of the samples was also predicted by partial least-squares regression to demonstrate the capability of this technique for the rapid analysis of this type of material.
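A minimal sketch of spectral-library matching by rank correlation is shown below, assuming every spectrum is stored as an intensity vector on a common wavelength grid; the library contents, names and noise level are hypothetical, and the paper's exact rank-correlation variant is not reproduced.

```python
# Identify an alloy by ranking Spearman correlations between a sample spectrum
# and stored library spectra (all on the same wavelength grid). Synthetic data only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
library = {f"alloy_{i:02d}": rng.random(4096) for i in range(25)}   # 25 standards
sample = library["alloy_07"] + 0.05 * rng.random(4096)              # noisy "unknown"

matches = sorted(
    ((spearmanr(sample, spec)[0], name) for name, spec in library.items()),
    reverse=True,
)
best_rho, best_name = matches[0]
print(f"Best match: {best_name} (Spearman rho = {best_rho:.3f})")
```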
---
paper_title: Laser-induced breakdown spectroscopy applications in the steel industry: Rapid analysis of segregation and decarburization ☆
paper_content:
Abstract Rapid chemical analysis is increasingly a prerequisite in the steel making industry, either to check that a steel product complies with customers' specifications, or to investigate the presence of defects that may lead to mechanical property failure of the product. Methods conventionally used for assessment, such as the monitoring of decarburization and segregation, performed by chemical etching of a polished surface followed by optical observation, tend to be relatively fast, simple and applicable to large sample areas; however, the information obtained is limited to the spatial extent of the defect. Other techniques, such as electron probe microscopy and scanning electron microscopy — energy dispersive X-ray, can be used for providing detailed chemical composition at the micro-scale, for a better understanding of the mechanisms involved; however, their use is limited to analyzing comparatively very small sample areas (typically a few mm 2 ). The ability to rapidly generate chemical concentration maps at the micro-scale is one of the many positive attributes of laser-induced breakdown spectroscopy (LIBS) that makes it a useful tool for the steel industry as a laboratory or near-the-line analysis facility. Parameters that influence the detailed mapping of large sample areas were determined and optimized. LIBS scanning measurements were performed on samples displaying segregation and decarburization. A 60 × 60 mm 2 area, with a step size of 50 μm, was measured in 35 min on segregation samples, and a 4 × 1 mm 2 area with a step size of 20 μm in 2 min on a decarburization sample. The resulting quantified elemental maps correlated very well with data from the methods used conventionally. In the two examples above, the application of LIBS as a micro-analysis technique proved to bring very valuable information that was not accessible previously with other techniques on such large areas in such a short time.
---
paper_title: Laser induced plasma spectroscopy for local equivalence ratio measurements in an oscillating combustion environment
paper_content:
Abstract Equivalence ratios measured with laser induced plasma spectroscopy (LIPS, also referred to as LIBS) are reported in two different setups. First, a small premixed turbulent burner is used to address fundamental issues concerning the LIPS technique. It is shown that hydrogen excitation within the created plasma is the key parameter to measure in order to retrieve correct equivalence ratio measurements. Results compared with a spark energy classification strategy show better results with excitation classification, as variations in the ratio between the different lines come not only from the gaseous concentration but also from the plasma's characteristics. Using spectra from 450 to 800 nm allows the determination of two independent emission ratios to improve single-shot accuracy. The developed approach is afterwards applied to phase-locked measurements of equivalence ratio in a lean premixed combustor, in which strong thermo-acoustic oscillations exist. This combustor runs with methane–air, preheated at 700 K and with a typical equivalence ratio of 0.50, for which the sound pressure level of the oscillations is 170 dB. Measurements at the inlet of the combustor reveal strong correlations between fluctuations of the incoming stoichiometry and pressure fluctuations. It is shown that the stoichiometry changes by about 3% within one oscillation cycle. Those changes are crucial for the flame dynamics, since the mixtures involved are very lean.
---
paper_title: Laser-induced breakdown spectroscopy of bulk aqueous solutions at oceanic pressures: evaluation of key measurement parameters.
paper_content:
The development of in situ chemical sensors is critical for present-day expeditionary oceanography and the new mode of ocean observing systems that we are entering. New sensors take a significant amount of time to develop; therefore, validation of techniques in the laboratory for use in the ocean environment is necessary. Laser-induced breakdown spectroscopy (LIBS) is a promising in situ technique for oceanography. Laboratory investigations on the feasibility of using LIBS to detect analytes in bulk liquids at oceanic pressures were carried out. LIBS was successfully used to detect dissolved Na, Mn, Ca, K, and Li at pressures up to 2.76 × 10⁷ Pa. The effects of pressure, laser-pulse energy, interpulse delay, gate delay, temperature, and NaCl concentration on the LIBS signal were examined. An optimal range of laser-pulse energies was found to exist for analyte detection in bulk aqueous solutions at both low and high pressures. No pressure effect was seen on the emission intensity for Ca and Na, and an increase in emission intensity with increased pressure was seen for Mn. Using the dual-pulse technique for several analytes, a very short interpulse delay resulted in the greatest emission intensity. The presence of NaCl enhanced the emission intensity for Ca, but had no effect on the peak intensity of Mn or K. Overall, increased pressure, the addition of NaCl to a solution, and temperature did not inhibit detection of analytes in solution and sometimes even enhanced the ability to detect the analytes. The results suggest that LIBS is a viable chemical sensing method for in situ analyte detection in high-pressure environments such as the deep ocean.
---
paper_title: Quantitative micro-analysis by laser-induced breakdown spectroscopy: a review of the experimental approaches
paper_content:
Abstract The laser-induced breakdown spectroscopy (LIBS) technique has shown in recent years its great potential for rapid qualitative analysis of materials. Because of the lack of pre-treatment of the material, as well as the speed of analysis, not to mention the possibility of in situ analysis, this technique offers an attractive solution for a wide range of industrial applications. As a consequence, a lot of work has been devoted to the application of the LIBS technique for quantitative micro-analysis. The purpose of this paper is to give a review of the current experimental approaches used for obtaining quantitative micro-analysis using the LIBS technique. The influence on LIBS analytical performances of laser power, wavelength and pulse length, the proper choice of experimental geometry, the importance of ambient gas choice and the role of detectors for improving the precision of LIBS analysis are among the topics discussed in this paper.
---
paper_title: Chronocultural sorting of archaeological bronze objects using laser-induced breakdown spectrometry
paper_content:
This work discusses the capability of laser-induced breakdown spectrometry (LIBS) for characterization and cataloging of metallic objects belonging to the Bronze and Iron Ages. A set of 37 metallic objects from different locations in the South East of the Iberian Peninsula has been sorted according to their metal content. The arsenic concentration in the metallic objects has been found to be a key factor for distinguishing between Bronze Age and Iron Age objects, allowing the chronocultural sorting of each piece. For this study, a pulsed Q-switched Nd:YAG laser was used to generate a microplasma on the sample surface. To quantify and catalogue these metallic objects, calibration curves for copper, arsenic, tin, lead and iron were established. The quantitative results demonstrate that the chronological sorting carried out by LIBS agrees well with archaeological dating criteria.
---
paper_title: Use of LIBS for rapid characterization of parchment.
paper_content:
Parchment from different sources has been analyzed by laser-induced breakdown spectroscopy (LIBS) for determination of Ca, Na, K, Mg, Fe, Cu, and Mn. The LIBS results were compared with results from inductively coupled plasma spectroscopy (ICP) and good correlation was obtained. Rapid distinction between modern and historical samples was achieved by discriminant analysis of the LIBS data. Animal type recognition was also possible on the basis of Mg/Cu emission peak ratio and Mg depth profiling.
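The discriminant analysis step mentioned above can be sketched generically as follows; the seven elemental features (Ca, Na, K, Mg, Fe, Cu, Mn intensities), the class sizes and the numbers are synthetic stand-ins, not the parchment data of the paper.

```python
# Generic linear discriminant analysis separating "modern" from "historical" parchment
# using per-sample elemental line intensities. Synthetic data only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# Columns: Ca, Na, K, Mg, Fe, Cu, Mn line intensities (arbitrary units)
modern = rng.normal(loc=1.0, scale=0.2, size=(20, 7))
historical = rng.normal(loc=1.4, scale=0.2, size=(20, 7))
X = np.vstack([modern, historical])
y = np.array(["modern"] * 20 + ["historical"] * 20)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict(rng.normal(loc=1.35, scale=0.2, size=(1, 7))))
```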
---
paper_title: Time-resolved characterization of laser-induced plasma from fresh potatoes
paper_content:
Abstract Optical emission of laser-induced plasma on the surface of fresh vegetables provides sensitive analysis of trace elements for in situ or online detection of these materials. This emergent technique promises applications with expected outcomes in food security or nutrition quality, as well as environment pollution detection. Characterization of the plasma induced on such soft and humid materials represents the first step towards quantitative measurement using this technique. In this paper, we present the experimental setup and protocol that optimize the plasma generation on fresh vegetables, potatoes for instance. The temporal evolution of the plasma properties are investigated using time-resolved laser-induced breakdown spectroscopy (LIBS). In particular, the electron density and the temperatures of the plasma are reported as functions of its decay time. The temperatures are evaluated from the well known Boltzmann and Saha-Boltzmann plot methods. These temperatures are further compared to that of the typical molecular species, CN, for laser-induced plasma from plant materials. This comparison validates the local thermodynamic equilibrium (LTE) in the specific case of fresh vegetables ablated in the typical LIBS conditions. A study of the temporal evolution of the signal to noise ratio also provides practical indications for an optimized detection of trace elements. We demonstrate finally that, under certain conditions, the calibration-free LIBS procedure can be applied to determine the concentrations of trace elements in fresh vegetables.
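For reference, the Boltzmann-plot relation underlying the temperature determination mentioned above is the standard one for an optically thin line (it is not specific to this paper): plotting the left-hand side against the upper-level energy for several lines of the same species gives a straight line of slope -1/(k_B T).

```latex
% Standard Boltzmann plot for an optically thin emission line of species s
\ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{A_{ki}\,g_{k}}\right)
  = -\frac{E_{k}}{k_{B}T} + \ln\!\left(\frac{h\,c\,N_{s}}{4\pi\,U_{s}(T)}\right)
% I_ki: integrated line intensity, lambda_ki: wavelength, A_ki: transition probability,
% g_k, E_k: degeneracy and energy of the upper level, U_s(T): partition function,
% N_s: number density of the emitting species.
```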
---
paper_title: Sequential-Pulse Laser-Induced Breakdown Spectroscopy of High-Pressure Bulk Aqueous Solutions
paper_content:
Sequential-pulse (or dual-pulse) laser-induced breakdown spectroscopy (DP-LIBS) with an orthogonal spark orientation is described for elemental analysis of bulk aqueous solutions at pressures up to approximately 138 × 10⁵ Pa (138 bar). The use of sequential laser pulses for excitation, when compared to single-pulse LIBS excitation (SP-LIBS), provides significant emission intensity enhancements for a wide range of elements in bulk solution and allows additional elements to be measured using LIBS. Our current investigations of high-pressure solutions reveal that increasing solution pressure leads to a significant decrease in DP-LIBS emission enhancements for all elements examined, such that we see little or no emission enhancements for pressures above 100 bar. Observed pressure effects on DP-LIBS enhancements are thought to result from pressure effects on the laser-induced bubble formed by the first laser pulse. These results provide insight into the feasibility and limitations of DP-LIBS for in situ multi-elemental detection in high-pressure aqueous environments like the deep ocean.
---
paper_title: Laser-induced breakdown spectrometry — applications for production control and quality assurance in the steel industry
paper_content:
Abstract Recent progress in sensitivity and signal processing opened a broad field of application for laser-induced breakdown spectrometry (LIBS) in the steel making and processing industry. Analyzed substances range from blast furnace top gas, via liquid steel, up to finished products. This paper gives an overview of R&D activities and first routine industrial applications of LIBS. The continuous knowledge of the top gas composition yields information about the blast furnace process. An online monitoring method using LIBS is currently under investigation to measure alkali metals, which influence energy and mass flow in the furnace. Direct analysis of liquid steel reduces processing times in secondary metallurgy. By using sensitivity-enhanced LIBS, limits of detection of approximately 10 μg/g and below were achieved for light and heavy elements in liquid steel. The process control in steel production relies on the results from the chemical analysis of the slag. A prototype of an analytical system was developed using LIBS to analyze slag samples two times faster than with conventional methods. The cleanness of steel is a key issue in the manufacturing of spring steel, thin foils and wires. Microscopic inclusions have to be determined quickly. A scanning microanalysis system based on LIBS was developed with measuring frequencies up to 1 kHz and a spatial resolution of
---
paper_title: Particle size limits for quantitative aerosol analysis using laser-induced breakdown spectroscopy: Temporal considerations
paper_content:
Abstract The temporal evolution of the Si atomic emission signal produced from individual silica microspheres in an aerosolized air stream was investigated using laser-induced breakdown spectroscopy (LIBS). Specifically, the temporal evolution of Si emission from 2.47 and 4.09-micrometer-sized particles is evaluated over discrete delay times ranging from 15 to 70 µs following plasma initiation. The analyte signal profile from the microspheres, taken as the silicon atomic emission peak-to-continuum ratio, was observed to follow the same profile of silicon-rich nanoparticles over the range of delay times. The ratio of analyte signals for the 2.47 and 4.09-micrometer particles was observed to be approximately constant with plasma decay time and less than the expected mass ratio, leading to the conclusion that further vaporization and enhanced analyte response do not continue with increasing delay times for these microsphere sizes. While recent research suggests that the temporal component of analyte response is important for quantitative LIBS analysis, the current study does confirm earlier research demonstrating an upper size limit for quantitative aerosol particle analysis in the diameter range of 2 to 2.5 µm for silica microspheres.
---
paper_title: Laser-induced breakdown spectroscopy (LIBS) in archaeological science—applications and prospects
paper_content:
Laser-induced breakdown spectroscopy (LIBS) has emerged in the past ten years as a promising technique for analysis and characterization of the composition of a broad variety of objects of cultural heritage including painted artworks, icons, polychromes, pottery, sculpture, and metal, glass, and stone artifacts. This article describes in brief the basic principles and technological aspects of LIBS, and reviews several test cases that demonstrate the applicability and prospects of LIBS in the field of archaeological science.
---
paper_title: LIBS-spectroscopy for monitoring and control of the laser cleaning process of stone and medieval glass
paper_content:
Abstract On-line monitoring or even closed-loop control is necessary to avoid over-cleaning in cases where the ablation process is not self-limiting. Therefore, laser-induced breakdown spectroscopy (LIBS) was used. Basic investigations were carried out on original sandstone samples (Elbsandstein) with strong encrustations as well as medieval stained glass samples (13th century, from Cologne Cathedral). The spectroscopic study has shown that the plasma emission can be used for determination of the elemental composition of the ablated material. The plasma was initiated by 248-nm pulses of a KrF excimer laser (30 ns FWHM). For the spectroscopic analysis, a grating spectrograph in combination with an optical multichannel analyser was used. For the glass and stone samples we obtained a continual alteration of the LIBS spectrum (vanishing of peaks and generation of new element peaks) during the removal process. Thus, certain element peaks can be used to distinguish between the encrustation layer and the valuable underlying material. To show the potential of LIBS we designed an experimental laser cleaning set-up including closed-loop LIBS control and demonstrated successful automatic cleaning of an original glass fragment.
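Purely as an illustration of how such closed-loop LIBS control can be organized (the paper's actual marker lines, threshold and acquisition code are not reproduced), cleaning pulses could be stopped once a marker line of the encrustation falls below a chosen threshold relative to a line of the underlying substrate:

```python
# Illustrative stop criterion for laser cleaning monitored by LIBS: after each pulse,
# compare an encrustation marker line with a substrate marker line.
# Line labels, the threshold and the mock spectrum are hypothetical placeholders.

def line_intensity(spectrum: dict, line: str) -> float:
    """Return the background-corrected intensity of a named line (placeholder)."""
    return spectrum.get(line, 0.0)

def should_stop(spectrum: dict, crust_line: str = "Fe_438",
                substrate_line: str = "Si_288", threshold: float = 0.1) -> bool:
    """Stop when the crust/substrate intensity ratio drops below the threshold."""
    substrate = line_intensity(spectrum, substrate_line)
    if substrate <= 0.0:
        return False  # no substrate signal yet, keep cleaning cautiously
    return line_intensity(spectrum, crust_line) / substrate < threshold

# Example with a mock spectrum in which the crust signal has almost vanished
print(should_stop({"Fe_438": 0.02, "Si_288": 1.0}))   # True -> stop cleaning
```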
---
paper_title: Dual-pulse Laser Induced Breakdown Spectroscopy for analysis of gaseous and aerosol systems: Plasma-analyte interactions
paper_content:
Abstract Dual-pulse LIBS has been previously investigated to a large extent on solid and liquid phase analytes, where it has been demonstrated to significantly enhance atomic emission signal intensity, and more importantly, to enhance the analyte peak-to-base and signal-to-noise ratios. This study focuses on the effects of an orthogonal dual-pulse laser configuration on the atomic emission response for both purely gaseous and calcium-based aerosol samples. The gaseous sample consisted of purified (i.e. aerosol free) air, from which nitrogen and oxygen spectral emission lines were analyzed. Measurements for the gaseous system resulted in no notable improvements with the dual-pulse configuration as compared to the single-pulse LIBS. Experiments were also conducted in purified air seeded with calcium-rich particles, which revealed a marked improvement in calcium atomic emission peak-to-base (∼ 2-fold increase) and signal-to-noise ratios (∼ 4-fold increase) with the dual-pulse configuration. In addition to increased analyte response, dual-pulse LIBS yielded an enhanced single-particle sampling rate when compared to conventional LIBS. Transmission measurements with respect to the plasma-creating laser pulse were recorded for both single and dual-pulse methods over a range of temporal delays. In consideration of the spectroscopic and transmission data, the plasma-analyte interactions realized with a dual-pulse methodology are explained in terms of the interaction with the initially expanding plasma shock wave, which differs between gaseous and particulate phase analytes, as reported in a recent study [V. Hohreiter, D.W. Hahn, Calibration effects for laser-induced breakdown spectroscopy of gaseous sample streams: analyte response of gas-phase species versus solid-phase species, Anal. Chem. 77 (2005) 1118–1124].
---
paper_title: Feasibility of generating a useful laser-induced breakdown spectroscopy plasma on rocks at high pressure: preliminary study for a Venus mission
paper_content:
Abstract Laser-induced breakdown spectroscopy (LIBS) is being developed for future use on landers and rovers to Mars. The method also has potential for use on probes to other planets, the Moon, asteroids and comets. Like Mars, Venus is of strong interest because of its proximity to earth, but unlike Mars, conditions at the surface are far more hostile with temperatures in excess of 700 K and pressures on the order of 9.1 MPa (90 atm). These conditions present a significant challenge to spacecraft design and demand that rapid methods of chemical data gathering be implemented. The advantages of LIBS (e.g. stand-off and very rapid analysis) make the method particularly attractive for Venus exploration because of the expected short operational lifetimes (≈2 h) of surface instrumentation. Although the high temperature of Venus should pose no problem to the analytical capabilities of the LIBS spark, the demonstrated strong dependence of laser plasma characteristics on ambient gas pressures below earth atmospheric pressure requires that LIBS measurements be evaluated at the high Venus surface pressures. Here, we present a preliminary investigation of LIBS at 9.1 MPa for application to the analysis of a basalt rock sample. The results suggest the feasibility of the method for a Venus surface probe and that further study is justified.
---
paper_title: ns- and fs-LIBS of copper-based-alloys: A different approach
paper_content:
A self-calibrated analytical technique, based on plasmas induced by either 250 fs or 7 ns laser pulses, is presented. This approach is comparable to other calibration-free methods based on the LTE assumption. In order to apply this method to very different laser pulse durations, partial local thermodynamic equilibrium (p-LTE) has been considered within the energy range of 30,000–50,000 cm⁻¹. In order to obtain the neutral species densities, the detected emission line intensities of the plasma species have been treated together with the experimentally evaluated Planck-like black-body background emission distribution. To validate the method, three certified copper-based-alloy standards were employed and the amounts of their minor components (Ni, Pb and Sn) were determined. The results show that this standardless method provides good quantitative analysis independently of the laser pulse duration and, consequently, that the composition of the emitting species in the plasma plume is not affected by the laser pulse width.
---
paper_title: Laser Induced Breakdown Spectroscopy methodology for the analysis of copper-based-alloys used in ancient artworks☆
paper_content:
Abstract In this paper Laser Induced Breakdown Spectroscopy has been applied to determine the elemental composition of a set of ancient bronze artworks from the archaeological site of Minervino Murge in Southern Italy (dated around the 7th century B.C.). Before analyzing the archaeological samples, the analytical technique was characterized by investigating the validity of the typical assumptions adopted in LIBS, such as Local Thermodynamic Equilibrium, congruent ablation and plasma homogeneity. For this purpose, two different laser pulse durations, 7 ns and 350 fs, have been used. Attention was focused on the LIBS analysis of bronze standards, discussing the basis of both the methodology and the analytical approach to be followed for ancient copper-based-alloy samples. Regardless of the laser pulse duration, and provided the features of the emitting plasma are treated with an adequate approach, the distinctive capabilities of LIBS are preserved, so that a fast analysis of ancient copper-based alloys can be achieved. After verifying the suitability of the methodology, the typical assumptions of the LIBS calibration-curve method could be fulfilled and the method used for the analysis of ancient bronze artworks.
---
paper_title: On board LIBS analysis of marine sediments collected during the XVI Italian campaign in Antarctica
paper_content:
Abstract The Laser-Induced Breakdown Spectroscopy technique was applied on board the R/V Italica during the XVI Antarctic campaign (2000–2001) to carry out elemental chemical analysis of marine sediments collected using different sampling systems. To this end, a compact system was built, suitable for operating also in the presence of the mechanical vibrations induced by the ship motion. Qualitative and quantitative analyses were performed on dried samples, without any further pre-treatment. Qualitative analyses showed a similar elemental composition among the different collected sediments, except for significant differences in the case of rock fragments and a manganese nodule. The latter also contains some heavy metals that were detected only in traces in the sediment layers. The methodology to retrieve relative or absolute elemental concentrations in heterogeneous samples has been optimized and is only weakly sensitive to variations of the physical properties of the sediment with depth, and to experimental parameters such as laser defocusing caused by surface irregularities and laser energy fluctuations. The relative distribution of the major elemental constituents, of both bio-organic and mineral origin, was measured as a function of sediment depth. When limited to specific spectral sections, the measurements and data analyses are fast and very reproducible. Most of the elements show a gradually varying distribution along the sampled core, except for silicon and barium, whose steep decrease with depth is strongly related to their biogenic origin. Quantitative LIBS analyses were performed on a limited number of samples and the results reported here are comparable to the certified element contents in a reference sample of Antarctic sediments.
---
paper_title: Effect of Pulse Delay Time on a Pre-Ablation Dual-Pulse LIBS Plasma
paper_content:
In this paper, we investigate the effect of dual-pulse timing on material ablation, plasma temperature, and plasma size for pre-ablation spark dual-pulse laser-induced breakdown spectroscopy (LIBS). Although the plasma temperature increases for dual-pulse excitation, the signal enhancement is most easily attributed to increased sample ablation. Plasma images show that the magnitude of the enhancement can be affected by the collection optic and by the collection geometry. Enhancements calculated using the total integrated intensity of the plasma are comparable to those measured using fiber-optic collection.
---
paper_title: Nanosecond and femtosecond Laser Induced Breakdown Spectroscopic analysis of bronze alloys
paper_content:
Abstract In the present work we study the influence of pulse duration (nanosecond (ns) and femtosecond (fs)) at λ = 248 nm on the laser-induced plasma parameters and on the quantitative analysis of elements such as Sn, Zn and Pb in different types of bronze alloys, adopting LIBS in ambient atmosphere. Binary (Sn–Cu), ternary (Sn–Zn–Cu or Sn–Pb–Cu) and quaternary (Sn–Zn–Pb–Cu) reference alloys, characterized by a chemical composition and metallurgical features similar to those used in Roman times, were employed in the study. Calibration curves featuring linear regression coefficients over 98% were obtained for tin, lead and zinc, the minor elements in the bronze alloys (using the internal standardization method), as well as for copper, the major element. The effects of laser pulse duration and energy on the laser-induced plasma parameters, namely the excitation temperature and the electron density, have been studied in an effort to optimize the analysis. Finally, LIBS analysis was carried out on three real metal objects and the spectra obtained were used to estimate the type and elemental composition of the alloys based on the calibration curves produced with the reference alloys. The results are very useful for the future use of portable LIBS systems for in situ qualitative and quantitative elemental analysis of bronze artifacts in museums and archaeological sites.
---
paper_title: Excitation equilibria in plasmas; a classification
paper_content:
The rich nature of the plasma state manifests itself in the large variety of studies focused on the atomic state distribution function (ASDF). Many specific calculations dedicated to various exemplary plasma situations can be found in the literature. In this study we follow and continue the line of Biberman [1], Fujimoto [2], and Seaton [3] by trying to find a classification in these results [4]. The central item is to find an (analytical) relation between the ASDF and the underlying plasma properties. We confine ourselves to those situations in which the excitation kinetics is ruled by (Maxwellian) electrons in atomic plasmas. Molecular processes will not be considered.
---
paper_title: From single pulse to double pulse ns-Laser Induced Breakdown Spectroscopy under water : Elemental analysis of aqueous solutions and submerged solid samples
paper_content:
Abstract In this paper the developments of Laser Induced Breakdown Spectroscopy (LIBS) underwater have been reviewed to clarify the basic aspects of the technique as well as the main peculiarities of the analytical approach. The severe limitations of Single-Pulse (SP) LIBS are discussed on the basis of plasma emission spectroscopy observations, while the fundamental improvements obtained by means of the Double-Pulse (DP) technique are reported from both the experimental and the theoretical point of view in order to give a complete description of DP-LIBS in bulk water and on submerged solid targets. Finally, a detailed description of laser–water interaction and laser-induced bubble evolution is reported, to point out the effect of the internal conditions (radius, pressure and temperature) of the bubble induced by the first pulse on the plasma produced by the second pulse. The optimization of the DP-LIBS emission signal and the determination of the lower detection limit, in a set of experiments reported in the current scientific literature, clearly demonstrate the feasibility and the advantages of this technique for underwater applications.
---
paper_title: Experimental characterization of metallic titanium-laser induced plasma by time and space resolved optical emission spectroscopy
paper_content:
Abstract Time and space resolved optical emission spectroscopy has been successfully employed to investigate the evolution of the plasma produced by the interaction of a UV laser beam with a metallic target of titanium at two different pressures (10⁻⁵ and 3.4×10⁻² torr) and at distances up to 3 mm from the target. Both the dynamic and the kinetic aspects have been discussed on the basis of time-of-flight measurements and Boltzmann plots. The quasi-equilibrium state of the laser-induced plasma has been established on the basis of the failure of the Saha balance equation. The effect of three-body recombination on the atomic titanium temporal distribution has been explained. The temporal evolution of the electron number density, as determined by the Stark effect, has been used for the estimation of the three-body recombination rate constant.
---
paper_title: Theoretical Modeling of Laser Ablation of Quaternary Bronze Alloys: Case Studies Comparing Femtosecond and Nanosecond LIBS Experimental Data†
paper_content:
A model, formerly proposed and utilized to understand the formation of laser induced breakdown spectroscopy (LIBS) plasma upon irradiation with nanosecond laser pulses at different fluences and wavelengths, has been extended to the irradiation with femtosecond laser pulses in order to control the fractionation mechanisms which heavily affect the application of laser-ablation-based microanalytical techniques. The model takes into account the different chemico-physical processes occurring during the interaction of an ultrashort laser pulse with a metallic surface. In particular, a two-temperature description, relevant to the electrons and lattice of the substrate, respectively, has been introduced and applied to different ternary and quaternary copper-based alloys subjected to fs and ns ablation both in the visible (527 nm) and in the UV (248 nm). The model has been found able to reproduce the shorter plasma duration experimentally found upon fs laser ablation. Kinetic decay times of several copper (major e...
---
paper_title: From LASER to LIBS, the path of technology development☆
paper_content:
Abstract Laser-induced breakdown spectroscopy has made significant progress towards becoming a commercial, deployed technology. Its historical development will be reviewed, using the transformation of the laser into commercial technology as a parallel.
---
paper_title: Characterization of laser induced plasmas by optical emission spectroscopy: A review of experiments and methods
paper_content:
Advances in characterization of laser induced plasmas by optical emission spectroscopy are reviewed in this article. The review is focused on the progress achieved in the determination of the physical parameters characteristic of the plasma, such as electron density, temperature and densities of atoms and ions. The experimental issues important for characterization by optical emission spectroscopy, as well as the different measurement methods are discussed. The main assumptions of the methods, namely the optical thin emission of spectral lines and the existence of local thermodynamic equilibrium in the plasma are evaluated. For dense and inhomogeneous sources of radiation such as laser induced plasmas, the characterization methods are classified in terms of the optical depth and the spatial resolution of the emission used for the measurements. The review deals firstly with optically thin spatially integrated measurements. Next, local measurements and characterization in not optically thin conditions are discussed. Two tables are included that provide reference to the works reporting measurements of electron density and temperature of laser induced plasmas generated with diverse samples.
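As a compact reminder of the two workhorse diagnostics surveyed in this review, the LaTeX fragment below states the optically thin Boltzmann-plot relation used for the excitation temperature and a Stark-broadening estimate of the electron density. The notation (integrated line intensity I_ki, wavelength lambda_ki, upper-level energy E_k and degeneracy g_k, transition probability A_ki, species density N_s, partition function U_s(T), tabulated Stark half-width w_S at the reference density N_ref) is the conventional one and is not copied from the paper.

% Boltzmann plot: for optically thin lines of one species in (partial) LTE,
% the slope of the plot versus the upper-level energy E_k gives -1/(k_B T).
\ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right)
  = -\frac{E_k}{k_B T} + \ln\!\left(\frac{h c\, N_s}{4\pi\, U_s(T)}\right)

% Electron density from the Stark full width of a well-characterized line,
% neglecting the small ion-broadening correction:
\Delta\lambda_{\mathrm{FWHM}} \;\approx\; 2\, w_S \,\frac{n_e}{N_{\mathrm{ref}}}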
---
paper_title: Calibration-Free Laser-Induced Breakdown Spectroscopy: State of the art
paper_content:
The aim of this paper is offering a critical review of Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS), the approach of multi-elemental quantitative analysis of LIBS spectra, based on the measurement of line intensities and plasma properties (plasma electron density and temperature) and on the assumption of a Boltzmann population of excited levels, which does not require the use of calibration curves or matrix-matched standards. The first part of this review focuses on the applications of the CF-LIBS method. Quantitative results reported in the literature, obtained in the analysis of various materials and in a wide range of experimental conditions, are summarized, with a special emphasis on the departure from nominal composition values. The second part is a discussion of the simplifying assumptions which lie at the basis of the CF-LIBS algorithm (stoichiometric ablation and complete atomization, thermal equilibrium, homogeneous plasma, thin radiation, detection of all elements). The inspection of the literature suggests that the CF-LIBS method is more accurate in analyzing metallic alloys rather than dielectrics. However, the full exploitation of the method seems to be still far to come, especially for the lack of a complete characterization of the effects of experimental constraints. However, some general directions can be suggested to help the analyst in designing LIBS measurements in a way which is more suited for CF-LIBS analysis.
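For readers who want the algorithm being reviewed at a glance, the following LaTeX sketch gives the usual chain of CF-LIBS relations; the symbols follow the common formulation (experimental factor F, species concentration C_s, Boltzmann-plot intercept q_s) and may differ from the paper's own notation.

% Each optically thin line ties its measured intensity to its parent species:
I_{ki} \;=\; F\, C_s\, \frac{g_k A_{ki}}{U_s(T)}\, e^{-E_k/k_B T}

% Per species, the Boltzmann-plot intercept is q_s = ln( C_s F / U_s(T) ),
% so C_s F = U_s(T) e^{q_s}; the unknown factor F is fixed by closure:
\sum_s C_s = 1 \;\;\Rightarrow\;\; F = \sum_s U_s(T)\, e^{q_s},
\qquad C_s = \frac{U_s(T)\, e^{q_s}}{F}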
---
paper_title: Particle size distributions and compositions of aerosols produced by near-IR femto- and nanosecond laser ablation of brass
paper_content:
Particle size distributions and compositions of primary aerosols produced by means of near-IR femtosecond laser ablation (λ = 775 nm) of brass in He or Ar at atmospheric pressure have been measured. Aerosols were characterized using a 13-stage low-pressure impactor covering a size range from 5 nm up to 5 μm and subsequently analyzed applying total reflection X-ray fluorescence spectrometry. The results indicate that for femtosecond laser ablation in the low-fluence regime (<5 J cm−2) ultra-fine aerosols (mean diameter dp ≈ 10 nm, peak width wp ≈ 35 nm) are produced. Furthermore, the total Cu/Zn ratio of these aerosols corresponds to the composition of the bulk material. In contrast, ablation above 10 J cm−2 results in the formation of polydisperse, bimodal aerosols, which are distributed around dp1 ≈ 20 nm (wp1 ≈ 50 nm) and dp2 ≈ 1 μm (wp2 ≈ 5 μm), respectively, and whose total Cu/Zn ratio slightly deviates from the bulk composition. In order to examine the influence of pulse duration on particle size distribution and aerosol composition, comparative measurements by means of near-IR nanosecond ablation were also performed. The data show that nanosecond ablation generally leads to an intensified formation of particles in the micrometer range. Moreover, the composition of these aerosols strongly departs from the stoichiometry of the bulk. Aspects concerning the formation of particles during ablation as well as implications for element-selective analysis by inductively coupled plasma spectrometry are discussed.
---
paper_title: Non-equilibrium and equilibrium problems in laser-induced plasmas
paper_content:
Several problems which can give rise to non-equilibrium effects in laser-induced breakdown plasmas are analysed in this study. In particular, we focus our attention on problems associated with the fluid dynamics of the expanding plume, with time-dependent collisional-radiative models for describing the population densities of excited states, and with the time-dependent Boltzmann equation for characterizing the electron energy distribution function in laser-induced breakdown spectroscopy (LIBS) plasmas. The results show that these problems should be carefully taken into account when developing a calibration-free LIBS methodology. Finally, problems associated with equilibrium plasmas, and in particular the dependence of the partition function on the particular environment surrounding the different components of a plasma, are discussed.
---
paper_title: Correction of self-absorption spectral line and ratios of transition probabilities for homogeneous and LTE plasma
paper_content:
Abstract Expressions for fundamental spectral line parameters are reported for Gaussian and Lorentzian shape profiles of a homogeneous plasma at equilibrium in the presence of self-absorption. The expressions for Lorentzian profiles are applied to the determination of the ratios of transition probabilities and the ratios of optical thicknesses by a new method that we propose in this article. The self-absorption is computed by fitting the spectral line profiles with a Simplex algorithm. Applications to some experimental lines illustrate the appropriate corrections.
---
paper_title: On the usefulness of a duplicating mirror to evaluate self-absorption effects in laser induced breakdown spectroscopy
paper_content:
Abstract This paper illustrates the application of the well-known approach of duplicating the emission from a plasma by placing a spherical mirror behind it in order to characterize the degree of self-absorption of atomic transitions. It is shown that this simple expedient provides a quick check for the existence of optically thick plasma conditions, and allows one to follow the temporal evolution of the plasma optical depth from the early decay of the continuum emission to the end of the plasma lifetime. The method is applied to a plasma induced at atmospheric pressure by focusing an Nd:YAG laser on different Al-alloy targets. Moreover, if the resolution of the monochromator allows one to obtain the true physical profiles of the lines investigated, a self-absorption correction factor can be calculated, following a methodology described in the plasma diagnostic literature. It is shown that this correction can be used to improve the linearity of calibration curves and to identify outliers in the Saha–Boltzmann plot for temperature evaluation. The data obtained are still the result of line-of-sight measurements, and therefore can only be interpreted in terms of some space-averaged values of the parameters evaluated. Despite this limitation, it is argued that the simple addition of a mirror to a laser induced plasma emission experiment has many advantageous features and should find a more widespread use when performing laser induced breakdown spectroscopy experiments.
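A schematic version of the mirror test described above, in generic notation rather than the paper's: the mirror sends the plasma emission back through the plasma, so the duplicated contribution is attenuated by the plasma's own optical depth, and comparing the doubling factors of a line and of an optically thin feature yields an estimate of that depth.

% With the mirror, the detected intensity becomes I_m = I (1 + G e^{-tau}),
% where G < 1 lumps together mirror reflectivity and geometric losses.
% G is calibrated on an optically thin feature (tau -> 0), e.g. nearby continuum:
G \;=\; \frac{I_m^{\mathrm{thin}}}{I^{\mathrm{thin}}} - 1,
\qquad
\tau \;\simeq\; -\ln\!\left[\frac{1}{G}\left(\frac{I_m}{I} - 1\right)\right]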
---
paper_title: A comparison of nanosecond and femtosecond laser-induced plasma spectroscopy of brass samples
paper_content:
The ablation of brass samples in argon shield gas by 170 fs and 6 ns laser pulses has been studied by optical emission spectroscopy of the evolving plasmas. Differences observed in the temporal behavior of the spectral line intensities are explained by the shielding effect of the Ar plasma for ns-pulses and the free expansion of the plasma of the ablated material in the case of fs-pulses. Brass samples with different Zn/Cu ratios were used. Different types of crater formation mechanisms were observed for ns- and fs-pulses. At 40 mbar argon pressure the ablation thresholds were found to be ≈0.1 and ≈1.5 J cm⁻² for fs- and ns-pulses, respectively. With an internal standardization of zinc to copper it is possible to correct for differences in the ablation rates and to obtain linear calibration curves. For optimum experimental conditions, narrower confidence intervals for the determination of unknown concentrations were found in the case of fs-pulses. Within the range of laser intensities used, no dependence of the Zn/Cu line intensity ratio on the number of laser pulses applied to the same ablation spot was observed, neither for fs- nor for ns-pulses, which is interpreted as the absence of fractional vaporization.
---
paper_title: A procedure for correcting self-absorption in calibration-free laser induced breakdown spectroscopy
paper_content:
Abstract A model of the self-absorption effect in laser-induced plasma has been developed, with the aim of providing a tool for its automatic correction in the Calibration-Free algorithm recently developed for standardless analysis of materials by LIBS (Laser Induced Breakdown Spectroscopy). As a test of the model, the algorithm for self-absorption correction is applied to three different certified steel NIST samples and to three ternary alloys (Au, Ag, Cu) of known composition. The experimental results show that the self-absorption corrected Calibration-Free method gives reliable results, improving the precision and the accuracy of the CF-LIBS procedure by approximately one order of magnitude.
---
paper_title: Effects of crater development on fractionation and signal intensity during laser ablation inductively coupled plasma mass spectrometry
paper_content:
Abstract The effects of crater development on ICP-MS signal intensities and elemental fractionation are presented in this work. Craters formed after repetitive 266-nm Nd:YAG laser ablation with 1.0-mJ pulses had a cone-like shape. The laser ablation rate (ng/s) depended on the laser irradiance (laser pulse energy per unit time and unit area), decreasing as irradiance increased. In contrast, the particle entrainment/transport efficiency did not significantly change with irradiance. As the crater aspect ratio (depth/diameter) increased above a threshold value of six, the Pb/U elemental ratio departed from the stoichiometric value. However, good stoichiometry of the ablated mass could be achieved when experimental conditions were carefully selected. The exact mechanism of how crater development affects fractionation is not well understood. In this work, the actual irradiance was introduced instead of a nominal value. The actual irradiance decreased as the crater deepened, due to changes of the effective area sampled by the laser beam.
---
paper_title: New Procedure for Quantitative Elemental Analysis by Laser-Induced Plasma Spectroscopy
paper_content:
A new procedure, based on the laser-induced plasma spectroscopy (LIPS) technique, is proposed for calibration-free quantitative elemental analysis of materials. The method here presented, based on an algorithm developed and patented by IFAM-CNR, allows the matrix effects to be overcome, yielding precise and accurate quantitative results on elemental composition of materials without use of calibration curves. Some applications of the method are illustrated, for quantitative analysis of the composition of metallic alloys and quantitative determination of the composition of the atmosphere.
---
paper_title: A numerical study of expected accuracy and precision in Calibration-Free Laser-Induced Breakdown Spectroscopy in the assumption of ideal analytical plasma ☆
paper_content:
Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) was proposed several years ago as an approach for the quantitative analysis of Laser-Induced Breakdown Spectroscopy spectra. A recently developed refinement of the spectral processing method is described in the present work. Accurate quantitative results have been demonstrated for several metallic alloys. However, the degree of accuracy that can be achieved with Calibration-Free Laser-Induced Breakdown Spectroscopy analysis of generic samples still needs to be thoroughly investigated. The authors have undertaken a systematic study of the errors and biasing factors affecting the calculation in the processing of Calibration-Free Laser-Induced Breakdown Spectroscopy spectra. These factors may be classified in three main groups: 1) experimental aberrations (intensity fluctuations and inaccuracy in the correction for the spectral efficiency of the detection system), 2) inaccuracy in the theoretical parameters used for the calculations (Stark broadening coefficients and partition functions) and 3) plasma non-ideality (departure from thermal equilibrium, spatial and temporal inhomogeneities, optical thickness, etc.). In this study, the effects of experimental aberrations and of the accuracy of spectral data were investigated, assuming that the analytical plasma is ideal. Departure of the plasma conditions from ideality will be the object of future work. The current study was based on numerical simulation. Two kinds of metallic alloys, iron-based and aluminum-based, were studied. The relative weight of the error contributions was found to depend on the sample composition. For the samples investigated here, the experimental aberrations contribute to the overall uncertainty of the quantitative results more than the theoretical parameters do. The described simulation method can be applied to the Calibration-Free Laser-Induced Breakdown Spectroscopy analysis of any other kind of sample.
---
paper_title: Partial least squares regression for problem solving in precious metal analysis by laser induced breakdown spectrometry
paper_content:
The application of laser induced breakdown spectrometry for the quantitative determination of gold and silver in Au–Ag–Cu alloys is proposed. Laser induced plasma emission spectra in the ultraviolet region were studied in order to characterize the spectral information from time-integrated data. The multivariate calibration method known as partial least squares regression type 1 was used for calibration and prediction purposes. Satisfactory results were obtained for the determination of gold and silver without temporal resolution strategies. Since the employment of this chemometric algorithm was a good alternative for simplification of the experimental setup, a rapid, simple and low cost method is thus proposed for the determination of noble metals in jewelry pieces.
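As an illustration of the kind of PLS1 calibration described here, the short Python sketch below maps LIBS spectra to a single analyte concentration with scikit-learn and reports a cross-validated error. File names, array shapes and the number of latent variables are placeholders chosen for illustration, not details taken from the paper.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: one row per spectrum, one column per wavelength channel
# y: reference concentration (e.g. Au content) for each calibration spectrum
X = np.load("libs_spectra.npy")        # placeholder file, shape (n_samples, n_channels)
y = np.load("au_content.npy")          # placeholder file, shape (n_samples,)

pls = PLSRegression(n_components=5)    # number of latent variables is an assumed setting
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
print(f"RMSECV = {rmsecv:.3f}")

pls.fit(X, y)                          # final model on all calibration spectra
print("prediction for first spectrum:", pls.predict(X[:1]).ravel()[0])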
---
paper_title: Laser-induced breakdown spectroscopy of composite samples: comparison of advanced chemometrics methods.
paper_content:
Laser-induced breakdown spectroscopy is used to measure chromium concentration in soil samples. A comparison is carried out between the calibration curve method and two chemometrics techniques: partial least-squares regression and neural networks. The three quantitative techniques are evaluated in terms of prediction accuracy, prediction precision, and limit of detection. The influence of several parameters specific to each method is studied in detail, as well as the effect of different pretreatments of the spectra. Neural networks are shown to correctly model nonlinear effects due to self-absorption in the plasma and to provide the best results. Subsequently, principal components analysis is used for classifying spectra from two different soils. Then simultaneous prediction of chromium concentration in the two matrixes is successfully performed through partial least-squares regression and neural networks.
---
paper_title: Development of a mobile system based on laser-induced breakdown spectroscopy and dedicated to in situ analysis of polluted soils ☆
paper_content:
Principal Components Analysis (PCA) is successfully applied to the full laser-induced breakdown spectroscopy (LIBS) spectra of soil samples, defining classes according to the concentrations of the major elements. The large variability of the LIBS data is related to the heterogeneity of the samples, and the representativeness of the data is finally discussed. Then, the development of a mobile LIBS system dedicated to the in-situ analysis of soils polluted by heavy metals is described. Based on the use of ten-meter-long optical fibers, the mobile system allows remote measurements. Finally, the laser-assisted drying process, studied with a customized laser, was not retained as a way to overcome the problem of moisture.
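The snippet below is a minimal, generic sketch of the PCA screening step applied to full LIBS spectra: spectra are autoscaled, projected onto a few principal components, and the scores inspected for clustering by soil class. File names, labels and the number of components are assumptions made for illustration, not data from the paper.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

spectra = np.load("soil_libs_spectra.npy")   # placeholder, shape (n_spectra, n_channels)
labels = np.load("soil_class_labels.npy")    # placeholder, shape (n_spectra,)

X = StandardScaler().fit_transform(spectra)  # per-channel autoscaling before PCA
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                # score matrix, shape (n_spectra, 3)

print("explained variance ratios:", pca.explained_variance_ratio_)
for cls in np.unique(labels):
    # well-separated class centroids in PC space suggest the spectra cluster by soil type
    print(cls, scores[labels == cls].mean(axis=0))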
---
paper_title: Multivariate calibration of spectra obtained by Laser Induced Breakdown Spectroscopy of plutonium oxide surrogate residues
paper_content:
Laser Induced Breakdown Spectroscopy (LIBS) was used to determine the elemental concentrations of a plutonium oxide surrogate (cerium oxide) residue for monitoring the fabrication of lanthanide borosilicate glass. Quantitative analysis by LIBS is severely limited by variation in the induced plasma due to changes in the matrix. Multivariate calibration was applied to the LIBS data to predict the concentrations of Ce, Cr, Fe, Mo, and Ni. A total of 18 different samples were prepared to compare calibration from univariate data analysis and from multivariate data analysis. Multivariate calibration was obtained using Principal Component Regression (PCR) and Partial Least Squares (PLS). Univariate calibration was obtained from background-corrected atomic emission lines. The calibration results show an improvement in the coefficient of determination from 0.87 to 0.97 for Ce compared to univariate calibration. The root mean square error also decreased, from 7.46% to 2.93%. A similar trend was obtained for Cr, Fe, Mo, and Ni. These results clearly demonstrate the feasibility of using LIBS for online process monitoring in a hazardous waste management environment.
---
paper_title: Investigation of statistics strategies for improving the discriminating power of laser-induced breakdown spectroscopy for chemical and biological warfare agent simulants
paper_content:
Abstract Laser-induced breakdown spectroscopy spectra of bacterial spores, molds, pollens and nerve agent simulants have been acquired. The performance of several statistical methodologies (linear correlation, principal components analysis, and soft independent modeling of class analogy) has been evaluated with respect to their ability to differentiate between the various samples. The effect of data selection (total spectra, peak intensities, and intensity ratios) and pre-treatments (e.g., averaging) on the statistical models has also been studied. Results indicate that the use of spectral averaging and weighting schemes may significantly improve sample differentiation.
---
paper_title: Quantitative micro-analysis by laser-induced breakdown spectroscopy: a review of the experimental approaches
paper_content:
Abstract The laser-induced breakdown spectroscopy (LIBS) technique has shown in recent years its great potential for rapid qualitative analysis of materials. Because of the lack of pre-treatment of the material, as well as the speed of analysis, not to mention the possibility of in situ analysis, this technique offers an attractive solution for a wide range of industrial applications. As a consequence, much work has been devoted to the application of the LIBS technique for quantitative micro-analysis. The purpose of this paper is to give a review of the current experimental approaches used for obtaining quantitative micro-analysis with the LIBS technique. The influence of laser power, wavelength and pulse length on LIBS analytical performance, the proper choice of experimental geometry, the importance of the choice of ambient gas and the role of detectors in improving the precision of LIBS analysis are among the topics discussed in this paper.
---
paper_title: Nd:YAG laser double wavelength ablation of pollution encrustation on marble and bonding glues on duplicated painting canvas
paper_content:
Abstract In the present study, a newly developed one-beam IR–UV laser cleaning system is presented. This system may be used for different applications in diverse fields, such as outdoors stonework conservation and canvas paintings restoration. The simultaneous use of the fundamental radiation of a Q-switched Nd:YAG laser at 1064 nm and its third harmonic at 355 nm was found appropriate to clean pollution crusts, while ensuring that no discoloration (“yellowing”) would occur. The optimum ratio of UV to IR wavelengths in the final cleaning beam was investigated. In parallel, the same system was tested in diverse applications, such as the removal of bonding glues from duplicated canvases. The optimum laser parameters were investigated both on technical samples as well as on original paintings.
---
paper_title: Identification of inks and structural characterization of contemporary artistic prints by laser-induced breakdown spectroscopy
paper_content:
Identification of the inks used in artistic prints and the order in which different ink layers have been applied on a paper substrate are important factors to complement the classical stylistic aspects for the authentication of this type of objects. Laser-induced breakdown spectroscopy (LIBS) is investigated to determine the chemical composition and structural distribution of the constituent materials of model prints made by applying one or two layers of several blue and black inks on an Arches paper substrate. By using suitable laser excitation conditions, identification of the inks was possible by virtue of emissions from key elements present in their composition. Analysis of successive spectra on the same spot allowed the identification of the order in which the inks were applied on the paper. The results show the potential of laser-induced breakdown spectroscopy for the chemical and structural characterization of artistic prints.
---
paper_title: Nanosecond-to-femtosecond laser-induced breakdown in dielectrics
paper_content:
We report extensive laser-induced damage threshold measurements on dielectric materials at wavelengths of 1053 and 526 nm for pulse durations τ ranging from 140 fs to 1 ns. Qualitative differences in the morphology of damage and a departure from the diffusion-dominated τ^{1/2} scaling of the damage fluence indicate that damage occurs from ablation for τ ≤ 10 ps and from conventional melting, boiling, and fracture for τ ≳ 50 ps. We find a decreasing threshold fluence associated with a gradual transition from the long-pulse, thermally dominated regime to an ablative regime dominated by collisional and multiphoton ionization, and plasma formation. A theoretical model based on electron production via multiphoton ionization, Joule heating, and collisional (avalanche) ionization is in quantitative agreement with the experimental results.
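The τ^{1/2} scaling mentioned in the abstract follows from one-dimensional heat diffusion during the pulse; a one-line version of the argument, in generic notation rather than the authors', is sketched below.

% During a pulse of duration tau, heat diffuses a distance l ~ sqrt(D tau);
% reaching a fixed damage temperature rise Delta T over that depth requires
F_{\mathrm{th}} \;\sim\; \rho\, c_p\, \Delta T\, \sqrt{D\,\tau} \;\propto\; \tau^{1/2}
% For tau below roughly 10 ps the deposited energy can no longer diffuse during
% the pulse and multiphoton/avalanche ionization take over, so the measured
% thresholds fall below this extrapolation, as reported above.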
---
paper_title: Morphology and mechanisms of formation of natural patinas on archaeological Cu-Sn alloys
paper_content:
Abstract Natural patinas on archaeological bronzes (Cu–Sn alloys) have been classified and characterized in order to get a deeper insight into their formation mechanisms. From examinations of cross-sections on archaeological artefacts, two classes of corrosion structures were defined (Type I and Type II), using both optical and electron microscopies, EDSX, XRD, IRS and a statistical treatment of data (Principal Components Analysis). A Type I structure (even surface) is defined as a two-layer passivating deposit due to an internal oxidation with a decuprification process (i.e. selective dissolution of copper). A Type II structure (coarse surface) corresponds to more severe attacks, such as pitting but also general uneven corrosion; it is modelled by a three-layer structure, characterized by the presence of cuprous oxide and by an increase in the chloride content at the internal layer/alloy interface related with selective dissolution of copper. A phenomenological model to explain the formation of bronze patinas is developed on the basis of a decuprification phenomenon.
---
paper_title: Experimental investigations of stained paper documents cleaned by the Nd:YAG laser pulses
paper_content:
Abstract Historical paper samples from the 13th–19th centuries are characterised by means of optical spectroscopy techniques. The influence of pulsed laser cleaning with a Q-switched Nd:YAG laser at 532 nm on the spectra, as well as the cleaning results for stained paper documents, are reported and discussed. In the absorption spectra, minima around 280 and 370 nm are identified, and the luminescence reveals a characteristic band centred around 430 nm. The laser cleaning, diagnosed by recording LIF spectra with 266 nm excitation, shows a profile of increasing intensity with preserved structure. The LIPS spectra reveal sharp emission lines recorded at 612.5, 644.2, 646.5, 671, 714.9, 720.2 nm (Ca I), 589.4, 616.4, 780 nm (Na I), and 766.5, 769.9 nm (Mg I), which are ascribed to surface contaminations. The intensity of these peaks decreases with successive laser pulses, which monitors the cleaning progress of the stained paper.
---
paper_title: Double-pulse LIBS in bulk water and on submerged bronze samples
paper_content:
In this work laser-induced breakdown spectroscopy (LIBS) has been applied in bulk water using a double-pulse laser source. As in the case of former experiments in air, the use of the double-pulse technique allows the line emission intensity to be enhanced and the duration of the continuum spectrum to be reduced, thus increasing the overall analytical performance of the technique. Tap water analysis of dissolved Na and Mg cations has been performed to investigate the capability of the technique, but the most significant results have been obtained in determining the composition of submerged bronze targets by laser ablation of their surface in seawater. When the plasma is generated by the double-pulse laser, the ablated matter is strongly confined by the water vapor inside the cavitation bubble. The confinement of the plasma leads to higher values of the excitation temperature and maintains the conditions suitable for chemical analysis (homogeneity and LTE) for longer than in gaseous media. The double-pulse experiments performed directly in bulk water highlight the features of the LIBS technique for real analytical applications in situ, such as water quality assessment and the investigation of irremovable submerged objects.
---
paper_title: The potential of laser-induced breakdown spectrometry for real time monitoring the laser cleaning of archaeometallurgical objects☆
paper_content:
In this work, an orthogonal double pulse (DP) laser-induced breakdown spectroscopy configuration has been developed and evaluated as a diagnostic tool for the restoration of archaeometallurgical samples. Although laser-induced breakdown spectroscopy has been extensively tested in this kind of application, this study presents an alternative method in terms of controlling the laser cleaning process of metallic objects as well as real-time laser-induced breakdown spectroscopy monitoring of the emission signal of the ablated material (pollutants and the structural materials). Several experimental parameters, such as the inter-pulse delay time, the distance between the second laser and the target, and the second pulse energy, have also been studied on ancient Alexandrian coins. An enhancement of the signal emission is observed when the cleaning and analyzing lasers are combined, while no spectral signal is achieved when the two lasers operate independently. The restoration of ancient objects by means of both conventional and double pulse laser cleaning arrangements is also discussed.
---
paper_title: LIPS and linear correlation analysis applied to the classification of Roman pottery Terra Sigillata
paper_content:
Archaeological ceramics Terra Sigillata manufactured in different production centres have been studied by laser-induced plasma spectroscopy (LIPS). The aim of this work was to establish a procedure for the rapid classification of these archaeological ceramics in function of their provenance through combination of LIPS and statistical methodologies. Representative emission spectra of the Hispanic, Gaulish and African groups of pottery were selected as references. The use of linear correlation allowed one to cluster the samples by quantitative comparison of LIP spectra, leading to a reliable assignment of Terra Sigillata pieces to origin centres.
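A minimal sketch of the linear-correlation sorting used here: each unknown spectrum is compared against one representative reference spectrum per production centre and assigned to the class giving the highest Pearson correlation. File names and class labels are illustrative assumptions, not data from the paper.

import numpy as np

# one representative (e.g. averaged) reference spectrum per production centre
references = {
    "Hispanic": np.load("ref_hispanic.npy"),   # placeholder files
    "Gaulish": np.load("ref_gaulish.npy"),
    "African": np.load("ref_african.npy"),
}
unknown = np.load("sherd_spectrum.npy")        # spectrum of the piece to classify

def pearson(a, b):
    # linear correlation coefficient between two spectra of equal length
    return np.corrcoef(a, b)[0, 1]

scores = {name: pearson(unknown, ref) for name, ref in references.items()}
print(scores, "->", max(scores, key=scores.get))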
---
paper_title: Controlled laser cleaning of painted artworks using accurate beam manipulation and on-line LIBS-detection
paper_content:
Abstract An innovative laser restoration tool for non-contact cleaning of painted artworks is developed. Accurate beam manipulation techniques in combination with on-line detection make the system suitable for selective cleaning of delicate surfaces. The utilisation of lasers obviates the use of various chemicals, and provides a method to remove layers that are untreatable using conventional methods. The first professional laser cleaning station for paintings is equipped with a modern mechatronic engineering tool for accurate beam manipulation (‘optical arm’). An intelligent combination of software and hardware enables accurate control, necessary to deal with the variable properties of the artworks to be treated. An on-line monitoring system is incorporated, using laser-induced breakdown spectroscopy. The user interface plays an important role in simulating the ‘hands-on’ treatment. In January 1999, the 2-year European co-operative research project ‘Advanced workstations for controlled laser cleaning of artworks’ started. The research objective is to define the boundary conditions in which laser cleaning with the present technology can be safely applied.
---
paper_title: Laser assisted removal of synthetic painting-conservation materials using UV radiation of ns and fs pulse duration: Morphological studies on model samples
paper_content:
Abstract In an effort to establish the optimal parameters for the cleaning of complex layers of polymers (mainly based on acrylics, vinyls and epoxies, known commercially as Elvacite, Laropal and Paraloid B72, among others) applied during past conservation treatments on the surface of wall paintings, laser cleaning tests were performed with particular emphasis on the possible morphological modifications induced in the remaining polymeric material. Pulse duration effects were studied using laser systems of different pulse durations (ns and fs) at 248 nm. Prior to tests on real fragments from the Monumental Cemetery in Pisa (Italy), which were coated with different polymers, attention was focused on the study of model samples consisting of analogous polymer films cast on quartz disks. Ultraviolet radiation is strongly absorbed by the studied materials in both the ns and fs irradiation regimes. However, it is demonstrated that ultrashort laser pulses result in reduced morphological alterations in comparison to ns irradiation. In addition, the dependence of the observed alterations on the chemical composition of the consolidation materials in both regimes was examined. Most importantly, it was shown that in this specific conservation problem, an optimum cleaning process may rely not only on the minimization of laser-induced morphological changes but also on the exploitation of conditions that favour the disruption of the adhesion between the synthetic material and the painting.
---
paper_title: Laser-induced breakdown spectroscopy for semi-quantitative and quantitative analyses of artworks—application on multi-layered ceramics and copper based alloys
paper_content:
Abstract In the present work, we report on the analysis of different types of artworks, such as medieval glazed Umbrian pottery and copper-based alloys from the Roman and modern periods, performed by means of Laser Induced Breakdown Spectroscopy (LIBS). The semi-quantitative analyses of the multi-layered ceramic findings concern the glaze, luster and pigment decorations present on the surface. The composition of each decorative layer was determined by estimating the contribution of the ceramic layer beneath the examined one to the whole plasma emission. Two types of ancient luster have been considered, red and gold, while the pigments examined include painted decorations of different blue tonalities. The measured elemental composition of the decorative layers was found to be partially correlated with the color of the painted surface, measured by a standard UV-VIS spectrometer. In the LIBS analyses of bronze samples, a procedure was developed which improves data repeatability and extends quantitative measurements to minor elemental constituents. The results of the quantitative analyses gave indications about the manufacturing process of the artwork, its actual degree of conservation and the presence of residual surface decorations.
---
paper_title: Laser cleaning of terracotta decorations of the portal of Palos of the Cathedral of Seville
paper_content:
Abstract Laser cleaning has been used to restore the soiled terracotta statues and decorations of the tympanum of the portal of Palos of the Cathedral of Seville in Spain. A simultaneous laboratory study performed on a representative sample helped to identify the optimum laser conditions to remove the dark soiling layer produced by air pollution. It was found that irradiation at 1064 nm with a Q-switched Nd:YAG laser was more effective than the harmonic wavelengths of 532 or 266 nm. LIBS and Raman microscopy gave information on the composition of terracotta and identified the presence of a protective layer made of gypsum and calcite. As detected by Raman spectroscopy, laser irradiation caused the elimination of the carbon component of the soiling layer and the appearance of an anhydrite component in the laser irradiated gypsum layer applied over the terracotta substrate for protective purposes. Local heating of the surface caused by laser irradiation at 1064 nm, the laser wavelength used for restoration of the portal, might be responsible for a process of partial dehydration of gypsum into anhydrite.
---
paper_title: LIBS as a diagnostic tool during the laser cleaning of copper based alloys: experimental results
paper_content:
In spite of the difficulties of quantitative LIBS analysis of copper-based alloys, the very low invasiveness of the technique strongly supports attempts to use it with cultural heritage materials, including ancient bronzes. Analytical results obtained with calibration curves and with a calibration-free model are compared here on a set of ancient Roman coins. An attempt to monitor the laser ablation process on bronze coins and artificially aged standards during cleaning is presented. The double pulse technique showed that LIBS analytical results could benefit from synchronization between the UV laser sources used, respectively, for cleaning (266 nm) and for LIBS analysis (335 nm).
---
paper_title: Pigment identification in paintings employing laser induced breakdown spectroscopy and Raman microscopy
paper_content:
Abstract Laser-induced breakdown spectroscopy (LIBS) was used in combination with Raman microscopy, for the identification of pigments in different types of painted works of art. More specifically, a 19th century post-Byzantine icon from Greece and two miniature paintings from France were examined and detailed spectral data are presented which lead to the identification of the pigments used. LIBS measurements yielded information on the presence of pigments or mixtures of pigments based on the characteristic emission from specific elements. Identification of most pigments was performed by Raman microscopy. As demonstrated in this work, the combined use of LIBS and Raman microscopy, two complementary techniques, leads to a detailed characterization of the paintings examined with respect to the pigments used.
---
paper_title: High energy ions generated by laser driven Coulomb explosion of cluster
paper_content:
Abstract We present an analytical model and three-dimensional particle simulations of the interaction of an intense laser with a cluster of overdense plasma. When the laser intensity is above a critical value, it blows off all of the electrons from the cluster and forms a non-neutral ion cloud. During the Coulomb explosion of the ion cloud, the ions acquire their energy. Ion energy spectra are discussed in detail for different densities and sizes of clusters with various laser intensities. It is shown that ultra-fast ions are produced for relatively large clusters, and that the ion energy becomes three times greater than the maximum electrostatic potential energy of the ion cloud. The laser-driven Coulomb explosion of a cluster may provide a new high energy ion source.
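For orientation, the electrostatic potential energy of an ion in a cluster that has lost all its electrons can be written, for a uniformly charged sphere, in the textbook form below; the notation (N ions of charge Ze in a cluster of radius R) is generic and is offered only as a yardstick, not as the authors' own definition.

% Potential energy of one ion at the surface and at the centre of a uniformly
% charged sphere of total charge Q = N Z e and radius R:
E(r{=}R) \;=\; \frac{Z e\, Q}{4\pi\varepsilon_0 R},
\qquad
E(r{=}0) \;=\; \frac{3}{2}\,\frac{Z e\, Q}{4\pi\varepsilon_0 R}
% The abstract above reports maximum ion energies of about three times the
% cloud's maximum electrostatic potential energy for sufficiently large clusters.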
---
paper_title: Er:YAG laser: an innovative tool for controlled cleaning of old paintings: testing and evaluation
paper_content:
Abstract A cleaning method based on an Er:YAG laser system at 2.94 μm, highly absorbed by OH bonds, was tested for removal of over-paintings, varnishes and patina top-layers from various painted surfaces, including laboratory paint models and old paintings. The aim was to evaluate the efficiency, selectivity and safety of the laser cleaning method using various pulse energies and various OH containing wetting agents to enhance the efficacy and limit the penetration of the laser beam. A large number of paint models were prepared with known characteristics (type and number of layers, thickness, composition) simulating old masters’ techniques. A set of diagnostic controls was designed to study the effects of the laser radiation on the surface components, including morphological, optical and chemical examination and analyses. The aim was also to compare the laser method with the traditional solvent based procedures. Thresholds of safe energy were found for each type of surface layer such as varnishes and over-paintings. The results confirmed the suitability of the Er:YAG laser when used by qualified and expert conservators, especially in combination with traditional chemical and mechanical cleaning methods.
---
paper_title: Pigment analysis in Bronze Age Aegean and Eastern Mediterranean painted plaster by laser-induced breakdown spectroscopy (LIBS)
paper_content:
Laser-Induced Breakdown Spectroscopy (LIBS) was used in the examination of Bronze Age painted plaster samples from several sites in the Aegean and Eastern Mediterranean. The elemental content of the paint materials was determined in most cases, leading to the identification of the pigment used, in agreement with data from analyses of the same samples with other established techniques. The analyses demonstrate that a virtually non-destructive technique such as LIBS provides sufficient data for the elemental characterisation of painting materials, while also offering the capability for routine, rapid analysis of archaeological objects, enabling the quick characterisation or screening of different types of artefacts. This certainly shows an important way forward in technological studies of fragile and scarce archaeological material.
---
paper_title: Near-crater discoloration of white lead in wall paintings during laser induced breakdown spectroscopy analysis
paper_content:
Abstract During Laser-Induced Breakdown Spectroscopy (LIBS) analysis of white lead pigment (basic lead carbonate, 2PbCO3·Pb(OH)2), used in wall paintings of historical interest, a yellow–brown discoloration has been observed around the crater. This phenomenon faded after a few days of exposure to the ambient atmosphere. It was established that the mechanism of this discoloration consists in the formation of lead oxides (PbO). It was verified by further experiments under an argon atmosphere that recombination of lead with oxygen in the plasma plume produces the oxides, which settle around the crater and induce this discoloration. The impact of the discoloration on the artwork's aesthetic aspect and the role of the atmosphere in the attenuation of the discoloration are discussed. The mechanism is studied on three other pigments (malachite, Prussian blue and ultramarine blue) and the threshold for the occurrence of discoloration is estimated.
---
paper_title: Short free running Nd:YAG laser to clean different encrustations on Pentelic marble: procedure and evaluation of the effects
paper_content:
Abstract On ancient Greek monuments of Pentelic marble, environmentally induced encrustation (black dendritic and thin) along with layers with ancient treatments (patina) were irradiated with a Nd:YAG laser system operating at the fundamental mode (λ = 1064 nm) with t_d = 20 μs (short free running Nd:YAG laser). Laser experiments were coupled with the spraying of small quantities of distilled water on the encrustation before the irradiation. The effects of the laser-assisted cleaning were investigated using thin section analysis, optical microscopy, scanning electron microscopy coupled to energy dispersive X-ray analysis, infrared spectroscopic analysis, and X-ray diffraction analysis, as well as color measurements and imaging analysis using multi-spectral imaging. Based on the results, the main evaluation criteria were achieved for the application of the short free running Nd:YAG laser system for cleaning purposes. Multi-spectral imaging enables the evaluation of color and textural changes and, therefore, can be considered as an appropriate tool for the in situ monitoring of the cleaning process.
---
paper_title: Chronocultural sorting of archaeological bronze objects using laser-induced breakdown spectrometry
paper_content:
This work discusses the capability of laser-induced breakdown spectrometry (LIBS) for characterization and cataloging of metallic objects belonging to the Bronze and Iron Ages. A set of 37 metallic objects from different locations of the South East of the Iberian Peninsula has been sorted according to their metal content. Arsenic concentration in metallic objects has been found to be a key factor for distinguishing between Bronze and Iron Age objects, allowing the chronocultural sorting of each piece. For this study, a pulsed Q-switched Nd:YAG laser was used to generate a microplasma onto the sample surface. To quantify and catalogue these metallic objects, calibration curves for copper, arsenic, tin, lead and iron were established. The quantitative results demonstrate that the chronological sorting carried out by LIBS matches agreeably with archaeological dating criteria.
---
paper_title: Use of LIBS for rapid characterization of parchment.
paper_content:
Parchment from different sources has been analyzed by laser-induced breakdown spectroscopy (LIBS) for determination of Ca, Na, K, Mg, Fe, Cu, and Mn. The LIBS results were compared with results from inductively coupled plasma spectroscopy (ICP) and good correlation was obtained. Rapid distinction between modern and historical samples was achieved by discriminant analysis of the LIBS data. Animal type recognition was also possible on the basis of Mg/Cu emission peak ratio and Mg depth profiling.
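As a rough illustration of the kind of discriminant analysis and peak-ratio sorting mentioned in this abstract, the following Python sketch applies linear discriminant analysis to synthetic peak intensities; the element means, class labels and the Mg/Cu ratio threshold idea are assumptions for illustration, not the authors' data.
```python
# Minimal sketch of discriminant-analysis-based sorting of LIBS data,
# in the spirit of the parchment study above. All numbers are synthetic
# placeholders, not measurements from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Rows: samples; columns: background-corrected peak intensities (Ca, Na, K, Mg, Fe, Cu, Mn).
modern = rng.normal(loc=[5.0, 1.2, 0.8, 2.0, 0.5, 0.3, 0.1], scale=0.2, size=(20, 7))
historical = rng.normal(loc=[4.0, 1.8, 1.1, 1.2, 0.9, 0.6, 0.2], scale=0.2, size=(20, 7))
X = np.vstack([modern, historical])
y = np.array([0] * 20 + [1] * 20)  # 0 = modern, 1 = historical

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))

# Animal-type recognition via an emission-peak ratio (Mg/Cu), as suggested above;
# any decision threshold derived from these ratios would be purely illustrative.
mg_cu_ratio = X[:, 3] / X[:, 5]
print("example Mg/Cu ratios:", mg_cu_ratio[:5])
```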
---
paper_title: Spectroscopic analysis of works of art using a single LIBS and pulsed Raman setup.
paper_content:
A nanosecond pulsed laser setup has been optimized to perform laser-induced breakdown spectroscopy (LIBS) and pulsed Raman spectroscopy measurements in the field of cultural heritage. Three different samples of artistic/architectural interest with different typologies have been analyzed. The results from the two techniques allowed the identification of the materials used in their manufacture or contaminating them, probably coming from atmospheric pollution and biological activity. No sampling or sample preparation was required before the measurements, and no visual or structural damage was observed. Depth profiling using LIBS was performed in one of the samples, providing elemental information along the different layers composing the object and covering its surface. The quality of the results and the rather short time needed for the measurements and for switching between techniques confirmed the instrument's capabilities and specificity for dealing with objects of artistic or historical interest.
---
paper_title: Remote imaging laser-induced breakdown spectroscopy and remote cultural heritage ablative cleaning
paper_content:
We report, for what we believe to be the first time, on remote imaging laser-induced breakdown spectroscopy (LIBS). Measurements have been performed by using a tripled Nd:YAG laser working at 355 nm with 170 mJ pulse energy, with an expanded beam that is focused onto a target at 60 m distance. The LIBS signal is detected by using an on-axis Newtonian telescope and an optical multichannel analyzer. The imaging is performed by scanning the laser beam on the target. The same setup is also used in demonstrations of remote laser ablation for cleaning of contaminated objects with applications toward cultural heritage.
---
paper_title: Pigment identification by spectroscopic means: an arts/science interface
paper_content:
Abstract Pigment identification on manuscripts, paintings, ceramics and papyri is critical in finding solutions to problems of restoration, conservation, dating and authentication in the art world. The techniques (molecular and elemental) used for these purposes are reviewed and compared, particular attention being given to Raman microscopy and laser-induced breakdown spectroscopy. These give excellent results in respect of reproducibility, sensitivity, non-destructiveness, immunity to interference from adjacent materials, and depth-profile analysis. New advances in optics provide powerful and long needed links for Arts- and Science-based projects.
---
paper_title: Analysis of pigments in polychromes by use of laser induced breakdown spectroscopy and Raman microscopy
paper_content:
Abstract Two laser-based analytical techniques, Laser Induced Breakdown Spectroscopy (LIBS) and Raman microscopy, have been used for the identification of pigments on a polychrome from the Rococo period. Detailed spectral data are presented from analyses performed on a fragment of a gilded altarpiece from the church of Escatron, Zaragoza, Spain. LIBS measurements yielded elemental analytical data which suggest the presence of certain pigments and, in addition, provide information on the stratigraphy of the paint layers. Identification of most pigments and of the materials used in the preparation layer was performed by Raman microscopy.
---
paper_title: Application of the laser ablation for conservation of historical paper documents
paper_content:
Abstract Laser ablation was applied for surface cleaning and spectroscopic diagnostics of historical paper documents and model samples in the framework of the conservation projects. During cleaning, the spectra of ablation products were recorded by means of the LIBS technique, which allowed for nearly non-destructive identification of surface layers such as contaminants, substrate and pigments. For consecutive laser pulses, a strong decrease of the intensities of the emission lines of Ca, Na, K, Al and Fe ascribed to contaminants was observed. The effect was used for monitoring of the cleaning progress of stained paper. For surface cleaning and spectra excitation, a Q-switched Nd:YAG laser of 6 ns pulsewidth operating at wavelengths of 266, 355, 532, and 1064 nm and at fluences selected from the range 0.3–0.9 J/cm^2 was applied. The ablation parameters were optimized in agreement with the literature and the results were confirmed by surface studies and testing of the mechanical and chemical properties, and also by the response to the ageing process of the paper substrate. In the case of the model paper irradiated in the UV range at 266 and 355 nm, a visual inspection revealed local damage of the cellulose fibers accompanied by a decrease of the mechanical strength of the substrate. The effect was more pronounced after artificial ageing. The best results were obtained for samples irradiated at 532 nm and at laser fluence below the damage threshold of 0.6 J/cm^2, which is in agreement with the literature.
---
paper_title: An example of the complementarity of laser-induced breakdown spectroscopy and Raman microscopy for wall painting pigments analysis
paper_content:
layer-by-layer analysis through a precise laser ablation of the sample. This work deals with the behavior of pigments after a LIBS analysis, by trying to identify the compounds before and after the laser shot. Six commercial pigments prepared with the fresco technique were investigated: ultramarine blue, red lead, charcoal, a yellow and a red ochre, and a green earth. Raman spectra, acquired on the sample surface and in the crater induced by LIBS analysis, were compared. The results show that these pigments are well recognized after a LIBS measurement. The analysis of green earth illustrates that the combination of these two techniques gives complete information from a sample.
---
paper_title: A comparison of nanosecond and femtosecond laser-induced plasma spectroscopy of brass samples
paper_content:
The ablation of brass samples in argon shield gas by 170 fs and 6 ns laser pulses has been studied by optical emission spectroscopy of the evolving plasmas. Differences observed in the temporal behavior of the spectral line intensities are explained by the shielding effect of the Ar plasma for ns-pulses and the free expansion of the plasma of the ablated material in the case of fs-pulses. Brass samples with different Zn/Cu ratios were used. Different types of crater formation mechanisms in the case of ns- and fs-pulses were observed. At 40 mbar argon pressure the thresholds of ablation were found to be about 0.1 and about 1.5 J cm^-2 for fs- and ns-pulses, respectively. With an internal standardization of zinc to copper it is possible to correct for differences in the ablation rates and to obtain linear calibration curves. For optimum experimental conditions, narrower confidence intervals for the determination of unknown concentrations were found in the case of fs-pulses. Within the range of the laser intensities used, no dependence of the Zn/Cu line intensity ratio on the number of laser pulses applied to the same ablation spot was observed, neither for fs- nor for ns-pulses, which is interpreted as the absence of fractional vaporization.
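The internal standardization of zinc to copper described above can be illustrated with a short sketch; the intensities, concentrations and line choices below are invented placeholders, not values from the study.
```python
# Sketch of internal standardization: normalizing the Zn signal to the Cu signal
# before building a linear calibration curve. All numbers are illustrative.
import numpy as np

zn_concentration = np.array([5.0, 10.0, 20.0, 30.0, 40.0])   # wt.% Zn in reference brasses
i_zn = np.array([120., 230., 480., 700., 950.])               # Zn line intensity (a.u.)
i_cu = np.array([900., 880., 910., 895., 905.])               # Cu reference line intensity (a.u.)

# The ratio corrects for shot-to-shot differences in ablated mass.
ratio = i_zn / i_cu
slope, intercept = np.polyfit(zn_concentration, ratio, 1)     # linear calibration curve

def predict_zn(i_zn_unknown, i_cu_unknown):
    """Invert the calibration curve for an unknown brass sample."""
    return ((i_zn_unknown / i_cu_unknown) - intercept) / slope

print(predict_zn(350.0, 900.0))
```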
---
paper_title: Laser-induced breakdown spectroscopy (LIBS) in archaeological science—applications and prospects
paper_content:
Laser-induced breakdown spectroscopy (LIBS) has emerged in the past ten years as a promising technique for analysis and characterization of the composition of a broad variety of objects of cultural heritage including painted artworks, icons, polychromes, pottery, sculpture, and metal, glass, and stone artifacts. This article describes in brief the basic principles and technological aspects of LIBS, and reviews several test cases that demonstrate the applicability and prospects of LIBS in the field of archaeological science.
---
paper_title: LIBS-spectroscopy for monitoring and control of the laser cleaning process of stone and medieval glass
paper_content:
Abstract On-line monitoring or even closed-loop control is necessary to avoid over-cleaning in case the ablation process is not self-limiting. Therefore, laser-induced breakdown spectroscopy (LIBS) was used. Basic investigations were carried out on original sandstone samples (Elbsandstein) with strong encrustations as well as medieval stained glass samples (13th century from Cologne Cathedral). The spectroscopic study has shown that the plasma emission can be used for determination of the elemental composition of the ablated material. The plasma was initiated by 248-nm pulses of a KrF excimer laser (30 ns FWHM). For the spectroscopic analysis, a grating spectrograph in combination with an optical multichannel analyser was used. For the glass and stone samples we obtained a continual alteration of the LIBS spectrum (vanishing of peaks and generation of new element peaks) during the removal process. Thus, certain element peaks can be used to distinguish between the encrustation layer and the valuable underlying material. To show the potential of LIBS we designed an experimental laser cleaning set-up including closed-loop LIBS control and demonstrated successful automatic cleaning of an original glass fragment.
---
paper_title: Quantitative laser induced breakdown spectroscopy analysis of ancient marbles and corrections for the variability of plasma parameters and of ablation rate
paper_content:
White marble samples from ancient quarries have been analyzed by Laser Induced Breakdown Spectroscopy (LIBS) both on the bulk material and on surface encrustations. With the aim of achieving quantitative results by LIBS, until now not reported for marble materials, calibration standards with CaCO3 matrices doped with certified soils were prepared. Very different emission intensities and plasma parameters were observed on the standards and on natural marbles. In order to compare such different spectra, a method for data analysis was developed, which takes into account the variability of the ablation rate, plasma temperature and electron density. It was experimentally demonstrated that the ablated volume is well correlated to the emission intensity of the plasma continuum for a wide range of laser energies. LIBS signal normalization on the adjacent continuum level, together with the introduction of correction factors dependent on plasma parameters, allowed the measurement of concentrations of both major and trace elements in marbles. The analytical procedure was validated by comparative SEM-EDX and ICP-OES measurements. Quantitative LIBS analyses were also performed during encrustation removal and could be applied to control laser-cleaning processes. The quantification of metal contents in the encrustations supported the occurrence of sulfates in the outer layers exposed to environmental agents via a catalytic process.
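A minimal sketch of normalizing a LIBS line signal to the adjacent continuum level, one of the corrections mentioned in this abstract; the wavelengths, window widths and synthetic spectrum are illustrative assumptions rather than the authors' processing parameters.
```python
# Sketch of continuum normalization of a LIBS emission line.
import numpy as np

def continuum_normalized_intensity(wavelength, intensity, line_lo, line_hi, cont_width=0.5):
    """Integrate a line and divide by the continuum estimated just outside it."""
    in_line = (wavelength >= line_lo) & (wavelength <= line_hi)
    left = (wavelength >= line_lo - cont_width) & (wavelength < line_lo)
    right = (wavelength > line_hi) & (wavelength <= line_hi + cont_width)
    continuum = np.median(np.concatenate([intensity[left], intensity[right]]))
    line_area = np.trapz(intensity[in_line] - continuum, wavelength[in_line])
    return line_area / continuum   # continuum-normalized signal

# Synthetic spectrum: flat continuum plus a Gaussian line at 393.4 nm (Ca II, illustrative).
wl = np.linspace(390.0, 396.0, 1200)
spec = 50.0 + 400.0 * np.exp(-0.5 * ((wl - 393.4) / 0.05) ** 2)
print(continuum_normalized_intensity(wl, spec, 393.2, 393.6))
```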
---
paper_title: Controlled UV laser cleaning of painted artworks: a systematic effect study on egg tempera paint samples
paper_content:
Abstract The Cooperative Research project “Advanced workstation for controlled laser cleaning of artworks” (ENV4-CT98-0787) has yielded important information on the application of UV laser cleaning to paint materials. In the project, in which conservators, researchers and engineers participated, the viability of the laser technique as an additional tool in present conservation practice was investigated. The research was pointed at the definition of the boundary conditions in which laser cleaning can be safely applied. It included a systematic effect study of tempera paint systems. Physical and chemical changes, induced by exposure to UV (248 nm) excimer laser light under various conditions, were evaluated. In parallel, an innovative laser cleaning tool was developed, allowing accurate and controlled removal of superficial layers from paint materials. Both aspects of the project are presented. The presentation of the research focuses on the integration of the results from various analytical techniques, yielding valuable information on the immediate and long-term effects of UV laser radiation on the paint materials. The analytical techniques include colorimetry, spectroscopic techniques, mass spectrometry and profilometry, as well as thermographic and UV transmission measurements. Furthermore, the application of the laser workstation on various painted artworks is shown. This includes the gradual removal of varnish layers and the recovery of original paint colour in fire-damaged paintings.
---
paper_title: ns- and fs-LIBS of copper-based-alloys: A different approach
paper_content:
A self-calibrated analytical technique, based on plasmas induced by either 250 fs or 7 ns laser pulses, is presented. This approach is comparable to other calibration-free methods based on the LTE assumption. In order to apply this method to very different laser pulse durations, partial local thermodynamic equilibrium (p-LTE) has been considered within the energy range of 30,000–50,000 cm^-1. In order to obtain the neutral species densities, the detected emission line intensities of the plasma species have been treated together with the experimentally evaluated background black-body (Planck-like) emission distribution. For validation of the method, three certified copper-based-alloy standards were employed and the amounts of their minor components (Ni, Pb and Sn) were determined. As a result, this standardless method, independently of the laser pulse duration, provides good quantitative analysis, and, consequently, the composition of the emitting species in the plasma plume is not affected by the laser pulse width.
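A Boltzmann-plot temperature estimate is the kind of LTE-based step that underlies such calibration-free/self-calibrated approaches; the sketch below uses invented line data (upper-level energies, degeneracies, transition probabilities and intensities), not values from the paper.
```python
# Sketch of a Boltzmann-plot excitation-temperature estimate from a set of
# emission lines of one species, assuming (p-)LTE. Line data are placeholders.
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

# One row per line: intensity I (a.u.), lambda (nm), A_ki (s^-1), g_k, E_k (eV)
lines = np.array([
    [1500., 510.6, 2.0e6, 4, 3.82],
    [ 900., 515.3, 6.0e7, 4, 6.19],
    [ 600., 521.8, 7.5e7, 6, 6.19],
    [ 300., 529.3, 1.1e7, 8, 7.74],
])
I, lam, A, g, E = lines.T

# Boltzmann plot: ln(I * lambda / (A * g)) versus E_k has slope -1 / (k_B * T).
y = np.log(I * lam / (A * g))
slope, intercept = np.polyfit(E, y, 1)
T = -1.0 / (k_B * slope)
print(f"estimated excitation temperature: {T:.0f} K")
```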
---
paper_title: Laser Induced Breakdown Spectroscopy methodology for the analysis of copper-based-alloys used in ancient artworks☆
paper_content:
Abstract In this paper Laser Induced Breakdown Spectroscopy has been applied for determining the elemental composition of a set of ancient bronze artworks coming from the archaeological site of Minervino Murge in Southern Italy (dated to around the 7th century B.C.). Before carrying out the analysis of the archaeological samples, the characterization of the analytical technique has been accomplished by investigating the validity of the typical assumptions adopted in LIBS, such as local thermodynamic equilibrium, congruent ablation and plasma homogeneity. For this purpose, two different laser pulse durations, 7 ns and 350 fs, have been used. We have focused our attention on LIBS analysis of bronze standards by considering and discussing the bases of both the methodology and the analytical approach to be followed for the analysis of ancient copper-based-alloy samples. Unexpectedly, regardless of the laser pulse duration, the LIBS technique has shown, when an adequate approach to the features of the emitting plasma is adopted, that its peculiarities are preserved, so that a fast analysis of ancient copper-based alloys can be achieved. After verifying the suitability of the methodology, it has been possible to fulfill the typical assumptions of the LIBS calibration curve method and to use it for the analysis of ancient bronze artworks.
---
paper_title: Compositional analysis of Hispanic Terra Sigillata by laser-induced breakdown spectroscopy
paper_content:
Abstract Laser induced breakdown spectroscopy (LIBS) has been applied for the analysis of Roman pottery Hispanic Terra Sigillata dating back to the 1st–5th century A.D. from two important ceramic production centers in Spain. For each sample, several examinations were performed on slip and body, providing the data necessary to draw depth profiles of the contents of various elements. In all the cases investigated, the amount of some elements such as calcium and iron, and the presence of others such as silicon and aluminum, showed the differences existing between slip and body in these ancient ceramics in relation to their region and period of production. In addition, complementary analyses were carried out with scanning electron microscopy linked with energy dispersive X-ray microanalysis (SEM/EDX) to measure the thickness of the slip and to verify the chemical results.
---
paper_title: Effects of crater development on fractionation and signal intensity during laser ablation inductively coupled plasma mass spectrometry
paper_content:
Abstract The effects of crater development on ICP-MS signal intensities and elemental fractionation have been presented in this work. Craters formed after repetitive 266-nm Nd:YAG laser ablation with 1.0-mJ pulses had a cone-like shape. The laser ablation rate (ng/s) depended on the laser irradiance (laser pulse energy per unit time and unit area), decreasing as irradiance increased. In contrast, the particle entrainment/transport efficiency did not significantly change with irradiance. As the crater aspect ratio (depth/diameter) increased above a threshold value of six, the Pb/U elemental ratio departed from the stoichiometric value. However, good stoichiometry of the ablated mass could be achieved when experimental conditions were carefully selected. The exact mechanism of how crater development affects fractionation is not well understood. In this work, actual irradiance was introduced instead of a nominal value. Actual irradiance decreased as the crater deepened due to changes of the effective area sampled by the laser beam.
---
paper_title: LIBS analysis of geomaterials: Geochemical fingerprinting for the rapid analysis and discrimination of minerals
paper_content:
Abstract Laser-induced breakdown spectroscopy (LIBS) is a simple atomic emission spectroscopy technique capable of real-time, essentially non-destructive determination of the elemental composition of any substance (solid, liquid, or gas). LIBS, which is presently undergoing rapid research and development as a technology for geochemical analysis, has attractive potential as a field tool for rapid man-portable and/or stand-off chemical analysis. In LIBS, a pulsed laser beam is focused such that energy absorption produces a high-temperature microplasma at the sample surface resulting in the dissociation and ionization of small amounts of material, with both continuum and atomic/ionic emission generated by the plasma during cooling. A broadband spectrometer-detector is used to spectrally and temporally resolve the light from the plasma and record the intensity of elemental emission lines. Because the technique is simultaneously sensitive to all elements, a single laser shot can be used to track the spectral intensity of specific elements or record the broadband LIBS emission spectra, which are unique chemical ‘fingerprints’ of a material. In this study, a broad spectrum of geological materials was analyzed using a commercial bench-top LIBS system with broadband detection from ∼200 to 965 nm, with multiple single-shot spectra acquired. The subsequent use of statistical signal processing approaches to rapidly identify and classify samples highlights the potential of LIBS for ‘geochemical fingerprinting’ in a variety of geochemical, mineralogical, and environmental applications that would benefit from either real-time or in-field chemical analysis.
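A hedged sketch of the "geochemical fingerprinting" idea described above, i.e. statistical processing of broadband spectra for sample discrimination: here PCA followed by a simple classifier on synthetic spectra. The channel count, peak positions and classifier choice are illustrative assumptions, not the authors' processing chain.
```python
# Sketch of PCA-based fingerprinting/classification of broadband LIBS spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_channels = 2048  # broadband spectrometer channels (~200-965 nm, illustrative)

def fake_spectrum(peak_channels, n=30):
    base = rng.normal(1.0, 0.05, size=(n, n_channels))
    for c in peak_channels:
        base[:, c] += rng.normal(5.0, 0.5, size=n)
    return base

carbonate = fake_spectrum([300, 800, 1500])   # e.g. Ca-dominated mineral (synthetic)
silicate = fake_spectrum([450, 900, 1700])    # e.g. Si/Al-dominated mineral (synthetic)
X = np.vstack([carbonate, silicate])
y = np.array([0] * 30 + [1] * 30)

model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```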
---
paper_title: TRACE ELEMENT INPUTS INTO SOILS BY ANTHROPOGENIC ACTIVITIES AND IMPLICATIONS FOR HUMAN HEALTH
paper_content:
Trace element definition and functions, and inputs into soils from the most important anthropogenic sources, related and not related to agricultural practices, of general and local or incidental concern, are discussed in the first part of this review. Trace element inputs include those from commercial fertilizers, liming materials and agrochemicals, sewage sludges and other wastes used as soil amendments, irrigation waters, and atmospheric depositions from urban, industrial, and other sources. In the second part of the review, the most important ascertained effects of soil trace elements on human health are presented. The possible relations found between some specific soil trace elements, such as Cd, Se, As and others, and cancer incidence and mortality, and diffusion of other important human diseases are reviewed. Brief conclusions and recommendations conclude this review.
---
paper_title: Mapping of lead, magnesium and copper accumulation in plant tissues by laser-induced breakdown spectroscopy and laser-ablation inductively coupled plasma mass spectrometry
paper_content:
Laser-Induced Breakdown Spectroscopy (LIBS) and Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) were utilized for mapping the accumulation of Pb, Mg and Cu with a resolution of up to 200 μm over areas of up to cm × cm of sunflower (Helianthus annuus L.) leaves. The results obtained by LIBS and LA-ICP-MS are compared with the outcomes from Atomic Absorption Spectrometry (AAS) and Thin-Layer Chromatography (TLC). It is shown that laser-ablation based analytical methods can substitute or supplement these techniques, mainly in cases when a fast multi-elemental mapping of a large sample area is needed.
---
paper_title: Quantitative analysis of arsenic in mine tailing soils using double pulse-laser induced breakdown spectroscopy
paper_content:
Abstract Double pulse-laser induced breakdown spectroscopy (DP-LIBS) was used to determine arsenic (As) concentration in 16 soil samples collected from 5 different mine tailing sites in Korea. We showed that the use of a double pulse laser led to enhancements of the signal intensity (by 13% on average) and the signal-to-noise ratio of As emission lines (by 165% on average), with smaller relative standard deviation compared to the single pulse laser approach. We believe this occurred because the second laser pulse in the rarefied atmosphere produced by the first pulse led to an increase of the plasma temperature and of the populations of excited levels. An internal standardization method using a Fe emission line provided a better correlation and sensitivity between As concentration and the DP-LIBS signal than any other element used. Fe is known as one of the major components in the current soil samples, and its concentration did not vary substantially. The As concentration determined by DP-LIBS was compared with that obtained by atomic absorption spectrometry (AAS) to evaluate the current LIBS system. They are correlated with a correlation coefficient of 0.94. The As concentration by DP-LIBS was underestimated in the high concentration range (>1000 mg-As/kg). The loss of sensitivity that occurred at high concentrations could be explained by self-absorption in the generated plasma.
---
paper_title: Evaluation of laser induced breakdown spectroscopy for cadmium determination in soils
paper_content:
Abstract Cadmium is known to be a toxic agent that accumulates in living organisms and presents a high toxicity potential over a lifetime. Efforts towards the development of methods for microanalysis of environmental samples, including the determination of this element by graphite furnace atomic absorption spectrometry (GFAAS), inductively coupled plasma optical emission spectrometry (ICP OES), and inductively coupled plasma-mass spectrometry (ICP-MS) techniques, have been increasing. Laser induced breakdown spectroscopy (LIBS) is an emerging technique dedicated to microanalysis, and there is a lack of information dealing with the determination of cadmium. The aim of this work is to demonstrate the feasibility of LIBS for cadmium detection in soils. The experimental setup was designed using a Q-switched laser (Nd:YAG, 10 Hz, λ = 1064 nm) and the emission signals were collimated by lenses into an optical fiber coupled to a high-resolution intensified charge-coupled device (ICCD)-echelle spectrometer. Samples were cryogenically ground and thereafter pelletized before LIBS analysis. Best results were achieved by exploring a test portion (i.e. sampling spots) with a larger surface area, which contributes to diminishing the uncertainty due to element-specific microheterogeneity. Calibration curves for cadmium determination were obtained using certified reference materials. The metrological figures of merit indicate that LIBS can be recommended for screening of cadmium contamination in soils.
---
paper_title: Determination of heavy metals in soils by Laser Induced Breakdown Spectroscopy
paper_content:
Laser Induced Breakdown Spectroscopy (LIBS) is a recent analytical technique that is based upon the measurement of emission lines generated by atomic species close to the surface of the sample, thus allowing their chemical identification. In this work, the LIBS technique has been applied to the determination of total contents of heavy metals in a number of reference soil samples. In order to validate the technique, LIBS data were compared with data obtained on the same soil samples by application of conventional Inductively Coupled Plasma (ICP) spectroscopy. The partial agreement obtained between the two sets of data suggested the potential applicability of the LIBS technique to the measurement of heavy metals in soils.
---
paper_title: Utilization of laser induced breakdown spectroscopy for investigation of the metal accumulation in vegetal tissues
paper_content:
We report on the development and implementation of an analytical methodology for investigating elemental accumulation in different layers within plant leaves, with in-situ spatially resolved mapping, exploiting the LIBS technique. The spectrochemical analysis of lead-doped leaf samples is demonstrated in order to develop a real-time identification procedure that complements other analytical techniques which do not lend themselves to spatially resolved analysis. Our findings suggest that, with elevated levels of Pb within the plants, the transport and storage of some nutrient elements are changed.
---
paper_title: Towards quantitative laser-induced breakdown spectroscopy analysis of soil samples ☆
paper_content:
A quantitative analysis of chromium in soil samples is presented. Different emission lines related to chromium are studied in order to select the best one for quantitative features. Important matrix effects are demonstrated from one soil to the other, preventing any prediction of concentration in different soils on the basis of a univariate calibration curve. Finally, a classification of the LIBS data based on a series of Principal Component Analyses (PCA) is applied to a reduced dataset of selected spectral lines related to the major chemical elements in the soils. LIBS data of heterogeneous soils appear to be widely dispersed, which leads to a reconsideration of the sampling step in the analysis process.
---
paper_title: Measurement Of Nutrients In Green House Soil With Laser Induced Breakdown Spectroscopy
paper_content:
Laser-induced breakdown spectroscopy (LIBS) has been applied for the determination of nutrients in greenhouse soil samples. We determined appropriate spectral signatures of vital nutrients and calibrated the method to measure the nutrients in a naturally fertilized plot, cultivated with tomato and cucumber plants. From the calibration curves we predicted the concentrations of important nutrients such as Ca, K, P, Mg, Fe, S, Ni and Ba in the soil. Our measurements proved that the LIBS method rapidly and efficiently measures soil nutrients, with excellent detection limits of 12, 9, 7, 9, 7, 10, 8 and 12 mg/kg for Ca, K, P, Mg, Fe, S, Ni and Ba, respectively, with a precision of 2%. The unique features of LIBS for rapid sample analysis demonstrated by this study suggest that this method offers promise for precision measurements of soil nutrients, as compared to conventional methods, in a short span of time.
---
paper_title: Studying the enhanced phytoremediation of lead contaminated soils via laser induced breakdown spectroscopy
paper_content:
Abstract Phytoremediation, popularly known as 'green clean technology', is a new promising technology used for the removal of toxic contaminants, such as heavy metals (HMs), from the environment using suitable plants. This concept is increasingly being adopted as it is a cost-effective and environmentally friendly alternative to traditional methods of treatment. This study was focused on using scented geranium, Pelargonium zonale, as an accumulator or hyperaccumulator plant for natural lead extraction from soil artificially contaminated with different Pb concentrations (0, 2000, 5000, 7000 ppm). Utilization of EDTA as a chelator, which would permit higher metal availability and uptake by the roots of the tested plants, was also tested. Laser Induced Breakdown Spectroscopy (LIBS) was used to follow up Pb concentrations in both the soil and the green harvestable parts of the plants, known as shoots, before, during and after lead addition to the soil. LIBS measurements were conducted in a microdestructive way by focusing a high energy Nd:YAG laser, emitting at 1064 nm, on plant and soil samples previously dried, homogenized and pressed into pellets. The emitted LIBS spectra were acquired by a gated CCD after dispersion on a monochromator and analyzed to retrieve relative concentrations of the selected HM both in the soil and in the plants as a function of the time after doping and eventual chelator addition. EDTA was found to enhance Pb uptake from the soil, which increased with time, and good correlation was found between LIBS and ICP-OES results for the spectrochemical analysis of plant tissues.
---
paper_title: Enrichment and depletion of major and trace elements, and radionuclides in ombrotrophic raw peat and corresponding humic acids
paper_content:
Abstract An ombrotrophic peat core (10 × 10 × 81 cm) was collected from Etang de la Gruere (Switzerland) and divided into 27 slices of 3 cm. Humic acids (HAs) were extracted from each slice after freeze-drying and fine milling. All raw peat and HA samples were analyzed using an energy-dispersive miniprobe X-ray fluorescence multielement analyser (EMMA-XRF), fluorescence spectroscopy and low background γ-spectrometry (LB γ-spec). The abundance and distribution of major and trace elements and radionuclides were measured in raw peat and the corresponding HAs to evaluate their affinity for organic ligands and establish which organic fraction might affect their behaviour along the profile. Raw peat samples show higher concentrations of Fe, Mn, Pb, Sr, Ti and Ca in comparison to the corresponding HAs, while Cu is more abundant in HAs. Zinc presents greater concentrations in raw peat in the first 24 cm of depth. All elements, with the exception of Fe, seem to be bound stably to HAs, although to different extents. The activity of 241Am is limited to the raw peat of a specific section of the profile, while that of 137Cs is recorded both in HAs and raw peat, even below the depth one would expect, suggesting the scarce mobility of the former and a different impact of these radionuclides on the environment.
---
paper_title: Monitoring and assessment of toxic metals in Gulf War oil spill contaminated soil using laser-induced breakdown spectroscopy
paper_content:
Laser-induced breakdown spectroscopy (LIBS) was applied for the detection of toxic metals in oil spill contaminated soil (OSCS). The OSCS samples were collected from Khursania, Saudi Arabia, along the coast of the Persian Gulf, an area exposed to oil spills during the 1991 Gulf War. Environmentally important elements such as aluminum, magnesium, calcium, chromium, titanium, strontium, iron, barium, sodium, potassium, zirconium and vanadium have been detected in the contaminated soil. Optimal experimental conditions for the analysis were investigated. The LIBS system was calibrated using standard samples containing these trace elements. The LIBS results were compared with the results obtained using inductively coupled plasma emission spectroscopy (ICP). The concentrations of some elements (Ba and Cr) were found to be higher than permissible safe limits. Health risks associated with exposure to such toxic elements are also discussed.
---
paper_title: Analysis of lead and sulfur in environmental samples by double pulse laser induced breakdown spectroscopy
paper_content:
In the present work, a model of a double pulse laser-induced breakdown spectroscopy (LIBS) spectrometer has been developed, and results from two different applications of double pulse LIBS for solving problems of environmental interest are presented. In one case, laser induced breakdown spectroscopy has been applied to the determination of heavy and toxic metals (lead) in soil samples. In the second case, laser induced breakdown spectroscopy was used in preliminary experiments for the detection of the sulfur content in coal, and, on the basis of spectral features, ways to improve the sensitivity of laser induced breakdown spectroscopy detection of sulfur are proposed. The detection limit for lead in soil was estimated to be approximately 20 ppm, which is lower than the regulatory standards for the presence of lead in soil.
---
paper_title: Measuring Total Soil Carbon with Laser-Induced Breakdown Spectroscopy (LIBS)
paper_content:
Improving estimates of carbon inventories in soils is currently hindered by lack of a rapid analysis method for total soil carbon. A rapid, accurate, and precise method that could be used in the field would be a significant benefit to researchers investigating carbon cycling in soils and dynamics of soil carbon in global change processes. We tested a new analysis method for predicting total soil carbon using laser-induced breakdown spectroscopy (LIBS). We determined appropriate spectral signatures and calibrated the method using measurements from dry combustion of a Mollisol from a cultivated plot. From this calibration curve we predicted carbon concentrations in additional samples from the same soil and from an Alfisol collected in a semiarid woodland and compared these predictions with additional dry combustion measurements. Our initial tests suggest that the LIBS method rapidly and efficiently measures soil carbon with excellent detection limits (∼300 mg/kg), precision (4-5%), and accuracy (3-14%). Initial testing shows that LIBS measurements and dry combustion analyses are highly correlated (adjusted r^2 = 0.96) for soils of distinct morphology, and that a sample can be analyzed by LIBS in less than one minute. The LIBS method is readily adaptable to a field-portable instrument, and this attribute, in combination with rapid and accurate sample analysis, suggests that this new method offers promise for improving measurement of total soil carbon. Additional testing of LIBS is required to understand the effects of soil properties such as texture, moisture content, and mineralogical composition (i.e., silicon content) on LIBS measurements.
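A minimal sketch of calibrating a LIBS carbon signal against dry-combustion reference values and reporting an adjusted r^2, as quoted above; all numbers and the line choice are invented for illustration.
```python
# Sketch of a LIBS-vs-reference-method calibration with adjusted r^2.
import numpy as np

carbon_ref = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])    # % C by dry combustion
libs_signal = np.array([0.9, 1.8, 2.6, 3.7, 4.4, 5.5, 6.2, 7.1])    # C line intensity (a.u.)

n, p = len(carbon_ref), 1
slope, intercept = np.polyfit(libs_signal, carbon_ref, 1)
predicted = slope * libs_signal + intercept

ss_res = np.sum((carbon_ref - predicted) ** 2)
ss_tot = np.sum((carbon_ref - carbon_ref.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
print(f"r^2 = {r2:.3f}, adjusted r^2 = {adj_r2:.3f}")
```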
---
paper_title: Laser-induced breakdown spectroscopy for the environmental determination of total carbon and nitrogen in soils.
paper_content:
Soils from various sites have been analysed with the laser-induced breakdown spectroscopy (LIBS) technique for total elemental determination of carbon and nitrogen. Results from LIBS have been correlated to a standard laboratory-based technique (sample combustion), and strong linear correlations were obtained for determination of carbon concentrations. The LIBS technique was used on soils before and after acid washing, and the technique appears to be useful for the determination of both organic and inorganic soil carbon. The LIBS technique has the potential to be packaged into a field-deployable instrument.
---
paper_title: Real time and in situ determination of lead in road sediments using a man-portable laser-induced breakdown spectroscopy analyzer
paper_content:
Abstract In situ, real-time levels of lead in road sediments have been measured using a man-portable laser-induced breakdown spectroscopy analyzer. The instrument consists of a backpack and a probe housing a Q-switched Nd:YAG laser head delivering 50 mJ per pulse at 1064 nm. Plasma emission was collected and transmitted via fiber optic to a compact cross Czerny-Turner spectrometer equipped with a linear CCD array allocated in the backpack together with a personal computer. The limit of detection (LOD) for lead and the precision measured in the laboratory were 190 μg g^-1 (calculated by the 3σ method) and 9% R.S.D. (relative standard deviation), respectively. During the field campaign, averaged Pb concentrations in the sediments ranged from 480 μg g^-1 to 660 μg g^-1 depending on the inspected area, i.e. the entrance, the central part and the exit of the tunnel. These results were compared with those obtained with flame atomic absorption spectrometry (flame-AAS). The relative error, expressed as [100(LIBS result − flame AAS result)/(LIBS result)], was approximately 14%.
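The figures of merit quoted above (3σ limit of detection, %RSD precision, and the relative-error expression) can be sketched as follows; all signals and concentrations below are illustrative placeholders, not the study's data.
```python
# Sketch of LOD (3-sigma method), precision as %RSD, and relative error vs a reference method.
import numpy as np

pb_conc = np.array([0., 250., 500., 1000., 2000.])        # µg/g Pb in calibration standards
pb_signal = np.array([5., 58., 112., 220., 430.])         # Pb line intensity (a.u.)
blank_replicates = np.array([4.2, 5.1, 4.8, 5.5, 4.6, 5.0, 4.9, 5.3, 4.4, 5.2])

slope, intercept = np.polyfit(pb_conc, pb_signal, 1)
lod = 3.0 * np.std(blank_replicates, ddof=1) / slope       # 3-sigma limit of detection

replicate_signals = np.array([150., 162., 148., 171., 157.])
rsd = 100.0 * np.std(replicate_signals, ddof=1) / np.mean(replicate_signals)

libs_result, reference_result = 560.0, 640.0               # e.g. LIBS vs flame-AAS, invented
relative_error = 100.0 * (libs_result - reference_result) / libs_result

print(f"LOD = {lod:.0f} µg/g, RSD = {rsd:.1f} %, relative error = {relative_error:.1f} %")
```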
---
paper_title: Development of a mobile system based on laser-induced breakdown spectroscopy and dedicated to in situ analysis of polluted soils ☆
paper_content:
Principal Components Analysis (PCA) is successfully applied to the full laser-induced breakdown spectroscopy (LIBS) spectra of soil samples, defining classes according to the concentrations of the major elements. The large variability of the LIBS data is related to the heterogeneity of the samples, and the representativeness of the data is finally discussed. Then, the development of a mobile LIBS system dedicated to the in-situ analysis of soils polluted by heavy metals is described. Based on the use of ten-meter long optical fibers, the mobile system allows remote measurements. Finally, the laser-assisted drying process, studied with a customized laser, was not retained as a way to overcome the problem of moisture.
---
paper_title: Double pulse, calibration-free laser-induced breakdown spectroscopy: A new technique for in situ standard-less analysis of polluted soils
paper_content:
Laser-induced breakdown spectroscopy (LIBS) is a promising technique for in situ environmental analysis. The potential of this technique for accurate quantitative analysis could be greatly improved using an innovative experimental setup – based on the use of two laser pulses suitably retarded – and analyzing the results with a standard-less procedure which overcomes the problems related to matrix effects. A new mobile instrument for soil analysis, developed at the Applied Laser Spectroscopy Laboratory in Pisa, is presented, and some experimental results are given.
---
paper_title: Analysis of environmental lead contamination: comparison of LIBS field and laboratory instruments
paper_content:
Abstract The Army Research Office of the Army Research Laboratory recently sponsored the development of a commercial laser-induced breakdown spectroscopy (LIBS) chemical sensor that is sufficiently compact and robust for use in the field. This portable unit was developed primarily for the rapid, non-destructive detection of lead (Pb) in soils and in paint. In order to better characterize the portable system, a comparative study was undertaken in which the performance of the portable system was compared with a laboratory LIBS system at the Army Research Laboratory that employs a much more sophisticated laser and detector. The particular focus of this study was to determine how the field sensor's lower spectral resolution, lack of detector gating, and the multiple laser pulsing that occurs when using a passively Q-switched laser affect its performance. Surprisingly, both the laboratory and portable LIBS systems exhibited similar performance with regard to detection of Pb in both soils and paint over the 0.05–1% concentration levels. This implies that for samples similar to those studied here, high-temporal resolution time gating of the detector is not necessary for quantitative analysis by LIBS. It was also observed that the multiple pulsing of the laser did not have a significant positive or negative effect on the measurement of Pb concentrations. The alternative of using other Pb lines besides the strong 406-nm line was also investigated. No other Pb line was superior in strength to the 406-nm line for the latex paint and the type of soils used in the study, although the emission line at 220 nm in the UV portion of the spectrum holds potential for avoiding elemental interferences. These results are very encouraging for the development of lightweight, portable LIBS sensors that use less expensive and less sophisticated laser and detector components. The portable LIBS system was also field tested successfully at sites of documented Pb contamination on military installations in California and Colorado.
---
paper_title: Artificial neural network for Cu quantitative determination in soil using a portable Laser Induced Breakdown Spectroscopy system
paper_content:
Abstract Laser Induced Breakdown Spectroscopy (LIBS) is an advanced analytical technique for elemental determination based on direct measurement of the optical emission of excited species in a laser-induced plasma. In the realm of elemental analysis, LIBS has great potential to accomplish direct analysis independently of the physical state of the sample (solid, liquid or gas). Presently, LIBS is easily employed for qualitative analysis; nevertheless, quantitative analysis still requires some effort, since calibration represents a difficult issue. An artificial neural network (ANN) is a machine learning paradigm inspired by biological nervous systems. Recently, ANNs have been used in many applications, and their classification and prediction capabilities are especially useful for spectral analysis. In this paper an ANN was used as the calibration strategy for LIBS, aiming at Cu determination in soil samples. Spectra of 59 samples from a heterogeneous set of reference soil samples and their respective Cu concentrations were used for calibration and validation. Simple linear regression (SLR) and a wrapper approach were the two strategies employed to select a set of wavelengths for ANN learning. Cross validation was applied, following ANN training, for verification of prediction accuracy. The ANN showed good efficiency for Cu predictions despite the limitations of the portable instrumentation employed. The proposed method presented a limit of detection (LOD) of 2.3 mg dm^-3 of Cu and a mean squared error (MSE) of 0.5 for the predictions.
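A rough sketch of an ANN calibration of the kind described above: selected line intensities as inputs, Cu concentration as the target, with cross-validated MSE. The wavelengths, network size and synthetic data are assumptions for illustration only, not the authors' model.
```python
# Sketch of an ANN (MLP) calibration for LIBS with cross-validation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples = 59
cu_conc = rng.uniform(5, 200, n_samples)                       # reference Cu values (synthetic)
# Feature columns: intensities at a few assumed Cu lines plus an interfering matrix line.
X = np.column_stack([
    2.0 * cu_conc + rng.normal(0, 8, n_samples),               # e.g. Cu I 324.7 nm (synthetic)
    1.1 * cu_conc + rng.normal(0, 6, n_samples),               # e.g. Cu I 327.4 nm (synthetic)
    rng.normal(50, 5, n_samples),                              # matrix line, pure noise
])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
scores = cross_val_score(model, X, cu_conc, cv=5, scoring="neg_mean_squared_error")
print("cross-validated MSE:", -scores.mean())
```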
---
paper_title: On board LIBS analysis of marine sediments collected during the XVI Italian campaign in Antarctica
paper_content:
Abstract The Laser-induced Breakdown Spectroscopy technique was applied on board the R/V Italica during the XVI Antarctic campaign (2000–2001) to carry out elemental chemical analysis of marine sediments collected using different sampling systems. To this end, a compact system was built, suitable for operation also in the presence of mechanical vibrations induced by the ship motion. Qualitative and quantitative analyses were performed on dried samples, without any further pre-treatment. Qualitative analyses have shown similar elemental composition among the different collected sediments, except for significant differences in the case of rock fragments and a manganese nodule. The latter also contains some heavy metals that in sediment layers were detected only in traces. The methodology to retrieve relative or absolute elemental concentrations in heterogeneous samples has been optimized and is scarcely sensitive to variations of sediment physical properties with depth, and to experimental parameters such as laser defocusing because of surface irregularities, and laser energy fluctuations. The relative distribution of the major elemental constituents, of both bio-organic and mineral origin, was measured as a function of sediment depth. Measurements, once limited to specific spectral sections, and data analyses are fast and very reproducible. Most of the elements show a gradually varying distribution along the sampled core, except for silicon and barium, whose steep decrease with depth is strongly related to their biogenic origin. Quantitative LIBS analyses were performed on a limited number of samples and the results reported here are comparable to the certified element contents in a reference sample of Antarctic sediments.
---
paper_title: Design, construction and assessment of a field-deployable laser-induced breakdown spectrometer for remote elemental sensing
paper_content:
A field-deployable laser-induced breakdown spectrometer for measurements in the hundreds of meters range has been presented. The system is capable of elemental analysis with no previous preparation and in near real time, with the only requirement of a free line-of-sight between the instrument and the sample. Main factors influencing LIBS performance at stand-off distances are outlined. LIBS signal is shown to depend on range of analysis, peak power, beam quality, laser wavelength and optics dimensions. A careful control of focusing conditions has been shown to be of importance to avoid interferences from air breakdown by the stand-off focused beam.
---
paper_title: Remote laser-induced plasma spectrometry for elemental analysis of samples of environmental interest
paper_content:
Remote laser-induced plasma spectrometry has been demonstrated to be a valuable analytical tool both for qualitative inspection and quantitative determinations on environmental samples. For this purpose, the pulsed radiation of a Q-switched Nd:YAG laser at 1064 nm has been used to produce a plasma on a remote sample, the light emission being collected under a coaxial open-path optical scheme, guided towards a spectrograph and then detected by an intensified CCD. A prospective study has been carried out to assess the suitability of the technique for the remote analysis of samples from a coastal scenario subjected to high industrial activity. All the measurements have been done in the laboratory. Among the main factors influencing the analytical results, sample moisture and salinity, sample orientation and surface heterogeneity have been identified. The presence and distribution of Fe and Cr as contaminants on the sample surface have been quantified and discussed for samples including soil, rocks, and vegetation. At a stand-off distance of 12 m from the spectrometer to the sample, limits of detection in the order of 0.2% have been obtained for both elements.
---
paper_title: LIBS-an efficient approach for the determination of Cr in industrial wastewater.
paper_content:
In the present paper, LIB spectra of different water samples having varying concentrations of Cr (certified reference material, CRM) have been recorded by using a liquid jet configuration (fabricated in our laboratory). Calibration curves for different atomic lines of Cr are compared and it is found that the calibration curve for the Cr II (283.5 nm) line is the best in terms of the limit of detection (LOD), which is found to be 30 ppm. This calibration curve has been used for quantification of Cr in wastewater collected from a Cr-electroplating industry, where the concentration of Cr is found to be 1500 ppm. Its removal using a biological system can be planned; this work is in progress.
---
paper_title: Detection of Toxic Metals in Waste Water from Dairy Products Plant Using Laser Induced Breakdown Spectroscopy
paper_content:
A Laser Induced Breakdown Spectroscopy (LIBS) system was developed locally for the determination of toxic metals in liquid samples, and the system was tested for the analysis of waste water collected from a dairy products processing plant. The plasma was generated by focusing a pulsed Nd:YAG laser at 1064 nm on the waste water samples. Optimal experimental conditions were evaluated for improving the sensitivity of our LIBS system through parametric dependence investigations. The LIBS results were then compared with the results obtained using a standard analytical technique, Inductively Coupled Plasma Emission Spectroscopy (ICP). The evaluation of the potential and capabilities of LIBS as a rapid tool for liquid sample analysis is discussed in brief.
---
paper_title: Double-pulse LIBS in bulk water and on submerged bronze samples
paper_content:
In this work laser-induced breakdown spectroscopy (LIBS) has been applied in bulk water using a double-pulse laser source. As in the case of former experiments in air, the use of the double-pulse technique allows for enhancing the line emission intensity and reducing the duration of the continuum spectrum, thus increasing the overall analytical performance of the technique. Tap water analysis of dissolved Na and Mg cations has been performed to investigate the capability of the technique, but the most significant results have been obtained in determining the composition of submerged bronze targets by laser ablation of their surface in seawater. When the plasma is generated by a double laser pulse, the ablated matter is strongly confined by the water vapor inside the cavitation bubble. The confinement of the plasma leads to higher values of the excitation temperature and maintains the conditions suitable for chemical analysis (homogeneity and LTE) for longer than in gaseous media. The double-pulse experiments performed directly in bulk water point out the suitability of the LIBS technique for real analytical applications in situ, such as water quality assessment and the investigation of irremovable submerged objects.
---
paper_title: Single Pulse-Laser Induced Breakdown Spectroscopy in aqueous solution
paper_content:
In this paper the flexibility of Laser Induced Breakdown Spectroscopy (LIBS) has been demonstrated for the analysis of water solutions. The plasma is generated directly in the bulk of a water solution by a Q-switched Nd:YAG laser (1064 nm). The emission signal of four different solutions has been studied: AlCl3, NaCl, CaCO3 and LiF. The basic mechanisms influencing the emission signal and the experimental expedients for the optimization of the detection mode have been pointed out.
---
paper_title: From single pulse to double pulse ns-Laser Induced Breakdown Spectroscopy under water : Elemental analysis of aqueous solutions and submerged solid samples
paper_content:
Abstract In this paper the developments of Laser Induced Breakdown Spectroscopy (LIBS) underwater have been reviewed to clear up the basic aspects of this technique as well as the main peculiarities of the analytical approach. The strong limits of Single-Pulse (SP) LIBS are discussed on the basis of plasma emission spectroscopy observations, while the fundamental improvements obtained by means of the Double-Pulse (DP) technique are reported from both the experimental and theoretical point of view in order to give a complete description of DP-LIBS in bulk water and on submerged solid targets. Finally a detailed description of laser–water interaction and laser-induced bubble evolution is reported to point out the effect of the internal conditions (radius, pressure and temperature) of the first pulse induced bubble on the second pulse producing plasma. The optimization of the DP-LIBS emission signal and the determination of the lower detection limit, in a set of experiments reported in the current scientific literature, clearly demonstrate the feasibility and the advantages of this technique for underwater applications.
---
paper_title: Double pulse laser-induced breakdown spectroscopy of bulk aqueous solutions at oceanic pressures: interrelationship of gate delay, pulse energies, interpulse delay, and pressure.
paper_content:
Laser-induced breakdown spectroscopy (LIBS) has been identified as an analytical chemistry technique suitable for field use. We use double pulse LIBS to detect five analytes (sodium, manganese, calcium, magnesium, and potassium) that are of key importance in understanding the chemistry of deep ocean hydrothermal vent fluids as well as mixtures of vent fluids and seawater. The high pressure aqueous environment of the deep ocean is simulated in the laboratory, and the key double pulse experimental parameters (laser pulse energies, gate delay time, and interpulse delay time) are studied at pressures up to 2.76 × 10^7 Pa. Each element is found to have a unique optimal set of parameters for detection. For all pressures and energies, a short (≤ 100 ns) gate delay is necessary. As pressure increases, a shorter interpulse delay is needed and the double pulse conditions effectively become single pulse for both the 1.38 × 10^7 Pa and the 2.76 × 10^7 Pa conditions tested. Calibration curves reveal the limits of detection of the elements (5000 ppm Mg, 500 ppm K, 500 ppm Ca, 1000 ppm Mn, and 50 ppm Na) in aqueous solutions at 2.76 × 10^7 Pa for the experimental setup used. When compared to our previous single pulse LIBS work for Ca, Mn, and Na, the use of double pulse LIBS for analyte detection in high pressure aqueous solutions did not improve the limits of detection.
---
paper_title: Laser-induced breakdown spectroscopy of bulk aqueous solutions at oceanic pressures: evaluation of key measurement parameters.
paper_content:
The development of in situ chemical sensors is critical for present-day expeditionary oceanography and the new mode of ocean observing systems that we are entering. New sensors take a significant amount of time to develop; therefore, validation of techniques in the laboratory for use in the ocean environment is necessary. Laser-induced breakdown spectroscopy (LIBS) is a promising in situ technique for oceanography. Laboratory investigations on the feasibility of using LIBS to detect analytes in bulk liquids at oceanic pressures were carried out. LIBS was successfully used to detect dissolved Na, Mn, Ca, K, and Li at pressures up to 2.76 × 10^7 Pa. The effects of pressure, laser-pulse energy, interpulse delay, gate delay, temperature, and NaCl concentration on the LIBS signal were examined. An optimal range of laser-pulse energies was found to exist for analyte detection in bulk aqueous solutions at both low and high pressures. No pressure effect was seen on the emission intensity for Ca and Na, and an increase in emission intensity with increased pressure was seen for Mn. Using the dual-pulse technique for several analytes, a very short interpulse delay resulted in the greatest emission intensity. The presence of NaCl enhanced the emission intensity for Ca, but had no effect on the peak intensity of Mn or K. Overall, increased pressure, the addition of NaCl to a solution, and temperature did not inhibit detection of analytes in solution and sometimes even enhanced the ability to detect the analytes. The results suggest that LIBS is a viable chemical sensing method for in situ analyte detection in high-pressure environments such as the deep ocean.
---
paper_title: Comparisons between LIBS and ICP/OES
paper_content:
In the framework of the development of new techniques, the ability of laser-induced breakdown spectroscopy (LIBS) to analyse remotely complex aqueous solutions was investigated. The jet configuration with a collimated gas stream was chosen because it appeared to be the most promising method for the LIBS probe, particularly in terms of sensitivity and repeatability. For emission collection, the echelle spectrometer offers a simultaneously recorded wavelength range from the UV to the near IR and is interesting for multielemental analysis for LIBS and also for inductively coupled plasma (ICP) optical emission spectroscopy (OES). The importance of parameters influencing the quantitative results of LIBS such as multispecies analysis, sheath gas, use of an internal standard and temporal parameters for analysis is described. LIBS quantitative data have been directly compared with results from the more standard ICP/OES technique.
---
paper_title: Detection of chromium in liquids by laser induced breakdown spectroscopy (LIBS)
paper_content:
Environmental concerns about the amount of dissolved heavy metals in coastal tidal waters have led to investigations into possible ways to detect chromium dissolved in water. A method using fluorescence spectroscopy in solution has been proposed. However, such optical emission spectroscopic methods tend to suffer from a lack of sensitivity caused by the strong quenching processes in liquids. In this investigation, Nd:YAG Q-switched laser pulses were utilised to generate a plasma filled bubble in a chromium solution. Fluorescence in the plasma was detected using an optical fibre tip placed adjacent to the bubble. Light wavelengths characteristic of chromium were detected and spectral images recorded using an optical multi-channel analyzer.
---
paper_title: ArF Laser-Induced Plasma Spectroscopy for Part-per-Billion Analysis of Metal Ions in Aqueous Solutions
paper_content:
Aqueous samples containing trace amounts of metal ions in 0.8 M HCl were ablated with an ArF laser. Plasma emissions were monitored for elemental analysis. The signal-to-noise ratio was optimized when the laser fluence was about 10 J cm⁻², while the detector gate delay and width were 1–2 μs and 3–4 μs, respectively. During that time, the temperature and electron density of the induced plasma were also measured spectroscopically. The temperature dropped from about 0.5 to about 0.3 eV, while the density remained fairly constant at about 3 × 10¹⁶ cm⁻³. Background-free spectrochemical analysis was therefore possible. The detection limits for Na, Ca, Ba, and Pb were 0.4, 3, 7, and 300 ppb, respectively. These are 20 to 1000 times better than the best achieved by non-193-nm laser-induced breakdown spectroscopy.
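Plasma temperatures and electron densities of the kind quoted above (≈0.3–0.5 eV, ≈3 × 10¹⁶ cm⁻³) are conventionally extracted from the emission spectra themselves; as generic background rather than this paper's specific procedure, the excitation temperature is usually taken from a Boltzmann plot:

```latex
% Boltzmann plot: for a line of wavelength lambda with upper-level energy E_k,
% degeneracy g_k and transition probability A_ki, the measured intensity I obeys
\[
\ln\!\left(\frac{I\,\lambda}{g_k\,A_{ki}}\right) \;=\; -\,\frac{E_k}{k_B T} \;+\; \mathrm{const},
\]
% so a linear fit over several lines of one species yields T from the slope.
```

The electron density is typically estimated separately, for example from the Stark broadening of a suitable emission line.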
---
paper_title: Laser induced breakdown spectroscopy of soils, rocks and ice at subzero temperatures in simulated martian conditions
paper_content:
Abstract We applied Laser Induced Breakdown Spectroscopy (LIBS) to moist soil/rock samples under simulated Martian conditions. The signal behavior as a function of the surface temperature in the range from +25 °C to −60 °C was studied at a pressure of 7 mbar. We observed strong signal oscillations below 0 °C with different negative peaks, whose position, width and magnitude depend on the surface roughness. In some cases, the signal was reduced by one order of magnitude, with consequences for the LIBS analytical capability. We attribute such signal behavior to the presence of supercooled water inside the surface pores, whose freezing point depends on the pore size. On the same rock samples with different grades of surface polishing, the signal has a different temperature dependence. Its decrease was always registered close to 0 °C, corresponding to the freezing/melting of normal disordered ice, which can be present inside larger pores and scratches. The amount of signal reduction at the phase transition temperatures does not seem to change with the laser energy density in the examined range. Comparative measurements were performed on a frozen water solution. A large depression of the LIBS intensity, by two orders of magnitude, was observed close to −50 °C. The same negative peak, but with a smaller magnitude, was also registered on some rock/soil samples. Ablation rates and plasma parameters as a function of the sample temperature are also discussed, together with their consequences for in-situ analyses.
---
paper_title: Spectral analysis of two Perseid meteors
paper_content:
Abstract Spectra of two bright (−11 mag) Perseid meteors are studied. Monochromatic light curves are constructed and the spectra are analyzed at selected points along the trajectory. The shift of maxima of low excitation iron lines down the trajectory in meteor flares is observed and explained by a longer radiative lifetime of the upper levels for these lines. Two spectral components with the temperatures of 4400–4800 K and 10,000 K, respectively, are identified in the spectra in accordance with previous findings. The ratio of both components, in terms of mass, varied smoothly from about 100:1 over 15:1 to 30:1. This ratio is not an unambiguous function of meteor velocity, height and brightness but depends on the previous evolution of ablation. The abundances of heavy elements are found consistent with the chemical composition of carbonaceous chondrites and the dust of comet Halley. Hydrogen, however, is not more abundant than in carbonaceous chondrites and thus significantly less than in cometary dust. The initial masses of the two meteoroids are estimated at 40 and 80 g, respectively. The meteor V-band luminous efficiency is found to vary in the range log τ_V = −11.8 to −12.0 in magnitude c.g.s. units. For the panchromatic luminous efficiency use of the value of −11.4 for bright Perseids is recommended. Nearly 1.5% of meteoroid kinetic energy is radiated out in the Ca II lines and 1% in all other lines between 3500 and 6600 Å.
---
paper_title: Laser-induced breakdown spectroscopy for space exploration applications: Influence of the ambient pressure on the calibration curves prepared from soil and clay samples
paper_content:
Abstract Recently, there has been an increasing interest in the laser-induced breakdown spectroscopy (LIBS) technique for stand-off detection of geological samples for use on landers and rovers to Mars, and for other space applications. For space missions, LIBS analysis capabilities must be investigated and instrumental development is required to take into account constraints such as size, weight, power and the effect of environmental atmosphere (pressure and ambient gas) on flight instrument performance. In this paper, we study the in-situ LIBS method at reduced pressure (7 Torr CO₂ to simulate the Martian atmosphere) and near vacuum (50 mTorr in air to begin to simulate the Moon or asteroids' pressure) as well as at atmospheric pressure in air (for Earth conditions and comparison). Here in-situ corresponds to distances on the order of 150 mm in contrast to stand-off analysis at distances of many meters. We show the influence of the ambient pressure on the calibration curves prepared from certified soil and clay pellets. In order to detect simultaneously all the elements commonly observed in terrestrial soils, we used an Echelle spectrograph. The results are discussed in terms of calibration curves, measurement precision, plasma light collection system efficiency and matrix effects.
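A hedged sketch of how such calibration curves translate into figures of merit is given below; the concentrations, intensities and blank noise are invented illustration values, not data from the certified pellets used in the study.

```python
# Hypothetical sketch: derive a LIBS calibration curve and a 3-sigma detection
# limit from background-corrected line intensities measured on reference pellets.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])               # analyte concentration (wt.%), made-up
intensity = np.array([12.0, 55.0, 98.0, 190.0, 460.0])   # line intensity (a.u.), made-up
blank_sigma = 4.0                                         # std. dev. of repeated blank measurements

slope, intercept = np.polyfit(conc, intensity, 1)         # linear calibration fit
lod = 3.0 * blank_sigma / slope                           # 3-sigma limit of detection

print(f"sensitivity = {slope:.1f} a.u. per wt.%, LOD = {lod:.3f} wt.%")
```

Repeating the same fit at each ambient pressure makes the pressure dependence of the sensitivity (slope) and of the resulting detection limit explicit.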
---
paper_title: Laser induced breakdown spectroscopy on soils and rocks: Influence of the sample temperature, moisture and roughness
paper_content:
Abstract ExoMars, ESA's next mission to Mars, will include a combined Raman/LIBS instrument for the comprehensive in-situ mineralogical and elemental analyses of Martian rocks and soils. It is inferred that water exists in the upper Martian surface as ice layers, “crystal” water or adsorbed pore water. Thus, we studied Laser Induced Breakdown Spectroscopy (LIBS) on wet and dry rocks under Martian environmental conditions in the temperature range − 60 °C to + 20 °C and in two pressure regimes, above and below the water triple point. Above this point, the LIBS signals from the rock forming elements have local minima that are accompanied by hydrogen (water) emission maxima at certain temperatures that we associate with phase transitions of free or confined water/ice. At these sample temperatures, the plasma electron density and its temperature are slightly lowered. In contrast to powder samples, a general increase of the electron density upon cooling was observed on rock samples. By comparing the LIBS signal behavior from the same rock with different grades of polishing, and different rocks with the same surface treatment, it was possible to distinguish between the influence of surface roughness and the bulk material structure (pores and grains). Below the triple point of water, the LIBS signal from the major sample elements is almost independent of the sample temperature. However, at both considered pressures we observed a hydrogen emission peak close to − 50 °C, which is attributed to a phase transition of supercooled water trapped inside bulk pores.
---
paper_title: Comparative study of different methodologies for quantitative rock analysis by Laser-Induced Breakdown Spectroscopy in a simulated Martian atmosphere
paper_content:
Laser-Induced Breakdown Spectroscopy was selected by NASA as part of the ChemCam instrument package for the Mars Science Laboratory rover to be launched in 2009. ChemCam's Laser-Induced Breakdown Spectroscopy instrument will ablate surface coatings from materials and measure the elemental composition of underlying rocks and soils at distances from 1 up to 10 m. The purpose of our studies is to develop an analytical methodology enabling identification and quantitative analysis of these geological materials in the context of the ChemCam's Laser-Induced Breakdown Spectroscopy instrument performance. The study presented here focuses on several terrestrial rock samples which were analyzed by Laser-Induced Breakdown Spectroscopy at an intermediate stand-off distance (3 m) and in an atmosphere similar to the Martian one (9 mbar CO2). The experimental results highlight the matrix effects and the measurement inaccuracies due to the noise accumulated when low signals are collected with a detector system such as an Echelle spectrometer equipped with an Intensified Charge-Coupled Device camera. Three different methods are evaluated to correct the matrix effects and to obtain quantitative results: by using an external reference sample and normalizing to the sum of all elemental concentrations, by using the internal standardization by oxygen, a major element common to all studied matrices, and by applying the Calibration Free Laser-Induced Breakdown Spectroscopy method. The three tested methods clearly demonstrate that the matrix effects can be corrected merely by taking into account the difference in the amount of vaporized atoms between the rocks, no significant variation in plasma excitation temperatures being observed. The encouraging results obtained by the three methods indicate the possibility of meeting ChemCam project objectives for stand-off quantitative analysis on Mars.
---
paper_title: Investigation of LIBS feasibility for in situ planetary exploration: An analysis on Martian rock analogues
paper_content:
Abstract Laser induced breakdown spectroscopy (LIBS) has significant potential for remote terrestrial and extraterrestrial applications, nonetheless a number of correlated problems are still to be properly understood and possibly solved. This study focuses on several samples of terrestrial provenience, mostly volcanic rocks, which have importance as analogue to expected Martian samples. They were analysed after vaporisation with a frequency tripled Nd:YAG laser emitting at 355 nm , and measurements were made in an environment similar to the Martian one. Quantitative data were obtained by adopting the calibration free LIBS approach. A comparison with SEM-EDX data from the same samples is reported. Present results on Mars rocks analogues suggest that, in spite of residual interpretative ambiguities, LIBS technique permits elemental qualitative identification and quantitative analysis on silicate minerals examined in a rarefied atmosphere.
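For context, the calibration-free LIBS evaluation mentioned above is usually formulated on the Boltzmann plane; the relations below are the standard textbook form of that procedure (with F an instrumental factor, U_s(T) the partition function and q_s the Boltzmann-plot intercept of species s), not equations quoted from this paper.

```latex
% Assuming an optically thin plasma in local thermodynamic equilibrium (LTE):
\[
I_{ki} \;=\; F\,C_s\,\frac{g_k A_{ki}}{U_s(T)}\;e^{-E_k/k_B T},
\qquad
C_s \;=\; \frac{U_s(T)}{F}\,e^{\,q_s},
\qquad
\sum_s C_s = 1,
\]
% where the closure relation fixes the instrumental factor F and hence the
% relative elemental concentrations without reference samples.
```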
---
paper_title: Chemical abundances determined from meteor spectra: I. Ratios of the main chemical elements
paper_content:
Relative chemical abundances of 13 meteoroids were determined by averaging the composition of the radiating gas along the fireball path that originated during their penetration into the Earth's atmosphere. Mg, Fe, Ni, Cr, Mn, and Co abundances, relative to Si, are similar to those reported for CI and CM carbonaceous chondrites and interplanetary dust particles. In contrast, relative abundances of Ca and Ti in meteor spectra indicate that these elements suffer incomplete evaporation processes. The chemical composition of all meteoroids studied in this work differs from that of 1P/Halley dust.
---
paper_title: Use of the vacuum ultraviolet spectral region for laser-induced breakdown spectroscopy-based Martian geology and exploration
paper_content:
Abstract Several elements important to planetary geology (e.g. Br, C, Cl, P, S) and the human exploration of Mars (e.g. toxic elements such as As) have strong emission lines in the purge and vacuum ultraviolet (VUV) spectral region (100–200 nm). This spectral region has not been extensively studied for space applications using geological samples. We studied emissions from the laser-induced breakdown spectroscopy (LIBS) plasma in this region using a sample chamber filled with 7 torr (930 Pa) of CO₂ to simulate the Martian atmosphere. Pressures down to 0.02 torr were also used to evaluate the effect of the residual CO₂ on the spectra and to begin investigating the use of VUV-LIBS for airless bodies such as asteroids and the Moon. Spectra were recorded using a 0.3-m vacuum spectrometer with an intensified CCD (ICCD) camera. The effects of time delay and laser energy on LIBS detection at reduced pressure were examined. The effect of ambient CO₂ on the detection of C in soil was also evaluated. Lines useful for the spectrochemical analysis of As, Br, C, Cl, P, and S were determined and calibration curves were prepared for these elements. Although LIBS is being developed for stand-off analysis at many meters distance, the experiments reported here were aimed at in-situ (close-up) analysis.
---
paper_title: Meteor luminosity simulation through laser ablation of meteorites
paper_content:
Light production mechanisms during ablation of meteors in planetary atmospheres are still not well understood. We have used a high-power pulsed (15 mJ per 10-ns pulse) Nd:YAG laser, frequency-doubled to 532-nm wavelength, to heat chondritic meteorite samples rapidly, and then used CCD imaging and spectral cameras to search for light produced by the rapidly heated meteorite vapour. We report here that light is produced by the rapidly heated and expanding ablated material without the need for collision with a high-speed atomic beam. The light production region is of the order of 2 mm in diameter. The light produced is a combination of either continuum or unresolved spectral line emission, along with spectral emission lines of Ca, Fe, Na, N, Mg, Si and O, with spectra from both neutral and singly ionized species. We have used a scanning electron microscope (SEM) to characterize the size of the ablated region (approximately 160 μm wide and 170 μm deep). The SEM, equipped with energy-dispersive X-ray spectroscopy, was also used to perform elemental analysis of our sample, which confirmed the presence of the identified elements in the original meteorite. The current experiments were done at room temperature and pressure, with mean power input rates that would correspond to low atmospheric penetration of meteorites. While others have used similar techniques for remote identification of elements present on surfaces in space and in meteorites, to our knowledge this is the first paper to propose laser ablation as a means of studying light production in modest-sized meteors.
---
paper_title: Laser-Induced Breakdown Spectroscopy for Mars surface analysis: capabilities at stand-off distances and detection of chlorine and sulfur elements
paper_content:
Abstract An international consortium is studying the feasibility of performing in situ geochemical analysis of Mars soils and rocks at stand-off distances up to several meters using the Laser-Induced Breakdown Spectroscopy (LIBS) technique. Stand-off analysis for Martian exploration imposes particular requirements on instrumentation, and it is necessary to first test the performance of such a system in the laboratory. In this paper, we test the capabilities of two different experimental setups. The first one is dedicated to the qualitative analysis of metals and rocks at distances between 3 and 12 m. With the second one, we have obtained quantitative results for aluminum alloys and developed a spectral database under Martian conditions for sulfur and chlorine, two elements that are geologically interesting but generally difficult to detect by LIBS under standard conditions (atmospheric pressure, close distance). These studies were carried out to determine an optimal instrumental design for in situ Mars analysis. The quality of analytical results affected by the optical elements and spectrometer has been particularly highlighted.
---
paper_title: Analysis of Water Ice and Water Ice/Soil Mixtures Using Laser-Induced Breakdown Spectroscopy: Application to Mars Polar Exploration
paper_content:
Recently, laser-induced breakdown spectroscopy (LIBS) has been developed for the elemental analysis of geological samples for application to space exploration. There is also interest in using the technique for the analysis of water ice and ice/dust mixtures located at the Mars polar regions. The application is a compact instrument for a lander or rover to the Martian poles to interrogate stratified layers of ice and dusts that contain a record of past geologic history, believed to date back several million years. Here we present results of a study of the use of LIBS for the analysis of water ice and ice/dust mixtures in situ and at short stand-off distances (< 6.5 m) using experimental parameters appropriate for a compact instrument. Characteristics of LIBS spectra of water ice, ice/soil mixtures, element detection limits, and the ability to ablate through ice samples to monitor subsurface dust deposits are discussed.
---
paper_title: The Cambridge Encyclopedia of Meteorites
paper_content:
Foreword 1. Cosmic dust - interplanetary dust particles 2. The fall of meteorites 3. External morphology of meteorites 4. Classification of meteorites: a historical viewpoint 5. Primitive meteorites: the chondrites 6. Chondrites: a closer look 7. Primitive meteorites: the carbonaceous chondrites 8. Differentiated meteorites: the achondrites 9. Differentiated meteorites: irons and stony-irons 10. Meteorites and the early solar system 11. Asteroid parent bodies 12. Terrestrial impact craters Appendices.
---
paper_title: Feasibility of generating a useful laser-induced breakdown spectroscopy plasma on rocks at high pressure: preliminary study for a Venus mission
paper_content:
Abstract Laser-induced breakdown spectroscopy (LIBS) is being developed for future use on landers and rovers to Mars. The method also has potential for use on probes to other planets, the Moon, asteroids and comets. Like Mars, Venus is of strong interest because of its proximity to earth, but unlike Mars, conditions at the surface are far more hostile with temperatures in excess of 700 K and pressures on the order of 9.1 MPa (90 atm). These conditions present a significant challenge to spacecraft design and demand that rapid methods of chemical data gathering be implemented. The advantages of LIBS (e.g. stand-off and very rapid analysis) make the method particularly attractive for Venus exploration because of the expected short operational lifetimes (≈2 h) of surface instrumentation. Although the high temperature of Venus should pose no problem to the analytical capabilities of the LIBS spark, the demonstrated strong dependence of laser plasma characteristics on ambient gas pressures below earth atmospheric pressure requires that LIBS measurements be evaluated at the high Venus surface pressures. Here, we present a preliminary investigation of LIBS at 9.1 MPa for application to the analysis of a basalt rock sample. The results suggest the feasibility of the method for a Venus surface probe and that further study is justified.
---
paper_title: What can We Learn about Atmospheric Meteor Ablation and Light Production from Laser Ablation
paper_content:
Laboratory based laser ablation techniques can be used to study the size of the luminous region, predict spectral features, estimate the luminous efficiency factor, and assess the role of chemically differentiated thermal ablation. A pulsed Nd:YAG laser was used to ablate regions from ordinary and carbonaceous chondrite meteorites. CCD cameras and a digital spectroscope were used to measure the size and spectrum from the cloud of vaporised material. Scanning electron microscope (SEM) based energy dispersive x-ray spectroscopy (EDS) provided elemental abundance values in ablated and unablated regions. These results indicated some degree of differential ablation, with the most significant effect being significant loss of carbon from carbonaceous chondrites. This work suggests that a carbon matrix may play the role of the glue in the two component dustball model.
---
| Title: Laser Induced Breakdown Spectroscopy for Elemental Analysis in Environmental, Cultural Heritage and Space Applications: A Review of Methods and Results
Section 1: Introduction
Description 1: Provide an overview of the development and fundamental principles of Laser Induced Breakdown Spectroscopy (LIBS), including its advantages and limitations.
Section 2: Basic Principles of LIP and LIBS
Description 2: Outline the basic principles of Laser-Induced Plasma (LIP) formation and the operational mechanics of LIBS, including experimental set-ups and configurations.
Section 3: Elemental Analysis by LIBS
Description 3: Discuss the fundamental assumptions and practical considerations in using LIBS for qualitative and quantitative elemental analysis.
Section 4: LIBS Approaches to Quantitative Analysis
Description 4: Detail the various methods for quantitative analysis using LIBS, including calibration line and calibration-free approaches.
Section 5: LIBS in Cultural Heritage Science
Description 5: Explore the applications of LIBS in cultural heritage, with examples of its use in analyzing archaeological and historical artifacts.
Section 6: LIBS in Environmental Science
Description 6: Discuss the applications of LIBS in environmental science, focusing on soil analysis and other related fields.
Section 7: Laboratory LIBS Analysis of Soils
Description 7: Detail the laboratory procedures and methodologies for analyzing soil samples using LIBS, including comparisons with other techniques.
Section 8: In situ LIBS Analysis of Soils
Description 8: Address the challenges and methodologies for performing in situ LIBS analysis of soil, including portable systems and matrix effects.
Section 9: Stand-off LIBS Analysis of Soils
Description 9: Describe the stand-off LIBS techniques for soil analysis, highlighting the capabilities and limitations in remote sensing applications.
Section 10: LIBS Analysis of Aqueous Samples
Description 10: Discuss the potential and challenges of using LIBS for analyzing aqueous samples, including wastewater and seawater.
Section 11: LIBS in Space Science
Description 11: Review the applications of LIBS in space exploration, including the analysis of extraterrestrial materials and in situ planetary exploration.
Section 12: Conclusions
Description 12: Summarize the review, highlighting the versatility and potential of LIBS across various fields of application, along with the main findings and future directions. |
Survey on Fall Detection and Fall Prevention Using Wearable and External Sensors | 12 | ---
paper_title: Fall prevention control of passive intelligent walker based on human model
paper_content:
As aging progresses, fall accidents of people using walkers are becoming an acute problem. It is necessary to know the situation of the user's fall in order to prevent it. In this paper, we propose a method for estimating the user's fall by modeling the user in real time as a solid-body link model and paying attention to the center of gravity of the model. We also propose a method for controlling a passive intelligent walker to prevent the user's fall according to the support polygon and the walking characteristics of the user. We experimented with a passive intelligent walker in which we implemented the proposed fall prevention control, and show the effectiveness of the proposed method.
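A minimal sketch of the kind of stability check described above, projecting an estimated centre of gravity onto the ground and testing whether it lies inside the support polygon, is given below; the link-model masses, segment positions, polygon vertices and the point-in-polygon test are illustrative assumptions, not the controller from the paper.

```python
# Illustrative check: is the ground projection of the user's centre of mass (CoM)
# inside the support polygon spanned by the feet and walker contact points?
from typing import List, Tuple

def center_of_mass(segments: List[Tuple[float, Tuple[float, float, float]]]) -> Tuple[float, float]:
    """segments: list of (mass_kg, (x, y, z)) per body link; returns the CoM projected on the ground."""
    total = sum(m for m, _ in segments)
    x = sum(m * p[0] for m, p in segments) / total
    y = sum(m * p[1] for m, p in segments) / total
    return x, y

def inside_polygon(pt: Tuple[float, float], poly: List[Tuple[float, float]]) -> bool:
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

segments = [(40.0, (0.02, 0.00, 1.00)),   # trunk (made-up mass and position)
            (10.0, (0.05, 0.10, 0.50)),   # right leg
            (10.0, (0.05, -0.10, 0.50)),  # left leg
            (8.0,  (0.00, 0.00, 1.55))]   # head + arms
support = [(-0.10, -0.20), (0.35, -0.25), (0.35, 0.25), (-0.10, 0.20)]  # contact points

if not inside_polygon(center_of_mass(segments), support):
    print("CoM outside support polygon -> trigger fall-prevention control")
```

In a real system the segment positions would come from the sensed posture model and the polygon from the actual foot and walker contact points.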
---
paper_title: A pervasive solution for risk awareness in the context of fall prevention
paper_content:
In the present work, we introduce Fallarm, a pervasive fall prevention solution suitable for hospitals and care facilities, as well as for home settings. We applied a multifaceted intervention strategy based on closed-loop information exchange between proactive and reactive methods: comprehensive assessment protocols determine the individuals' risk of falling; an innovative device continuously monitors subjects' activities, and it provides patients with constant feedback about their actual risk. Thus, it increases their awareness; simultaneously, it realizes measures to prevent adverse events, and it reports any incident and aims to reduce the level of injury. As a result, our solution offers a comprehensive strategy for the remote management of a person's risk of falling 24 hours a day, enabling many vulnerable people to remain living independently. In this paper, we detail the architecture of our system, and we discuss the results of an experimental study we conducted to demonstrate the applicability of Fallarm in both clinical and home settings.
---
paper_title: A Posture Recognition-Based Fall Detection System for Monitoring an Elderly Person in a Smart Home Environment
paper_content:
The mobile application is capable of detecting possible falls of the elderly through the use of special sensors. The alert messages contain useful information about the person in danger, such as his/her geolocation, together with corresponding directions on a map. In the case of a false alert, the supervised person is given the ability to assess the importance of a possible alert and to stop it before it is transmitted. This paper describes a system for monitoring and fall detection of elderly people that uses a triaxial accelerometer together with a ZigBee transceiver to detect falls. The Accidental Fall Detection System is able to assist carers as well as the elderly, as the carers are notified immediately. The system is designed to detect an accidental fall of the elderly person and alert the carers or their loved ones via Short Message Service (SMS) immediately. It is built around a microcontroller as the heart of the system, with the accelerometer detecting the sudden movement or fall and a Global System for Mobile communications (GSM) modem sending out the SMS to the receiver.
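As a hedged illustration (not the paper's algorithm), a common baseline for accelerometer-based fall detection is a two-threshold test on the acceleration magnitude: a free-fall dip followed shortly by an impact spike.

```python
# Minimal two-threshold fall detector over a stream of triaxial accelerometer samples.
# Thresholds and window length are illustrative guesses, not values from the paper.
import math

FREE_FALL_G = 0.4    # magnitude below this (in g) suggests free fall
IMPACT_G = 2.5       # magnitude above this (in g) suggests impact
MAX_GAP = 50         # max samples between dip and spike (~0.5 s at 100 Hz)

def detect_fall(samples):
    """samples: iterable of (ax, ay, az) in g. Returns True when a fall-like pattern is seen."""
    last_free_fall = None
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag < FREE_FALL_G:
            last_free_fall = i
        elif mag > IMPACT_G and last_free_fall is not None and i - last_free_fall <= MAX_GAP:
            return True   # in a real system: trigger the GSM/SMS alert here
    return False
```

Real deployments typically add a post-impact orientation or inactivity check before raising the alarm, to reduce false positives.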
---
paper_title: Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera
paper_content:
Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
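As a rough, hedged illustration of the optical-flow ingredient mentioned above (not the authors' classifier), a dense Farnebäck flow field from OpenCV can be reduced to a simple global motion cue; the parameter values, the mean-magnitude cue and the threshold are assumptions made for this sketch.

```python
# Illustrative dense optical-flow cue from a wearable camera: a sudden burst of
# global flow magnitude between consecutive frames is treated as a crude
# indicator of abrupt motion such as a fall.
import cv2
import numpy as np

def mean_flow_magnitude(prev_bgr, curr_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.mean(np.linalg.norm(flow, axis=2)))  # average per-pixel flow magnitude

def looks_abrupt(prev_bgr, curr_bgr, threshold_px=8.0):
    return mean_flow_magnitude(prev_bgr, curr_bgr) > threshold_px
```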
---
paper_title: Home environment risk factors for falls in older people and the efficacy of home modifications
paper_content:
Most homes contain potential hazards, and many older people attribute their falls to trips or slips inside the home or immediate home surroundings. However, the existence of home hazards alone is insufficient to cause falls, and the interaction between an older person’s physical abilities and their exposure to environmental stressors appears to be more important. Taking risks or impulsivity may further elevate falls risk. Some studies have found that environmental hazards contribute to falls to a greater extent in older vigorous people than in older frail people. This appears to be due to increased exposure to falls hazards with an increase in the proportion of such falls occurring outside the home. There may also be a non-linear pattern between mobility and falls associated with hazards. Household environmental hazards may pose the greatest risk for older people with fair balance, whereas those with poor balance are less exposed to hazards and those with good mobility are more able to withstand them. Reducing hazards in the home appears not to be an effective falls-prevention strategy in the general older population and those at low risk of falls. Home hazard reduction is effective if targeted at older people with a history of falls and mobility limitations. The effectiveness may depend on the provision of concomitant training for improving transfer abilities and other strategies for effecting behaviour change.
---
paper_title: A Survey on Human Activity Recognition using Wearable Sensors
paper_content:
Providing accurate and opportune information on people's activities and behaviors is one of the most important tasks in pervasive computing. Innumerable applications can be visualized, for instance, in medical, security, entertainment, and tactical scenarios. Despite human activity recognition (HAR) being an active field for more than a decade, there are still key aspects that, if addressed, would constitute a significant turn in the way people interact with mobile devices. This paper surveys the state of the art in HAR based on wearable sensors. A general architecture is first presented along with a description of the main components of any HAR system. We also propose a two-level taxonomy in accordance to the learning approach (either supervised or semi-supervised) and the response time (either offline or online). Then, the principal issues and challenges are discussed, as well as the main solutions to each one of them. Twenty eight systems are qualitatively evaluated in terms of recognition performance, energy consumption, obtrusiveness, and flexibility, among others. Finally, we present some open problems and ideas that, due to their high relevance, should be addressed in future research.
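Most wearable-sensor HAR pipelines of the kind surveyed above start from windowed statistical features; the sketch below uses a commonly chosen 128-sample window with 50% overlap and a handful of time-domain statistics, all of which are generic choices rather than recommendations from the survey.

```python
# Generic sliding-window feature extraction for accelerometer-based HAR.
import numpy as np

def windows(signal, size=128, step=64):
    """signal: (N, 3) array of accelerometer samples; yields overlapping windows."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(window):
    mag = np.linalg.norm(window, axis=1)                  # per-sample magnitude
    return np.concatenate([window.mean(axis=0),           # per-axis mean
                           window.std(axis=0),            # per-axis standard deviation
                           [mag.max(), mag.min()]])       # magnitude extremes

# Example with random data standing in for a real recording:
X = np.array([features(w) for w in windows(np.random.randn(1000, 3))])
```

The resulting feature matrix X is what a supervised classifier (decision tree, SVM, etc.) would then be trained on.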
---
paper_title: Human Activity Recognition and Pattern Discovery
paper_content:
In principle, activity recognition can be exploited to great societal benefits, especially in real-life, human centric applications such as elder care and healthcare. This article focused on recognizing simple human activities. Recognizing complex activities remains a challenging and active area of research and the nature of human activities poses different challenges. Human activity understanding encompasses activity recognition and activity pattern discovery. The first focuses on accurate detection of human activities based on a predefined activity model. An activity pattern discovery researcher builds a pervasive system first and then analyzes the sensor data to discover activity patterns.
---
paper_title: Data Collection and Analysis
paper_content:
The book entitled “Data Collection and Analysis” consists of thirteen chapters in total dealing with different aspects of research data collection and analysis. The objectives of the book are to equip students to understand, evaluate and use evidence in their academic and professional work. The book is designed for students from a wide range of disciplines (including sociology, social psychology, social policy, criminology, health studies, government and politics) and practitioners and readers in a number of applied areas (for example, nurses and other medical practitioners, social workers and others in the caring professions, workers in the criminal justice system, market researchers, teachers and others in the field of education). Chapter-I looks at the issues which logically and generally in practice precede data collection itself, for example what cases to select, and how the study should be designed while Chapter-II dealt with the methods and problems of designing and undertaking sample surveys. The Chapter concludes that the quality of the inferences being made from a sample will be related to both sample size and sampling method (Page: 52). Chapter-III dealt with observational research. It defines the concept observation (Page: 57) and introduced researchers and readers to the different styles and techniques used in observational research. Chapter-IV is concerned with methods of data collection that explicitly involve interviewing or questioning individuals. The Chapter concludes that it is important to define at the outset what the ideal researcher’s objectives should be when assessing and evaluating published research (Page: 117). Chapter-V focuses on statistical sources and databases. It also examines the implications of technological developments for social research (Page: 122). The Chapter concludes that the most obvious consequences of the growth in information services is the growing number of research reports that can be produced based wholly or in part upon existing sources of information (Page: 136). Chapter-VI is studying documentary sources in considering principles to evaluate existing sources as data with focus on documentary sources in the traditional sense of textual documents which are written. Chapter-VII focuses on process that receives scant attention in many research reports: namely, the process of transforming data into variables that can be analysed to produce the information found in the results sections of such reports. In other words the Chapter looks at the extent to which the data on which research arguments are based are not ‘found in the world’, but are constructed by the researcher(s). The Chapter concludes that the majority of data sets are re-coded, re-weighted and ‘manipulated’ or otherwise ‘reinterpreted’ in a parsimonious way during data handling and coding (Page: 179). Chapter-VIII looks at how figures are laid out in tables and graphs. The contents of this Chapter will help researchers and readers to understand any differences between groups or associations between variables which may be important for the argument of the report. At the end of the Chapter it was pointed out that an observed difference or association in a sample does not prove that such a difference or association exists in the population which the sample represents; differences can occur by chance alone, given sampling error (Page: 218). 
Chapter-IX looks at a range of ways of outlining differences between groups or association between variables, comparing means, laying out figures in tables, computing correlations or working out lines of regression. The Chapter has concentrated mainly on ‘zero-order’ results: the direct relationship between one or more independent variables, one at time and a dependent variable (Page: 260). Chapter-X looked at the extension of statistical analysis to situations where we need to take account of more than one independent variable: in other words, where researchers need to use multivariate analyses. Chapter-XI looked at some of the strategies used by qualitative researchers for analysing unstructured data. The chapter concentrated in particular on the kind of qualitative data analysis that has been codified by Glaser and Strauss (1967) as grounded theorising, since this represents probably the most common approach in use today. Chapter-XII concerned with the use of documents in social research. The range of documents upon which social scientists have drawn includes diaries, letters, essays, personal notes, biographies and autobiographies, institutional memoranda and reports, and governmental pronouncements as in Green Papers, White Papers and Acts of Parliament. The Chapter conclude that critical research
---
paper_title: Data Mining: Practical Machine Learning Tools and Techniques
paper_content:
Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization
---
paper_title: An Introduction to Feature Extraction
paper_content:
This chapter introduces the reader to the various aspects of feature extraction covered in this book. Section 1 reviews definitions and notations and proposes a unified view of the feature extraction problem. Section 2 is an overview of the methods and results presented in the book, emphasizing novel contributions. Section 3 provides the reader with an entry point in the field of feature extraction by showing small revealing examples and describing simple but effective algorithms. Finally, Section 4 introduces a more theoretical formalism and points to directions of research and open problems.
---
paper_title: Using feature construction to avoid large feature spaces in text classification
paper_content:
Feature space design is a critical part of machine learning. This is an especially difficult challenge in the field of text classification, where an arbitrary number of features of varying complexity can be extracted from documents as a preprocessing step. A challenge for researchers has consistently been to balance expressiveness of features with the size of the corresponding feature space, due to issues with data sparsity that arise as feature spaces grow larger. Drawing on past successes utilizing genetic programming in similar problems outside of text classification, we propose and implement a technique for constructing complex features from simpler features, and adding these more complex features into a combined feature space which can then be utilized by more sophisticated machine learning classifiers. Applying this technique to a sentiment analysis problem, we show encouraging improvement in classification accuracy, with a small and constant increase in feature space size. We also show that the features we generate carry far more predictive power than any of the simple features they contain.
---
paper_title: Machine Recognition of Human Activities: A Survey
paper_content:
The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) "actions" and 2) "activities." "Actions" are characterized by simple motion patterns typically executed by a single human. "Activities" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.
---
paper_title: Training a Support Vector Machine in the Primal
paper_content:
Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
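For reference, the linear soft-margin primal problem the letter refers to can be written in its standard hinge-loss form (a textbook statement, not a formula copied from the letter):

```latex
% Primal soft-margin SVM over training pairs (x_i, y_i), y_i in {-1, +1}:
\[
\min_{\,w,\;b}\;\; \frac{\lambda}{2}\,\lVert w \rVert^{2}
\;+\; \sum_{i=1}^{n} \max\!\bigl(0,\; 1 - y_i\,(w^{\top}x_i + b)\bigr)
\]
```

Solving this directly in w (rather than via the dual variables) is the route the letter advocates for large-scale training.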
---
paper_title: A survey of accuracy evaluation metrics of recommendation tasks
paper_content:
Recommender systems are now popular both commercially and in the research community, where many algorithms have been suggested for providing recommendations. These algorithms typically perform differently in various domains and tasks. Therefore, it is important from the research perspective, as well as from a practical view, to be able to decide on an algorithm that matches the domain and the task of interest. The standard way to make such decisions is by comparing a number of algorithms offline using some evaluation metric. Indeed, many evaluation metrics have been suggested for comparing recommendation algorithms. The decision on the proper evaluation metric is often critical, as each metric may favor a different algorithm. In this paper we review the proper construction of offline experiments for deciding on the most appropriate algorithm. We discuss three important tasks of recommender systems, and classify a set of appropriate well known evaluation metrics for each task. We demonstrate how using an improper evaluation metric can lead to the selection of an improper algorithm for the task of interest. We also discuss other important considerations when designing offline experiments.
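Although the paper discusses recommendation tasks, the same offline-evaluation logic carries over to fall detection, where results are usually summarized by confusion-matrix metrics; the snippet below implements the standard definitions with made-up counts.

```python
# Standard confusion-matrix metrics for a binary detector (e.g., fall / no-fall).
def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # recall: detected falls / actual falls
    specificity = tn / (tn + fp)                  # correctly ignored non-falls
    precision   = tp / (tp + fp)                  # raised alarms that were real falls
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

print(metrics(tp=45, fp=5, tn=940, fn=10))        # illustrative counts only
```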
---
paper_title: Computer Vision: Algorithms and Applications
paper_content:
Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.
---
paper_title: A Depth-Based Fall Detection System Using a Kinect® Sensor
paper_content:
We propose an automatic, privacy-preserving, fall detection method for indoor environments, based on the usage of the Microsoft Kinect® depth sensor, in an “on-ceiling” configuration, and on the analysis of depth frames. All the elements captured in the depth scene are recognized by means of an Ad-Hoc segmentation algorithm, which analyzes the raw depth data directly provided by the sensor. The system extracts the elements, and implements a solution to classify all the blobs in the scene. Anthropometric relationships and features are exploited to recognize one or more human subjects among the blobs. Once a person is detected, he is followed by a tracking algorithm between different frames. The use of a reference depth frame, containing the set-up of the scene, allows one to extract a human subject, even when he/she is interacting with other objects, such as chairs or desks. In addition, the problem of blob fusion is taken into account and efficiently solved through an inter-frame processing algorithm. A fall is detected if the depth blob associated to a person is near to the floor. Experimental tests show the effectiveness of the proposed solution, even in complex scenarios.
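The following is a minimal sketch of only the final decision step described above (fall detected when the person's depth blob stays near the floor); the segmentation, tracking, and the numeric thresholds are omitted or assumed, not taken from the paper.

```python
# Sketch: flag a fall when the tracked blob's height above the floor stays
# below a threshold for several consecutive frames (thresholds assumed).
FLOOR_DISTANCE_M = 0.40   # assumed "near the floor" threshold in metres
MIN_FRAMES = 15           # assumed persistence (~0.5 s at 30 fps)

def fall_from_heights(blob_heights_m):
    run = 0
    for h in blob_heights_m:
        run = run + 1 if h < FLOOR_DISTANCE_M else 0
        if run >= MIN_FRAMES:
            return True
    return False

# Standing (~1.5 m), a rapid drop, then lying near the floor.
heights = [1.5] * 10 + [1.0, 0.6] + [0.2] * 20
print(fall_from_heights(heights))  # True
```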
---
paper_title: Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera
paper_content:
Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
---
paper_title: Data Mining: Practical Machine Learning Tools and Techniques
paper_content:
Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization
---
paper_title: A Posture Recognition-Based Fall Detection System for Monitoring an Elderly Person in a Smart Home Environment
paper_content:
The mobile application is capable of detecting possible falls of the elderly through the use of special sensors. The alert messages contain useful information about the person in danger, such as his/her geolocation and corresponding directions on a map. In case of false alerts, the supervised person is given the ability to assess the importance of a possible alert and to stop it before it is transmitted. This paper describes a system for monitoring and fall detection of elderly people using a triaxial accelerometer together with a ZigBee transceiver. The Accidental Fall Detection System is able to assist carers as well as the elderly, as the carers are notified immediately. This fall detection system is designed to detect an accidental fall of the elderly person and alert the carers or their loved ones via Short Message Service (SMS) immediately. It is built with microcontroller technology as the heart of the system, the accelerometer to detect the sudden movement or fall, and a Global System for Mobile communications (GSM) modem to send the SMS to the receiver.
---
paper_title: A hybrid human fall detection scheme
paper_content:
This paper presents a novel video-based human fall detection system that can detect a human fall in real-time with a high detection rate. This fall detection system is based on an ingenious combination of skeleton feature and human shape variation, which can efficiently distinguish “fall-down” activities from “fall-like” ones. The experimental results indicate that the proposed human fall detection system can achieve a high detection rate and low false alarm rate.
---
paper_title: Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera
paper_content:
Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
---
paper_title: Aging in place: fall detection and localization in a distributed smart camera network
paper_content:
This paper presents the design, implementation and evaluation of a distributed network of smart cameras whose function is to detect and localize falls, an important application in elderly living environments. A network of overlapping smart cameras uses a decentralized procedure for computing inter-image homographies that allows the location of a fall to be reported in 2D world coordinates by calibrating only one camera. Also, we propose a joint routing and homography transformation scheme for multi-hop localization that yields localization errors of less than 2 feet using very low resolution images. Our goal is to demonstrate that such a distributed low-power system can perform adequately in this and related applications. A prototype implementation is given for low-power Agilent/UCLA Cyclops cameras running on the Crossbow MICAz platform. We demonstrate the effectiveness of the fall detection as well as the precision of the localization using a simulation of our sample implementation.
---
paper_title: Video based automatic fall detection in indoor environment
paper_content:
An increase in the population of elderly people living in isolated environments leads to the need for an automated monitoring system. Falls are one of the major causes of death among elderly people, so fall detection is an essential part of an automated indoor monitoring system. Video-based automatic fall detection is robust and more reliable than other fall detection methods. Most of the available video-based fall detection mechanisms are based on the extraction of dynamic motion features, such as the velocity and intensity gradient of the person in the video. These methods are computationally intensive and less accurate. This paper presents an accurate and computationally less intensive approach to detect falls, using only static features of the person such as aspect ratio and inclination angle. Experimental results show that this method is robust in detecting all kinds of falls.
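As a rough illustration of the two static features named above, the sketch below computes a bounding-box aspect ratio and a body-axis inclination angle and applies assumed thresholds; it is not the paper's implementation.

```python
# Illustrative sketch (thresholds are assumptions): classify a frame as a
# potential fall from the bounding-box aspect ratio and the inclination angle.
import math

ASPECT_RATIO_THRESH = 1.0   # width/height > 1 suggests a horizontal body
INCLINATION_THRESH = 45.0   # degrees from vertical

def static_fall_features(bbox_w, bbox_h, axis_dx, axis_dy):
    aspect_ratio = bbox_w / bbox_h
    # Inclination of the major body axis measured from the vertical direction.
    inclination = math.degrees(math.atan2(abs(axis_dx), abs(axis_dy)))
    return aspect_ratio, inclination

def is_fall_frame(bbox_w, bbox_h, axis_dx, axis_dy):
    ar, inc = static_fall_features(bbox_w, bbox_h, axis_dx, axis_dy)
    return ar > ASPECT_RATIO_THRESH and inc > INCLINATION_THRESH

# Wide, nearly horizontal silhouette -> fall-like frame.
print(is_fall_frame(bbox_w=180, bbox_h=60, axis_dx=0.95, axis_dy=0.20))  # True
```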
---
paper_title: Motion control of intelligent passive-type Walker for fall-prevention function based on estimation of user state
paper_content:
In this paper, we introduce a passive-type walker using servo brakes, referred to as RT Walker. RT Walker realizes several functions, such as obstacle/step avoidance, path following, gravity compensation, and variable motion characteristics, by controlling only servo brakes without using servo motors. These passive-type systems are dependable for practical use in real-world environments because of their passive dynamics with respect to the applied force/moment, simple structure, light weight, and so on. However, their most serious problem is the falling accident of the user, because the passive-type systems are lightweight and move easily from small forces/moments applied by the user unintentionally. In this paper, we focus on a method for estimating the human state during usage of the walker and propose a motion control algorithm for realizing a fall-prevention function based on this human state. We also implement the proposed control methods in RT Walker experimentally and illustrate their validity.
---
paper_title: Fall Detection from Human Shape and Motion History Using Video Surveillance
paper_content:
Nowadays, Western countries have to face the growing population of seniors. New technologies can help people stay at home by providing a secure environment and improving their quality of life. The use of computer vision systems offers a new promising solution to analyze people behavior and detect some unusual events. In this paper, we propose a new method to detect falls, which are one of the greatest risk for seniors living alone. Our approach is based on a combination of motion history and human shape variation. Our algorithm provides promising results on video sequences of daily activities and simulated falls.
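A small numpy sketch of one ingredient of the combination described above, a motion history image (MHI) update; the decay constant and motion threshold are assumptions, and the shape-variation part is omitted.

```python
# Minimal MHI update: each pixel stores the timestamp of its most recent motion.
import numpy as np

TAU = 0.5            # assumed MHI duration in seconds
MOTION_THRESH = 30   # assumed per-pixel frame-difference threshold

def update_mhi(mhi, prev_gray, cur_gray, timestamp):
    motion_mask = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16)) > MOTION_THRESH
    mhi = np.where(motion_mask, timestamp, mhi)
    mhi[mhi < timestamp - TAU] = 0.0       # forget motion older than TAU seconds
    return mhi

# Tiny synthetic example: a bright block "moves" between two 8x8 frames.
prev = np.zeros((8, 8), dtype=np.uint8)
cur = prev.copy()
cur[5:8, 2:6] = 255
mhi = np.zeros((8, 8), dtype=np.float32)
mhi = update_mhi(mhi, prev, cur, timestamp=1.0)
print(float(mhi.sum()))  # recent motion concentrated where the block appeared
```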
---
paper_title: Home Alone Faint Detection Surveillance System Using Thermal Camera
paper_content:
Fainting is an unwanted event that occurs amongst senior citizens, patients, or pregnant women. This event can cause physical injuries or even mental problems. In this paper, we propose a simple faint detection algorithm applied to a thermal imaging camera to monitor fainting events. In addition, this system allows immediate treatment to be carried out for life-saving purposes when someone faints. Experimental results on sample images show that the proposed surveillance system achieved a high accuracy of 96.15% under poor lighting conditions and 86.19% indoors in detecting human fainting events.
---
paper_title: Pyroelectric IR sensor arrays for fall detection in the older population
paper_content:
Uncooled pyroelectric sensor arrays have been studied over many years for their uses in thermal imaging applications. These arrays will only detect changes in IR flux, so systems based upon them are very good at detecting movements of people in the scene without sensing the background, if they are used in staring mode. Relatively low element-count arrays (16 x 16) can be used for a variety of people-sensing applications, including people counting (for safety applications), queue monitoring, etc. With appropriate signal processing such systems can also be used for the detection of particular events such as a person falling over. There is a considerable need for automatic fall detection amongst older people, but there are important limitations to some of the current and emerging technologies available for this. Simple sensors, such as 1- or 2-element pyroelectric infra-red sensors, provide crude data that is difficult to interpret; the use of devices worn on the person, such as wrist communicators and motion detectors, has potential, but is reliant on the person being able and willing to wear the device; video cameras may be seen as intrusive and require considerable human resources to monitor activity, while machine interpretation of camera images is complex and may be difficult in this application area. The use of a pyroelectric thermal array sensor was seen to have a number of potential benefits. The sensor is wall-mounted and does not require the user to wear a device. It enables detailed analysis of a subject's motion to be achieved locally, within the detector, using only a modest processor. This is possible due to the relative ease with which data from the sensor can be interpreted relative to the data generated by alternative sensors such as video devices. In addition to the cost-effectiveness of this solution, it was felt that the lack of detail in the low-level data, together with the elimination of the need to transmit data outside the detector, would help to avert feelings of intrusiveness on the part of the end-user. The main benefits of this type of technology would be for older people who spend time alone in unsupervised environments. This would include people living alone in ordinary housing or in sheltered accommodation (apartment complexes for older people with a local warden) and non-communal areas in residential/nursing home environments (e.g. bedrooms and ensuite bathrooms and toilets). This paper will review the development of the array, the pyroelectric ceramic material upon which it is based and the system capabilities. It will present results from the Framework 5 SIMBAD project, which used the system to monitor the movements of elderly people over a considerable period of time.
---
paper_title: A smart sensor to detect the falls of the elderly
paper_content:
Falls are a major health hazard for the elderly and a major obstacle to independent living. The estimated incidence of falls for both institutionalized and independent persons aged over 75 is at least 30 percent per year. In the SIMBAD (Smart Inactivity Monitor using Array-Based Detectors) project, we've developed an intelligent fall detector based on a low-cost array of infrared detectors. A field trial and user research indicate that SIMBAD could significantly enhance the functionality and effectiveness of existing monitoring systems and community alarm systems.
---
paper_title: A Smart and Passive Floor-Vibration Based Fall Detector for Elderly
paper_content:
Falls are very prevalent among the elderly. They are the second leading cause of unintentional-injury death for people of all ages and the leading cause of death for elders 79 years and older. Studies have shown that the medical outcome of a fall is largely dependent upon the response and rescue time. Hence, a highly accurate automatic fall detector is an important component of the living setting for older adult to expedite and improve the medical care provided to this population. Though there are several kinds of fall detectors currently available, they suffer from various drawbacks. Some of them are intrusive while others require the user to wear and activate the devices, and hence may fail in the event of user non-compliance. This paper describes the working principle and the design of a floor vibration-based fall detector that is completely passive and unobtrusive to the resident. The detector was designed to overcome some of the common drawbacks of the earlier fall detectors. The performance of the detector is evaluated by conducting controlled laboratory tests using anthropomorphic dummies. The results showed 100% fall detection rate with minimum potential for false alarms
---
paper_title: A pervasive solution for risk awareness in the context of fall prevention
paper_content:
In the present work, we introduce Fallarm, a pervasive fall prevention solution suitable for hospitals and care facilities, as well as for home settings. We applied a multifaceted intervention strategy based on closed-loop information exchange between proactive and reactive methods: comprehensive assessment protocols determine the individuals' risk of falling; an innovative device continuously monitors subjects' activities, and it provides patients with constant feedback about their actual risk. Thus, it increases their awareness; simultaneously, it realizes measures to prevent adverse events, and it reports any incident and aims to reduce the level of injury. As a result, our solution offers a comprehensive strategy for the remote management of a person's risk of falling 24 hours a day, enabling many vulnerable people to remain living independently. In this paper, we detail the architecture of our system, and we discuss the results of an experimental study we conducted to demonstrate the applicability of Fallarm in both clinical and home settings.
---
paper_title: Artificial Neural Networks as an alternative to traditional fall detection methods
paper_content:
Falls are common events among older adults and may have serious consequences. Automatic fall detection systems are becoming a popular tool to rapidly detect such events, helping family or health personnel to rapidly help the person who falls. This paper presents the results obtained in the process of testing a new fall detection method based on Artificial Neural Networks (ANNs). This method intends to improve fall detection accuracy by avoiding the traditional threshold-based fall detection methods and introducing ANNs as a suitable option for this application. ANNs also have low computational cost, a characteristic that makes them easy to implement on a portable device that is comfortable for the patient to wear.
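The sketch below shows the general idea of replacing a fixed threshold with a small feed-forward network; the network size, features, and synthetic data are assumptions, not the paper's setup.

```python
# Sketch: train a small neural network to separate falls from ADL using
# simple acceleration-derived features (synthetic, illustrative data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features per event: [peak acceleration (g), post-event tilt (deg)]
adl = np.column_stack([rng.normal(1.2, 0.2, 200), rng.normal(15, 8, 200)])
falls = np.column_stack([rng.normal(3.0, 0.5, 200), rng.normal(75, 10, 200)])
X = np.vstack([adl, falls])
y = np.array([0] * 200 + [1] * 200)   # 0 = ADL, 1 = fall

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[3.2, 80.0], [1.1, 10.0]]))  # expected: [1 0]
```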
---
paper_title: A reliable fall detection system based on wearable sensor and signal magnitude area for elderly residents
paper_content:
Falls are the primary cause of accidents for elderly people and often result in serious injury and health threats. It is also the main obstacle to independent living for frail and elderly people. A reliable fall detector can reduce the fear of falling and provide the user with the reassurance to maintain an independent lifestyle since the reliable and effective fall detection mechanism will provide urgent medical support and dramatically reduce the cost of medical care. In this work, we propose a fall-detecting system based on a wearable sensor and a real-time fall detection algorithm. We use a waist- mounted tri-axial accelerometer to capture movement data of the human body, and propose a fall detection method that uses the area under a signal magnitude curve to distinguish between falls and daily activities. Experimental results demonstrate the effectiveness of proposed scheme with high reliability and sensitivity on fall detection. The system is not only cost effective but also portable that fulfills the requirements of fall detection.
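A minimal sketch of the signal-magnitude-area style feature mentioned above, as a discrete approximation over a window of tri-axial samples; the decision threshold shown is an assumption, not the paper's value.

```python
# Discrete SMA over a window: mean of |ax| + |ay| + |az| (arrays in g).
import numpy as np

def signal_magnitude_area(ax, ay, az):
    return float(np.mean(np.abs(ax) + np.abs(ay) + np.abs(az)))

SMA_THRESH = 3.0   # assumed threshold separating intense movement from ADL

def high_intensity(ax, ay, az):
    return signal_magnitude_area(np.asarray(ax), np.asarray(ay), np.asarray(az)) > SMA_THRESH

# Quiet standing vs a vigorous movement burst (illustrative numbers).
print(signal_magnitude_area([0.0] * 50, [0.0] * 50, [1.0] * 50))   # ~1.0
print(high_intensity([2.0] * 50, [1.5] * 50, [2.5] * 50))          # True
```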
---
paper_title: A two-threshold fall detection algorithm for reducing false alarms
paper_content:
Wireless health monitoring can be used in health care for the aged to support independent living, either at home or in sheltered housing, for as long as possible. The most important single monitoring need with respect to security and well-being of the elderly is fall detection. In this paper, a two-threshold MATLAB-algorithm for fall detection is described. The algorithm uses mainly tri-axial accelerometer and tri-axial gyroscope data measured from the waist to distinguish between fall, possible fall, and activity of daily living (ADL). The decision between fall and possible fall is done by the posture information from the waist- and ankle-worn devices ten seconds after the fall impact. By categorizing falls into these two sub-categories, an alarm is generated only in serious falls, thus leading to low false alarm rate. The impact itself is detected as the total sum vector magnitudes of both the acceleration and angular velocity exceeds their fixed thresholds. With this method, the sensitivity of the algorithm is 95.6% with the set of 68 recorded fall events. Specificity is 99.6% with the set of 231 measured ADL movements. It is further shown that the use of two thresholds gives better results than just one threshold.
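In the spirit of the two-threshold scheme described above, the sketch below combines acceleration and angular-velocity sum-vector checks with a posture test after the impact; all numeric thresholds here are assumptions, not the paper's values.

```python
# Illustrative two-threshold classification: ADL / possible fall / fall.
import math

ACC_SUM_VECTOR_G = 2.0       # assumed acceleration sum-vector threshold
GYRO_SUM_VECTOR_DPS = 200.0  # assumed angular-velocity sum-vector threshold
LYING_TILT_DEG = 60.0        # posture check some seconds after the impact

def sum_vector(x, y, z):
    return math.sqrt(x * x + y * y + z * z)

def classify(acc_peak, gyro_peak, waist_tilt_after_deg):
    impact = (sum_vector(*acc_peak) > ACC_SUM_VECTOR_G and
              sum_vector(*gyro_peak) > GYRO_SUM_VECTOR_DPS)
    if not impact:
        return "ADL"
    return "fall" if waist_tilt_after_deg > LYING_TILT_DEG else "possible fall"

print(classify((1.5, 1.2, 1.8), (180, 150, 120), waist_tilt_after_deg=80))  # fall
```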
---
paper_title: The Future of Integrated Circuits: A Survey of Nanoelectronics
paper_content:
While most of the electronics industry is dependent on the ever-decreasing size of lithographic transistors, this scaling cannot continue indefinitely. Nanoelectronics (circuits built with components on the scale of 10 nm) seem to be the most promising successor to lithographic-based ICs. Molecular-scale devices including diodes, bistable switches, carbon nanotubes, and nanowires have been fabricated and characterized in chemistry labs. Techniques for self-assembling these devices into different architectures have also been demonstrated and used to build small-scale prototypes. While these devices and assembly techniques will lead to nanoscale electronics, they also have the drawback of being prone to defects and transient faults. Fault-tolerance techniques will be crucial to the use of nanoelectronics. Lastly, changes to the software tools that support the fabrication and use of ICs will be needed to extend them to support nanoelectronics. This paper introduces nanoelectronics and reviews the current progress made in research in the areas of technologies, architectures, fault tolerance, and software tools.
---
paper_title: Ageing of population and health care expenditure: a red herring?
paper_content:
This paper studies the relationship between health care expenditure (HCE) and age, using longitudinal rather than cross-sectional data. The econometric analysis of HCE in the last eight quarters of life of individuals who died during the period 1983-1992 indicates that HCE depends on remaining lifetime but not on calendar age, at least beyond 65p. The positive relationship between age and HCE observed in cross-sectional data may be caused by the simple fact that at age 80, for example, there are many more individuals living in their last 2 years than at age 65. The limited impact of age on HCE suggests that population ageing may contribute much less to future growth of the health care sector than claimed by most observers. Copyright © 1999 John Wiley & Sons, Ltd.
---
paper_title: Foot age estimation for fall-prevention using sole pressure by fuzzy logic
paper_content:
This paper describes a foot age estimation system using fuzzy logic. The method employs sole pressure distribution change data. The sole pressure data is acquired by a mat type load distribution sensor. The proposed method extracts step length, step center of sole pressure width, distance of single support period and time of double support period as gait features. The fuzzy degrees for young age, middle age and elderly groups are calculated from these gait features. The foot age of the walking person on the sensor is estimated by fuzzy MIN-MAX center of gravity method. In the experiment, the proposed method estimated subject ages with good correlation coefficient.
---
paper_title: A pervasive solution for risk awareness in the context of fall prevention
paper_content:
In the present work, we introduce Fallarm, a pervasive fall prevention solution suitable for hospitals and care facilities, as well as for home settings. We applied a multifaceted intervention strategy based on closed-loop information exchange between proactive and reactive methods: comprehensive assessment protocols determine the individuals' risk of falling; an innovative device continuously monitors subjects' activities, and it provides patients with constant feedback about their actual risk. Thus, it increases their awareness; simultaneously, it realizes measures to prevent adverse events, and it reports any incident and aims to reduce the level of injury. As a result, our solution offers a comprehensive strategy for the remote management of a person's risk of falling 24 hours a day, enabling many vulnerable people to remain living independently. In this paper, we detail the architecture of our system, and we discuss the results of an experimental study we conducted to demonstrate the applicability of Fallarm in both clinical and home settings.
---
paper_title: Home environment risk factors for falls in older people and the efficacy of home modifications
paper_content:
Most homes contain potential hazards, and many older people attribute their falls to trips or slips inside the home or immediate home surroundings. However, the existence of home hazards alone is insufficient to cause falls, and the interaction between an older person’s physical abilities and their exposure to environmental stressors appears to be more important. Taking risks or impulsivity may further elevate falls risk. Some studies have found that environmental hazards contribute to falls to a greater extent in older vigorous people than in older frail people. This appears to be due to increased exposure to falls hazards with an increase in the proportion of such falls occurring outside the home. There may also be a non-linear pattern between mobility and falls associated with hazards. Household environmental hazards may pose the greatest risk for older people with fair balance, whereas those with poor balance are less exposed to hazards and those with good mobility are more able to withstand them. Reducing hazards in the home appears not to be an effective falls-prevention strategy in the general older population and those at low risk of falls. Home hazard reduction is effective if targeted at older people with a history of falls and mobility limitations. The effectiveness may depend on the provision of concomitant training for improving transfer abilities and other strategies for effecting behaviour change.
---
paper_title: In-Home Fall Risk Assessment and Detection Sensor System
paper_content:
Falls are a major problem in older adults. A continuous, unobtrusive, environmentally mounted (i.e., embedded into the environment and not worn by the individual), in-home monitoring system that automatically detects when falls have occurred or when the risk of falling is increasing could alert health care providers and family members to intervene to improve physical function or manage illnesses that may precipitate falls. Researchers at the University of Missouri Center for Eldercare and Rehabilitation Technology are testing such sensor systems for fall risk assessment (FRA) and detection in older adults' apartments in a senior living community. Initial results comparing ground truth (validated measures) of FRA data and GAITRite System parameters with data captured from Microsoft(®) Kinect and pulse-Doppler radar are reported.
---
paper_title: Motion control of intelligent passive-type Walker for fall-prevention function based on estimation of user state
paper_content:
In this paper, we introduce a passive-type walker using servo brakes, referred to as RT Walker. RT Walker realizes several functions, such as obstacle/step avoidance, path following, gravity compensation, and variable motion characteristics, by controlling only servo brakes without using servo motors. These passive-type systems are dependable for practical use in real-world environments because of their passive dynamics with respect to the applied force/moment, simple structure, light weight, and so on. However, their most serious problem is the falling accident of the user, because the passive-type systems are lightweight and move easily from small forces/moments applied by the user unintentionally. In this paper, we focus on a method for estimating the human state during usage of the walker and propose a motion control algorithm for realizing a fall-prevention function based on this human state. We also implement the proposed control methods in RT Walker experimentally and illustrate their validity.
---
paper_title: A novel fall prevention scheme for intelligent cane robot by using a motor driven universal joint
paper_content:
In this study, we propose a novel fall prevention scheme for an omni-directional cane robot by using a DC-motor-driven universal joint. The cane robot, which is driven by three omni-wheels, is called the Intelligent Cane Robot (iCane). It is designed for aiding the elderly and handicapped people in walking. The motion of the cane robot is controlled for both normal and abnormal walking conditions. For the user's normal walking aided by the cane robot, a concept called "Intentional Direction" (ITD) is proposed. Guided by the online-estimated ITD, we apply the admittance control method in the motion control of the cane robot. For abnormal walking, we mainly studied the case of the user falling down. The center of gravity (COG) of the user can be estimated from the angle of an inverted pendulum representing the human dynamic model. A fall prevention algorithm based on the relationship between the user's COG and the cane is proposed. Because the size of the cane robot is small, when the robot is preventing the user from falling down, the stability of the cane robot itself must first be ensured. A universal joint driven by two DC motors is used to reduce the moment that would cause the cane robot to fall over. The proposed method is verified through experiments.
---
paper_title: Gait analysis of sit-to-walk motion by using portable acceleration monitor device for fall prevention
paper_content:
The purpose of this study is to investigate whether the acceleration of the center of gravity of the body during sit-to-walk motion has a relationship with falling or not. In this study, we measured the sit-to-walk motion of fall-experienced and fall-inexperienced subjects by using a portable acceleration monitor device that we have developed. The result of discriminant analysis using the indexes with a significant difference revealed a 90.3% correct prediction rate for falling. The results indicated the possibility of fall prevention by this method.
---
paper_title: Psychosocial factors associated with fall-related hip fractures
paper_content:
Background: fall-related injuries in older people are a major public health concern. This study examined the relationship between psychosocial determinants of healthy ageing and risk of fall-related hip fracture in community-dwelling older people. The purpose was to contribute evidence for promotion of healthy ageing strategies in population-based interventions for fall injury prevention. Methods: a case-control study was conducted with 387 participants, with at least two controls recruited per case. Cases of fall-related hip fracture in community-dwelling people aged 65 and older were recruited from hospital admissions in Brisbane, Australia, in 2003-2004. Community-based controls, matched by age, sex and postcode, were recruited via electoral roll sampling. A questionnaire assessing psychosocial factors, identified as determinants of healthy ageing, was administered at face-to-face interviews. Results: psychosocial factors having a significant independent protective effect on hip fracture risk included being currently married [OR: 0.44 (0.22 to 0.88)], living in present residence for 5 years or more [OR: 0.43 (0.22 to 0.84)], having private health insurance [OR: 0.49 (0.27 to 0.90)], using proactive coping strategies [OR: 0.52 (0.29 to 0.92)], having a higher level of life satisfaction [OR: 0.47 (0.27 to 0.81)], and engagement in social activities in older age [OR: 0.30 (0.17 to 0.54)]. Conclusion: this study suggests that psychosocial determinants of healthy ageing are protective in fall-related hip fracture injury in older people. Reduction in the public health burden caused by this injury may then be achieved by implementing healthy ageing strategies involving community-based approaches to enhance the psychosocial environments of older people.
---
paper_title: Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera
paper_content:
Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
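A brief sketch of the HOG-plus-linear-SVM pipeline described above, assuming scikit-image and scikit-learn are available; the synthetic random windows stand in for real pedestrian/background crops and are for illustration only.

```python
# Extract HOG descriptors and train a linear SVM on synthetic 64x128 windows.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(gray_image):
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)
windows = rng.random((20, 128, 64))          # placeholder image windows
labels = np.array([1] * 10 + [0] * 10)       # 1 = person, 0 = background
features = np.array([hog_descriptor(w) for w in windows])

clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:2]))
```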
---
paper_title: A data mining approach for fall detection by using k-nearest neighbour algorithm on wireless sensor network data
paper_content:
Fall detection technology is critical for elderly people. In order to avoid the need for a full-time caregiving service, the current trend is to encourage the elderly to keep living autonomously in their homes as long as possible. Reliable fall detection methods can enhance the life safety of the elderly and boost their confidence by immediately alerting caregivers to fall cases. This study presents a fall detection algorithm which detects fall events by using a data-mining approach. The authors' proposed method performs detection in two steps. First, it collects wireless sensor network (WSN) data in stream format from sensor devices. Second, it uses the k-nearest neighbour algorithm, a well-known lazy learning algorithm, to detect fall occurrences. It detects falls by identifying fall patterns in the data stream. Experiments show that the proposed method has promising results in detecting falls on WSN data streams.
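A small sketch of the second step described above, a k-nearest-neighbour classifier over windowed sensor features; the feature choice and synthetic data are placeholders, not the paper's WSN stream.

```python
# k-NN over per-window features derived from accelerometer magnitude |a|.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical per-window features: [mean |a|, max |a|, variance of |a|]
adl = np.column_stack([rng.normal(1.0, 0.1, 150), rng.normal(1.5, 0.3, 150),
                       rng.normal(0.05, 0.02, 150)])
fall = np.column_stack([rng.normal(1.3, 0.2, 150), rng.normal(3.5, 0.5, 150),
                        rng.normal(0.8, 0.2, 150)])
X = np.vstack([adl, fall])
y = np.array(["ADL"] * 150 + ["fall"] * 150)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict([[1.2, 3.8, 0.7], [1.0, 1.4, 0.04]]))  # expected: ['fall' 'ADL']
```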
---
paper_title: Real life applicable fall detection system based on wireless body area network
paper_content:
Real-time health monitoring with wearable sensors is an active area of research. In this domain, observing the physical condition of elderly people or patients in personal environments such as the home, office, and restroom has special significance because they might be unassisted in these locations. Elderly people have limited physical abilities and are more vulnerable to serious physical damage even from small accidents, e.g. a fall. Falls are unpredictable and unavoidable. In case of a fall, early detection and prompt notification of emergency services are essential for quick recovery. However, existing fall detection devices are bulky and uncomfortable to wear. Also, detection systems using these devices require higher computation overhead to distinguish falls from activities of daily living (ADL). In this paper, we propose a new fall detection system using one sensor node which can be worn as a necklace to provide both comfortable wearing and low computation overhead. The proposed necklace-shaped sensor node includes tri-axial accelerometer and gyroscope sensors to classify the behaviour and posture of the monitored subject. Simulated experimental results, in which 5 persons performed 5 fall scenarios 50 times, show that our proposed detection approach can successfully distinguish between ADL and falls, with sensitivities greater than 80% and specificities of 100%.
---
paper_title: PerFallD: A pervasive fall detection system using mobile phones
paper_content:
Falls are a major health risk that diminish the quality of life among elderly people. With the elderly population surging, especially with aging “baby boomers”, fall detection becomes increasingly important. However, existing commercial products and academic solutions struggle to achieve pervasive fall detection. In this paper, we propose utilizing mobile phones as a platform for pervasive fall detection system development. To our knowledge, we are the first to do so. We design a detection algorithm based on mobile phone platforms. We propose PerFallD, a pervasive fall detection system implemented on mobile phones. We implement a prototype system on the Android G1 phone and conduct experiments to evaluate our system. In particular, we compare PerFallD's performance with that of existing work and a commercial product. Experimental results show that PerFallD achieves strong detection performance and power efficiency.
---
paper_title: Fall Detection on Mobile Phones Using Features from a Five-Phase Model
paper_content:
The injuries caused by falls are great threats to the elderly people. With the ability of communication and motion sensing, the mobile phone is an ideal platform to detect the occurrence of fall accidents and help the injured person receive first aid. However, the missed detection and false alarm of the monitoring software will cause annoyance to the users in real use. In this paper, we present a novel fall detection technique using features from a five-phase model which describes the state change of the user's motion during the fall. Experiment results validate the effectiveness of the algorithm and show that the features derived from the model as gravity-cross rate and non-primarily maximum and minimum points of the acceleration data are useful to improve the precision of the detection. Moreover, we implement the technique as uCare, an Android application that helps elderly people in fall prevention, detection and first aid seeking.
---
paper_title: Human falling detection algorithm using back propagation neural network
paper_content:
A fall monitoring system is necessary to reduce the rate of fall fatalities among elderly people. As accelerometers have become smaller and inexpensive, they are becoming widely used in motion detection fields. This paper proposes a fall detection algorithm based on a back-propagation neural network to detect falls of elderly people. In the experiment, a tri-axial accelerometer was attached to the waists of five healthy young people. In order to evaluate the performance of the fall detection, the five young people were asked to simulate four daily-life activities and four falls: walking, jumping, flopping on a bed, rising from a bed, front fall, back fall, left fall and right fall. The experimental results show that the proposed algorithm can potentially distinguish the falling activities from the other daily-life activities.
---
paper_title: Fall detection with distributed floor-mounted accelerometers: An overview of the development and evaluation of a fall detection system within the project eHome
paper_content:
Within the project "eHome" a prototype of an assistive home system was developed, aiming to prolong the independent life of elderly people at home. Besides communication, e-access and safety relevant features, a core part of this system is an automatic fall detection, which utilizes floor-mounted accelerometers to gather body-sound signals that typically occur during a human fall. This approach targets to avoid acceptance, usability and reliability issues of available body-mounted fall detectors. The system was developed with focus on practical applicability, reliability and exploitability. The prototype was evaluated successfully in laboratory and during 507 days in real-life at homes of persons from the target group. During the laboratory trials a sensitivity of 87% and a specificity of 97.7% could be achieved for a defined fall scenario and across four tested floors. Further research is suggested to investigate floor dependencies of the fall detection performance.
---
paper_title: A wearable wireless fall detection system with accelerators
paper_content:
Falls in elderly is a major health problem and a cost burden to social services. Thus automatic fall detectors are needed to support the independence and security of the elderly. The goal of this research is to develop a real-time portable wireless fall detection system, which is capable of automatically discriminating between falls and Activities of Daily Life (ADL). The fall detection system contains a portable fall-detection terminal and a monitoring centre, both of which communicate with ZigBee protocol. To extract the features of falls, falls data and ADL data obtained from young subjects are analyzed. Based on the characteristics of falls, an effective fall detection algorithm using tri-axis accelerometers is introduced, and the results show that falls can be distinguished from ADL with a sensitivity over 95% and a specificity of 100%, for a total set of 270 movements.
---
paper_title: A hybrid human fall detection scheme
paper_content:
This paper presents a novel video-based human fall detection system that can detect a human fall in real-time with a high detection rate. This fall detection system is based on an ingenious combination of skeleton feature and human shape variation, which can efficiently distinguish “fall-down” activities from “fall-like” ones. The experimental results indicate that the proposed human fall detection system can achieve a high detection rate and low false alarm rate.
---
paper_title: A Posture Recognition-Based Fall Detection System for Monitoring an Elderly Person in a Smart Home Environment
paper_content:
The mobile application is capable of detecting possible falls of the elderly through the use of special sensors. The alert messages contain useful information about the person in danger, such as his/her geolocation and corresponding directions on a map. In case of false alerts, the supervised person is given the ability to assess the importance of a possible alert and to stop it before it is transmitted. This paper describes a system for monitoring and fall detection of elderly people using a triaxial accelerometer together with a ZigBee transceiver. The Accidental Fall Detection System is able to assist carers as well as the elderly, as the carers are notified immediately. This fall detection system is designed to detect an accidental fall of the elderly person and alert the carers or their loved ones via Short Message Service (SMS) immediately. It is built with microcontroller technology as the heart of the system, the accelerometer to detect the sudden movement or fall, and a Global System for Mobile communications (GSM) modem to send the SMS to the receiver.
---
paper_title: A two-threshold fall detection algorithm for reducing false alarms
paper_content:
Wireless health monitoring can be used in health care for the aged to support independent living, either at home or in sheltered housing, for as long as possible. The most important single monitoring need with respect to security and well-being of the elderly is fall detection. In this paper, a two-threshold MATLAB-algorithm for fall detection is described. The algorithm uses mainly tri-axial accelerometer and tri-axial gyroscope data measured from the waist to distinguish between fall, possible fall, and activity of daily living (ADL). The decision between fall and possible fall is done by the posture information from the waist- and ankle-worn devices ten seconds after the fall impact. By categorizing falls into these two sub-categories, an alarm is generated only in serious falls, thus leading to low false alarm rate. The impact itself is detected as the total sum vector magnitudes of both the acceleration and angular velocity exceeds their fixed thresholds. With this method, the sensitivity of the algorithm is 95.6% with the set of 68 recorded fall events. Specificity is 99.6% with the set of 231 measured ADL movements. It is further shown that the use of two thresholds gives better results than just one threshold.
---
paper_title: CARE: A dynamic stereo vision sensor system for fall detection
paper_content:
This paper presents a recently developed dynamic stereo vision sensor system and its application for fall detection towards safety for elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels which are only sensitive to relative light intensity changes, (2) an FPGA for interfacing the detectors, early data processing, and stereo matching for depth map reconstruction, (3) a digital signal processor for interpreting the sensor data in real-time for fall recognition, and (4) a wireless communication module for instantly alerting caring institutions. This system was designed for incident detection in private homes of elderly to foster safety and security. The two main advantages of the system, compared to existing wearable systems are from the application's point of view: (a) the stationary installation has a better acceptance for independent living comparing to permanent wearing devices, and (b) the privacy of the system is systematically ensured since the vision detector does not produce real images such as classic video sensors. The system can actually process about 300 kevents per second. It was evaluated using 500 fall cases acquired with a stuntman. More than 90% positive detections were reported. We will show a live demonstration during ISCAS2012 of the sensor system and its capabilities.
---
paper_title: Embedded fall detection with a neural network and bio-inspired stereo vision
paper_content:
In this paper, we present a bio-inspired, purely passive, and embedded fall detection system for its application towards safety for elderly at home. Bio-inspired means the use of two optical detector chips with event-driven pixels that are sensitive to relative light intensity changes only. The two chips are used as stereo configuration which enables a 3D representation of the observed area with a stereo matching technique. In contrast to conventional digital cameras, this image sensor delivers asynchronous events instead of synchronous intensity or color images, thus, the privacy issue is systematically solved. Another advantage is that stationary installed fall detection systems have a better acceptance for independent living compared to permanently worn devices. The fall detection is done by a trained neural network. First, a meaningful feature vector is calculated from the point clouds, then the neural network classifies the actual event as fall or non-fall. All processing is done on an embedded device consisting of an FPGA for stereo matching and a DSP for neural network calculation achieving several fall evaluations per second. The results evaluation showed that our fall detection system achieves a fall detection rate of more than 96% with false positives below 5% for our prerecorded dataset consisting of 679 fall scenarios.
---
paper_title: Fall detection using a Gaussian distribution of clustered knowledge, augmented radial basis neural-network, and multilayer perceptron
paper_content:
The rapidly increasing population of elderly people has posed a big challenge to research in fall prevention and detection. Substantial amounts of injuries, disabilities, traumas and deaths among elderly people due to falls have been reported worldwide. There is therefore a need for a reliable, simple, and affordable automatic fall detection system. This paper proposes a reliable fall detection algorithm using minimal information from a single waist worn wireless tri-axial accelerometer. The method proposed is to approach fall detection using digital signal processing and neural networks. This method includes the application of Discrete Wavelet Transform (DWT), Regrouping Particle Swarm Optimization (RegPSO), a proposed method called Gaussian Distribution of Clustered Knowledge (GCK), and an Ensemble of Classifiers using two different classifiers: Multilayer Perceptron Neural Network (MLP) and Augmented Radial Basis Neural Networks (ARBF). The proposed method has been tested on 8 healthy individuals in a home environment and yields promising result of up to 100% sensitivity on ingroup, 97.65% sensitivity on outgroup, and 99.56% specificity on Activities of Daily Living (ADL) data.
---
paper_title: Fall Detection with Wearable Sensors--Safe (Smart Fall Detection)
paper_content:
The high incidence of falls among the elderly calls for the development of reliable and robust fall detection systems. A number of such systems have been proposed, with claims of fall detection accuracy of over 90% based on accelerometers and gyroscopes. However, most such fall detection algorithms have been developed based on observational analysis of the data gathered, leading to threshold settings for fall/non-fall situations. Whilst the fall detection accuracies reported appear to be high, there is little evidence that the proposed threshold-based methods generalise well to different subjects and different data-gathering strategies or experimental scenarios. Moreover, few attempts appear to have been made to validate the proposed methods in real-life scenarios or to deliver robust fall decisions in real time. The research here uses machine learning, and particularly decision trees, to detect 4 types of falls (forward, backward, right and left). When applied to experimental data from 8 male subjects, the accelerometer- and gyroscope-based system discriminates between activities of daily living (ADLs) and falls with a precision of 81% and a recall of 92%. The performance and robustness of the proposed method have been further analysed in terms of its sensitivity to subject physical profile and training set size.
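The sketch below illustrates the decision-tree idea described above on synthetic feature vectors; the feature choice, class means, and data are assumptions made purely for illustration.

```python
# Decision tree separating ADL from four assumed fall directions (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical features per event: [peak accel (g), peak gyro (deg/s), tilt x, tilt y]
def make(n, acc, gyro, tx, ty):
    return np.column_stack([rng.normal(acc, 0.3, n), rng.normal(gyro, 20, n),
                            rng.normal(tx, 5, n), rng.normal(ty, 5, n)])

X = np.vstack([make(100, 1.2, 40, 0, 0),      # ADL
               make(100, 3.0, 250, 70, 0),    # forward fall
               make(100, 3.0, 250, -70, 0),   # backward fall
               make(100, 3.0, 250, 0, 70),    # right fall
               make(100, 3.0, 250, 0, -70)])  # left fall
y = np.repeat(["ADL", "forward", "backward", "right", "left"], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(round(tree.score(X_te, y_te), 2))
```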
---
paper_title: View-invariant Fall Detection for Elderly in Real Home Environment
paper_content:
We propose a novel context based human fall detection mechanism in real home environment. Fall incidents are detected using head and floor information. The centroid location of the head and feet from each frame are used to learn a context model consisting of normal head and floor blocks. Every floor block has its associated Gaussian distribution, representing a set of head blocks. This Gaussian distribution defines standard vertical distance as average height of an object at that specific floor block. The classification of blocks and average height is later used to detect a fall. Fall detection methods often detect bending situations as fall. This method is able to distinguish bending and sitting from falling. Furthermore, a fall into any direction and at any distance from camera can be detected. Evaluation results show the robustness and high accuracy of the proposed approach.
---
paper_title: In-Home Fall Risk Assessment and Detection Sensor System
paper_content:
Falls are a major problem in older adults. A continuous, unobtrusive, environmentally mounted (i.e., embedded into the environment and not worn by the individual), in-home monitoring system that automatically detects when falls have occurred or when the risk of falling is increasing could alert health care providers and family members to intervene to improve physical function or manage illnesses that may precipitate falls. Researchers at the University of Missouri Center for Eldercare and Rehabilitation Technology are testing such sensor systems for fall risk assessment (FRA) and detection in older adults' apartments in a senior living community. Initial results comparing ground truth (validated measures) of FRA data and GAITRite System parameters with data captured from Microsoft Kinect and pulse-Doppler radar are reported.
---
paper_title: A pervasive solution for risk awareness in the context of fall prevention
paper_content:
In the present work, we introduce Fallarm, a pervasive fall prevention solution suitable for hospitals and care facilities, as well as for home settings. We applied a multifaceted intervention strategy based on closed-loop information exchange between proactive and reactive methods: comprehensive assessment protocols determine the individuals' risk of falling; an innovative device continuously monitors subjects' activities, and it provides patients with constant feedback about their actual risk. Thus, it increases their awareness; simultaneously, it realizes measures to prevent adverse events, and it reports any incident and aims to reduce the level of injury. As a result, our solution offers a comprehensive strategy for the remote management of a person's risk of falling 24 hours a day, enabling many vulnerable people to remain living independently. In this paper, we detail the architecture of our system, and we discuss the results of an experimental study we conducted to demonstrate the applicability of Fallarm in both clinical and home settings.
---
paper_title: Fall prevention control of passive intelligent walker based on human model
paper_content:
As populations age, fall accidents among walker users have become an acute problem. It is necessary to know the situation of the user's fall in order to prevent it. In this paper, we propose a method for estimating the user's fall by modeling the user in real time as a solid-body link model, paying attention to the center of gravity of the model. We also propose a method for controlling a passive intelligent walker to prevent the user's fall according to the support polygon and the walking characteristics of the user. We experimented with a passive intelligent walker in which we implemented the proposed fall prevention control, and show the effectiveness of the proposed method.
---
paper_title: Motion control of intelligent passive-type Walker for fall-prevention function based on estimation of user state
paper_content:
In this paper, we introduce a passive-type walker using servo brakes, referred to as RT Walker. RT Walker realizes several functions, such as obstacle/step avoidance, path following, gravity compensation and variable motion characteristics, by controlling only servo brakes without using servo motors. Such passive-type systems are dependable for practical use in real-world environments because of their passive dynamics with respect to the applied force/moment, simple structure, light weight, and so on. However, their most serious problem is falling accidents of the user, because the systems are lightweight and move easily under even a small force/moment applied unintentionally by the user. In this paper, we pay attention to a method for estimating the human state during the usage of the walker and propose a motion control algorithm for realizing a fall-prevention function based on the estimated human state. We also implement the proposed control methods in RT Walker experimentally and illustrate their validity.
---
paper_title: On Space-Time Interest Points
paper_content:
Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. ::: ::: To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.
---
paper_title: RGBD-camera based get-up event detection for hospital fall prevention
paper_content:
In this work, we develop a computer-vision-based fall prevention system for hospital ward applications. To prevent potential falls, once the event of a patient getting up from the bed is automatically detected, nursing staff are alerted immediately for assistance. For the detection task, we use an RGBD sensor (Microsoft Kinect). Geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.
---
paper_title: Foot age estimation for fall-prevention using sole pressure by fuzzy logic
paper_content:
This paper describes a foot age estimation system using fuzzy logic. The method employs sole pressure distribution change data. The sole pressure data is acquired by a mat-type load distribution sensor. The proposed method extracts step length, step center-of-sole-pressure width, distance of the single support period and time of the double support period as gait features. The fuzzy degrees for the young age, middle age and elderly groups are calculated from these gait features. The foot age of the person walking on the sensor is estimated by the fuzzy MIN-MAX center-of-gravity method. In the experiment, the proposed method estimated subject ages with a good correlation coefficient.
---
paper_title: A novel fall prevention scheme for intelligent cane robot by using a motor driven universal joint
paper_content:
In this study, we propose a novel fall prevention scheme for an omni-directional cane robot using a DC-motor-driven universal joint. The cane robot, which is driven by three omni-wheels, is called the Intelligent Cane Robot (iCane) and is designed to aid elderly and handicapped people in walking. The motion of the cane robot is controlled for both normal and abnormal walking conditions. For the user's normal walking aided by the cane robot, a concept called “Intentional Direction” (ITD) is proposed. Guided by the online-estimated ITD, we apply the admittance control method in the motion control of the cane robot. For abnormal walking, we mainly studied the case of the user falling down. The center of gravity (COG) of the user can be estimated from the angle of an inverted pendulum that represents the human dynamic model. A fall prevention algorithm based on the relationship between the user's COG and the cane is proposed. Because the cane robot is small, its own stability should first be ensured while it is preventing the user from falling down. A universal joint driven by two DC motors is used to reduce the moment that would cause the cane robot to fall over. The proposed method is verified through experiments.
---
paper_title: A reliable fall detection system based on wearable sensor and signal magnitude area for elderly residents
paper_content:
Falls are the primary cause of accidents for elderly people and often result in serious injury and health threats. It is also the main obstacle to independent living for frail and elderly people. A reliable fall detector can reduce the fear of falling and provide the user with the reassurance to maintain an independent lifestyle since the reliable and effective fall detection mechanism will provide urgent medical support and dramatically reduce the cost of medical care. In this work, we propose a fall-detecting system based on a wearable sensor and a real-time fall detection algorithm. We use a waist- mounted tri-axial accelerometer to capture movement data of the human body, and propose a fall detection method that uses the area under a signal magnitude curve to distinguish between falls and daily activities. Experimental results demonstrate the effectiveness of proposed scheme with high reliability and sensitivity on fall detection. The system is not only cost effective but also portable that fulfills the requirements of fall detection.
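As an illustration of the signal-magnitude-area idea described above, the sketch below computes a windowed SMA from tri-axial accelerometer samples and thresholds it; the sampling rate, window length and threshold are illustrative assumptions, not the authors' values.

```python
import numpy as np

def signal_magnitude_area(ax, ay, az, fs=50, window_s=1.0):
    """Windowed signal magnitude area of a tri-axial acceleration signal.

    ax, ay, az : 1-D arrays of axis accelerations (in g, gravity removed).
    fs         : sampling rate in Hz (assumed value).
    window_s   : window length in seconds (assumed value).
    """
    n = int(fs * window_s)
    sma = []
    for start in range(0, len(ax) - n + 1, n):
        s = slice(start, start + n)
        sma.append((np.abs(ax[s]) + np.abs(ay[s]) + np.abs(az[s])).sum() / n)
    return np.array(sma)

def flag_falls(sma_values, threshold=2.5):
    """Mark windows whose SMA exceeds an illustrative threshold as fall candidates."""
    return sma_values > threshold
```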
---
paper_title: smartPrediction: a real-time smartphone-based fall risk prediction and prevention system
paper_content:
The high risk of falls and the substantial increase in the elderly population have recently stimulated scientific research on Smartphone-based fall detection systems. Even though these systems are helpful for fall detection, the best way to reduce the number of falls and their consequences is to predict and prevent them from happening in the first place. To address the issue of fall prevention, in this paper, we propose a fall prediction system that integrates the sensor data of Smartphones and a Smartshoe. We designed and implemented a Smartshoe that contains four pressure sensors with a Wi-Fi communication module to unobtrusively collect data in any environment. By assimilating the Smartshoe and Smartphone sensor data, we performed an extensive set of experiments to evaluate normal and abnormal walking patterns. The system can generate an alert message on the Smartphone to warn the user about high-risk gait patterns and potentially save them from an imminent fall. We validated our approach using a decision tree with 10-fold cross-validation and found 97.2% accuracy in gait abnormality detection.
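The 10-fold cross-validated decision-tree evaluation mentioned above could be reproduced along the following lines with scikit-learn; the feature matrix here is synthetic stand-in data, not the authors' Smartphone/Smartshoe recordings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical gait features (e.g. step time, stride variability, pressure readings)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)            # 0 = normal gait, 1 = abnormal gait

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```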
---
paper_title: The Barthel ADL Index: a reliability study.
paper_content:
The Barthel Index is a valid measure of disability. In this study we investigated the reliability of four different methods of obtaining the score in 25 patients: self-report, asking a trained nurse who had worked with the patient for at least one shift, and separate testing by two skilled observers within 72 hours of admission. Analysis of total (summed) scores revealed a close correlation between all four methods: a difference of 4/20 points was likely to reflect a genuine difference. In individual items, most disagreement was minor and involved the definition of middle grades. Asking an informed nurse or relative was as reliable as testing, and is quicker.
---
paper_title: Development and evaluation of evidence based risk assessment tool (STRATIFY) to predict which elderly inpatients will fall: case-control and cohort studies.
paper_content:
OBJECTIVES ::: To identify clinical characteristics of elderly inpatients that predict their chance of falling (phase 1) and to use these characteristics to derive a risk assessment tool and to evaluate its power in predicting falls (phases 2 and 3). ::: ::: DESIGN ::: Phase 1: a prospective case-control study. Phases 2 and 3: prospective evaluations of the derived risk assessment tool in predicting falls in two cohorts. ::: ::: SETTING ::: Elderly care units of St Thomas's Hospital (phases 1 and 2) and Kent and Canterbury Hospital (phase 3). ::: ::: SUBJECTS ::: Elderly hospital inpatients (aged > or = 65 years): 116 cases and 116 controls in phase 1, 217 patients in phase 2, and 331 in phase 3. ::: ::: MAIN OUTCOME MEASURES ::: 21 separate clinical characteristics were assessed in phase 1, including the abbreviated mental test score, modified Barthel index, a transfer and mobility score obtained by combining the transfer and mobility sections of the Barthel index, and several nursing judgements. ::: ::: RESULTS ::: In phase 1 five factors were independently associated with a higher risk of falls: fall as a presenting complaint (odds ratio 4.64 (95% confidence interval 2.59 to 8.33)); a transfer and mobility score of 3 or 4 (2.10 (1.22 to 3.61)); and primary nurses' judgement that a patient was agitated (20.9 (9.62 to 45.62)), needed frequent toileting (2.48 (1.08 to 5.70)), and was visually impaired (3.56 (1.26 to 10.05)). A risk assessment score (range 0-5) was derived by scoring one point for each of these five factors. In phases 2 and 3 a risk assessment score > 2 was used to define high risk: the sensitivity and specificity of the score to predict falls during the following week were 93% and 88% respectively in phase 2 and 92% and 68% respectively in phase 3. ::: ::: CONCLUSION ::: This simple risk assessment tool predicted with clinically useful sensitivity and specificity a high percentage of falls among elderly hospital inpatients.
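The scoring rule reported above is simple enough to state directly; a minimal sketch, assuming boolean inputs for the five risk factors:

```python
def stratify_score(fall_on_presentation, transfer_mobility_3_or_4,
                   agitated, frequent_toileting, visually_impaired):
    """STRATIFY-style risk score: one point per factor present (range 0-5)."""
    factors = [fall_on_presentation, transfer_mobility_3_or_4,
               agitated, frequent_toileting, visually_impaired]
    return sum(bool(f) for f in factors)

def high_fall_risk(score):
    """A total score greater than 2 defines high risk, as in phases 2 and 3."""
    return score > 2
```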
---
| Title: Survey on Fall Detection and Fall Prevention Using Wearable and External Sensors
Section 1: Introduction
Description 1: Provide a background on fall-related injuries, their consequences, and common causes. Introduce the importance of fall detection (FD) and fall prevention (FP) systems, and briefly mention the types of sensors used.
Section 2: Machine Learning General Model of FD and FP Systems
Description 2: Describe the general model used by FD and FP systems, including the data collection, feature extraction, learning, and evaluation modules. Explain the key terms related to machine learning.
Section 3: Data Collection
Description 3: Explain the process of data collection, the variables needed, and the methods of gathering and cleaning the data.
Section 4: Feature Extraction
Description 4: Discuss the process of selecting relevant characteristics or attributes from the collected data. Explain feature construction and feature selection.
Section 5: Learning Module
Description 5: Describe the learning mechanisms in FD and FP systems, focusing on supervised learning algorithms such as Decision Trees, Naive Bayes, K-Nearest Neighbor, and Support Vector Machines.
Section 6: Model Evaluation
Description 6: Explain the importance of evaluating the performance of learning models in FD and FP systems. Describe common performance indicators and evaluation methods.
Section 7: Design Issues
Description 7: Discuss important aspects to consider when designing or evaluating FD and FP systems, such as obtrusiveness, occlusion, privacy, computational cost, energy consumption, noise, and threshold definition.
Section 8: Fall Detection Systems
Description 8: Present the basic structure of FD systems and the different types of sensors used. Provide an overview of various fall detection systems using external and wearable sensors.
Section 9: Fall Prevention Systems
Description 9: Introduce fall prevention systems and discuss the significant factors and strategies for fall prevention, covering environmental, physical, and psychological factors.
Section 10: Evaluation of the Systems
Description 10: Provide a qualitative evaluation of state-of-the-art FD and FP systems based on the design issues described previously. Summarize key studies and compare their approaches, strengths, and weaknesses.
Section 11: Conclusions
Description 11: Summarize the main findings of the survey, the general model of FD and FP systems, design considerations, and qualitative evaluation of various systems. Offer final thoughts and potential directions for future research.
Section 12: Author Contributions
Description 12: Specify the contributions made by each author to the manuscript. |
A Review on Algorithms for Constraint-based Causal Discovery | 17 | ---
paper_title: Big data problems on discovering and analyzing causal relationships in epidemiological data
paper_content:
This research focuses on learning causal relationships from epidemiological data. We introduce the research need for causal reasoning and address one of the big data problems in epidemiology by showing the complexity of causal discovery and analysis in an observational epidemiological dataset. We also provide several computational methods for solving these problems, including building a framework for causal reasoning on epidemiological datasets, improved algorithms for local causal discovery, and the conceptual design of subgraph decompositions. This research further discusses how these approaches relate to epidemiology. Through this research, we are able to more efficiently and effectively discover and analyze causal relationships in a big epidemiological dataset.
---
paper_title: Computation, causation, and discovery
paper_content:
In science, business and policymaking - anywhere data are used in prediction - two sorts of problems requiring very different methods of analysis often arise. The first, problems of recognition and classification, concerns learning how to use some features of a system to accurately predict other features of that system. The second, problems of causal discovery, concerns learning how to predict those changes to some features of a system that will result if an intervention changes other features. This book is about the second - more difficult - type of problem. Typical problems of causal discovery are: how will a change in commission rates affect the total sales of a company? How will a reduction in cigarette smoking among older smokers affect their life expectancy? How will a change in the formula a college uses to award scholarships affect its dropout rate? These sorts of changes are interventions that directly alter some features of the system and perhaps - and this is the question - indirectly alter others. The contributors discuss recent research and applications using Bayes nets or directed graphic representations, including representations of feedback of "recursive" systems. The book contains a thorough discussion of foundational issues, algorithms, proof techniques, and applications to economics, physics, biology, educational research and other areas.
---
paper_title: The center for causal discovery of biomedical knowledge from big data
paper_content:
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers.
---
paper_title: Probabilistic Graphical Models: Principles and Techniques
paper_content:
Most tasks require a person or an automated system to reason -- to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.
---
paper_title: Bridging Causal Relevance and Pattern Discriminability: Mining Emerging Patterns from High-Dimensional Data
paper_content:
It is a nontrivial task to build an accurate emerging pattern (EP) classifier from high-dimensional data because we inevitably face two challenges: 1) how to efficiently extract a minimal set of strongly predictive EPs from an explosive number of candidate patterns, and 2) how to handle the highly sensitive choice of the minimal support threshold. To address these two challenges, we bridge causal relevance and EP discriminability (the predictive ability of emerging patterns) to facilitate EP mining and propose a new framework of mining EPs from high-dimensional data. In this framework, we study the relationships between causal relevance in a causal Bayesian network and EP discriminability in EP mining, and then reduce the pattern space of EP mining to direct causes and direct effects, or the Markov blanket (MB) of the class attribute in a causal Bayesian network. The proposed framework is instantiated by two EPs-based classifiers, CE-EP and MB-EP, where CE stands for direct Causes and direct Effects, and MB for Markov Blanket. Extensive experiments on a broad range of data sets validate the effectiveness of the CE-EP and MB-EP classifiers against other well-established methods, in terms of predictive accuracy, pattern numbers, running time, and sensitivity analysis.
---
paper_title: Structure Learning in Graphical Modeling
paper_content:
A graphical model is a statistical model that is associated to a graph whose nodes correspond to variables of interest. The edges of the graph reflect allowed conditional dependencies among the variables. Graphical models admit computationally convenient factorization properties and have long been a valuable tool for tractable modeling of multivariate distributions. More recently, applications such as reconstructing gene regulatory networks from gene expression data have driven major advances in structure learning, that is, estimating the graph underlying a model. We review some of these advances and discuss methods such as the graphical lasso and neighborhood selection for undirected graphical models (or Markov random fields), and the PC algorithm and score-based search methods for directed graphical models (or Bayesian networks). We further review extensions that account for effects of latent variables and heterogeneous data sources.
---
paper_title: Practical Approaches to Causal Relationship Exploration
paper_content:
This brief presents four practical methods to effectively explore causal relationships, which are often used for explanation, prediction and decision making in medicine, epidemiology, biology, economics, physics and social sciences. The first two methods apply conditional independence tests for causal discovery. The last two methods employ association rule mining for efficient causal hypothesis generation, and a partial association test and retrospective cohort study for validating the hypotheses. All four methods are innovative and effective in identifying potential causal relationships around a given target, and each has its own strength and weakness. For each method, a software tool is provided along with examples demonstrating its use. Practical Approaches to Causal Relationship Exploration is designed for researchers and practitioners working in the areas of artificial intelligence, machine learning, data mining, and biomedical research. The material also benefits advanced students interested in causal relationship discovery.
---
paper_title: Experiment selection for causal discovery
paper_content:
Randomized controlled experiments are often described as the most reliable tool available to scientists for discovering causal relationships among quantities of interest. However, it is often unclear how many and which different experiments are needed to identify the full (possibly cyclic) causal structure among some given (possibly causally insufficient) set of variables. Recent results in the causal discovery literature have explored various identifiability criteria that depend on the assumptions one is able to make about the underlying causal process, but these criteria are not directly constructive for selecting the optimal set of experiments. Fortunately, many of the needed constructions already exist in the combinatorics literature, albeit under terminology which is unfamiliar to most of the causal discovery community. In this paper we translate the theoretical results and apply them to the concrete problem of experiment selection. For a variety of settings we give explicit constructions of the optimal set of experiments and adapt some of the general combinatorics results to answer questions relating to the problem of experiment selection.
---
paper_title: An Introduction to Causal Inference
paper_content:
The goal of many sciences is to understand the mechanisms by which variables came to take on the values they have (that is, to find a generative model), and to predict what the values of those variables would be if the naturally occurring mechanisms were subject to outside manipulations. The past 30 years has seen a number of conceptual developments that are partial solutions to the problem of causal inference from observational sample data or a mixture of observational sample and experimental data, particularly in the area of graphical causal modeling. However, in many domains, problems such as the large numbers of variables, small samples sizes, and possible presence of unmeasured causes, remain serious impediments to practical applications of these developments. The articles in the Special Topic on Causality address these and other problems in applying graphical causal modeling algorithms. This introduction to the Special Topic on Causality provides a brief introduction to graphical causal modeling, places the articles in a broader context, and describes the differences between causal inference and ordinary machine learning classification and prediction problems.
---
paper_title: Determining molecular predictors of adverse drug reactions with causality analysis based on structure learning
paper_content:
Objective Adverse drug reaction (ADR) can have dire consequences. However, our current understanding of the causes of drug-induced toxicity is still limited. Hence it is of paramount importance to determine molecular factors of adverse drug responses so that safer therapies can be designed. ::: ::: Methods We propose a causality analysis model based on structure learning (CASTLE) for identifying factors that contribute significantly to ADRs from an integration of chemical and biological properties of drugs. This study aims to address two major limitations of the existing ADR prediction studies. First, ADR prediction is mostly performed by assessing the correlations between the input features and ADRs, and the identified associations may not indicate causal relations. Second, most predictive models lack biological interpretability. ::: ::: Results CASTLE was evaluated in terms of prediction accuracy on 12 organ-specific ADRs using 830 approved drugs. The prediction was carried out by first extracting causal features with structure learning and then applying them to a support vector machine (SVM) for classification. Through rigorous experimental analyses, we observed significant increases in both macro and micro F1 scores compared with the traditional SVM classifier, from 0.88 to 0.89 and 0.74 to 0.81, respectively. Most importantly, identified links between the biological factors and organ-specific drug toxicities were partially supported by evidence in Online Mendelian Inheritance in Man. ::: ::: Conclusions The proposed CASTLE model not only performed better in prediction than the baseline SVM but also produced more interpretable results (ie, biological factors responsible for ADRs), which is critical to discovering molecular activators of ADRs.
---
paper_title: From Observational Studies to Causal Rule Mining
paper_content:
Randomised controlled trials (RCTs) are the most effective approach to causal discovery, but in many circumstances it is impossible to conduct RCTs. Therefore observational studies based on passively observed data are widely accepted as an alternative to RCTs. However, in observational studies, prior knowledge is required to generate the hypotheses about the cause-effect relationships to be tested, hence they can only be applied to problems with available domain knowledge and a handful of variables. In practice, many data sets are of high dimensionality, which leaves observational studies out of the opportunities for causal discovery from such a wealth of data sources. In another direction, many efficient data mining methods have been developed to identify associations among variables in large data sets. The problem is, causal relationships imply associations, but the reverse is not always true. However we can see the synergy between the two paradigms here. Specifically, association rule mining can be used to deal with the high-dimensionality problem while observational studies can be utilised to eliminate non-causal associations. In this paper we propose the concept of causal rules (CRs) and develop an algorithm for mining CRs in large data sets. We use the idea of retrospective cohort studies to detect CRs based on the results of association rule mining. Experiments with both synthetic and real world data sets have demonstrated the effectiveness and efficiency of CR mining. In comparison with the commonly used causal discovery methods, the proposed approach in general is faster and has better or competitive performance in finding correct or sensible causes. It is also capable of finding a cause consisting of multiple variables, a feature that other causal discovery methods do not possess.
---
paper_title: Identifying causal gateways and mediators in complex spatio-temporal systems
paper_content:
Identifying regions important for spreading and mediating perturbations is crucial to assess the susceptibilities of spatio-temporal complex systems such as the Earth's climate to volcanic eruptions, extreme events or geoengineering. Here a data-driven approach is introduced based on a dimension reduction, causal reconstruction, and novel network measures based on causal effect theory that go beyond standard complex network tools by distinguishing direct from indirect pathways. Applied to a data set of atmospheric dynamics, the method identifies several strongly uplifting regions acting as major gateways of perturbations spreading in the atmosphere. Additionally, the method provides a stricter statistical approach to pathways of atmospheric teleconnections, yielding insights into the Pacific-Indian Ocean interaction relevant for monsoonal dynamics. Also for neuroscience or power grids, the novel causal interaction perspective provides a complementary approach to simulations or experiments for understanding the functioning of complex spatio-temporal systems with potential applications in increasing their resilience to shocks or extreme events.
---
paper_title: A Bayesian Method for the Induction of Probabilistic Networks from Data
paper_content:
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
---
paper_title: Causal Discovery for Climate Research Using Graphical Models
paper_content:
Causal discovery seeks to recover cause-effect relationships from statistical data using graphical models. One goal of this paper is to provide an accessible introduction to causal discovery methods for climate scientists, with a focus on constraint-based structure learning. Second, in a detailed case study, constraint-based structure learning is applied to derive hypotheses of causal relationships between four prominent modes of atmospheric low-frequency variability in boreal winter, including the Western Pacific Oscillation (WPO), Eastern Pacific Oscillation (EPO), Pacific-North America (PNA) pattern, and North Atlantic Oscillation (NAO). The results are shown in the form of static and temporal independence graphs, also known as Bayesian networks. It is found that WPO and EPO are nearly indistinguishable from the cause-effect perspective, as strong simultaneous coupling is identified between the two. In addition, changes in the state of EPO (NAO) may cause changes in the state of NAO (PNA) approximately 18 (3-6) days later. These results are not only consistent with previous findings on dynamical processes connecting different low-frequency modes (e.g., interaction between synoptic and low-frequency eddies) but also provide the basis for formulating new hypotheses regarding the time scale and temporal sequencing of dynamical processes responsible for these connections. Last, the authors propose to use structure learning for climate networks, which are currently based primarily on correlation analysis. While correlation-based climate networks focus on similarity between nodes, independence graphs would provide an alternative viewpoint by focusing on information flow in the network.
---
paper_title: Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies
paper_content:
A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which
---
paper_title: Inferring microRNA-mRNA causal regulatory relationships from expression data
paper_content:
MOTIVATION ::: microRNAs (miRNAs) are known to play an essential role in the post-transcriptional gene regulation in plants and animals. Currently, several computational approaches have been developed with a shared aim to elucidate miRNA-mRNA regulatory relationships. Although these existing computational methods discover the statistical relationships, such as correlations and associations between miRNAs and mRNAs at data level, such statistical relationships are not necessarily the real causal regulatory relationships that would ultimately provide useful insights into the causes of gene regulations. The standard method for determining causal relationships is randomized controlled perturbation experiments. In practice, however, such experiments are expensive and time consuming. Our motivation for this study is to discover the miRNA-mRNA causal regulatory relationships from observational data. ::: ::: ::: RESULTS ::: We present a causality discovery-based method to uncover the causal regulatory relationship between miRNAs and mRNAs, using expression profiles of miRNAs and mRNAs without taking into consideration the previous target information. We apply this method to the epithelial-to-mesenchymal transition (EMT) datasets and validate the computational discoveries by a controlled biological experiment for the miR-200 family. A significant portion of the regulatory relationships discovered in data is consistent with those identified by experiments. In addition, the top genes that are causally regulated by miRNAs are highly relevant to the biological conditions of the datasets. The results indicate that the causal discovery method effectively discovers miRNA regulatory relationships in data. Although computational predictions may not completely replace intervention experiments, the accurate and reliable discoveries in data are cost effective for the design of miRNA experiments and the understanding of miRNA-mRNA regulatory relationships.
---
paper_title: Causal Discovery from Medical Textual Data
paper_content:
Medical records usually incorporate investigative reports, historical notes, patient encounters or discharge summaries as textual data. This study focused on learning causal relationships from intensive care unit (ICU) discharge summaries of 1611 patients. Identification of the causal factors of clinical conditions and outcomes can help us formulate better management, prevention and control strategies for the improvement of health care. For causal discovery we applied the Local Causal Discovery (LCD) algorithm, which uses the framework of causal Bayesian Networks to represent causal relationships among model variables. LCD takes as input a dataset and outputs causes of the form variable Y causally influences variable Z. Using the words that occur in the discharge summaries as attributes for input, LCD output 8 purported causal relationships. The relationships ranked as most probable subjectively appear to be most causally plausible.
---
paper_title: A Review of Bayesian Networks and Structure Learning
paper_content:
This article reviews the topic of Bayesian networks. A Bayesian network is a factorisation of a probability distribution along a directed acyclic graph. The relation between graphical d-separation and independence is described. A short article by Arthur Cayley (1853) [7] is discussed, which laid out ideas later used in Bayesian networks: factorisation, the noisy `or' gate, and applications of algebraic geometry to Bayesian networks. The ideas behind Pearl's intervention calculus, where the DAG represents a causal dependence structure, are discussed, and the relation between the work of Cayley and Pearl is commented on. Most of the discussion is about structure learning, outlining the two main approaches: search-and-score versus constraint-based. Constraint-based algorithms often rely on the assumption of faithfulness, that the data to which the algorithm is applied is generated from distributions satisfying a faithfulness assumption where graphical d-separation and independence are equivalent. The article presents some considerations for constraint-based algorithms based on recent data analysis, indicating a variety of situations where the faithfulness assumption does not hold.
---
paper_title: Learning Bayesian Networks
paper_content:
This chapter addresses the problem of learning the parameters from data. It also discusses score-based structure learning and constraint-based structure learning. The method for learning all parameters in a Bayesian network follows readily from the method for learning a single parameter. The chapter presents a method for learning the probability of a binomial variable and extends this method to multinomial variables. It also provides guidelines for articulating the prior beliefs concerning probabilities. The chapter illustrates the constraint-based approach by showing how to learn a directed acyclic graph (DAG) faithful to a probability distribution. Structure learning consists of learning the DAG in a Bayesian network from data. It is necessary to know which DAG satisfies the Markov condition with the probability distribution P that is generating the data. The process of learning such a DAG is called "model selection." A DAG includes a probability distribution P if the DAG does not entail any conditional independencies that are not in P. In score-based structure learning, a score is assigned to each DAG based on the data, such that in the limit of large data sets the highest-scoring DAGs are those that best represent the generating distribution. After scoring the DAGs, the scores are used, possibly along with prior probabilities, to learn a DAG. The most straightforward score, the Bayesian score, is the probability of the data D given the DAG. Once a DAG is learned from data, its parameters can then be learned as well. The result is a Bayesian network that can be used to do inference. In the constraint-based approach, a DAG is found for which the Markov condition entails all and only those conditional independencies that are in the probability distribution P of the variables of interest. The chapter applies structure learning to inferring causal influences from data and presents learning packages. It presents examples of learning Bayesian networks and of causal learning.
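For concreteness, under the assumptions of the Cooper-Herskovits (K2) metric with uniform parameter priors, the Bayesian score mentioned above has the closed form

```latex
P(D \mid G) \;=\; \prod_{i=1}^{n} \prod_{j=1}^{q_i}
\frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!} \prod_{k=1}^{r_i} N_{ijk}! ,
```

where r_i is the number of states of variable X_i, q_i the number of configurations of its parents in G, N_ijk the count of cases with X_i = k and parent configuration j, and N_ij = sum_k N_ijk.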
---
paper_title: The hidden life of latent variables: Bayesian learning with mixed graph models
paper_content:
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the specification of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modeling such variables explicitly.
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: Bayesian Network Induction via Local Neighborhoods
paper_content:
In recent years, Bayesian networks have become a highly successful tool for diagnosis, analysis, and decision making in real-world domains. We present an efficient algorithm for learning Bayes networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blanket, then connecting nodes in a maximally consistent way. In contrast to the majority of work, which typically uses hill-climbing approaches that may produce dense and causally incorrect nets, our approach yields much more compact causal networks by heeding independencies in the data. Compact causal networks facilitate fast inference and are also easier to understand. We prove that under mild assumptions, our approach requires time polynomial in the size of the data and the number of nodes. A randomized variant, also presented here, yields comparable results at much higher speeds.
---
paper_title: Active Learning for Structure in Bayesian Networks
paper_content:
The task of causal structure discovery from empirical data is a fundamental problem in many areas. Experimental data is crucial for accomplishing this task. However, experiments are typically expensive, and must be selected with great care. This paper uses active learning to determine the experiments that are most informative towards uncovering the underlying structure. We formalize the causal learning task as that of learning the structure of a causal Bayesian network. We consider an active learner that is allowed to conduct experiments, where it intervenes in the domain by setting the values of certain variables. We provide a theoretical framework for the active learning problem, and an algorithm that actively chooses the experiments to perform based on the model learned so far. Experimental results show that active learning can substantially reduce the number of observations required to determine the structure of a domain.
---
paper_title: Active learning of causal networks with intervention experiments and optimal
paper_content:
The causal discovery from data is important for various scientific investigations. Because we cannot distinguish the different directed acyclic graphs (DAGs) in a Markov equivalence class learned from observational data, we have to collect further information on causal structures from experiments with external interventions. In this paper, we propose an active learning approach for discovering causal structures in which we first find a Markov equivalence class from observational data, and then we orient undirected edges in every chain component via intervention experiments separately. In the experiments, some variables are manipulated through external interventions. We discuss two kinds of intervention experiments, randomized experiment and quasi-experiment. Furthermore, we give two optimal designs of experiments, a batch-intervention design and a sequential-intervention design, to minimize the number of manipulated variables and the set of candidate structures based on the minimax and the maximum entropy criteria. We show theoretically that structural learning can be done locally in subgraphs of chain components without need of checking illegal v-structures and cycles in the whole network and that a Markov equivalence subclass obtained after each intervention can still be depicted as a chain graph.
---
paper_title: Causal Inference and Causal Explanation with Background Knowledge
paper_content:
This paper presents correct algorithms for answering the following two questions: (i) Does there exist a causal explanation consistent with a set of background knowledge which explains all of the observed independence facts in a sample? (ii) Given that there is such a causal explanation, what are the causal relationships common to every such causal explanation?
---
paper_title: Using Markov Blankets for Causal Structure Learning
paper_content:
We show how a generic feature-selection algorithm returning strongly relevant variables can be turned into a causal structure-learning algorithm. We prove this under the Faithfulness assumption for the data distribution. In a causal graph, the strongly relevant variables for a node X are its parents, children, and children's parents (or spouses), also known as the Markov blanket of X. Identifying the spouses leads to the detection of the V-structure patterns and thus to causal orientations. Repeating the task for all variables yields a valid partially oriented causal graph. We first show an efficient way to identify the spouse links. We then perform several experiments in the continuous domain using the Recursive Feature Elimination feature-selection algorithm with Support Vector Regression and empirically verify the intuition of this direct (but computationally expensive) approach. Within the same framework, we then devise a fast and consistent algorithm, Total Conditioning (TC), and a variant, TCbw, with an explicit backward feature-selection heuristic, for Gaussian data. After running a series of comparative experiments on five artificial networks, we argue that Markov blanket algorithms such as TC/TCbw or Grow-Shrink scale better than the reference PC algorithm and provide higher structural accuracy.
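As a reference point for the terminology above, here is a small sketch of reading off the Markov blanket (parents, children and spouses) of a node from a known DAG; algorithms such as TC or Grow-Shrink instead have to recover this set from data via conditional-independence tests.

```python
def markov_blanket(dag, x):
    """dag: dict mapping each node to the set of its parents; x: target node."""
    parents = set(dag.get(x, set()))
    children = {v for v, pa in dag.items() if x in pa}
    spouses = {p for c in children for p in dag.get(c, set())} - {x}
    return parents | children | spouses

# Toy example: A -> X, X -> C, B -> C, so MB(X) = {A, C, B}
dag = {"X": {"A"}, "C": {"X", "B"}, "A": set(), "B": set()}
print(markov_blanket(dag, "X"))
```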
---
paper_title: A definition of conditional mutual information for arbitrary ensembles
paper_content:
Shannon's mutual information for discrete random variables has been generalized to random ensembles by R. L. Dobrushin, M. S. Pinsker, and others. Dobrushin also proposed a rather indirect and not very intuitive definition of conditional mutual information which has the additional shortcoming of requiring the existence of certain regular conditional probability measures. In this note we propose a new definition of conditional mutual information for arbitrary ensembles which is simple, intuitive, and completely general. We show that our definition and Dobrushin's both result in the same quantity when the latter is defined. We also derive several important properties of conditional mutual information.
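For discrete variables, the quantity being generalized above is the familiar conditional mutual information, which can be written as

```latex
I(X;Y \mid Z) \;=\; \sum_{x,y,z} p(x,y,z)\,
\log \frac{p(x,y \mid z)}{p(x \mid z)\, p(y \mid z)}
\;=\; H(X \mid Z) - H(X \mid Y, Z) \;\ge\; 0 ,
```

with equality to zero exactly when X and Y are conditionally independent given Z, which is the property that makes it useful as a conditional-independence measure.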
---
paper_title: Learning gaussian graphical models of gene networks with false discovery rate control
paper_content:
In many cases what matters is not whether a false discovery is made or not but the expected proportion of false discoveries among all the discoveries made, i.e. the so-called false discovery rate (FDR). We present an algorithm aiming at controlling the FDR of edges when learning Gaussian graphical models (GGMs). The algorithm is particularly suitable when dealing with more nodes than samples, e.g. when learning GGMs of gene networks from gene expression data. We illustrate this on the Rosetta compendium [8].
---
paper_title: A new class of non-Shannon-type inequalities for entropies
paper_content:
In this paper we prove a countable set of non-Shannon-type linear information inequalities for entropies of discrete random variables, i.e., information inequalities which cannot be reduced to the "basic" inequality I(X : Y | Z) ≥ 0. Our results generalize the inequalities of Z. Zhang and R. Yeung (1998), who found the first examples of non-Shannon-type information inequalities.
---
paper_title: Tornado Forecasting with Multiple Markov Boundaries
paper_content:
Reliable tornado forecasting with a long lead time can greatly support emergency response and is of vital importance for the economy and society. The large number of meteorological variables in spatiotemporal domains and the complex relationships among variables remain the top difficulties for long-lead tornado forecasting. Standard data mining approaches to tackle high dimensionality are usually designed to discover a single set of features, without alternative options for domain scientists to select more reliable and physically interpretable variables. In this work, we provide a new solution that uses the concept of multiple Markov boundaries in local causal discovery to identify multiple sets of precursors for tornado forecasting. Specifically, our algorithm first confines the extremely large feature space to a small core feature space, then it mines multiple sets of precursors from the core feature space that may contribute equally to tornado forecasting. With the multiple sets of precursors, we are able to report to domain scientists a predictive yet practical set of precursors. An extensive empirical study is conducted on eight benchmark data sets and the historical tornado data near Oklahoma City, OK in the United States. Experimental results show that the tornado precursors we identified can help to improve the reliability of long-lead-time catastrophic tornado forecasting.
---
paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables
paper_content:
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
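Constraint-based procedures such as PC, FCI and RFCI are driven by repeated conditional-independence queries of the form X ⟂ Y | S. For Gaussian data these are typically answered with a Fisher-z test on the partial correlation; the sketch below illustrates the idea (alpha and the clipping constant are illustrative, and pcalg's own implementation may differ in detail).

```python
import numpy as np
from scipy.stats import norm

def fisher_z_ci_test(data, i, j, S, alpha=0.05):
    """Return True if X_i is judged independent of X_j given X_S at level alpha.

    data : (n_samples, n_variables) array; i, j : column indices; S : list of indices.
    """
    n = data.shape[0]
    cols = [i, j] + list(S)
    corr = np.corrcoef(data[:, cols], rowvar=False)
    prec = np.linalg.inv(corr)                            # precision of the sub-matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])    # partial correlation of i, j given S
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))                   # Fisher z-transform
    stat = np.sqrt(n - len(S) - 3) * abs(z)
    return stat <= norm.ppf(1 - alpha / 2)                # fail to reject -> independent
```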
---
paper_title: Adaptive Thresholding in Structure Learning of a Bayesian Network
paper_content:
Thresholding a measure in conditional independence (CI) tests using a fixed value enables learning and removing edges as part of learning a Bayesian network structure. However, the learned structure is sensitive to the threshold, which is commonly selected: 1) arbitrarily; 2) irrespective of characteristics of the domain; and 3) fixed for all CI tests. We analyze the impact on mutual information (a CI measure) of factors such as sample size, degree of variable dependence, and the variables' cardinalities. Following this analysis, we suggest adaptively thresholding individual tests based on these factors. We show that adaptive thresholds better distinguish between pairs of dependent variables and pairs of independent variables, and enable learning structures more accurately and quickly than fixed thresholds.
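As a rough illustration of how a CI-test threshold can be made to depend on sample size and cardinalities (a sketch in the spirit of the entry above, not the authors' exact rule), one can use the standard asymptotic result that the G-statistic 2·N·MI is chi-square distributed under the null of conditional independence. The function name `adaptive_mi_threshold` and the significance level `alpha` below are illustrative choices.

```python
from scipy.stats import chi2

def adaptive_mi_threshold(n_samples, card_x, card_y, card_z=1, alpha=0.05):
    """Smallest empirical MI (in nats) at which conditional independence of X and Y
    given Z is rejected at level alpha, using the asymptotic result that the
    G-statistic 2 * N * MI is chi-square distributed under the null.

    card_z is the product of the cardinalities of the conditioning variables
    (use 1 for an unconditional test).
    """
    dof = (card_x - 1) * (card_y - 1) * card_z
    return chi2.ppf(1.0 - alpha, dof) / (2.0 * n_samples)

# Example: the MI threshold for two binary variables given one binary
# conditioning variable shrinks as the sample size grows.
print(adaptive_mi_threshold(500, 2, 2, card_z=2))
print(adaptive_mi_threshold(50_000, 2, 2, card_z=2))
```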
---
paper_title: Markov Blanket Feature Selection Using Representative Sets
paper_content:
Using Markov blankets in a Bayesian network for feature selection has received much attention in recent years. The Markov blanket of a class attribute in a Bayesian network is a unique yet minimal feature subset for optimal feature selection if the probability distribution of a data set can be faithfully represented by this Bayesian network. However, if a data set violates the faithfulness condition, Markov blankets of a class attribute may not be unique. To tackle this issue, in this paper we propose a new concept of representative sets and then design the selection via group alpha-investing (SGAI) algorithm to perform Markov blanket feature selection with representative sets for classification. Using a comprehensive set of real data, our empirical studies demonstrate that SGAI outperforms state-of-the-art Markov blanket feature selectors and other well-established feature selection methods.
---
paper_title: Algorithms for discovery of multiple Markov boundaries
paper_content:
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains.
---
paper_title: Scalable Techniques for Mining Causal Structures
paper_content:
Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form "the existence of item A implies the existence of item B." However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provides some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining causal relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.
---
paper_title: Causal Inference and Causal Explanation with Background Knowledge
paper_content:
This paper presents correct algorithms for answering the following two questions: (i) Does there exist a causal explanation consistent with a set of background knowledge which explains all of the observed independence facts in a sample? (ii) Given that there is such a causal explanation, what are the causal relationships common to every such causal explanation?
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: An Algorithm for Fast Recovery of Sparse Causal Graphs
paper_content:
Previous asymptotically correct algorithms for recovering causal structure from sample probabilities have been limited even in sparse causal graphs to a few variables. We describe an asymptotically...
---
paper_title: PC algorithm for nonparanormal graphical models
paper_content:
The PC algorithm uses conditional independence tests for model selection in graphical modeling with acyclic directed graphs. In Gaussian models, tests of conditional independence are typically based on Pearson correlations, and high-dimensional consistency results have been obtained for the PC algorithm in this setting. Analyzing the error propagation from marginal to partial correlations, we prove that high-dimensional consistency carries over to a broader class of Gaussian copula or nonparanormal models when using rank-based measures of correlation. For graph sequences with bounded degree, our consistency result is as strong as prior Gaussian results. In simulations, the 'Rank PC' algorithm works as well as the 'Pearson PC' algorithm for normal data and considerably better for non-normal data, all the while incurring a negligible increase of computation time. While our interest is in the PC algorithm, the presented analysis of error propagation could be applied to other algorithms that test the vanishing of low-order partial correlations.
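A small sketch of the rank-based correlation estimate that this kind of "Rank PC" approach builds on: under a Gaussian copula (nonparanormal) model, the latent Pearson correlation can be recovered from Spearman's rank correlation via the transform 2·sin(π·ρ_s/6). The resulting matrix can then be substituted for the Pearson correlation matrix in the usual partial-correlation CI tests. The helper name below is illustrative and assumes at least three columns of data.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_based_correlation(data):
    """Estimate the latent correlation matrix of a nonparanormal model from
    Spearman rank correlations (data: n_samples x n_variables, n_variables >= 3)."""
    rho_s, _ = spearmanr(data)                # matrix of pairwise Spearman correlations
    corr = 2.0 * np.sin(np.pi * rho_s / 6.0)  # Gaussian-copula correction
    np.fill_diagonal(corr, 1.0)
    return corr
```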
---
paper_title: Estimating high-dimensional directed acyclic graphs with the PC-algorithm
paper_content:
We consider the PC-algorithm (Spirtes et al., 2000) for estimating the skeleton of a very high-dimensional directed acyclic graph (DAG) with corresponding Gaussian distribution. The PC-algorithm is computationally feasible for sparse problems with many nodes, i.e. variables, and it has the attractive property of automatically achieving high computational efficiency as a function of the sparseness of the true underlying DAG. We prove consistency of the algorithm for very high-dimensional, sparse DAGs where the number of nodes is allowed to grow quickly with sample size n, as fast as O(n^a) for any 0 < a < infinity. The sparseness assumption is rather minimal, requiring only that the neighborhoods in the DAG are of lower order than sample size n. We empirically demonstrate the PC-algorithm on simulated data and argue that the algorithm is rather insensitive to the choice of its single tuning parameter.
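To make the adjacency-search step concrete, here is a minimal sketch of the PC skeleton phase for Gaussian data (not the pcalg implementation): conditional independence is tested with Fisher's z-transform of partial correlations, and edges are removed level by level as the conditioning-set size grows. The function names and the default significance level are illustrative.

```python
import itertools
import numpy as np
from scipy.stats import norm

def gauss_ci_independent(corr, i, j, S, n, alpha=0.01):
    """True if X_i is judged independent of X_j given X_S (Fisher z test)."""
    idx = [i, j] + list(S)
    prec = np.linalg.pinv(corr[np.ix_(idx, idx)])
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1.0 + r) / (1.0 - r))              # Fisher z-transform
    dof = max(n - len(S) - 3, 1)                         # guard for tiny samples
    p_value = 2.0 * (1.0 - norm.cdf(np.sqrt(dof) * abs(z)))
    return p_value > alpha

def pc_skeleton(data, alpha=0.01):
    """Adjacency search of the PC algorithm: start from the complete graph and
    delete edge i-j as soon as some subset S of i's other neighbours renders
    the endpoints conditionally independent."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    adj = {i: set(range(p)) - {i} for i in range(p)}
    level = 0
    while any(len(adj[i]) - 1 >= level for i in adj):
        for i in range(p):
            for j in list(adj[i]):
                if j not in adj[i]:
                    continue
                others = adj[i] - {j}
                if len(others) < level:
                    continue
                for S in itertools.combinations(sorted(others), level):
                    if gauss_ci_independent(corr, i, j, S, n, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return adj
```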
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: Adjacency-Faithfulness and Conservative Causal Inference
paper_content:
Most causal inference algorithms in the literature (e.g., Pearl (2000), Spirtes et al. (2000), Heckerman et al. (1999)) exploit an assumption usually referred to as the causal Faithfulness or Stability condition. In this paper, we highlight two components of the condition used in constraint-based algorithms, which we call "Adjacency-Faithfulness" and "Orientation-Faithfulness". We point out that assuming Adjacency-Faithfulness is true, it is in principle possible to test the validity of Orientation-Faithfulness. Based on this observation, we explore the consequence of making only the Adjacency-Faithfulness assumption. We show that the familiar PC algorithm has to be modified to be (asymptotically) correct under the weaker, Adjacency-Faithfulness assumption. Roughly the modified algorithm, called Conservative PC (CPC), checks whether Orientation-Faithfulness holds in the orientation phase, and if not, avoids drawing certain causal conclusions the PC algorithm would draw. However, if the stronger, standard causal Faithfulness condition actually obtains, the CPC algorithm is shown to output the same pattern as the PC algorithm does in the large sample limit. We also present a simulation study showing that the CPC algorithm runs almost as fast as the PC algorithm, and outputs significantly fewer false causal arrowheads than the PC algorithm does on realistic sample sizes. We end our paper by discussing how score-based algorithms such as GES perform when the Adjacency-Faithfulness but not the standard causal Faithfulness condition holds, and how to extend our work to the FCI algorithm, which allows for the possibility of latent variables.
---
paper_title: Conservative Independence-Based Causal Structure Learning in Absence of Adjacency Faithfulness
paper_content:
This paper presents an extension to the Conservative PC algorithm which is able to detect violations of adjacency faithfulness under causal sufficiency and triangle faithfulness. Violations can be characterized by pseudo-independent relations and equivalent edges, both generating a pattern of conditional independencies that cannot be modeled faithfully. Both cases lead to uncertainty about specific parts of the skeleton of the causal graph. These ambiguities are modeled by an f-pattern. We prove that our Adjacency Conservative PC algorithm is able to correctly learn the f-pattern. We argue that the solution also applies for the finite sample case if we accept that only strong edges can be identified. Experiments based on simulations and the ALARM benchmark model show that the rate of false edge removals is significantly reduced, at the expense of uncertainty on the skeleton and a higher sensitivity for accidental correlations.
---
paper_title: A Uniformly Consistent Estimator of Causal Effects under the $k$-Triangle-Faithfulness Assumption
paper_content:
Spirtes, Glymour and Scheines [Causation, Prediction, and Search (1993) Springer] described a pointwise consistent estimator of the Markov equivalence class of any causal structure that can be represented by a directed acyclic graph for any parametric family with a uniformly consistent test of conditional independence, under the Causal Markov and Causal Faithfulness assumptions. Robins et al. [Biometrika 90 (2003) 491-515], however, proved that there are no uniformly consistent estimators of Markov equivalence classes of causal structures under those assumptions. Subsequently, Kalisch and Bühlmann [J. Mach. Learn. Res. 8 (2007) 613-636] described a uniformly consistent estimator of the Markov equivalence class of a linear Gaussian causal structure under the Causal Markov and Strong Causal Faithfulness assumptions. However, the Strong Faithfulness assumption may be false with high probability in many domains. We describe a uniformly consistent estimator of both the Markov equivalence class of a linear Gaussian causal structure and the identifiable structural coefficients in the Markov equivalence class under the Causal Markov assumption and the considerably weaker k-Triangle-Faithfulness assumption.
---
paper_title: Adjacency-Faithfulness and Conservative Causal Inference
paper_content:
Most causal inference algorithms in the literature (e.g., Pearl (2000), Spirtes et al. (2000), Heckerman et al. (1999)) exploit an assumption usually referred to as the causal Faithfulness or Stability condition. In this paper, we highlight two components of the condition used in constraint-based algorithms, which we call "Adjacency-Faithfulness" and "Orientation-Faithfulness". We point out that assuming Adjacency-Faithfulness is true, it is in principle possible to test the validity of Orientation-Faithfulness. Based on this observation, we explore the consequence of making only the Adjacency-Faithfulness assumption. We show that the familiar PC algorithm has to be modified to be (asymptotically) correct under the weaker, Adjacency-Faithfulness assumption. Roughly the modified algorithm, called Conservative PC (CPC), checks whether Orientation-Faithfulness holds in the orientation phase, and if not, avoids drawing certain causal conclusions the PC algorithm would draw. However, if the stronger, standard causal Faithfulness condition actually obtains, the CPC algorithm is shown to output the same pattern as the PC algorithm does in the large sample limit. We also present a simulation study showing that the CPC algorithm runs almost as fast as the PC algorithm, and outputs significantly fewer false causal arrowheads than the PC algorithm does on realistic sample sizes. We end our paper by discussing how score-based algorithms such as GES perform when the Adjacency-Faithfulness but not the standard causal Faithfulness condition holds, and how to extend our work to the FCI algorithm, which allows for the possibility of latent variables.
---
paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables
paper_content:
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
---
paper_title: ParallelPC: an R package for efficient constraint based causal exploration
paper_content:
Discovering causal relationships from data is the ultimate goal of many research areas. Constraint based causal exploration algorithms, such as PC, FCI, RFCI, PC-simple, IDA and Joint-IDA have achieved significant progress and have many applications. A common problem with these methods is the high computational complexity, which hinders their applications in real world high dimensional datasets, e.g gene expression datasets. In this paper, we present an R package, ParallelPC, that includes the parallelised versions of these causal exploration algorithms. The parallelised algorithms help speed up the procedure of experimenting big datasets and reduce the memory used when running the algorithms. The package is not only suitable for super-computers or clusters, but also convenient for researchers using personal computers with multi core CPUs. Our experiment results on real world datasets show that using the parallelised algorithms it is now practical to explore causal relationships in high dimensional datasets with thousands of variables in a single multicore computer. ParallelPC is available in CRAN repository at this https URL
---
paper_title: A Fast PC Algorithm for High Dimensional Causal Discovery with Multi-Core PCs
paper_content:
Discovering causal relationships from observational data is a crucial problem and it has applications in many research areas. The PC algorithm is the state-of-the-art constraint based method for causal discovery. However, runtime of the PC algorithm, in the worst-case, is exponential to the number of nodes (variables), and thus it is inefficient when being applied to high dimensional data, e.g., gene expression datasets. On another note, the advancement of computer hardware in the last decade has resulted in the widespread availability of multi-core personal computers. There is a significant motivation for designing a parallelized PC algorithm that is suitable for personal computers and does not require end users’ parallel computing knowledge beyond their competency in using the PC algorithm. In this paper, we develop parallel-PC, a fast and memory efficient PC algorithm using the parallel computing technique. We apply our method to a range of synthetic and real-world high dimensional datasets. Experimental results on a dataset from the DREAM 5 challenge show that the original PC algorithm could not produce any results after running more than 24 hours; meanwhile, our parallel-PC algorithm managed to finish within around 12 hours with a 4-core CPU computer, and less than six hours with a 8-core CPU computer. Furthermore, we integrate parallel-PC into a causal inference method for inferring miRNA-mRNA regulatory relationships. The experimental results show that parallel-PC helps improve both the efficiency and accuracy of the causal inference algorithm.
---
paper_title: The control of the false discovery rate in multiple testing under dependency
paper_content:
Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased.
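For reference, the step-up procedure this entry analyses is simple to state and implement. The sketch below (illustrative, not the authors' code) returns the rejection decisions at FDR level q and could be applied, for instance, to the per-edge p-values produced by conditional independence tests.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Rejects hypotheses 1..k (in order of increasing p-value), where k is the
    largest index with p_(k) <= (k / m) * q; returns a boolean rejection mask
    aligned with the input order.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])     # last index meeting its threshold
        rejected[order[: k + 1]] = True
    return rejected

# Example: the three smallest p-values are rejected at q = 0.05.
print(benjamini_hochberg([0.001, 0.008, 0.028, 0.041, 0.60], q=0.05))
```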
---
paper_title: A direct approach to false discovery rates
paper_content:
Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate (pFDR) and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method.
---
paper_title: Order-independent constraint-based causal structure learning
paper_content:
We consider constraint-based methods for causal structure learning, such as the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al., 1993, 2000; Richardson, 1996; Colombo et al., 2012; Claassen et al., 2013). The first step of all these algorithms consists of the adjacency search of the PC-algorithm. The PC-algorithm is known to be order-dependent, in the sense that the output can depend on the order in which the variables are given. This order-dependence is a minor issue in low-dimensional settings. We show, however, that it can be very pronounced in high-dimensional settings, where it can lead to highly variable results. We propose several modifications of the PC-algorithm (and hence also of the other algorithms) that remove part or all of this order-dependence. All proposed modifications are consistent in high-dimensional settings under the same conditions as their original counterparts. We compare the PC-, FCI-, and RFCI-algorithms and their modifications in simulation studies and on a yeast gene expression data set. We show that our modifications yield similar performance in low-dimensional settings and improved performance in high-dimensional settings. All software is implemented in the R-package pcalg.
---
paper_title: Improving the Reliability of Causal Discovery from Small Data Sets using Argumentation
paper_content:
We address the problem of improving the reliability of independence-based causal discovery algorithms that results from the execution of statistical independence tests on small data sets, which typically have low reliability. We model the problem as a knowledge base containing a set of independence facts that are related through Pearl's well-known axioms. Statistical tests on finite data sets may result in errors in these tests and inconsistencies in the knowledge base. We resolve these inconsistencies through the use of an instance of the class of defeasible logics called argumentation, augmented with a preference function, that is used to reason about and possibly correct errors in these tests. This results in a more robust conditional independence test, called an argumentative independence test. Our experimental evaluation shows clear positive improvements in the accuracy of argumentative over purely statistical tests. We also demonstrate significant improvements on the accuracy of causal structure discovery from the outcomes of independence tests both on sampled data from randomly generated causal models and on real-world data sets.
---
paper_title: A Reasoning Model Based on the Production of Acceptable Arguments
paper_content:
Argumentation is a reasoning model based on the construction of arguments and counter-arguments (or defeaters) followed by the selection of the most acceptable of them. In this paper, we refine the argumentation framework proposed by Dung by taking into account preference relations between arguments in order to integrate two complementary points of view on the concept of acceptability: acceptability based on the existence of direct counter-arguments and acceptability based on the existence of defenders. An argument is thus acceptable if it is preferred to its direct defeaters or if it is defended against its defeaters. This also refines previous works by Prakken and Sartor, by associating with each argument a notion of strength, while these authors embed preferences in the definition of the defeat relation. We propose a revised proof theory in terms of AND/OR trees, verifying if a given argument is acceptable, which better reflects the dialectical form of argumentation.
---
paper_title: A Recursive Method for Structural Learning of Directed Acyclic Graphs
paper_content:
In this paper, we propose a recursive method for structural learning of directed acyclic graphs (DAGs), in which a problem of structural learning for a large DAG is first decomposed into two problems of structural learning for two small vertex subsets, each of which is then decomposed recursively into two problems of smaller subsets until no subset can be decomposed further. In our approach, the search for separators of a pair of variables in a large DAG is localized to small subsets, and thus the approach can improve the efficiency of searches and the power of statistical tests for structural learning. We show how recent advances in the learning of undirected graphical models can be employed to facilitate the decomposition. Simulations are given to demonstrate the performance of the proposed method.
---
paper_title: Decomposition of structural learning about directed acyclic graphs
paper_content:
In this paper, we propose that structural learning of a directed acyclic graph can be decomposed into problems related to its decomposed subgraphs. The decomposition of structural learning requires conditional independencies, but it does not require that separators are complete undirected subgraphs. Domain or prior knowledge of conditional independencies can be utilized to facilitate the decomposition of structural learning. By decomposition, search for d-separators in a large network is localized to small subnetworks. Thus both the efficiency of structural learning and the power of conditional independence tests can be improved.
---
paper_title: Bayesian Network Induction via Local Neighborhoods
paper_content:
In recent years, Bayesian networks have become a highly successful tool for diagnosis, analysis, and decision making in real-world domains. We present an efficient algorithm for learning Bayesian networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blanket, then connecting nodes in a maximally consistent way. In contrast to the majority of work, which typically uses hill-climbing approaches that may produce dense and causally incorrect nets, our approach yields much more compact causal networks by heeding independencies in the data. Compact causal networks facilitate fast inference and are also easier to understand. We prove that under mild assumptions, our approach requires time polynomial in the size of the data and the number of nodes. A randomized variant, also presented here, yields comparable results at much higher speeds.
---
paper_title: Fast Markov blanket discovery algorithm via local learning within single pass
paper_content:
Learning of Markov blanket (MB) can be regarded as an optimal solution to the feature selection problem. In this paper, an efficient and effective framework is suggested for learning MB. Firstly, we propose a novel algorithm, called Iterative Parent-Child based search of MB (IPC-MB), to induce MB without having to learn a whole Bayesian network first. It is proved correct, and is demonstrated to be more efficient than the current state of the art, PCMB, by requiring much fewer conditional independence (CI) tests. We show how to construct an AD-tree into the implementation so that computational efficiency is further increased through collecting full statistics within a single data pass. We conclude that IPC-MB plus AD-tree appears a very attractive solution in very large applications.
---
paper_title: Algorithms for large scale markov blanket discovery
paper_content:
This paper presents a number of new algorithms for discovering the Markov Blanket of a target variable T from training data. The Markov Blanket can be used for variable selection for classification, for causal discovery, and for Bayesian Network learning. We introduce a low-order polynomial algorithm and several variants that soundly induce the Markov Blanket under certain broad conditions in datasets with thousands of variables and compare them to other state-of-the-art local and global methods with excellent results.
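The grow/shrink pattern behind this family of algorithms (IAMB and its variants) is compact enough to sketch. The version below is illustrative: `assoc` and `ci_test` stand in for whatever association measure and conditional-independence test are appropriate for the data at hand.

```python
def iamb_markov_blanket(target, variables, assoc, ci_test):
    """Grow/shrink discovery of a Markov blanket of `target`.

    assoc(x, target, cond)   -> association strength of x with target given cond
    ci_test(x, target, cond) -> True if x is independent of target given cond
    """
    mb = set()
    # Growing phase: greedily admit the candidate most associated with the
    # target given the current blanket, as long as it is still dependent.
    changed = True
    while changed:
        changed = False
        candidates = [v for v in variables if v != target and v not in mb]
        if not candidates:
            break
        best = max(candidates, key=lambda v: assoc(v, target, mb))
        if not ci_test(best, target, mb):
            mb.add(best)
            changed = True
    # Shrinking phase: remove false positives admitted early on.
    for v in list(mb):
        if ci_test(v, target, mb - {v}):
            mb.discard(v)
    return mb
```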
---
paper_title: Swamping and masking in Markov boundary discovery
paper_content:
This paper considers the problems of swamping and masking in Markov boundary discovery for a target variable. There are two potential reasons for swamping and masking: one is incorrectness of some conditional independence (CI) tests, and the other is violation of local composition. First, we explain why the incorrectness of CI tests may lead to swamping and masking, analyze how to reduce the incorrectness of CI tests, and build an algorithm called LRH under local composition. For convenience, we integrate the two existing algorithms, IAMB and KIAMB, and our LRH into an algorithmic framework called LCMB. Second, since LCMB may prematurely stop searching if local composition is violated, a theoretical improvement on LCMB is made as follows: we analyze how to resume the stopped search of LCMB, construct a corresponding algorithmic framework called WLCMB, and show that its correctness only needs a more relaxed condition than that of LCMB. Finally, we apply LCMB and WLCMB to a number of Bayesian networks. The experimental results reveal that LRH is much more efficient than the existing two LCMB algorithms and that WLCMB can further improve LCMB.
---
paper_title: The max-min hill-climbing Bayesian network structure learning algorithm
paper_content:
We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation MMHC outperforms on average and in terms of various metrics several prototypical and state-of-the-art algorithms, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at http://www.dsl-lab.org/supplements/mmhc_paper/mmhc_index.html.
---
paper_title: Learning gaussian graphical models of gene networks with false discovery rate control
paper_content:
In many cases what matters is not whether a false discovery is made but the expected proportion of false discoveries among all discoveries made, i.e. the so-called false discovery rate (FDR). We present an algorithm aiming at controlling the FDR of edges when learning Gaussian graphical models (GGMs). The algorithm is particularly suitable when dealing with more nodes than samples, e.g. when learning GGMs of gene networks from gene expression data. We illustrate this on the Rosetta compendium [8].
---
paper_title: Efficient Markov Blanket Discovery and Its Application
paper_content:
In a Bayesian network (BN), a target node is independent of all other nodes given its Markov blanket (MB), and finding the MB has many applications, including feature selection and BN structure learning. We propose a new MB discovery algorithm, simultaneous MB (STMB), to improve the efficiency of the existing topology-based MB discovery algorithms. The proposed method removes the necessity of enforcing the symmetry constraint that is prevalent in existing algorithms, by exploiting the coexisting property between spouses and descendants of the target node. Since STMB mainly reduces the number of independence tests needed to complete the MB set after finding the parents-and-children set, it is applicable to all previous topology-based methods. STMB is both sound and complete. Experiments show that STMB has a comparable accuracy but much better efficiency than state-of-the-art methods. An application on benchmark feature selection datasets further demonstrates the excellent performance of STMB.
---
paper_title: HITON, A Novel Markov Blanket Algorithm for Optimal Variable Selection
paper_content:
We introduce a novel, sound, sample-efficient, and highly scalable algorithm for variable selection for classification, regression and prediction called HITON. The algorithm works by inducing the Markov blanket of the variable to be classified or predicted. A wide variety of biomedical tasks with different characteristics were used for an empirical evaluation. Namely, (i) bioactivity prediction for drug discovery, (ii) clinical diagnosis of arrhythmias, (iii) bibliographic text categorization, (iv) lung cancer diagnosis from gene expression array data, and (v) proteomics-based prostate cancer detection. State-of-the-art algorithms for each domain were selected for baseline comparison. Results: (1) HITON reduces the number of variables in the prediction models by three orders of magnitude relative to the original variable set while improving or maintaining accuracy. (2) HITON outperforms the baseline algorithms by selecting variable sets more than two orders of magnitude smaller than the baselines, in the selected tasks and datasets.
---
paper_title: Time and sample efficient discovery of Markov blankets and direct causal relations
paper_content:
Data mining with Bayesian network learning has two important characteristics: first, under certain conditions learned edges between variables correspond to causal influences, and second, for every variable T in the network a special subset (Markov blanket) identifiable by the network is the minimal variable set required to predict T. However, all known algorithms learning a complete BN do not scale up beyond a few hundred variables. On the other hand, all known sound algorithms learning a local region of the network require a number of training instances exponential in the size of the learned region. The contribution of this paper is two-fold. We introduce a novel local algorithm that returns all variables with direct edges to and from a target variable T, as well as a local algorithm that returns the Markov blanket of T. Both algorithms (i) are sound, (ii) can be run efficiently in datasets with thousands of variables, and (iii) significantly outperform previous state-of-the-art algorithms in terms of approximating the true neighborhood, using only a fraction of the training size required by the existing methods. A fundamental difference between our approach and existing ones is that the required sample depends on the connectivity of the generating graph and not on the size of the local region; this yields up to exponential savings in sample size relative to previously known algorithms. The results presented here are promising not only for discovery of local causal structure and variable selection for classification, but also for the induction of complete BNs.
---
paper_title: Gene Selection for Cancer Classification using Support Vector Machines
paper_content:
DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues. ::: ::: In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. ::: ::: In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets. In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate.
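The recursive feature elimination scheme described above is available off the shelf in scikit-learn; a minimal usage sketch on synthetic data (all parameter values are illustrative stand-ins, not the paper's settings) might look like this.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for an expression matrix: 100 samples, 500 "genes".
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)

# Linear SVM ranks features by |weight|; halve the feature set at each step.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=4, step=0.5)
selector.fit(X, y)

selected = selector.support_   # boolean mask of the 4 retained features
ranking = selector.ranking_    # 1 = selected, larger = eliminated earlier
```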
---
paper_title: Using Markov Blankets for Causal Structure Learning
paper_content:
We show how a generic feature-selection algorithm returning strongly relevant variables can be turned into a causal structure-learning algorithm. We prove this under the Faithfulness assumption for the data distribution. In a causal graph, the strongly relevant variables for a node X are its parents, children, and children's parents (or spouses), also known as the Markov blanket of X. Identifying the spouses leads to the detection of the V-structure patterns and thus to causal orientations. Repeating the task for all variables yields a valid partially oriented causal graph. We first show an efficient way to identify the spouse links. We then perform several experiments in the continuous domain using the Recursive Feature Elimination feature-selection algorithm with Support Vector Regression and empirically verify the intuition of this direct (but computationally expensive) approach. Within the same framework, we then devise a fast and consistent algorithm, Total Conditioning (TC), and a variant, TCbw, with an explicit backward feature-selection heuristics, for Gaussian data. After running a series of comparative experiments on five artificial networks, we argue that Markov blanket algorithms such as TC/TCbw or Grow-Shrink scale better than the reference PC algorithm and provides higher structural accuracy.
---
paper_title: Bayesian Network Structure Learning by Recursive Autonomy Identification
paper_content:
We propose the recursive autonomy identification (RAI) algorithm for constraint-based (CB) Bayesian network structure learning. The RAI algorithm learns the structure by sequential application of conditional independence (CI) tests, edge direction and structure decomposition into autonomous sub-structures. The sequence of operations is performed recursively for each autonomous sub-structure while simultaneously increasing the order of the CI test. While other CB algorithms d-separate structures and then orient the resulting undirected graph, the RAI algorithm combines the two processes from the outset and along the procedure. By this means and due to structure decomposition, learning a structure using RAI requires a smaller number of high-order CI tests. This reduces the complexity and run-time of the algorithm and increases the accuracy by diminishing the curse of dimensionality. When the RAI algorithm learned structures from databases representing synthetic problems, known networks and natural problems, it demonstrated superiority with respect to computational complexity, run-time, structural correctness and classification accuracy over the PC, Three Phase Dependency Analysis, Optimal Reinsertion, greedy search, Greedy Equivalence Search, Sparse Candidate, and Max-Min Hill-Climbing algorithms.
---
paper_title: Recursive Autonomy Identification for Bayesian Network Structure Learning
paper_content:
We propose a constraint-based algorithm for Bayesian network structure learning called recursive autonomy identification (RAI). The RAI algorithm learns the structure by recursive application of conditional independence (CI) tests of increasing orders, edge direction and structure decomposition into autonomous substructures. In comparison to other constraint-based algorithms, which d-separate structures and then direct the resulting undirected graph, the RAI algorithm combines the two processes from the outset and along the procedure. Learning using the RAI algorithm yields smaller conditioning sets and thus requires a smaller number of high-order CI tests. This reduces complexity and run-time and increases accuracy by diminishing the curse of dimensionality. When evaluated on synthetic and real-world databases as well as the ALARM network, the RAI algorithm shows better structural correctness and run-time reduction, along with accuracy improvement, compared to popular constraint-based structure learning algorithms. Accuracy improvement is also demonstrated when compared to a common search-and-score structure learning algorithm.
---
paper_title: Learning Causal Bayesian Networks from Observations and Experiments: A Decision Theoretic Approach
paper_content:
We discuss a decision-theoretic approach to learning causal Bayesian networks from observational data and experiments. We use the information in observational data to learn a completed partially directed acyclic graph using a structure learning technique, and try to discover the directions of the remaining edges by means of experiments. We show that our approach allows learning a causal Bayesian network optimally with respect to a number of decision criteria. Our method allows costs to be assigned to each experiment and each measurement. We introduce an algorithm that actively adds the results of experiments so that arcs can be directed during learning. A numerical example is given as a demonstration of the techniques.
---
paper_title: Active Learning for Structure in Bayesian Networks
paper_content:
The task of causal structure discovery from empirical data is a fundamental problem in many areas. Experimental data is crucial for accomplishing this task. However, experiments are typically expensive, and must be selected with great care. This paper uses active learning to determine the experiments that are most informative towards uncovering the underlying structure. We formalize the causal learning task as that of learning the structure of a causal Bayesian network. We consider an active learner that is allowed to conduct experiments, where it intervenes in the domain by setting the values of certain variables. We provide a theoretical framework for the active learning problem, and an algorithm that actively chooses the experiments to perform based on the model learned so far. Experimental results show that active learning can substantially reduce the number of observations required to determine the structure of a domain.
---
paper_title: Active learning of causal networks with intervention experiments and optimal
paper_content:
Causal discovery from data is important for various scientific investigations. Because we cannot distinguish between the different directed acyclic graphs (DAGs) in a Markov equivalence class learned from observational data, we have to collect further information on causal structures from experiments with external interventions. In this paper, we propose an active learning approach for discovering causal structures in which we first find a Markov equivalence class from observational data, and then orient undirected edges in every chain component separately via intervention experiments. In the experiments, some variables are manipulated through external interventions. We discuss two kinds of intervention experiments, randomized experiments and quasi-experiments. Furthermore, we give two optimal designs of experiments, a batch-intervention design and a sequential-intervention design, to minimize the number of manipulated variables and the set of candidate structures based on the minimax and maximum entropy criteria. We show theoretically that structural learning can be done locally in subgraphs of chain components without the need to check for illegal v-structures and cycles in the whole network, and that a Markov equivalence subclass obtained after each intervention can still be depicted as a chain graph.
---
paper_title: On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables
paper_content:
We show that if any number of variables are allowed to be simultaneously and independently randomized in any one experiment, log2(N) + 1 experiments are sufficient and in the worst case necessary to determine the causal relations among N >= 2 variables when no latent variables, no sample selection bias and no feedback cycles are present. For all K with 0 < K < N/2, we provide an upper bound on the number of experiments required to determine causal structure when each experiment simultaneously randomizes K variables. For large N, these bounds are significantly lower than the N - 1 bound required when each experiment randomizes at most one variable. For kmax < N/2, we show that (N/kmax - 1) + (N/(2 kmax)) log2(kmax) experiments are sufficient and in the worst case necessary. We offer a conjecture as to the minimal number of experiments that are in the worst case sufficient to identify all causal relations among N observed variables that are a subset of the vertices of a DAG.
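A quick numeric check of the bounds quoted above. The grouping of the second bound as (N/kmax - 1) + (N/(2 kmax))·log2(kmax) and the ceiling applied when N is not a power of two are my reading of the abstract, not details taken from the paper itself.

```python
import math

def experiments_unbounded(n_vars):
    """Experiments sufficient when any number of variables may be randomized at once."""
    return math.ceil(math.log2(n_vars)) + 1

def experiments_bounded(n_vars, k_max):
    """Worst-case bound quoted above when each experiment randomizes at most k_max < N/2 variables."""
    return (n_vars / k_max - 1) + (n_vars / (2 * k_max)) * math.log2(k_max)

print(experiments_unbounded(8))   # log2(8) + 1 = 4
print(experiments_bounded(8, 2))  # (8/2 - 1) + (8/4) * log2(2) = 3 + 2 = 5
```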
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: Partial orientation and local structural learning of causal networks for prediction
paper_content:
For a prediction problem of a given target feature in a large causal network under external interventions, we propose in this paper two partial orientation and local structural learning (POLSL) approaches, Local-Graph and PCD-by-PCD (where PCD denotes Parents, Children and some Descendants). The POLSL approaches are used to discover the local structure of the target and to orient edges connected to the target without discovering a global causal network. Thus they can greatly reduce the computational complexity of structural learning and improve the power of statistical tests. This approach was stimulated by the challenge problems proposed at the IEEE World Congress on Computational Intelligence (WCCI 2008) competition workshop. For the cases with and without external interventions, we select different feature sets to build prediction models. We apply the L1-penalized logistic regression model to the prediction. For the case with noise and calibrant features in microarray data, we propose a two-stage filter to correct global and local patterns of noise.
---
paper_title: A Simple Constraint-Based Algorithm for Efficiently Mining Observational Databases for Causal Relationships
paper_content:
This paper presents a simple, efficient computer-based method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observational. There is great potential for mining such databases to discover causal relationships. We illustrate how observational data can constrain the causal relationships among measured variables, sometimes to the point that we can conclude that one variable is causing another variable. The presentation here is based on a constraint-based approach to causal discovery. A primary purpose of this paper is to present the constraint-based causal discovery method in the simplest possible fashion in order to (1) readily convey the basic ideas that underlie more complex constraint-based causal discovery techniques, and (2) permit interested readers to rapidly program and apply the method to their own databases, as a start toward using more elaborate causal discovery algorithms.
---
paper_title: Ultra-scalable and efficient methods for hybrid observational and experimental local causal pathway discovery
paper_content:
Discovery of causal relations from data is a fundamental objective of several scientific disciplines. Most causal discovery algorithms that use observational data can infer causality only up to a statistical equivalency class, thus leaving many causal relations undetermined. In general, complete identification of causal relations requires experimentation to augment discoveries from observational data. This has led to the recent development of several methods for active learning of causal networks that utilize both observational and experimental data in order to discover causal networks. In this work, we focus on the problem of discovering local causal pathways that contain only direct causes and direct effects of the target variable of interest and propose new discovery methods that aim to minimize the number of required experiments, relax common sufficient discovery assumptions in order to increase discovery accuracy, and scale to high-dimensional data with thousands of variables. We conduct a comprehensive evaluation of new and existing methods with data of dimensionality up to 1,000,000 variables. We use both artificially simulated networks and in-silico gene transcriptional networks that model the characteristics of real gene expression data.
---
paper_title: Causal Discovery Using A Bayesian Local Causal Discovery Algorithm
paper_content:
This study focused on the development and application of an efficient algorithm to induce causal relationships from observational data. The algorithm, called BLCD, is based on a causal Bayesian network framework. BLCD initially uses heuristic greedy search to derive the Markov blanket (MB) of a node that serves as the "locality" for the identification of pair-wise causal relationships. BLCD takes as input a dataset and outputs potential causes of the form variable X causally influences variable Y. Identification of the causal factors of diseases and outcomes can help formulate better management, prevention and control strategies for the improvement of health care. In this study we focused on investigating factors that may contribute causally to infant mortality in the United States. We used the U.S. Linked Birth/Infant Death dataset for 1991 with more than four million records and about 200 variables for each record. Our sample consisted of 41,155 records randomly selected from the whole dataset. Each record had maternal, paternal and child factors and the outcome at the end of the first year: whether the infant survived or not. Using the infant birth and death dataset as input, BLCD output six purported causal relationships. Three out of the six relationships seem plausible. Even though we have not yet discovered a clinically novel causal link, we plan to look for novel causal pathways using the full sample.
---
paper_title: Local causal discovery of direct causes and effects
paper_content:
We focus on the discovery and identification of direct causes and effects of a target variable in a causal network. State-of-the-art causal learning algorithms generally need to find the global causal structures in the form of complete partial directed acyclic graphs (CPDAG) in order to identify direct causes and effects of a target variable. While these algorithms are effective, it is often unnecessary and wasteful to find the global structures when we are only interested in the local structure of one target variable (such as class labels). We propose a new local causal discovery algorithm, called Causal Markov Blanket (CMB), to identify the direct causes and effects of a target variable based on Markov Blanket Discovery. CMB is designed to conduct causal discovery among multiple variables, but focuses only on finding causal relationships between a specific target variable and other variables. Under standard assumptions, we show both theoretically and experimentally that the proposed local causal discovery algorithm can obtain the comparable identification accuracy as global methods but significantly improve their efficiency, often by more than one order of magnitude.
---
paper_title: Local causal and Markov blanket induction for causal discovery and feature selection for classification. Part II: Analysis and extensions
paper_content:
We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes/effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning.
---
paper_title: Algorithms for discovery of multiple Markov boundaries
paper_content:
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains.
---
paper_title: Bayesian Algorithms for Causal Data Mining
paper_content:
We present two Bayesian algorithms CD-B and CD-H for discovering unconfounded cause and effect relationships from observational data without assuming causal sufficiency which precludes hidden common causes for the observed variables. The CD-B algorithm first estimates the Markov blanket of a node X using a Bayesian greedy search method and then applies Bayesian scoring methods to discriminate the parents and children of X. Using the set of parents and set of children CD-B constructs a global Bayesian network and outputs the causal effects of a node X based on the identification of Y arcs. Recall that if a node X has two parent nodes A,B and a child node C such that there is no arc between A,B and A,B are not parents of C, then the arc from X to C is called a Y arc. The CD-H algorithm uses the MMPC algorithm to estimate the union of parents and children of a target node X. The subsequent steps are similar to those of CD-B. We evaluated the CD-B and CD-H algorithms empirically based on simulated data from four different Bayesian networks. We also present comparative results based on the identification of Y structures and Y arcs from the output of the PC, MMHC and FCI algorithms. The results appear promising for mining causal relationships that are unconfounded by hidden variables from observational data.
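The Y-arc definition quoted above is concrete enough to check mechanically. The sketch below is illustrative only (not the CD-B implementation): it scans a directed graph for edges X -> C whose tail X has two non-adjacent parents that are not themselves parents of C, using the networkx package; the function name and the toy graph are chosen for exposition.

from itertools import combinations

import networkx as nx

def find_y_arcs(dag):
    """Return all (X, C) edges of `dag` satisfying the Y-arc definition above."""
    y_arcs = []
    for x in dag.nodes:
        parents_x = list(dag.predecessors(x))
        for c in dag.successors(x):
            for a, b in combinations(parents_x, 2):
                non_adjacent = not (dag.has_edge(a, b) or dag.has_edge(b, a))
                not_parents_of_c = not (dag.has_edge(a, c) or dag.has_edge(b, c))
                if non_adjacent and not_parents_of_c:
                    y_arcs.append((x, c))
                    break  # one witnessing pair (A, B) is enough
    return y_arcs

# Y structure: A -> X <- B and X -> C; A, B non-adjacent, neither a parent of C.
g = nx.DiGraph([("A", "X"), ("B", "X"), ("X", "C")])
print(find_y_arcs(g))  # [('X', 'C')]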
---
paper_title: Scalable Techniques for Mining Causal Structures
paper_content:
Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provides some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining causal relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.
---
paper_title: Learning the causal structure of overlapping variable sets
paper_content:
In many real-world applications of machine learning and data mining techniques, one finds that one must separate the variables under consideration into multiple subsets (perhaps to reduce computational complexity, or because of a shift in focus during data collection and analysis). In this paper, we use the framework of Bayesian networks to examine the problem of integrating the learning outputs for multiple overlapping datasets. In particular, we provide rules for extracting causal information about the true (unknown) Bayesian network from the previously learned (partial) Bayesian networks. We also provide the SLPR algorithm, which efficiently uses these previously learned Bayesian networks to guide learning of the full structure. A complexity analysis of the worst-case scenario for the SLPR algorithm reveals that the algorithm is always less complex than a comparable reference algorithm (though no absolute optimality proof is known). Although no expected-case analysis is given, the complexity analysis suggests that (given the currently available set of algorithms) one should always use the SLPR algorithm, regardless of the underlying generating structure. The results provided in this paper point to a wide range of open questions, which are briefly discussed.
---
paper_title: Scientific Coherence and the Fusion of Experimental Results
paper_content:
A pervasive feature of the sciences, particularly the applied sciences, is an experimental focus on a few (often only one) possible causal connections. At the same time, scientists often advance and apply relatively broad models that incorporate many different causal mechanisms. We are naturally led to ask whether there are normative rules for integrating multiple local experimental conclusions into models covering many additional variables. In this paper, we provide a positive answer to this question by developing several inference rules that use local causal models to place constraints on the integrated model, given quite general assumptions. We also demonstrate the practical value of these rules by applying them to a case study from ecology.
---
paper_title: Incorporating Causal Prior Knowledge as Path-Constraints in Bayesian Networks and Maximal Ancestral Graphs
paper_content:
We consider the incorporation of causal knowledge about the presence or absence of (possibly indirect) causal relations into a causal model. Such causal relations correspond to directed paths in a causal model. This type of knowledge naturally arises from experimental data, among others. Specifically, we consider the formalisms of Causal Bayesian Networks and Maximal Ancestral Graphs and their Markov equivalence classes: Partially Directed Acyclic Graphs and Partially Oriented Ancestral Graphs. We introduce sound and complete procedures which are able to incorporate causal prior knowledge in such models. In simulated experiments, we show that often considering even a few causal facts leads to a significant number of new inferences. In a case study, we also show how to use real experimental data to infer causal knowledge and incorporate it into a real biological causal network. The code is available at mensxmachina.org.
---
paper_title: Constraint-based Causal Discovery from Multiple Interventions over Overlapping Variable Sets
paper_content:
Scientific practice typically involves repeatedly studying a system, each time trying to unravel a different perspective. In each study, the scientist may take measurements under different experimental conditions (interventions, manipulations, perturbations) and measure different sets of quantities (variables). The result is a collection of heterogeneous data sets coming from different data distributions. In this work, we present algorithm COmbINE, which accepts a collection of data sets over overlapping variable sets under different experimental conditions; COmbINE then outputs a summary of all causal models indicating the invariant and variant structural characteristics of all models that simultaneously fit all of the input data sets. COmbINE converts estimated dependencies and independencies in the data into path constraints on the data-generating causal model and encodes them as a SAT instance. The algorithm is sound and complete in the sample limit. To account for conflicting constraints arising from statistical errors, we introduce a general method for sorting constraints in order of confidence, computed as a function of their corresponding p-values. In our empirical evaluation, COmbINE outperforms in terms of efficiency the only pre-existing similar algorithm; the latter additionally admits feedback cycles, but does not admit conflicting constraints which hinders the applicability on real data. As a proof-of-concept, COmbINE is employed to co-analyze 4 real, mass-cytometry data sets measuring phosphorylated protein concentrations of overlapping protein sets under 3 different interventions.
---
paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables
paper_content:
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
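Constraint-based algorithms such as FCI and RFCI consume the results of conditional-independence tests. For Gaussian data a common choice is the Fisher-z test on partial correlations; the sketch below is an illustration of such a test rather than the pcalg implementation, and the function name, interface, and toy example are chosen for exposition.

import numpy as np
from scipy import stats

def fisher_z_test(data, i, j, cond=()):
    """p-value for X_i independent of X_j given the columns listed in `cond`."""
    n = data.shape[0]
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z)))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + rng.normal(size=5000)          # chain x -> y -> z
z = y + rng.normal(size=5000)
d = np.column_stack([x, y, z])
print(fisher_z_test(d, 0, 2))          # near 0: x and z are dependent
print(fisher_z_test(d, 0, 2, [1]))     # large: independent given y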
---
paper_title: The hidden life of latent variables: Bayesian learning with mixed graph models
paper_content:
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the specification of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modeling such variables explicitly.
---
paper_title: Learning Linear Bayesian Networks with Latent Variables
paper_content:
This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs that include multi-level DAGs, DAGs with effective depth one, as well as certain families of polytrees.
---
paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables
paper_content:
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: Finding Latent Causes in Causal Networks: an Efficient Approach Based on Markov Blankets
paper_content:
Causal structure-discovery techniques usually assume that all causes of more than one variable are observed. This is the so-called causal sufficiency assumption. In practice, it is untestable, and often violated. In this paper, we present an efficient causal structure-learning algorithm, suited for causally insufficient data. Similar to algorithms such as IC* and FCI, the proposed approach drops the causal sufficiency assumption and learns a structure that indicates (potential) latent causes for pairs of observed variables. Assuming a constant local density of the data-generating graph, our algorithm makes a quadratic number of conditional-independence tests w.r.t. the number of variables. We show with experiments that our algorithm is comparable to the state-of-the-art FCI algorithm in accuracy, while being several orders of magnitude faster on large problems. We conclude that MBCS* makes a new range of causally insufficient problems computationally tractable.
---
paper_title: Constraint-based Causal Discovery from Multiple Interventions over Overlapping Variable Sets
paper_content:
Scientific practice typically involves repeatedly studying a system, each time trying to unravel a different perspective. In each study, the scientist may take measurements under different experimental conditions (interventions, manipulations, perturbations) and measure different sets of quantities (variables). The result is a collection of heterogeneous data sets coming from different data distributions. In this work, we present algorithm COmbINE, which accepts a collection of data sets over overlapping variable sets under different experimental conditions; COmbINE then outputs a summary of all causal models indicating the invariant and variant structural characteristics of all models that simultaneously fit all of the input data sets. COmbINE converts estimated dependencies and independencies in the data into path constraints on the data-generating causal model and encodes them as a SAT instance. The algorithm is sound and complete in the sample limit. To account for conflicting constraints arising from statistical errors, we introduce a general method for sorting constraints in order of confidence, computed as a function of their corresponding p-values. In our empirical evaluation, COmbINE outperforms in terms of efficiency the only pre-existing similar algorithm; the latter additionally admits feedback cycles, but does not admit conflicting constraints which hinders the applicability on real data. As a proof-of-concept, COmbINE is employed to co-analyze 4 real, mass-cytometry data sets measuring phosphorylated protein concentrations of overlapping protein sets under 3 different interventions.
---
paper_title: Learning equivalence classes of acyclic models with latent and selection variables from multiple datasets with overlapping variables
paper_content:
While there has been considerable research in learning probabilistic graphical models from data for predictive and causal inference, almost all existing algorithms assume a single dataset of i.i.d. observations for all variables. For many applications, it may be impossible or impractical to obtain such datasets, but multiple datasets of i.i.d. observations for different subsets of these variables may be available. Tillman et al. [2009] showed how directed graphical models learned from such datasets can be integrated to construct an equivalence class of structures over all variables. While their procedure is correct, it assumes that the structures integrated do not entail contradictory conditional independences and dependences for variables in their intersections. While this assumption is reasonable asymptotically, it rarely holds in practice with finite samples due to the frequency of statistical errors. We propose a new correct procedure for learning such equivalence classes directly from the multiple datasets which avoids this problem and is thus more practically useful. Empirical results indicate our method is not only more accurate, but also faster and requires less memory.
---
paper_title: Constraint-based causal discovery: conflict resolution with answer set programming
paper_content:
Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-the-art ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraint-based methods in accuracy.
---
paper_title: Discovering Cyclic Causal Models with Latent Variables: A General SAT-Based Procedure
paper_content:
We present a very general approach to learning the structure of causal models based on d-separation constraints, obtained from any given set of overlapping passive observational or experimental data sets. The procedure allows for both directed cycles (feedback loops) and the presence of latent variables. Our approach is based on a logical representation of causal pathways, which permits the integration of quite general background knowledge, and inference is performed using a Boolean satisfiability (SAT) solver. The procedure is complete in that it exhausts the available information on whether any given edge can be determined to be present or absent, and returns "unknown" otherwise. Many existing constraint-based causal discovery algorithms can be seen as special cases, tailored to circumstances in which one or more restricting assumptions apply. Simulations illustrate the effect of these assumptions on discovery and how the present algorithm scales.
---
paper_title: Towards integrative causal analysis of heterogeneous data sets and studies
paper_content:
We present methods able to predict the presence and strength of conditional and unconditional dependencies (correlations) between two variables Y and Z never jointly measured on the same samples, based on multiple data sets measuring a set of common variables. The algorithms are specializations of prior work on learning causal structures from overlapping variable sets. This problem has also been addressed in the field of statistical matching. The proposed methods are applied to a wide range of domains and are shown to accurately predict the presence of thousands of dependencies. Compared against prototypical statistical matching algorithms and within the scope of our experiments, the proposed algorithms make predictions that are better correlated with the sample estimates of the unknown parameters on test data; this is particularly the case when the number of commonly measured variables is low. The enabling idea behind the methods is to induce one or all causal models that are simultaneously consistent with (fit) all available data sets and prior knowledge and reason with them. This allows constraints stemming from causal assumptions (e.g., Causal Markov Condition, Faithfulness) to propagate. Several methods have been developed based on this idea, for which we propose the unifying name Integrative Causal Analysis (INCA). A contrived example is presented demonstrating the theoretical potential to develop more general methods for co-analyzing heterogeneous data sets. The computational experiments with the novel methods provide evidence that causally-inspired assumptions such as Faithfulness often hold to a good degree of approximation in many real systems and could be exploited for statistical inference. Code, scripts, and data are available at www.mensxmachina.org.
---
paper_title: Integrating locally learned causal structures with overlapping variables
paper_content:
In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there have been no such reliable procedures for distributed data with overlapping variables. We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form and evaluate its performance using synthetic and real-world data against causal discovery algorithms for single datasets and applying Structural EM, a heuristic DAG structure learning procedure for data with missing values, to the concatenated data.
---
paper_title: Learning Causal Structure from Overlapping Variable Sets
paper_content:
We present an algorithm named cSAT+ for learning the causal structure in a domain from datasets measuring different variable sets. The algorithm outputs a graph with edges corresponding to all possible pairwise causal relations between two variables, named Pairwise Causal Graph (PCG). Examples of interesting inferences include the induction of the absence or presence of some causal relation between two variables never measured together. cSAT+ converts the problem to a series of SAT problems, obtaining leverage from the efficiency of state-of-the-art solvers. In our empirical evaluation, it is shown to outperform ION, the first algorithm solving a similar but more general problem, by two orders of magnitude.
---
paper_title: Local causal and Markov blanket induction for causal discovery and feature selection for classification. Part II: Analysis and extensions
paper_content:
We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes/effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning.
---
paper_title: Recovering causal effects from selection bias
paper_content:
Controlling for selection and confounding biases are two of the most challenging problems that appear in data analysis in the empirical sciences as well as in artificial intelligence tasks. The combination of previously studied methods for each of these biases in isolation is not directly applicable to certain non-trivial cases in which selection and confounding biases are simultaneously present. In this paper, we tackle these instances non-parametrically and in full generality. We provide graphical and algorithmic conditions for recoverability of interventional distributions for when selection and confounding biases are both present. Our treatment completely characterizes the class of causal effects that are recoverable in Markovian models, and is sufficient for Semi-Markovian models.
---
paper_title: Causal inference using invariant prediction: identification and confidence intervals
paper_content:
What is the difference between a prediction that is made with a causal model and that with a non-causal model? Suppose that we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments.
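The invariance idea can be made concrete: for every candidate predictor subset, fit one pooled regression and test whether the residuals behave the same in every environment, then intersect all accepted subsets. The sketch below is a simplified illustration under stated assumptions (linear models, a crude one-way ANOVA check on residual means), not the authors' method or their R package; all function names and parameters are chosen for exposition.

from itertools import chain, combinations

import numpy as np
from scipy import stats

def residuals(y, X, subset):
    if not subset:
        return y - y.mean()
    A = np.column_stack([np.ones(len(y)), X[:, list(subset)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

def icp(y, X, env, alpha=0.05):
    """Intersection of all predictor subsets whose pooled residuals look invariant."""
    d = X.shape[1]
    subsets = chain.from_iterable(combinations(range(d), k) for k in range(d + 1))
    accepted = []
    for s in subsets:
        r = residuals(y, X, s)
        groups = [r[env == e] for e in np.unique(env)]
        _, p = stats.f_oneway(*groups)     # crude check: equal residual means per environment
        if p > alpha:
            accepted.append(set(s))
    return set.intersection(*accepted) if accepted else set()

In the full method the invariance test also covers residual variances and the output comes with confidence statements; the intersection step above only mirrors the overall logic.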
---
paper_title: Learning high-dimensional directed acyclic graphs with latent and selection variables
paper_content:
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
---
paper_title: Probabilistic Computational Causal Discovery for Systems Biology
paper_content:
Discovering the causal mechanisms of biological systems is necessary to design new drugs and therapies. Computational Causal Discovery (CD) is a field that offers the potential to discover causal relations and causal models under certain conditions with a limited set of interventions/manipulations. This chapter reviews the basic concepts and principles of CD, the nature of the assumptions to enable it, potential pitfalls in its application, and recent advances and directions. Importantly, several success stories in molecular and systems biology are discussed in detail.
---
paper_title: Local causal and Markov blanket induction for causal discovery and feature selection for classification. Part II: Analysis and extensions
paper_content:
We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes/effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning.
---
paper_title: HITON, A Novel Markov Blanket Algorithm for Optimal Variable Selection
paper_content:
We introduce a novel, sound, sample-efficient, and highly-scalable algorithm for variable selection for classification, regression and prediction called HITON. The algorithm works by inducing the Markov Blanket of the variable to be classified or predicted. A wide variety of biomedical tasks with different characteristics were used for an empirical evaluation. Namely, (i) bioactivity prediction for drug discovery, (ii) clinical diagnosis of arrhythmias, (iii) bibliographic text categorization, (iv) lung cancer diagnosis from gene expression array data, and (v) proteomics-based prostate cancer detection. State-of-the-art algorithms for each domain were selected for baseline comparison. Results: (1) HITON reduces the number of variables in the prediction models by three orders of magnitude relative to the original variable set while improving or maintaining accuracy. (2) HITON outperforms the baseline algorithms by selecting more than two orders-of-magnitude smaller variable sets than the baselines, in the selected tasks and datasets.
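The control flow of the HITON-PC subroutine, forward inclusion ordered by strength of association interleaved with backward elimination by conditional-independence tests, can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation; ci_test is assumed to be any function returning a p-value (for example the Fisher-z test sketched earlier), and alpha and max_k are illustrative parameters.

from itertools import combinations

def hiton_pc(data, target, ci_test, alpha=0.05, max_k=3):
    """Tentative parents/children of `target`; `ci_test(data, i, j, cond)` returns a p-value."""
    candidates = [i for i in range(data.shape[1]) if i != target]
    candidates.sort(key=lambda i: ci_test(data, target, i, []))   # strongest association first
    tpc = []
    for var in candidates:
        if ci_test(data, target, var, []) > alpha:
            continue                      # marginally independent variables are never admitted
        tpc.append(var)
        for member in list(tpc):          # interleaved backward elimination
            others = [v for v in tpc if v != member]
            for k in range(1, min(max_k, len(others)) + 1):
                if any(ci_test(data, target, member, list(s)) > alpha
                       for s in combinations(others, k)):
                    tpc.remove(member)
                    break
    return tpc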
---
paper_title: Controlling Selection Bias in Causal Inference
paper_content:
Selection bias, caused by preferential exclusion of units (or samples) from the data, is a major obstacle to valid causal inferences, for it cannot be removed or even detected by randomized experiments on the study population. This paper highlights several algebraic and graphical methods capable of mitigating and sometimes eliminating this bias. These nonparametric methods generalize and improve previously reported results, and identify the type of knowledge that need to be available for reasoning in the presence of selection bias.
---
paper_title: The hidden life of latent variables: Bayesian learning with mixed graph models
paper_content:
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the specification of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modeling such variables explicitly.
---
paper_title: The Directions of Selection Bias
paper_content:
We show that if the exposure and the outcome affect the selection indicator in the same direction and have non-positive interaction on the risk difference, risk ratio or odds ratio scale, the exposure-outcome odds ratio in the selected population is a lower bound for the true odds ratio.
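A small simulation can illustrate the direction of the bound. In the sketch below all parameter values are illustrative: the exposure and outcome both raise the selection probability additively (so no positive interaction on the risk-difference scale), and the odds ratio computed among selected units comes out below the population odds ratio.

import numpy as np

def odds_ratio(x, y):
    a = np.sum((x == 1) & (y == 1))
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))
    return (a * d) / (b * c)

rng = np.random.default_rng(0)
n = 500_000
x = rng.binomial(1, 0.4, n)                                   # exposure
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.5 * x))))      # outcome, population OR = exp(1.5)
p_select = 0.2 + 0.3 * x + 0.3 * y                            # both raise selection, additively
s = rng.binomial(1, p_select)

print("population odds ratio:", odds_ratio(x, y))                     # about 4.5
print("selected-only odds ratio:", odds_ratio(x[s == 1], y[s == 1]))  # noticeably smaller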
---
paper_title: Causation, prediction, and search
paper_content:
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993.
---
paper_title: Learning gaussian graphical models of gene networks with false discovery rate control
paper_content:
In many cases what matters is not whether a false discovery is made or not but the expected proportion of false discoveries among all the discoveries made, i.e. the so-called false discovery rate (FDR). We present an algorithm aiming at controlling the FDR of edges when learning Gaussian graphical models (GGMs). The algorithm is particularly suitable when dealing with more nodes than samples, e.g. when learning GGMs of gene networks from gene expression data. We illustrate this on the Rosetta compendium [8].
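One simple way to target the FDR of edges, sketched below for illustration rather than as the authors' algorithm, is to compute a Fisher-z p-value for every partial correlation and keep the edges that survive a Benjamini-Hochberg step. The sketch assumes more samples than variables so the full precision matrix can be estimated directly; function name and the q threshold are illustrative.

import numpy as np
from scipy import stats

def ggm_edges_fdr(data, q=0.05):
    """Edges (i, j) whose partial-correlation p-values survive a Benjamini-Hochberg step."""
    n, p = data.shape
    prec = np.linalg.pinv(np.corrcoef(data, rowvar=False))
    edges, pvals = [], []
    for i in range(p):
        for j in range(i + 1, p):
            r = -prec[i, j] / np.sqrt(prec[i, i] * prec[j, j])    # partial correlation given the rest
            r = np.clip(r, -0.999999, 0.999999)
            z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - (p - 2) - 3)
            edges.append((i, j))
            pvals.append(2 * (1 - stats.norm.cdf(abs(z))))
    order = np.argsort(pvals)
    m = len(pvals)
    keep = 0
    for rank, idx in enumerate(order, start=1):                    # BH: largest k with p_(k) <= k q / m
        if pvals[idx] <= rank * q / m:
            keep = rank
    return [edges[idx] for idx in order[:keep]]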
---
| Title: A Review on Algorithms for Constraint-based Causal Discovery
Section 1: INTRODUCTION
Description 1: In this section, give a brief overview of the importance of causal discovery and introduce the main types of causal models. Discuss the focus of the paper on constraint-based algorithms and their significance.
Section 2: LEARNING PARADIGM
Description 2: This section includes a discussion of the learning framework, key assumptions, methods for conditional independence tests, evaluation metrics, and significant challenges faced by constraint-based algorithms.
Section 3: Problem Definition and Algorithm Overview
Description 3: Define the problem space for constraint-based algorithms and provide an overview of major algorithms under the assumptions of causal sufficiency and faithfulness.
Section 4: SGS Algorithm
Description 4: Describe the SGS (Spirtes, Glymour, and Scheines) algorithm, its rationale, correctness, and limitations in detail.
Section 5: PC Algorithm
Description 5: Present the PC (Peter Spirtes and Clark Glymour) algorithm, explaining how it improves the efficiency of the SGS algorithm, its rationale, correctness, and consistency.
Section 6: Conservative SGS and PC algorithms
Description 6: Review variants of SGS and PC algorithms that handle violations of the faithfulness assumption, including the CPC (Conservative PC) algorithm and the VCSGS (Very Conservative SGS) algorithm.
Section 7: PC-stable algorithm
Description 7: Discuss the PC-stable algorithm developed to address the order-dependence problem in the PC algorithm, including its soundness and completeness.
Section 8: Parallel PC algorithm
Description 8: Explain the parallel versions of the PC and PC-stable algorithms that improve computational efficiency using parallel computing techniques.
Section 9: PC_fdr-skeleton algorithm
Description 9: Describe the PC_fdr-skeleton algorithm, which incorporates an FDR-control procedure to handle multiple hypothesis testing in the PC algorithm.
Section 10: AIT Framework
Description 10: Introduce the AIT (Argumentative Independence Test) framework designed to improve reliability in constraint-based algorithms when handling small data sets.
Section 11: TPDA algorithm
Description 11: Discuss the Three Phase Dependency Analysis (TPDA) algorithm and its phases—drafting, thickening, and thinning—along with their applications.
Section 12: Local-to-global approach
Description 12: Review various local-to-global learning approaches aimed at improving the efficiency and accuracy of causal discovery algorithms by localizing the learning process.
Section 13: Active learning-based approaches
Description 13: Outline the methods for active learning of causal networks that utilize both observational and experimental data to fully orient edge directions in causal graphs.
Section 14: Local discovery of direct causes and effects
Description 14: Discuss algorithms designed to identify direct causes and effects of a target variable, focusing on scalability and efficiency.
Section 15: Learning from overlapping variable sets
Description 15: Explain the methods developed for co-analyzing multiple data sets with overlapping variables and learning causal structures over the joint set of variables.
Section 16: LEARNING WITHOUT CAUSAL SUFFICIENCY
Description 16: Discuss methods for causal discovery without assuming causal sufficiency, focusing on using MAG models to represent latent variables.
Section 17: PUBLIC SOFTWARE AND BENCHMARK DATA
Description 17: List and briefly describe public software packages and benchmark data sets available for constraint-based causal discovery methods.
Section 18: CONCLUSION AND DISCUSSION
Description 18: Summarize the key findings of the paper, discuss the opportunities and challenges presented by big data, and suggest future research directions in the field of constraint-based causal discovery. |
A Survey on Multi-Task Learning | 17 | ---
paper_title: A Survey on Transfer Learning
paper_content:
A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.
---
paper_title: A Convex Formulation for Learning Task Relationships in Multi-Task Learning
paper_content:
Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.
---
paper_title: A Regularization Approach to Learning Task Relationships in Multitask Learning
paper_content:
Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.
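The alternating scheme described above can be sketched for squared-loss regression tasks: gradient steps on the task weight matrix W with the task covariance Omega held fixed, followed by the closed-form update Omega = (W'W)^{1/2} / tr((W'W)^{1/2}). This is a simplified illustration of the formulation, not the authors' code; the hyperparameters, step size, and iteration counts below are arbitrary choices.

import numpy as np
from scipy.linalg import sqrtm

def mtrl(Xs, ys, lam1=0.1, lam2=0.1, lr=0.01, outer=20, inner=200):
    """Alternate gradient steps on W (d x m) with a closed-form update of Omega (m x m)."""
    d, m = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, m))
    Omega = np.eye(m) / m                          # task covariance, tr(Omega) = 1
    for _ in range(outer):
        Omega_inv = np.linalg.pinv(Omega)
        for _ in range(inner):                     # W-step for squared loss plus both penalties
            grad = np.zeros_like(W)
            for t in range(m):
                grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
            grad += lam1 * W + lam2 * W @ Omega_inv
            W -= lr * grad
        root = np.real(sqrtm(W.T @ W + 1e-8 * np.eye(m)))
        Omega = root / np.trace(root)              # Omega = (W'W)^{1/2} / tr((W'W)^{1/2})
    return W, Omega

The learned Omega can then be inspected directly: large positive entries indicate positively related task pairs, negative entries negatively related ones.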
---
paper_title: Multitask Learning
paper_content:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
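The shared-representation idea maps directly onto a hard parameter-sharing network: a common trunk feeds one small head per task and the task losses are summed. The PyTorch sketch below is illustrative only; the layer sizes, optimizer, toy data, and the choice of regression losses are assumptions for exposition, not the paper's setup.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, in_dim, hidden, num_tasks):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())        # shared representation
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_tasks)])

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]                                  # one output per task

model = HardSharingMTL(in_dim=10, hidden=32, num_tasks=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(64, 10)
targets = [torch.randn(64, 1) for _ in range(3)]   # toy regression target per task
for _ in range(100):
    opt.zero_grad()
    loss = sum(loss_fn(p, t) for p, t in zip(model(x), targets))
    loss.backward()
    opt.step()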
---
paper_title: Multi-task learning for classification with dirichlet process priors
paper_content:
Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.
---
paper_title: Multi-task learning for HIV therapy screening
paper_content:
We address the problem of learning classifiers for a large number of tasks. We derive a solution that produces resampling weights which match the pool of all examples to the target distribution of any given task. Our work is motivated by the problem of predicting the outcome of a therapy attempt for a patient who carries an HIV virus with a set of observed genetic properties. Such predictions need to be made for hundreds of possible combinations of drugs, some of which use similar biochemical mechanisms. Multi-task learning enables us to make predictions even for drug combinations with few or no training examples and substantially improves the overall prediction accuracy.
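The distribution-matching step can be sketched with a standard density-ratio trick: train a probabilistic classifier to separate a target task's examples from the pooled examples and weight each pooled example by the estimated ratio. This is an illustration of the general idea rather than the paper's estimator; the function name is hypothetical and scikit-learn's logistic regression is used purely for convenience.

import numpy as np
from sklearn.linear_model import LogisticRegression

def resampling_weights(X_pool, X_target):
    """Weights that reweight the pooled examples toward the target task's input distribution."""
    X = np.vstack([X_pool, X_target])
    z = np.concatenate([np.zeros(len(X_pool)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_pool)[:, 1]               # P(target | x) under the pooled + target mix
    prior = len(X_target) / len(X_pool)
    return (p / (1 - p)) / prior                      # estimated density ratio p_target / p_pool

Training on the pool with these weights approximates training on the target task's distribution, which is what allows predictions for tasks with few or no examples of their own.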
---
paper_title: Heterogeneous multitask learning with joint sparsity constraints
paper_content:
Multitask learning addresses the problem of learning related tasks that presumably share some commonalities on their input-output mapping functions. Previous approaches to multitask learning usually deal with homogeneous tasks, such as purely regression tasks, or entirely classification tasks. In this paper, we consider the problem of learning multiple related tasks of predicting both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional feature space. All of the tasks are related in the sense that they share the same set of relevant input variables, but the amount of influence of each input on different outputs may vary. We formulate this problem as a combination of linear regressions and logistic regressions, and model the joint sparsity as L1/L∞ or L1/L2 norm of the model parameters. Among several possible applications, our approach addresses an important open problem in genetic association mapping, where the goal is to discover genetic markers that influence multiple correlated traits jointly. In our experiments, we demonstrate our method in this setting, using simulated and clinical asthma datasets, and we show that our method can effectively recover the relevant inputs with respect to all of the tasks.
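The L1/L2 joint-sparsity penalty has a convenient proximal form: each row of the coefficient matrix collects one input's weights across all tasks, and the group soft-threshold zeroes whole rows, so the same inputs are selected for every task. The sketch below (squared loss only, illustrative step size and penalty) shows one simple way to optimize such an objective; it is not the authors' combined regression/logistic formulation.

import numpy as np

def group_soft_threshold(W, tau):
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0.0)

def multitask_group_lasso(Xs, ys, lam=0.1, lr=0.01, iters=500):
    """Proximal gradient descent on squared loss with an L1/L2 penalty over rows of W."""
    d, m = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, m))
    for _ in range(iters):
        grad = np.zeros_like(W)
        for t in range(m):
            grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        W = group_soft_threshold(W - lr * grad, lr * lam)   # zero out whole rows (inputs)
    return W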
---
paper_title: Convex multi-task feature learning
paper_content:
We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select--not learn--a few common variables across the tasks.
---
paper_title: Radial Basis Function Network for Multi-task Learning
paper_content:
We extend radial basis function (RBF) networks to the scenario in which multiple correlated tasks are learned simultaneously, and present the corresponding learning algorithms. We develop the algorithms for learning the network structure, in either a supervised or unsupervised manner. Training data may also be actively selected to improve the network's generalization to test data. Experimental results based on real data demonstrate the advantage of the proposed algorithms and support our conclusions.
---
paper_title: Infinite Latent SVM for Classification and Multi-task Learning
paper_content:
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.
---
paper_title: Multitask Learning
paper_content:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
---
paper_title: Spike and slab variational inference for multi-task and multiple kernel learning
paper_content:
We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the golden standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relation between tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.
---
paper_title: Multi-task learning for classification with dirichlet process priors
paper_content:
Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.
---
paper_title: Sparse coding for multitask and transfer learning
paper_content:
We investigate the use of sparse coding and dictionary learning in the context of multitask and transfer learning. The central assumption of our learning method is that the task parameters are well approximated by sparse linear combinations of the atoms of a dictionary on a high or infinite dimensional space. This assumption, together with the large quantity of available data in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach, for both settings. Numerical experiments on one synthetic and two real datasets show the advantage of our method over single task learning, a previous method based on orthogonal and dense representation of the tasks and a related method learning task grouping.
---
paper_title: Handling sparsity via the horseshoe
paper_content:
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness at handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
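A short simulation can make the "unknown sparsity plus large outlying signals" point concrete; the half-Cauchy parameterization below is the usual one for the horseshoe, and the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(n_coef, n_draws=1000):
    # Horseshoe prior: tau ~ C+(0,1) is a global shrinkage scale,
    # lambda_j ~ C+(0,1) are local scales, and beta_j ~ N(0, (tau*lambda_j)^2).
    tau = np.abs(rng.standard_cauchy(size=(n_draws, 1)))
    lam = np.abs(rng.standard_cauchy(size=(n_draws, n_coef)))
    return rng.normal(0.0, tau * lam)

draws = sample_horseshoe(5)
# Most draws are tiny while a few are very large: a spike at zero with heavy tails.
print(np.median(np.abs(draws)), np.percentile(np.abs(draws), 99))
```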
---
paper_title: Learning Feature Selection Dependencies in Multi-task Learning
paper_content:
A probabilistic model based on the horseshoe prior is proposed for learning dependencies in the process of identifying relevant features for prediction. Exact inference is intractable in this model. However, expectation propagation offers an approximate alternative. Because the process of estimating feature selection dependencies may suffer from over-fitting in the model proposed, additional data from a multi-task learning scenario are considered for induction. The same model can be used in this setting with few modifications. Furthermore, the assumptions made are less restrictive than in other multi-task methods: The different tasks must share feature selection dependencies, but can have different relevant features and model coefficients. Experiments with real and synthetic data show that this model performs better than other multi-task alternatives from the literature. The experiments also show that the model is able to induce suitable feature selection dependencies for the problems considered, only from the training data.
---
paper_title: Encoding tree sparsity in multi-task learning: a probabilistic framework
paper_content:
Multi-task learning seeks to improve the generalization performance by sharing common information among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not hold in many real-world applications. Existing techniques, which attempt to address this issue, aim to identify groups of related tasks using group sparsity. In this paper, we propose a probabilistic tree sparsity (PTS) model to utilize the tree structure to obtain the sparse solution instead of the group structure. Specifically, each model coefficient in the learning model is decomposed into a product of multiple component coefficients each of which corresponds to a node in the tree. Based on the decomposition, Gaussian and Cauchy distributions are placed on the component coefficients as priors to restrict the model complexity. We devise an efficient expectation maximization algorithm to learn the model parameters. Experiments conducted on both synthetic and real-world problems show the effectiveness of our model compared with state-of-the-art baselines.
---
paper_title: Exclusive Lasso for Multi-task Feature Selection
paper_content:
We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework for kernel-based multi-task feature selection based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-the-art algorithms for multi-task feature selection.
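The exclusive lasso regularizer itself is simple to state: the squared L1 norm within each group, summed over groups, so variables inside a group compete while every group tends to stay active. A small numpy illustration with made-up groups and coefficients:

```python
import numpy as np

def exclusive_lasso_penalty(w, groups):
    # Sum over groups of the squared L1 norm of the within-group coefficients.
    return sum(np.abs(w[idx]).sum() ** 2 for idx in groups)

w = np.array([0.8, 0.0, 0.1, 0.0, 0.5, 0.4])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(exclusive_lasso_penalty(w, groups))
```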
---
paper_title: Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis
paper_content:
Multitask learning can be effective when features useful in one task are also useful for other tasks, and the group lasso is a standard method for selecting a common subset of features. In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks. The main contribution of this paper is a new procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization that automatically selects similar features for related learning tasks. Error bounds are derived for SOSlasso and its consistency is established for squared error loss. In particular, SOSlasso is motivated by multi-subject fMRI studies in which functional activity is classified using brain voxels as features. Experiments with real and synthetic data demonstrate the advantages of SOSlasso compared to the lasso and group lasso.
---
paper_title: A Probabilistic Model for Dirty Multi-task Feature Selection
paper_content:
Multi-task feature selection methods often make the hypothesis that learning tasks share relevant and irrelevant features. However, this hypothesis may be too restrictive in practice. For example, there may be a few tasks with specific relevant and irrelevant features (outlier tasks). Similarly, a few of the features may be relevant for only some of the tasks (outlier features). To account for this, we propose a model for multitask feature selection based on a robust prior distribution that introduces a set of binary latent variables to identify outlier tasks and outlier features. Expectation propagation can be used for efficient approximate inference under the proposed prior. Several experiments show that a model based on the new robust prior provides better predictive performance than other benchmark methods.
---
paper_title: Adaptive Multi-Task Lasso: with application to eQTL detection
paper_content:
To understand the relationship between genomic variations among population and complex diseases, it is essential to detect eQTLs which are associated with phenotypic effects. However, detecting eQTLs remains a challenge due to complex underlying mechanisms and the very large number of genetic loci involved compared to the number of samples. Thus, to address the problem, it is desirable to take advantage of the structure of the data and prior information about genomic locations such as conservation scores and transcription factor binding sites. In this paper, we propose a novel regularized regression approach for detecting eQTLs which takes into account related traits simultaneously while incorporating many regulatory features. We first present a Bayesian network for a multi-task learning problem that includes priors on SNPs, making it possible to estimate the significance of each covariate adaptively. Then we find the maximum a posteriori (MAP) estimation of regression coefficients and estimate weights of covariates jointly. This optimization procedure is efficient since it can be achieved by using a projected gradient descent and a coordinate descent procedure iteratively. Experimental results on simulated and real yeast datasets confirm that our model outperforms previous methods for finding eQTLs.
---
paper_title: Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity
paper_content:
We consider the problem of learning a sparse multi-task regression, where the structure in the outputs can be represented as a tree with leaf nodes as outputs and internal nodes as clusters of the outputs at multiple granularity. Our goal is to recover the common set of relevant inputs for each output cluster. Assuming that the tree structure is available as prior knowledge, we formulate this problem as a new multi-task regularized regression called tree-guided group lasso. Our structured regularization is based on a group-lasso penalty, where groups are defined with respect to the tree structure. We describe a systematic weighting scheme for the groups in the penalty such that each output variable is penalized in a balanced manner even if the groups overlap. We present an efficient optimization method that can handle a large-scale problem. Using simulated and yeast datasets, we demonstrate that our method shows a superior performance in terms of both prediction errors and recovery of true sparsity patterns compared to other methods for multi-task learning.
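A sketch of how a tree-guided group-lasso penalty of this kind can be evaluated, assuming the output tree is supplied as a list of (node weight, output indices) pairs; the tree, weights, and coefficients below are invented for illustration only.

```python
import numpy as np

def tree_group_lasso_penalty(B, tree_groups, lam=1.0):
    # B: coefficient matrix with rows = inputs and columns = output traits.
    # Each tree node contributes a weighted L2 norm over the outputs in its
    # subtree, taken separately for every input row.
    penalty = 0.0
    for weight, idx in tree_groups:
        penalty += weight * np.linalg.norm(B[:, idx], axis=1).sum()
    return lam * penalty

# Toy tree over 4 outputs: four leaves, one internal node {0,1}, and the root {0,1,2,3}.
tree_groups = [(1.0, [0]), (1.0, [1]), (1.0, [2]), (1.0, [3]),
               (0.5, [0, 1]), (0.3, [0, 1, 2, 3])]
B = np.random.default_rng(1).normal(size=(6, 4))
print(tree_group_lasso_penalty(B, tree_groups))
```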
---
paper_title: Multi-level Lasso for Sparse Multi-task Regression
paper_content:
We present a flexible formulation for variable selection in multi-task regression to allow for discrepancies in the estimated sparsity patterns across the multiple tasks, while leveraging the common structure among them. Our approach is based on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task-specificity. This decomposition yields the Multi-level Lasso objective that can be solved efficiently via alternating optimization. The analysis of the "orthonormal design" case reveals some interesting insights on the nature of the shrinkage performed by our method, compared to that of related work. Theoretical guarantees are provided on the consistency of Multi-level Lasso. Simulations and empirical study of micro-array data further demonstrate the value of our framework.
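The product decomposition described above is easy to illustrate: a zero in the common component removes an input from every task, while the task-specific component modulates its strength per task. A minimal sketch with made-up values and penalty weights:

```python
import numpy as np

# Multi-level decomposition: beta[j, t] = theta[j] * gamma[j, t], where theta says
# whether input j matters at all (shared across tasks) and gamma says how strongly
# it acts in each task.
rng = np.random.default_rng(2)
d, T = 6, 3
theta = np.array([1.0, 0.0, 0.7, 0.0, 1.2, 0.4])   # common component
gamma = rng.normal(size=(d, T))                      # task-specific component
beta = theta[:, None] * gamma                        # zero entries of theta zero out whole rows

# L1 penalties on both components, as in multi-level lasso style objectives
# (weights are illustrative).
lam1, lam2 = 0.1, 0.05
penalty = lam1 * np.abs(theta).sum() + lam2 * np.abs(gamma).sum()
print(beta.round(2), penalty)
```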
---
paper_title: Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery
paper_content:
We develop a cyclical blockwise coordinate descent algorithm for the multi-task Lasso that efficiently solves problems with thousands of features and tasks. The main result shows that a closed-form Winsorization operator can be obtained for the sup-norm penalized least squares regression. This allows the algorithm to find solutions to very large-scale problems far more efficiently than existing methods. This result complements the pioneering work of Friedman, et al. (2007) for the single-task Lasso. As a case study, we use the multi-task Lasso as a variable selector to discover a semantic basis for predicting human neural activation. The learned solution outperforms the standard basis for this task on the majority of test participants, while requiring far fewer assumptions about cognitive neuroscience. We demonstrate how this learned basis can yield insights into how the brain represents the meanings of words.
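For readers who want the qualitative behavior rather than the solver details: scikit-learn's MultiTaskLasso also uses blockwise coordinate descent, although for the l1/l2 (group) penalty rather than the sup-norm penalty treated in this paper, so the snippet below is only an approximate stand-in on synthetic data.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(3)
n, d, T = 100, 30, 5
X = rng.normal(size=(n, d))
W_true = np.zeros((d, T))
W_true[:4] = rng.normal(size=(4, T))            # only the first 4 features matter, for all tasks
Y = X @ W_true + 0.1 * rng.normal(size=(n, T))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)     # blockwise coordinate descent under the hood
selected = np.flatnonzero(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)
print(selected)                                  # feature rows selected jointly across the 5 tasks
```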
---
paper_title: Multi-Stage Multi-Task Feature Learning
paper_content:
Multi-task sparse feature learning aims to improve the generalization performance by exploiting the shared features among tasks. It has been successfully applied to many applications including computer vision and biomedical informatics. Most of the existing multi-task sparse feature learning algorithms are formulated as a convex sparse regularization problem, which is usually suboptimal, due to its looseness for approximating an l0-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel nonconvex regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm; we also provide intuitive interpretations, detailed convergence and reproducibility analysis for the proposed algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with the state of the art multi-task sparse feature learning algorithms.
---
paper_title: Multitask sparsity via maximum entropy discrimination
paper_content:
A multitask learning framework is developed for discriminative classification and regression where multiple large-margin linear classifiers are estimated for different prediction problems. These classifiers operate in a common input space but are coupled as they recover an unknown shared representation. A maximum entropy discrimination (MED) framework is used to derive the multitask algorithm which involves only convex optimization problems that are straightforward to implement. Three multitask scenarios are described. The first multitask method produces multiple support vector machines that learn a shared sparse feature selection over the input space. The second multitask method produces multiple support vector machines that learn a shared conic kernel combination. The third multitask method produces a pooled classifier as well as adaptively specialized individual classifiers. Furthermore, extensions to regression, graphical model structure estimation and other sparse methods are discussed. The maximum entropy optimization problems are implemented via a sequential quadratic programming method which leverages recent progress in fast SVM solvers. Fast monotonic convergence bounds are provided by bounding the MED sparsifying cost function with a quadratic function and ensuring only a constant factor runtime increase above standard independent SVM solvers. Results are shown on multitask data sets and favor multitask learning over single-task or tabula rasa methods.
---
paper_title: Efficient multi-task feature learning with calibration
paper_content:
Multi-task feature learning has been proposed to improve the generalization performance by learning the shared features among multiple related tasks and it has been successfully applied to many real-world problems in machine learning, data mining, computer vision and bioinformatics. Most existing multi-task feature learning models simply assume a common noise level for all tasks, which may not be the case in real applications. Recently, a Calibrated Multivariate Regression (CMR) model has been proposed, which calibrates different tasks with respect to their noise levels and achieves superior prediction performance over the non-calibrated one. A major challenge is how to solve the CMR model efficiently as it is formulated as a composite optimization problem consisting of two non-smooth terms. In this paper, we propose a variant of the calibrated multi-task feature learning formulation by including a squared norm regularizer. We show that the dual problem of the proposed formulation is a smooth optimization problem with a piecewise sphere constraint. The simplicity of the dual problem enables us to develop fast dual optimization algorithms with low per-iteration cost. We also provide a detailed convergence analysis for the proposed dual optimization algorithm. Empirical studies demonstrate that the dual optimization algorithm converges quickly and is much more efficient than the primal optimization algorithm. Moreover, the calibrated multi-task feature learning algorithms with and without the squared norm regularizer achieve similar prediction performance and both outperform the non-calibrated ones. Thus, the proposed variant not only enables us to develop fast optimization algorithms, but also keeps the superior prediction performance of the calibrated multi-task feature learning over the non-calibrated one.
---
paper_title: A Regularization Approach to Learning Task Relationships in Multitask Learning
paper_content:
Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.
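A hedged sketch of the alternating step usually quoted for formulations of this kind: given the current task parameters W, the task covariance is updated in closed form as a normalized matrix square root of W^T W. The exact form is an assumption here (consult the paper for the precise update and coupling term); the snippet only illustrates the idea.

```python
import numpy as np
from scipy.linalg import sqrtm

def update_task_covariance(W, eps=1e-8):
    # Closed-form covariance update often quoted for MTRL-style models:
    # Omega proportional to (W^T W)^{1/2}, normalized to unit trace.
    # Columns of W are the per-task parameter vectors.
    M = sqrtm(W.T @ W + eps * np.eye(W.shape[1])).real
    return M / np.trace(M)

W = np.random.default_rng(4).normal(size=(10, 3))
Omega = update_task_covariance(W)
# The coupling term tr(W Omega^{-1} W^T) is what ties the task parameters together.
coupling = np.trace(W @ np.linalg.inv(Omega) @ W.T)
print(Omega.round(3), round(coupling, 3))
```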
---
paper_title: Probabilistic Multi-Task Feature Selection
paper_content:
Recently, some variants of the l1 norm, particularly matrix norms such as the l1,2 and l1,∞ norms, have been widely used in multi-task learning, compressed sensing and other related areas to enforce sparsity via joint regularization. In this paper, we unify the l1,2 and l1,∞ norms by considering a family of l1,q norms for 1 < q < ∞ and study the problem of determining the most appropriate sparsity enforcing norm to use in the context of multi-task feature selection. Using the generalized normal distribution, we provide a probabilistic interpretation of the general multi-task feature selection problem using the l1,q norm. Based on this probabilistic interpretation, we develop a probabilistic model using the noninformative Jeffreys prior. We also extend the model to learn and exploit more general types of pairwise relationships between tasks. For both versions of the model, we devise expectation-maximization (EM) algorithms to learn all model parameters, including q, automatically. Experiments have been conducted on two cancer classification applications using microarray gene expression data.
---
paper_title: On Multiplicative Multitask Feature Learning
paper_content:
We investigate a general framework of multiplicative multitask feature learning which decomposes each task's model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods have been proposed as special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effect. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. Empirical studies have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks.
---
paper_title: Safe Screening for Multi-Task Feature Learning with Multiple Data Matrices
paper_content:
Multi-task feature learning (MTFL) is a powerful technique in boosting the predictive performance by learning multiple related classification/regression/clustering tasks simultaneously. However, solving the MTFL problem remains challenging when the feature dimension is extremely large. In this paper, we propose a novel screening rule--that is based on the dual projection onto convex sets (DPC)--to quickly identify the inactive features--that have zero coefficients in the solution vectors across all tasks. One of the appealing features of DPC is that it is safe in the sense that the detected inactive features are guaranteed to have zero coefficients in the solution vectors across all tasks. Thus, by removing the inactive features from the training phase, we may have substantial savings in the computational cost and memory usage without sacrificing accuracy. To the best of our knowledge, it is the first screening rule that is applicable to sparse models with multiple data matrices. A key challenge in deriving DPC is to solve a nonconvex problem. We show that we can solve for the global optimum efficiently via a properly chosen parametrization of the constraint set. Moreover, DPC has very low computational cost and can be integrated with any existing solvers. We have evaluated the proposed DPC rule on both synthetic and real data sets. The experiments indicate that DPC is very effective in identifying the inactive features--especially for high dimensional data--which leads to a speedup of up to several orders of magnitude.
---
paper_title: Joint covariate selection and joint subspace selection for multiple classification problems
paper_content:
We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2 norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.
---
paper_title: Trace Norm Regularization: Reformulations, Algorithms, and Multi-task Learning
paper_content:
We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of this problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches and its practical effectiveness is illustrated on simulations and an application to gene expression pattern analysis.
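The basic operation behind trace norm regularization is singular value soft-thresholding, which is what the proximal gradient baselines mentioned in this abstract apply at every iteration; the paper's dual reformulation is different and is not sketched here.

```python
import numpy as np

def prox_trace_norm(W, t):
    # Proximal operator of t * ||W||_* (trace norm): soft-threshold the singular
    # values, which biases the task parameter matrix toward low rank.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - t, 0.0)
    return (U * s_shrunk) @ Vt

W = np.random.default_rng(5).normal(size=(8, 4))
W_low_rank = prox_trace_norm(W, t=1.0)
print(np.linalg.matrix_rank(W), np.linalg.matrix_rank(W_low_rank))
```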
---
paper_title: Flexible latent variable models for multi-task learning
paper_content:
Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve the prediction accuracy, especially when the number of training examples per problem is small. In this paper we propose a probabilistic framework which can support a set of latent variable models for different multi-task learning scenarios. We show that the framework is a generalization of standard learning methods for single prediction problems and it can effectively model the shared structure among different prediction tasks. Furthermore, we present efficient algorithms for the empirical Bayes method as well as point estimation. Our experiments on both simulated datasets and real world classification datasets show the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.
---
paper_title: Multi-stage multi-task learning with reduced rank
paper_content:
Multi-task learning (MTL) seeks to improve the generalization performance by sharing information among multiple tasks. Many existing MTL approaches aim to learn the low-rank structure on the weight matrix, which stores the model parameters of all tasks, to achieve task sharing, and as a consequence the trace norm regularization is widely used in the MTL literature. A major limitation of these approaches based on trace norm regularization is that all the singular values of the weight matrix are penalized simultaneously, leading to impaired estimation on recovering the larger singular values in the weight matrix. To address the issue, we propose a Reduced rAnk MUlti-Stage multi-tAsk learning (RAMUSA) method based on the recently proposed capped norms. Different from existing trace-norm-based MTL approaches which minimize the sum of all the singular values, the RAMUSA method uses a capped trace norm regularizer to minimize only the singular values smaller than some threshold. Due to the non-convexity of the capped trace norm, we develop a simple but well guaranteed multi-stage algorithm to learn the weight matrix iteratively. We theoretically prove that the estimation error at each stage in the proposed algorithm shrinks and finally achieves a lower upper-bound as the number of stages becomes large enough. Empirical studies on synthetic and real-world datasets demonstrate the effectiveness of the RAMUSA method in comparison with the state-of-the-art methods.
---
paper_title: Spectral k-Support Norm Regularization
paper_content:
The k-support norm has successfully been applied to sparse vector prediction problems. We observe that it belongs to a wider class of norms, which we call the box-norms. Within this framework we derive an efficient algorithm to compute the proximity operator of the squared norm, improving upon the original method for the k-support norm. We extend the norms from the vector to the matrix setting and we introduce the spectral k-support norm. We study its properties and show that it is closely related to the multitask learning cluster norm. We apply the norms to real and synthetic matrix completion datasets. Our findings indicate that spectral k-support norm regularization gives state of the art performance, consistently improving over trace norm regularization and the matrix elastic net.
---
paper_title: Learning Multiple Related Tasks using Latent Independent Component Analysis
paper_content:
We propose a probabilistic model based on Independent Component Analysis for learning multiple related tasks. In our model the task parameters are assumed to be generated from independent sources which account for the relatedness of the tasks. We use Laplace distributions to model hidden sources which makes it possible to identify the hidden, independent components instead of just modeling correlations. Furthermore, our model enjoys a sparsity property which makes it both parsimonious and robust. We also propose efficient algorithms for both empirical Bayes method and point estimation. Our experimental results on two multi-label text classification data sets show that the proposed approach is promising.
---
paper_title: A convex formulation for learning shared structures from multiple tasks
paper_content:
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.
---
paper_title: A framework for learning predictive structures from multiple tasks and unlabeled data
paper_content:
One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.
---
paper_title: Discovering Structure in Multiple Learning Tasks: The TC Algorithm
paper_content:
Recently, there has been an increased interest in “lifelong” machine learning methods, that transfer knowledge across multiple learning tasks. Such methods have repeatedly been found to outperform conventional, single-task learning algorithms when the learning tasks are appropriately related. To increase robustness of such approaches, methods are desirable that can reason about the relatedness of individual learning tasks, in order to avoid the danger arising from tasks that are unrelated and thus potentially misleading. This paper describes the task-clustering (TC) algorithm. TC clusters learning tasks into classes of mutually related tasks. When facing a new learning task, TC first determines the most related task cluster, then exploits information selectively from this task cluster only. An empirical study carried out in a mobile robot domain shows that TC outperforms its non-selective counterpart in situations where only a small number of tasks is relevant.
---
paper_title: The matrix stick-breaking process for flexible multi-task learning
paper_content:
In multi-task learning our goal is to design regression or classification models for each of the tasks and appropriately share information between tasks. A Dirichlet process (DP) prior can be used to encourage task clustering. However, the DP prior does not allow local clustering of tasks with respect to a subset of the feature vector without making independence assumptions. Motivated by this problem, we develop a new multitask-learning prior, termed the matrix stick-breaking process (MSBP), which encourages cross-task sharing of data. However, the MSBP allows separate clustering and borrowing of information for the different feature components. This is important when tasks are more closely related for certain features than for others. Bayesian inference proceeds by a Gibbs sampling algorithm and the approach is illustrated using a simulated example and a multi-national application.
---
paper_title: Multi-task learning for sequential data via iHMMs and the nested Dirichlet process
paper_content:
A new hierarchical nonparametric Bayesian model is proposed for the problem of multitask learning (MTL) with sequential data. Sequential data are typically modeled with a hidden Markov model (HMM), for which one often must choose an appropriate model structure (number of states) before learning. Here we model sequential data from each task with an infinite hidden Markov model (iHMM), avoiding the problem of model selection. The MTL for iHMMs is implemented by imposing a nested Dirichlet process (nDP) prior on the base distributions of the iHMMs. The nDP-iHMM MTL method allows us to perform task-level clustering and data-level clustering simultaneously, with which the learning for individual iHMMs is enhanced and between-task similarities are learned. Learning and inference for the nDP-iHMM MTL are based on a Gibbs sampler. The effectiveness of the framework is demonstrated using synthetic data as well as real music data.
---
paper_title: Learning Gaussian processes from multiple tasks
paper_content:
We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems.
---
paper_title: Nonparametric Bayesian feature selection for multi-task learning
paper_content:
We present a nonparametric Bayesian model for multi-task learning, with a focus on feature selection in binary classification. The model jointly identifies groups of similar tasks and selects the subset of features relevant to the tasks within each group. The model employs a Dirichlet process with a beta-Bernoulli hierarchical base measure. The posterior inference is accomplished efficiently using a Gibbs sampler. Experimental results are presented on simulated as well as real data.
---
paper_title: Convex Multi-Task Learning by Clustering
paper_content:
We consider the problem of multi-task learning in which tasks belong to hidden clusters. We formulate the learning problem as a novel convex optimization problem in which linear classifiers are combinations of (a small number of) some basis. Our formulation jointly learns both the basis and the linear combination. We propose a scalable optimization algorithm for finding the optimal solution. Our new methods outperform existing state-of-the-art methods on multi-task sentiment classification tasks.
---
paper_title: Learning with Whom to Share in Multi-task Feature Learning
paper_content:
In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining "with whom" each task should share. We formulate the problem as a mixed integer program and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm monotonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in the literature.
---
paper_title: Multi-Task Learning for Analyzing and Sorting Large Databases of Sequential Data
paper_content:
A new hierarchical nonparametric Bayesian framework is proposed for the problem of multi-task learning (MTL) with sequential data. The models for multiple tasks, each characterized by sequential data, are learned jointly, and the intertask relationships are obtained simultaneously. This MTL setting is used to analyze and sort large databases composed of sequential data, such as music clips. Within each data set, we represent the sequential data with an infinite hidden Markov model (iHMM), avoiding the problem of model selection (selecting a number of states). Across the data sets, the multiple iHMMs are learned jointly in a MTL setting, employing a nested Dirichlet process (nDP). The nDP-iHMM MTL method allows simultaneous task-level and data-level clustering, with which the individual iHMMs are enhanced and the between-task similarities are learned. Therefore, in addition to improved learning of each of the models via appropriate data sharing, the learned sharing mechanisms are used to infer interdata relationships of interest for data search. Specifically, the MTL-learned task-level sharing mechanisms are used to define the affinity matrix in a graph-diffusion sorting framework. To speed up the MCMC inference for large databases, the nDP-iHMM is truncated to yield a nested Dirichlet-distribution based HMM representation, which accommodates fast variational Bayesian (VB) analysis for large-scale inference, and the effectiveness of the framework is demonstrated using a database composed of 2500 digital music pieces.
---
paper_title: Clustered Multi-Task Learning: A Convex Formulation
paper_content:
In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.
---
paper_title: Task Clustering and Gating for Bayesian Multitask Learning
paper_content:
Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we will adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture of experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both on an artificial data set and on two real-world problems, one a school problem and the other involving single-copy newspaper sales.
---
paper_title: Multi-task compressive sensing with Dirichlet process priors
paper_content:
Compressive sensing (CS) is an emerging field that, under appropriate conditions, can significantly reduce the number of measurements required for a given signal. In many applications, one is interested in multiple signals that may be measured in multiple CS-type measurements, where each signal corresponds to a sensing "task". In this paper we propose a novel multitask compressive sensing framework based on a Bayesian formalism, where a Dirichlet process (DP) prior is employed, yielding a principled means of simultaneously inferring the appropriate sharing mechanisms as well as CS inversion for each task. A variational Bayesian (VB) inference algorithm is employed to estimate the full posterior on the model parameters.
---
paper_title: A Multitask Point Process Predictive Model
paper_content:
Point process data are commonly observed in fields like healthcare and the social sciences. Designing predictive models for such event streams is an under-explored problem, due to often scarce training data. In this work we propose a multitask point process model, leveraging information from all tasks via a hierarchical Gaussian process (GP). Nonparametric learning functions implemented by a GP, which map from past events to future rates, allow analysis of flexible arrival patterns. To facilitate efficient inference, we propose a sparse construction for this hierarchical model, and derive a variational Bayes method for learning and inference. Experimental results are shown on both synthetic data and real electronic health-records data.
---
paper_title: Learning multi-level task groups in multi-task learning
paper_content:
In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learning the task groups at only a single level, which may not be sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propose a Multi-Level Task Grouping (MeTaG) method to learn a multi-level grouping structure among tasks instead of only a single level. Specifically, by assuming the number of levels to be H, we decompose the parameter matrix into a sum of H component matrices, each of which is regularized with an l2 norm on the pairwise difference among parameters of all the tasks to construct level-specific task groups. For optimization, we employ the smoothing proximal gradient method to efficiently solve the objective function of the MeTaG model. Moreover, we provide theoretical analysis to show that under certain conditions the MeTaG model can recover the true parameter matrix and the true task groups in each level with high probability. We evaluate our approach on both synthetic and real-world datasets, showing competitive performance over state-of-the-art MTL methods.
---
paper_title: Flexible Clustered Multi-Task Learning by Learning Representative Tasks
paper_content:
Multi-task learning (MTL) methods have shown promising performance by learning multiple relevant tasks simultaneously, exploiting shared useful information across related tasks. Among various MTL methods, clustered multi-task learning (CMTL) assumes that all tasks can be clustered into groups and attempts to learn the underlying cluster structure from the training data. In this paper, we present a new approach for CMTL, called flexible clustered multi-task learning (FCMTL), in which the cluster structure is learned by identifying representative tasks. The new approach allows an arbitrary task to be described by multiple representative tasks, effectively soft-assigning a task to multiple clusters with different weights. Unlike existing counterparts, the proposed approach is more flexible in that (a) it does not require clusters to be disjoint, (b) tasks within one particular cluster do not have to share information to the same extent, and (c) the number of clusters is automatically inferred from data. Computationally, the proposed approach is formulated as a row-sparsity pursuit problem. We validate the proposed FCMTL on both synthetic and real-world data sets, and empirical results demonstrate that it outperforms many existing MTL methods.
---
paper_title: Regularized multi-task learning
paper_content:
Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi-task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single-task learning. Our approach allows us to model the relation between tasks in terms of a novel kernel function that uses a task-coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using SVMs.
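The display below is a hedged sketch (our own notation and constants, not the paper's exact formulation) of how such a task-coupled regularizer is commonly written, with every task weight split into a shared part and a task-specific offset:

$$
w_t = w_0 + v_t, \qquad
\min_{w_0,\{v_t\}}\ \sum_{t=1}^{T}\sum_{i=1}^{m_t} \ell\big(y_{ti}, (w_0 + v_t)^\top x_{ti}\big)
+ \frac{\lambda_1}{T}\sum_{t=1}^{T}\|v_t\|_2^2 + \lambda_2\,\|w_0\|_2^2 .
$$

A large ratio \lambda_1/\lambda_2 drives the offsets v_t toward zero so all tasks share essentially one model, while a small ratio recovers nearly independent single-task learners.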
---
paper_title: Multilabel relationship learning
paper_content:
Multilabel learning problems are commonly found in many applications. A characteristic shared by many multilabel learning problems is that some labels have significant correlations between them. In this article, we propose a novel multilabel learning method, called MultiLabel Relationship Learning (MLRL), which extends the conventional support vector machine by explicitly learning and utilizing the relationships between labels. Specifically, we model the label relationships using a label covariance matrix and use it to define a new regularization term for the optimization problem. MLRL learns the model parameters and the label covariance matrix simultaneously based on a unified convex formulation. To solve the convex optimization problem, we use an alternating method in which each subproblem can be solved efficiently. The relationship between MLRL and two widely used maximum margin methods for multilabel learning is investigated. Moreover, we also propose a semisupervised extension of MLRL, called SSMLRL, to demonstrate how to make use of unlabeled data to help learn the label covariance matrix. Through experiments conducted on some multilabel applications, we find that MLRL not only gives higher classification accuracy but also has better interpretability as revealed by the label covariance matrix.
---
paper_title: Multi-task sparse structure learning with Gaussian copula models
paper_content:
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of tasks relationship. In particular, we consider a joint estimation problem of the tasks relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship revealed by structure learning is founded on recent advances in Gaussian graphical models endowed with sparse estimators of the precision (inverse covariance) matrix. An extension to include flexible Gaussian copula models that relaxes the Gaussian marginal assumption is also proposed. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark data sets for regression and classification. We also consider the problem of combining Earth System Model (ESM) outputs for better projections of future climate, with focus on projections of temperature by combining ESMs in South and North America, and show that the proposed model outperforms several existing methods for the problem.
---
paper_title: Learning High-Order Task Relationships in Multi-Task Learning
paper_content:
Multi-task learning is a way of bringing inductive transfer studied in human learning to the machine learning community. A central issue in multitask learning is to model the relationships between tasks appropriately and exploit them to aid the simultaneous learning of multiple tasks effectively. While some recent methods model and learn the task relationships from data automatically, only pairwise relationships can be represented by them. In this paper, we propose a new model, called Multi-Task High-Order relationship Learning (MTHOL), which extends in a novel way the use of pairwise task relationships to high-order task relationships. We first propose an alternative formulation of an existing multi-task learning method. Based on the new formulation, we propose a high-order generalization leading to a new prior for the model parameters of different tasks. We then propose a new probabilistic model for multi-task learning and validate it empirically on some benchmark datasets.
---
paper_title: Learning Multiple Tasks with Kernel Methods
paper_content:
We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task.
---
paper_title: Multi-Task Boosting by Exploiting Task Relationships
paper_content:
Multi-task learning aims at improving the performance of one learning task with the help of other related tasks. It is particularly useful when each task has very limited labeled data. A central issue in multi-task learning is to learn and exploit the relationships between tasks. In this paper, we generalize boosting to the multi-task learning setting and propose a method called multi-task boosting (MTBoost). Different tasks in MTBoost share the same base learners but with different weights which are related to the estimated task relationships in each iteration. In MTBoost, unlike ordinary boosting methods, the base learners, weights and task covariances are learned together in an integrated fashion using an alternating optimization procedure. We conduct theoretical analysis on the convergence of MTBoost and also empirical analysis comparing it with several related methods.
---
paper_title: Asymmetric multi-task learning based on task relatedness and loss
paper_content:
We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between the tasks based on task relatedness as well as the amount of individual task losses, which we refer to as Asymmetric Multi-task Learning (AMTL). To tackle this problem, we couple multiple tasks via a sparse, directed regularization graph that enforces each task parameter to be reconstructed as a sparse combination of other tasks selected based on the task-wise loss. We present two different algorithms that jointly learn the task predictors as well as the regularization graph. The first algorithm solves the original learning objective using alternating optimization, and the second algorithm solves an approximation of it using a curriculum learning strategy that learns one task at a time. We perform experiments on multiple datasets for classification and regression, on which we obtain significant improvements in performance over single-task learning and existing multi-task learning models.
---
paper_title: A Convex Formulation for Learning Task Relationships in Multi-Task Learning
paper_content:
Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.
---
paper_title: Large margin multi-task metric learning
paper_content:
Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by Evgeniou et al. [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.
---
paper_title: Multi-Task Learning with Gaussian Matrix Generalized Inverse Gaussian Model
paper_content:
In this paper, we study the multi-task learning problem with a new perspective of considering the structure of the residue error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior and define a Gaussian Matrix Generalized Inverse Gaussian (GMGIG) model for low-rank approximation to the task covariance matrix. Through combining the GMGIG model with the residual error structure assumption, we propose the GMGIG regression model for multi-task learning. To make the computation tractable, we simultaneously use variational inference and sampling techniques. In particular, we propose two sampling strategies for computing the statistics of the MGIG distribution. Experiments show that this model is superior to the peer methods in regression and prediction.
---
paper_title: Simultaneously Leveraging Output and Task Structures for Multiple-Output Regression
paper_content:
Multiple-output regression models require estimating multiple parameters, one for each output. Structural regularization is usually employed to improve parameter estimation in such models. In this paper, we present a multiple-output regression model that leverages the covariance structure of the latent model parameters as well as the conditional covariance structure of the observed outputs. This is in contrast with existing methods that usually take into account only one of these structures. More importantly, unlike some of the other existing methods, none of these structures need be known a priori in our model, and are learned from the data. Several previously proposed structural regularization based multiple-output regression models turn out to be special cases of our model. Moreover, in addition to being a rich model for multiple-output regression, our model can also be used in estimating the graphical model structure of a set of variables (multivariate outputs) conditioned on another set of variables (inputs). Experimental results on both synthetic and real datasets demonstrate the effectiveness of our method.
---
paper_title: Efficient Output Kernel Learning for Multiple Tasks
paper_content:
The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly and thus exploiting the similarity between the tasks rather than learning them independently of each other. While previously the relationship between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the output kernel is a positive semidefinite matrix, the resulting optimization problems are not scalable in the number of tasks as an eigendecomposition is required in each step. Using the theory of positive semidefinite kernels we show in this paper that for a certain class of regularizers on the output kernel, the constraint of being positive semidefinite can be dropped as it is automatically satisfied for the relaxed problem. This leads to an unconstrained dual problem which can be solved efficiently. Experiments on several multi-task and multi-class data sets illustrate the efficacy of our approach in terms of computational efficiency as well as generalization performance.
---
paper_title: A Kernel Method for the Two-Sample Problem
paper_content:
We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
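As a hedged illustration of the statistic described above (a minimal NumPy sketch, not the authors' code; the RBF kernel and the bandwidth gamma are arbitrary choices here), the biased estimate of the squared MMD can be computed as:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian kernel matrix from pairwise squared Euclidean distances.
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2_biased(x, y, gamma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples x and y."""
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

# Toy usage: two samples drawn from Gaussians with shifted means.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 5))
y = rng.normal(0.5, 1.0, size=(200, 5))
print(mmd2_biased(x, y, gamma=0.5))
```

In practice a permutation test over the pooled sample is a common way to calibrate the statistic against its null distribution.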
---
paper_title: Hierarchical Multitask Structured Output Learning for Large-Scale Sequence Segmentation
paper_content:
We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations. Structured output prediction often leads to difficult inference problems and hence requires large amounts of training data to obtain accurate models. We propose to use MTL to exploit additional information from related learning tasks by means of hierarchical regularization. Training SO models on the combined set of examples from multiple tasks can easily become infeasible for real world applications. To be able to solve the optimization problems underlying multitask structured output learning, we propose an efficient algorithm based on bundle-methods. We demonstrate the performance of our approach in applications from the domain of computational biology addressing the key problem of gene finding. We show that 1) our proposed solver achieves much faster convergence than previous methods and 2) that the Hierarchical SO-MTL approach outperforms considered non-MTL methods.
---
paper_title: Multi-Stage Multi-Task Feature Learning
paper_content:
Multi-task sparse feature learning aims to improve the generalization performance by exploiting the shared features among tasks. It has been successfully applied to many applications including computer vision and biomedical informatics. Most of the existing multi-task sparse feature learning algorithms are formulated as a convex sparse regularization problem, which is usually suboptimal, due to its looseness for approximating an l0-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel nonconvex regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm; we also provide intuitive interpretations, detailed convergence and reproducibility analysis for the proposed algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with the state of the art multi-task sparse feature learning algorithms.
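A hedged sketch of the kind of non-convex, capped row-wise regularizer the abstract refers to (our notation; W has one row per feature and one column per task, and theta is a capping threshold):

$$
\min_{W}\ \sum_{t=1}^{m} \mathcal{L}_t(w_t) + \lambda \sum_{j=1}^{d} \min\big(\|\mathbf{w}^{j}\|_1,\ \theta\big),
$$

which behaves like a group-l1 penalty for small rows but stops penalizing rows whose l1 norm exceeds theta, giving a tighter surrogate of an l0-type row-sparsity count than the convex relaxation.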
---
paper_title: Generalization Errors and Learning Curves for Regression with Multi-task Gaussian Processes
paper_content:
We provide some insights into how task correlations in multi-task Gaussian process (GP) regression affect the generalization error and the learning curve. We analyze the asymmetric two-tasks case, where a secondary task is to help the learning of a primary task. Within this setting, we give bounds on the generalization error and the learning curve of the primary task. Our approach admits intuitive understandings of the multi-task GP by relating it to single-task GPs. For the case of one-dimensional input-space under optimal sampling with data only for the secondary task, the limitations of multi-task GP can be quantified explicitly.
---
paper_title: Efficient multi-task feature learning with calibration
paper_content:
Multi-task feature learning has been proposed to improve the generalization performance by learning the shared features among multiple related tasks and it has been successfully applied to many real-world problems in machine learning, data mining, computer vision and bioinformatics. Most existing multi-task feature learning models simply assume a common noise level for all tasks, which may not be the case in real applications. Recently, a Calibrated Multivariate Regression (CMR) model has been proposed, which calibrates different tasks with respect to their noise levels and achieves superior prediction performance over the non-calibrated one. A major challenge is how to solve the CMR model efficiently as it is formulated as a composite optimization problem consisting of two non-smooth terms. In this paper, we propose a variant of the calibrated multi-task feature learning formulation by including a squared norm regularizer. We show that the dual problem of the proposed formulation is a smooth optimization problem with a piecewise sphere constraint. The simplicity of the dual problem enables us to develop fast dual optimization algorithms with low per-iteration cost. We also provide a detailed convergence analysis for the proposed dual optimization algorithm. Empirical studies demonstrate that, the dual optimization algorithm quickly converges and it is much more efficient than the primal optimization algorithm. Moreover, the calibrated multi-task feature learning algorithms with and without the squared norm regularizer achieve similar prediction performance and both outperform the non-calibrated ones. Thus, the proposed variant not only enables us to develop fast optimization algorithms, but also keeps the superior prediction performance of the calibrated multi-task feature learning over the non-calibrated one.
---
paper_title: Convex Learning of Multiple Tasks and their Structure
paper_content:
Reducing the amount of human supervision is a key problem in machine learning and a natural approach is that of exploiting the relations (structure) among different tasks. This is the idea at the core of multi-task learning. In this context a fundamental question is how to incorporate the tasks structure in the learning problem. We tackle this question by studying a general computational framework that allows one to encode a priori knowledge of the tasks structure in the form of a convex penalty; in this setting a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches. Within this framework, we show that tasks and their structure can be efficiently learned considering a convex optimization problem that can be approached by means of block coordinate methods such as alternating minimization and for which we prove convergence to the global minimum.
---
paper_title: A Regularization Approach to Learning Task Relationships in Multitask Learning
paper_content:
Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.
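A compact sketch of an MTRL-style objective (reconstructed from the description above; the symbols and constants are ours and may differ from the paper) couples the task weight matrix W = [w_1, ..., w_m] with a task covariance matrix Omega:

$$
\min_{W, b, \Omega}\ \sum_{t=1}^{m}\sum_{i=1}^{n_t} \ell\big(y_{ti}, w_t^\top x_{ti} + b_t\big)
+ \frac{\lambda_1}{2}\|W\|_F^2 + \frac{\lambda_2}{2}\,\mathrm{tr}\big(W \Omega^{-1} W^\top\big)
\quad \text{s.t.}\ \Omega \succeq 0,\ \mathrm{tr}(\Omega) = 1 .
$$

For fixed Omega the problem in W is a convex regularized loss, and for fixed W the optimal Omega is proportional to (W^T W)^{1/2} under the trace constraint, which is what makes the alternating scheme mentioned in the abstract practical.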
---
paper_title: Learning Output Kernels with Block Coordinate Descent
paper_content:
We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space which may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space of vector-valued functions. Although the regularized risk functional is non-convex, we show that it is invex, implying that all local minimizers are global minimizers. We derive a block-wise coordinate descent method that efficiently exploits the structure of the objective functional. Then, we empirically demonstrate that the proposed method can improve classification accuracy. Finally, we provide a visual interpretation of the learned kernel matrix for some well known datasets.
---
paper_title: Conic Programming for Multitask Learning
paper_content:
When we have several related tasks, solving them simultaneously has been shown to be more effective than solving them individually. This approach is called multitask learning (MTL). In this paper, we propose a novel MTL algorithm. Our method controls the relatedness among the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines and show that the optimization problem can be cast as a second-order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated in ordinal regression, link prediction, and collaborative filtering, each of which can be formulated as a structured multitask problem.
---
paper_title: Multi-task Gaussian Process Prediction
paper_content:
In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
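In our own notation (a sketch consistent with the description above, not a verbatim restatement), the "free-form" task covariance combines multiplicatively with the input kernel:

$$
\mathrm{cov}\big(f_t(x), f_{t'}(x')\big) = K^{f}_{t t'}\, k^{x}(x, x'), \qquad
y_{ti} = f_t(x_i) + \varepsilon_{ti},\ \ \varepsilon_{ti}\sim\mathcal{N}(0, \sigma_t^2),
$$

so that on a common set of inputs (a block design) the joint covariance has the Kronecker form K^f \otimes K^x plus a diagonal noise term, which is what underlies the stated cancellation-of-transfer result and the scalable approximations.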
---
paper_title: Heterogeneous-Neighborhood-based Multi-Task Local Learning Algorithms
paper_content:
All the existing multi-task local learning methods are defined on homogeneous neighborhood which consists of all data points from only one task. In this paper, different from existing methods, we propose local learning methods for multitask classification and regression problems based on heterogeneous neighborhood which is defined on data points from all tasks. Specifically, we extend the k-nearest-neighbor classifier by formulating the decision function for each data point as a weighted voting among the neighbors from all tasks where the weights are task-specific. By defining a regularizer to enforce the task-specific weight matrix to approach a symmetric one, a regularized objective function is proposed and an efficient coordinate descent method is developed to solve it. For regression problems, we extend the kernel regression to multi-task setting in a similar way to the classification case. Experiments on some toy data and real-world datasets demonstrate the effectiveness of our proposed methods.
---
paper_title: Learning multiple visual tasks while discovering their structure
paper_content:
Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g. object detection, classification, tracking of multiple agents, or denoising, to name a few. The key idea is that exploring task relatedness (structure) can lead to improved performances. In this paper, we propose and study a novel sparse, nonparametric approach exploiting the theory of Reproducing Kernel Hilbert Spaces for vector-valued functions. We develop a suitable regularization framework which can be formulated as a convex optimization problem, and is provably solvable using an alternating minimization approach. Empirical tests show that the proposed method compares favorably to state of the art techniques and further allows to recover interpretable structures, a problem of interest in its own right.
---
paper_title: It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals
paper_content:
Multi-task prediction methods are widely used to couple regressors or classification models by sharing information across related tasks. We propose a multi-task Gaussian process approach for modeling both the relatedness between regressors and the task correlations in the residuals, in order to more accurately identify true sharing between regressors. The resulting Gaussian model has a covariance term in form of a sum of Kronecker products, for which efficient parameter inference and out of sample prediction are feasible. On both synthetic examples and applications to phenotype prediction in genetics, we find substantial benefits of modeling structured noise compared to established alternatives.
---
paper_title: Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
paper_content:
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multitask learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is nonconvex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose employing the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is nondifferentiable and the feasible domain is nontrivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multitask learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multitask learning formulation and the efficiency of the proposed projected gradient algorithms.
---
paper_title: Convex Multitask Learning with Flexible Task Clusters
paper_content:
Traditionally, multitask learning (MTL) assumes that all the tasks are related. This can lead to negative transfer when tasks are indeed incoherent. Recently, a number of approaches have been proposed that alleviate this problem by discovering the underlying task clusters or relationships. However, they are limited to modeling these relationships at the task level, which may be restrictive in some applications. In this paper, we propose a novel MTL formulation that captures task relationships at the feature level. Depending on the interactions among tasks and features, the proposed method constructs different task clusters for different features, without even needing to pre-specify the number of clusters. Computationally, the proposed formulation is strongly convex, and can be efficiently solved by accelerated proximal methods. Experiments are performed on a number of synthetic and real-world data sets. Under various degrees of task relationships, the accuracy of the proposed method is consistently among the best. Moreover, the feature-specific task clusters obtained agree with the known/plausible task structures of the data.
---
paper_title: Integrating low-rank and group-sparse structures for robust multi-task learning
paper_content:
Multi-task learning (MTL) aims at improving the generalization performance by utilizing the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not be the case in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure, and simultaneously identifies the outlier tasks using a group-sparse structure. The proposed RMTL algorithm is formulated as a non-smooth convex (unconstrained) optimization problem. We propose to adopt the accelerated proximal method (APM) for solving such an optimization problem. The key component in APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this key property, we establish a theoretical bound for characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm.
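A hedged sketch of the decomposition described above (our notation; the columns of W are the task parameter vectors):

$$
W = L + S, \qquad
\min_{L, S}\ \sum_{t=1}^{m} \mathcal{L}_t\big(l_t + s_t\big) + \lambda_1 \|L\|_{*} + \lambda_2 \sum_{t=1}^{m}\|s_t\|_2 ,
$$

where the trace norm on L captures the shared low-rank structure and the column-wise group-sparse penalty on S leaves most columns at zero, so the few non-zero columns flag the outlier tasks.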
---
paper_title: A Dirty Model for Multi-task Learning
paper_content:
We consider multi-task learning in the setting of multiple linear regression, and where some relevant features could be shared across the tasks. Recent research has studied the use of l1/lq norm block-regularizations with q > 1 for such block-sparse structured problems, establishing strong guarantees on recovery even under high-dimensional scaling where the number of features scales with the number of observations. However, these papers also caution that the performance of such block-regularized methods is very dependent on the extent to which the features are shared across tasks. Indeed they show [8] that if the extent of overlap is less than a threshold, or even if parameter values in the shared features are highly uneven, then block l1/lq regularization could actually perform worse than simple separate elementwise l1 regularization. Since these caveats depend on the unknown true parameters, we might not know when and which method to apply. Even otherwise, we are far away from a realistic multi-task setting: not only do the set of relevant features have to be exactly the same across tasks, but their values have to as well. Here, we ask the question: can we leverage parameter overlap when it exists, but not pay a penalty when it does not? Indeed, this falls under a more general question of whether we can model such dirty data which may not fall into a single neat structural bracket (all block-sparse, or all low-rank and so on). With the explosion of such dirty high-dimensional data in modern settings, it is vital to develop tools - dirty models - to perform biased statistical estimation tailored to such data. Here, we take a first step, focusing on developing a dirty model for the multiple regression problem. Our method uses a very simple idea: we estimate a superposition of two sets of parameters and regularize them differently. We show, both theoretically and empirically, that our method strictly and noticeably outperforms both l1 and l1/lq methods, under high-dimensional scaling and over the entire range of possible overlaps (except at boundary cases, where we match the best method).
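A hedged sketch of the superposition idea (our notation, with a least-squares loss assumed for concreteness): the parameter matrix is split into a block-sparse part B, whose active rows are shared across tasks, and an elementwise-sparse part S:

$$
\min_{B, S}\ \sum_{t=1}^{m}\big\|y_t - X_t (b_t + s_t)\big\|_2^2
+ \lambda_B \sum_{j=1}^{d}\|\mathbf{b}^{j}\|_\infty + \lambda_S \|S\|_{1,1},
$$

so overlap across tasks is absorbed by B when it exists, while task-private features fall into S without forcing the shared support to explain them.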
---
paper_title: Multi-level Lasso for Sparse Multi-task Regression
paper_content:
We present a flexible formulation for variable selection in multi-task regression to allow for discrepancies in the estimated sparsity patterns across the multiple tasks, while leveraging the common structure among them. Our approach is based on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task-specificity. This decomposition yields the Multi-level Lasso objective that can be solved efficiently via alternating optimization. The analysis of the "orthonormal design" case reveals some interesting insights on the nature of the shrinkage performed by our method, compared to that of related work. Theoretical guarantees are provided on the consistency of Multi-level Lasso. Simulations and empirical study of micro-array data further demonstrate the value of our framework.
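A hedged sketch of the product decomposition described above (our notation): each coefficient factors into a feature-level component shared by all tasks and a task-specific component,

$$
\beta_{jt} = \theta_j\,\gamma_{jt}, \qquad
\min_{\theta \ge 0,\ \Gamma}\ \sum_{t=1}^{m}\big\|y_t - X_t \beta_t\big\|_2^2
+ \lambda_1 \sum_{j=1}^{d} \theta_j + \lambda_2 \sum_{j=1}^{d}\sum_{t=1}^{m} |\gamma_{jt}| ,
$$

so driving theta_j to zero removes feature j from every task at once, while the gamma_{jt} allow task-specific deviations in the surviving features; alternating over theta and Gamma gives the optimization scheme mentioned in the abstract.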
---
paper_title: Spike and slab variational inference for multi-task and multiple kernel learning
paper_content:
We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the golden standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relation between tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.
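For reference, a generic spike-and-slab prior on a weight (our notation, not the paper's exact multi-task construction) mixes a point mass at zero with a continuous "slab":

$$
w_j \sim \pi\,\mathcal{N}\big(0, \sigma_w^2\big) + (1-\pi)\,\delta_0, \qquad \pi \in [0,1] .
$$

Variational treatments of this prior typically maintain, for each weight, an inclusion probability together with a Gaussian posterior over its slab value.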
---
paper_title: Regularized multi-task learning
paper_content:
Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi-task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single-task learning. Our approach allows us to model the relation between tasks in terms of a novel kernel function that uses a task-coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using SVMs.
---
paper_title: Learning Tree Structure in Multi-Task Learning
paper_content:
In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information according to task relations. One promising approach is to utilize the given tree structure, which describes the hierarchical relations among tasks, to learn model parameters under the regularization framework. However, such a priori information is rarely available in most applications. To the best of our knowledge, there is no work to learn the tree structure among tasks and model parameters simultaneously under the regularization framework and in this paper, we develop a TAsk Tree (TAT) model for MTL to achieve this. By specifying the number of layers in the tree as H, the TAT method decomposes the parameter matrix into H component matrices, each of which corresponds to the model parameters in each layer of the tree. In order to learn the tree structure, we devise sequential constraints to make the distance between the parameters in the component matrices corresponding to each pair of tasks decrease over layers, and hence the component parameters will keep fused until the topmost layer, once they become fused in a layer. Moreover, to make the component parameters have chance to fuse in different layers, we develop a structural sparsity regularizer, which is the sum of the l2 norm on the pairwise difference among the component parameters, to learn layer-specific task structure. In order to solve the resulting non-convex objective function, we use the general iterative shrinkage and thresholding (GIST) method. By using the alternating direction method of multipliers (ADMM) method, we decompose the proximal problem in the GIST method into three independent subproblems, where a key subproblem with the sequential constraints has an efficient solution as the other two subproblems do. We also provide some theoretical analysis for the TAT model. Experiments on both synthetic and real-world datasets show the effectiveness of the TAT model.
---
paper_title: A Convex Feature Learning Formulation for Latent Task Structure Discovery
paper_content:
This paper considers the multi-task learning problem and in the setting where some relevant features could be shared across few related tasks. Most of the existing methods assume the extent to which the given tasks are related or share a common feature space to be known apriori. In real-world applications however, it is desirable to automatically discover the groups of related tasks that share a feature space. In this paper we aim at searching the exponentially large space of all possible groups of tasks that may share a feature space. The main contribution is a convex formulation that employs a graph-based regularizer and simultaneously discovers few groups of related tasks, having close-by task parameters, as well as the feature space shared within each group. The regularizer encodes an important structure among the groups of tasks leading to an efficient algorithm for solving it: if there is no feature space under which a group of tasks has close-by task parameters, then there does not exist such a feature space for any of its supersets. An efficient active set algorithm that exploits this simplification and performs a clever search in the exponentially large space is presented. The algorithm is guaranteed to solve the proposed formulation (within some precision) in a time polynomial in the number of groups of related tasks discovered. Empirical results on benchmark datasets show that the proposed formulation achieves good generalization and outperforms state-of-the-art multi-task learning algorithms in some cases.
---
paper_title: Hierarchical Regularization Cascade for Joint Learning
paper_content:
As the sheer volume of available benchmark datasets increases, the problem of joint learning of classifiers and knowledge-transfer between classifiers, becomes more and more relevant. We present a hierarchical approach which exploits information sharing among different classification tasks, in multitask and multi-class settings. It engages a top-down iterative method, which begins by posing an optimization problem with an incentive for large scale sharing among all classes. This incentive to share is gradually decreased, until there is no sharing and all tasks are considered separately. The method therefore exploits different levels of sharing within a given group of related tasks, without having to make hard decisions about the grouping of tasks. In order to deal with large scale problems, with many tasks and many classes, we extend our batch approach to an online setting and provide regret analysis of the algorithm. We tested our approach extensively on synthetic and real datasets, showing significant improvement over baseline and state-of-the-art methods.
---
paper_title: Learning multi-level task groups in multi-task learning
paper_content:
In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learning the task groups at only a single level, which may not be sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propose a Multi-Level Task Grouping (MeTaG) method to learn the multi-level grouping structure instead of only one level among tasks. Specifically, by assuming the number of levels to be H, we decompose the parameter matrix into a sum of H component matrices, each of which is regularized with an l2 norm on the pairwise difference among parameters of all the tasks to construct level-specific task groups. For optimization, we employ the smoothing proximal gradient method to efficiently solve the objective function of the MeTaG model. Moreover, we provide theoretical analysis to show that under certain conditions the MeTaG model can recover the true parameter matrix and the true task groups in each level with high probability. We evaluate our approach on both synthetic and real-world datasets, showing competitive performance over state-of-the-art MTL methods.
---
paper_title: Probabilistic Multi-Task Feature Selection
paper_content:
Recently, some variants of the l1 norm, particularly matrix norms such as the l1,2 and l1,∞ norms, have been widely used in multi-task learning, compressed sensing and other related areas to enforce sparsity via joint regularization. In this paper, we unify the l1,2 and l1,∞ norms by considering a family of l1,q norms for 1 < q < ∞ and study the problem of determining the most appropriate sparsity enforcing norm to use in the context of multi-task feature selection. Using the generalized normal distribution, we provide a probabilistic interpretation of the general multi-task feature selection problem using the l1,q norm. Based on this probabilistic interpretation, we develop a probabilistic model using the noninformative Jeffreys prior. We also extend the model to learn and exploit more general types of pairwise relationships between tasks. For both versions of the model, we devise expectation-maximization (EM) algorithms to learn all model parameters, including q, automatically. Experiments have been conducted on two cancer classification applications using microarray gene expression data.
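Written out (our notation), the family of norms discussed above penalizes each feature's row of the task weight matrix by its lq norm:

$$
\|W\|_{1,q} = \sum_{j=1}^{d} \Big(\sum_{t=1}^{m} |w_{jt}|^{q}\Big)^{1/q}, \qquad 1 < q < \infty,
$$

recovering the l1,2 norm at q = 2 and approaching the l1,inf norm as q grows; the point of the paper is to treat q itself as a quantity to be learned rather than fixed in advance.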
---
paper_title: Multi-domain Dialog State Tracking using Recurrent Neural Networks
paper_content:
Dialog state tracking is a key component of many modern dialog systems, most of which are designed with a single, well-defined domain in mind. This paper shows that dialog data drawn from different dialog domains can be used to train a general belief tracking model which can operate across all of these domains, exhibiting superior performance to each of the domain-specific models. We propose a training procedure which uses out-of-domain data to initialise belief tracking models for entirely new domains. This procedure leads to improvements in belief tracking performance regardless of the amount of in-domain data available for training the model.
---
paper_title: Multi-task deep visual-semantic embedding for video thumbnail selection
paper_content:
Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.
---
paper_title: Cross-Stitch Networks for Multi-task Learning
paper_content:
Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.
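A minimal sketch of a cross-stitch unit (NumPy pseudocode for illustration only; the actual paper learns the mixing weights end-to-end inside ConvNets, typically with finer granularity than a single 2x2 matrix):

```python
import numpy as np

def cross_stitch(x_a, x_b, alpha):
    """Linearly recombine same-shaped activations from two task-specific networks.

    x_a, x_b : activation tensors of identical shape, one from each task's network.
    alpha    : 2x2 matrix of mixing weights (learned end-to-end in the real model).
    Returns the recombined activations fed into the next layer of each network.
    """
    out_a = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    out_b = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return out_a, out_b

# Toy usage: feature maps from two task branches, near-identity initialisation.
x_a = np.random.rand(1, 8, 8, 16)
x_b = np.random.rand(1, 8, 8, 16)
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
a_next, b_next = cross_stitch(x_a, x_b, alpha)
```

With alpha near the identity the two networks stay almost independent; larger off-diagonal weights let the unit learn how much each task's representation should borrow from the other.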
---
paper_title: Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
paper_content:
A central theme in learning from image data is to develop appropriate image representations for the specific task at hand. Traditional methods used handcrafted local features combined with high-level image representations to generate image-level representations. Thus, a practical challenge is to determine what features are appropriate for specific tasks. For example, in the study of gene expression patterns in Drosophila melanogaster, texture features based on wavelets were particularly effective for determining the developmental stages from in situ hybridization (ISH) images. Such image representation is however not suitable for controlled vocabulary (CV) term annotation because each CV term is often associated with only a part of an image. Here, we developed problem-independent feature extraction methods to generate hierarchical representations for ISH images. Our approach is based on the deep convolutional neural networks (CNNs) that can act on image pixels directly. To make the extracted features generic, the models were trained using a natural image set with millions of labeled examples. These models were transferred to the ISH image domain and used directly as feature extractors to compute image representations. Furthermore, we employed multi-task learning method to fine-tune the pre-trained models with labeled ISH images, and also extracted features from the fine-tuned models. Experimental results showed that feature representations computed by deep models based on transfer and multi-task learning significantly outperformed other methods for annotating gene expression patterns at different stage ranges. We also demonstrated that the intermediate layers of deep models produced the best gene expression pattern representations.
---
paper_title: Facial Landmark Detection by Deep Multi-task Learning
paper_content:
Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].
---
paper_title: Heterogeneous Multi-task Learning for Human Pose Estimation with Deep Convolutional Neural Network
paper_content:
We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images with a deep convolutional neural network. In particular, we simultaneously learn a pose-joint regressor and a sliding-window body-part detector in a deep network architecture. We show that including the body-part detection task helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-the-art results on several data sets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts.
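A compact sketch of the shared-trunk, two-head pattern this entry describes, with made-up layer sizes: a joint-coordinate regressor and a body-part detector share one convolutional feature extractor, and their losses are combined with an arbitrary weight:

    import torch
    import torch.nn as nn

    class PoseMultiTaskNet(nn.Module):
        def __init__(self, n_joints=14, n_parts=15):
            super().__init__()
            self.trunk = nn.Sequential(                     # shared feature extractor
                nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.regressor = nn.Linear(64, 2 * n_joints)    # task 1: (x, y) per joint
            self.detector = nn.Linear(64, n_parts)          # task 2: body-part presence

        def forward(self, x):
            h = self.trunk(x)
            return self.regressor(h), self.detector(h)

    net = PoseMultiTaskNet()
    imgs = torch.randn(8, 3, 128, 128)
    joints_gt = torch.randn(8, 28)
    parts_gt = torch.randint(0, 2, (8, 15)).float()
    pred_joints, part_logits = net(imgs)
    loss = nn.functional.mse_loss(pred_joints, joints_gt) \
           + 0.5 * nn.functional.binary_cross_entropy_with_logits(part_logits, parts_gt)
    loss.backward()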
---
paper_title: Handling sparsity via the horseshoe
paper_content:
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness at handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
---
paper_title: Trace Norm Regularization: Reformulations, Algorithms, and Multi-task Learning
paper_content:
We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of this problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches and its practical effectiveness is illustrated on simulations and an application to gene expression pattern analysis.
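For reference, a minimal proximal-gradient loop for trace-norm regularized multi-task least squares, where the proximal step is singular value thresholding (synthetic data; the step size and regularization strength are arbitrary choices, not values from the paper):

    import numpy as np

    def svt(W, tau):
        """Proximal operator of tau * ||W||_* (singular value thresholding)."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def trace_norm_mtl(Xs, ys, lam=0.1, step=1e-3, iters=500):
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))                       # column t holds the weights of task t
        for _ in range(iters):
            grad = np.zeros_like(W)
            for t in range(T):
                grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
            W = svt(W - step * grad, step * lam)   # gradient step, then prox step
        return W

    rng = np.random.default_rng(0)
    w_shared = rng.standard_normal(20)
    Xs = [rng.standard_normal((50, 20)) for _ in range(5)]
    ys = [X @ (w_shared + 0.1 * rng.standard_normal(20)) for X in Xs]
    W = trace_norm_mtl(Xs, ys)
    print("numerical rank of learned weight matrix:", np.linalg.matrix_rank(W, tol=1e-2))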
---
paper_title: Regularized multi-task learning
paper_content:
Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi-task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single-task learning. Our approach allows us to model the relation between tasks in terms of a novel kernel function that uses a task-coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using SVMs.
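This line of work is often summarized by splitting each task's weight vector into a shared part and a task-specific offset, w_t = w0 + v_t, and penalizing both. A small gradient-descent sketch of that decomposition on synthetic data (hyperparameters are arbitrary):

    import numpy as np

    def shared_plus_specific_mtl(Xs, ys, lam0=0.1, lam1=1.0, step=1e-3, iters=1000):
        """Gradient descent on
        sum_t 0.5*||X_t (w0 + v_t) - y_t||^2 + 0.5*lam0*||w0||^2 + 0.5*lam1*sum_t ||v_t||^2."""
        d, T = Xs[0].shape[1], len(Xs)
        w0 = np.zeros(d)                    # part shared by all tasks
        V = np.zeros((d, T))                # column t is the offset of task t
        for _ in range(iters):
            g0 = lam0 * w0
            for t in range(T):
                r = Xs[t] @ (w0 + V[:, t]) - ys[t]
                g0 += Xs[t].T @ r
                V[:, t] -= step * (Xs[t].T @ r + lam1 * V[:, t])
            w0 -= step * g0
        return w0, V

    rng = np.random.default_rng(1)
    true_shared = rng.standard_normal(10)
    Xs = [rng.standard_normal((40, 10)) for _ in range(4)]
    ys = [X @ (true_shared + 0.1 * rng.standard_normal(10)) for X in Xs]
    w0, V = shared_plus_specific_mtl(Xs, ys)
    print("shared-part recovery error:", np.linalg.norm(w0 - true_shared))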
---
paper_title: Learning Multiple Tasks with Kernel Methods
paper_content:
We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task.
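One commonly cited instance of such multi-task kernels scores a pair of (input, task) examples as K((x, s), (x', t)) = (c + [s = t]) * k(x, x'), so all tasks share information through the constant c. A small kernel ridge regression sketch under that assumed form (c, the RBF width, and the ridge term are arbitrary):

    import numpy as np

    def rbf(X1, X2, gamma=0.5):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def multitask_gram(X, tasks, c=0.5):
        """K((x_i, s_i), (x_j, s_j)) = (c + [s_i == s_j]) * k(x_i, x_j)."""
        same_task = (tasks[:, None] == tasks[None, :]).astype(float)
        return (c + same_task) * rbf(X, X)

    rng = np.random.default_rng(2)
    X = rng.standard_normal((60, 5))               # pooled inputs from all tasks
    tasks = rng.integers(0, 3, size=60)            # task index of each example
    y = np.sin(X[:, 0]) + 0.1 * tasks              # toy targets with a task-specific shift

    K = multitask_gram(X, tasks)
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)   # kernel ridge regression
    print("training fit RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))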
---
paper_title: Integrating low-rank and group-sparse structures for robust multi-task learning
paper_content:
Multi-task learning (MTL) aims at improving the generalization performance by utilizing the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not be the case in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure, and simultaneously identifies the outlier tasks using a group-sparse structure. The proposed RMTL algorithm is formulated as a non-smooth convex (unconstrained) optimization problem. We propose to adopt the accelerated proximal method (APM) for solving such an optimization problem. The key component in APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this key property, we establish a theoretical bound for characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm.
---
paper_title: Adaptive Multi-Task Lasso: with application to eQTL detection
paper_content:
To understand the relationship between genomic variations among populations and complex diseases, it is essential to detect eQTLs which are associated with phenotypic effects. However, detecting eQTLs remains a challenge due to complex underlying mechanisms and the very large number of genetic loci involved compared to the number of samples. Thus, to address the problem, it is desirable to take advantage of the structure of the data and prior information about genomic locations such as conservation scores and transcription factor binding sites. In this paper, we propose a novel regularized regression approach for detecting eQTLs which takes into account related traits simultaneously while incorporating many regulatory features. We first present a Bayesian network for a multi-task learning problem that includes priors on SNPs, making it possible to estimate the significance of each covariate adaptively. Then we find the maximum a posteriori (MAP) estimation of regression coefficients and estimate weights of covariates jointly. This optimization procedure is efficient since it can be achieved by using a projected gradient descent and a coordinate descent procedure iteratively. Experimental results on simulated and real yeast datasets confirm that our model outperforms previous methods for finding eQTLs.
---
paper_title: Multi-task Regression using Minimal Penalties
paper_content:
In this paper we study the kernel multiple ridge regression framework, which we refer to as multitask regression, using penalization techniques. The theoretical analysis of this problem shows that the key element appearing for an optimal calibration is the covariance matrix of the noise between the different tasks. We present a new algorithm to estimate this covariance matrix, based on the concept of minimal penalty, which was previously used in the single-task regression framework to estimate the variance of the noise. We show, in a non-asymptotic setting and under mild assumptions on the target function, that this estimator converges towards the covariance matrix. Then plugging this estimator into the corresponding ideal penalty leads to an oracle inequality. We illustrate the behavior of our algorithm on synthetic examples.
---
paper_title: Asymmetric multi-task learning based on task relatedness and loss
paper_content:
We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between the tasks based on task relatedness as well as the amount of individual task losses, which we refer to as Asymmetric Multi-task Learning (AMTL). To tackle this problem, we couple multiple tasks via a sparse, directed regularization graph, that enforces each task parameter to be reconstructed as a sparse combination of other tasks selected based on the task-wise loss. We present two different algorithms that jointly learn the task predictors as well as the regularization graph. The first algorithm solves for the original learning objective using alternative optimization, and the second algorithm solves an approximation of it using curriculum learning strategy, that learns one task at a time. We perform experiments on multiple datasets for classification and regression, on which we obtain significant improvements in performance over the single task learning and existing multitask learning models.
---
paper_title: A Convex Formulation for Learning Task Relationships in Multi-Task Learning
paper_content:
Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.
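A sketch of the alternating scheme this entry outlines: with the task covariance Omega fixed, take gradient steps on W under the penalty tr(W Omega^{-1} W^T); with W fixed, Omega has the closed-form update (W^T W)^{1/2} / tr((W^T W)^{1/2}). Synthetic data and a small ridge on Omega are added here purely for numerical stability:

    import numpy as np

    def mtrl(Xs, ys, lam=0.1, step=1e-3, outer=20, inner=200):
        """Alternating optimization for
        sum_t 0.5*||X_t w_t - y_t||^2 + 0.5*lam*tr(W Omega^{-1} W^T), tr(Omega)=1."""
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))
        Omega = np.eye(T) / T
        for _ in range(outer):
            Omega_inv = np.linalg.inv(Omega + 1e-3 * np.eye(T))   # ridge keeps the inverse bounded
            for _ in range(inner):                                 # gradient steps on W, Omega fixed
                grad = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)])
                grad += lam * W @ Omega_inv
                W -= step * grad
            M = W.T @ W                                            # closed-form Omega given W:
            U, s, _ = np.linalg.svd(M)                             # Omega = (W^T W)^{1/2} / tr(...)
            root = U @ np.diag(np.sqrt(s)) @ U.T
            Omega = root / max(np.trace(root), 1e-12)
        return W, Omega

    rng = np.random.default_rng(3)
    Xs = [rng.standard_normal((40, 15)) for _ in range(4)]
    w_shared = rng.standard_normal(15)
    ys = [X @ (w_shared + 0.2 * rng.standard_normal(15)) for X in Xs]
    W, Omega = mtrl(Xs, ys)
    print("learned task covariance:\n", np.round(Omega, 3))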
---
paper_title: Clustered Multi-Task Learning: A Convex Formulation
paper_content:
In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.
---
paper_title: Infinite Latent SVM for Classification and Multi-task Learning
paper_content:
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.
---
paper_title: Large margin multi-task metric learning
paper_content:
Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (SVM) by Evgeniou et al. [15]. Although very elegant, multi-task SVM is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (LMNN) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task LMNN on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.
---
paper_title: A convex formulation for learning shared structures from multiple tasks
paper_content:
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.
---
paper_title: A Regularization Approach to Learning Task Relationships in Multitask Learning
paper_content:
Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.
---
paper_title: Flexible Clustered Multi-Task Learning by Learning Representative Tasks
paper_content:
Multi-task learning (MTL) methods have shown promising performance by learning multiple relevant tasks simultaneously, which exploits the sharing of useful information across relevant tasks. Among various MTL methods, clustered multi-task learning (CMTL) assumes that all tasks can be clustered into groups and attempts to learn the underlying cluster structure from the training data. In this paper, we present a new approach for CMTL, called flexible clustered multi-task (FCMTL), in which the cluster structure is learned by identifying representative tasks. The new approach allows an arbitrary task to be described by multiple representative tasks, effectively soft-assigning a task to multiple clusters with different weights. Unlike existing counterparts, the proposed approach is more flexible in that (a) it does not require clusters to be disjoint, (b) tasks within one particular cluster do not have to share information to the same extent, and (c) the number of clusters is automatically inferred from data. Computationally, the proposed approach is formulated as a row-sparsity pursuit problem. We validate the proposed FCMTL on both synthetic and real-world data sets, and empirical results demonstrate that it outperforms many existing MTL methods.
---
paper_title: A framework for learning predictive structures from multiple tasks and unlabeled data
paper_content:
One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.
---
paper_title: Conic Programming for Multitask Learning
paper_content:
When we have several related tasks, solving them simultaneously has been shown to be more effective than solving them individually. This approach is called multitask learning (MTL). In this paper, we propose a novel MTL algorithm. Our method controls the relatedness among the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines and show that the optimization problem can be cast as a second-order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated in ordinal regression, link prediction, and collaborative filtering, each of which can be formulated as a structured multitask problem.
---
paper_title: Joint covariate selection and joint subspace selection for multiple classification problems
paper_content:
We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2 norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.
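The block penalty described above, the sum over covariates of the l2 norm of each coefficient row across tasks, has a simple proximal operator: row-wise group soft-thresholding. A short proximal-gradient sketch under that assumption (synthetic, noiseless data, so the shared support is typically recovered):

    import numpy as np

    def row_group_soft_threshold(W, tau):
        """Proximal operator of tau * sum_j ||W[j, :]||_2 (joint shrinkage per covariate)."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        return W * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

    def joint_covariate_selection(Xs, ys, lam=2.0, step=5e-3, iters=2000):
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))
        for _ in range(iters):
            grad = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)])
            W = row_group_soft_threshold(W - step * grad, step * lam)
        return W

    rng = np.random.default_rng(4)
    relevant = np.arange(5)                        # the same 5 covariates matter in every task
    Xs = [rng.standard_normal((60, 30)) for _ in range(3)]
    ys = []
    for X in Xs:
        w = np.zeros(30)
        w[relevant] = rng.standard_normal(5)
        ys.append(X @ w)
    W = joint_covariate_selection(Xs, ys)
    print("covariates with nonzero rows:", np.where(np.linalg.norm(W, axis=1) > 1e-3)[0])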
---
paper_title: Learning Sparse Task Relations in Multi-Task Learning
paper_content:
In multi-task learning, when the number of tasks is large, pairwise task relations exhibit sparse patterns since usually a task cannot be helpful to all of the other tasks and moreover, sparse task relations can reduce the risk of overfitting compared with the dense ones. In this paper, we focus on learning sparse task relations. Based on a regularization framework which can learn task relations among multiple tasks, we propose a SParse covAriance based mulTi-taSk (SPATS) model to learn a sparse covariance by using the l1 regularization. The resulting objective function of the SPATS method is convex, which allows us to devise an alternating method to solve it. Moreover, some theoretical properties of the proposed model are studied. Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed method.
---
paper_title: Multitask learning without label correspondences
paper_content:
We propose an algorithm to perform multitask learning where each task has potentially distinct label sets and label correspondences are not readily available. This is in contrast with existing methods which either assume that the label sets shared by different tasks are the same or that there exists a label mapping oracle. Our method directly maximizes the mutual information among the labels, and we show that the resulting objective function can be efficiently optimized using existing algorithms. Our proposed approach has a direct application for data integration with different label spaces, such as integrating Yahoo! and DMOZ web directories.
---
paper_title: Cross-Domain Multitask Learning with Latent Probit Models
paper_content:
Learning multiple tasks across heterogeneous domains is a challenging problem since the feature space may not be the same for different tasks. We assume the data in multiple tasks are generated from a latent common domain via sparse domain transforms and propose a latent probit model (LPM) to jointly learn the domain transforms, and a probit classifier shared in the common domain. To learn meaningful task relatedness and avoid over-fitting in classification, we introduce sparsity in the domain transforms matrices, as well as in the common classifier parameters. We derive theoretical bounds for the estimation error of the classifier parameters in terms of the sparsity of domain transform matrices. An expectation-maximization algorithm is derived for learning the LPM. The effectiveness of the approach is demonstrated on several real datasets.
---
paper_title: Multilinear Multitask Learning
paper_content:
Many real world datasets occur or can be arranged into multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to account for the preservation of this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods; one is an adapted convex relaxation method used in the context of tensor completion. The second method is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall our second approach yields the best performance in all datasets.
---
paper_title: Geometry preserving multi-task metric learning
paper_content:
In this paper, we consider the multi-task metric learning problem, i.e., the problem of learning multiple metrics from several correlated tasks simultaneously. Despite the importance, there are only a limited number of approaches in this field. While the existing methods often straightforwardly extend existing vector-based methods, we propose to couple multiple related metric learning tasks with the von Neumann divergence. On one hand, the novel regularized approach extends previous methods from the vector regularization to a general matrix regularization framework; on the other hand and more importantly, by exploiting von Neumann divergence as the regularization, the new multi-task metric learning method has the capability to well preserve the data geometry. This leads to more appropriate propagation of side-information among tasks and provides potential for further improving the performance. We propose the concept of geometry preserving probability and show that our framework encourages a higher geometry preserving probability in theory. In addition, our formulation proves to be jointly convex and the global optimal solution can be guaranteed. We have conducted extensive experiments on six data sets (across very different disciplines), and the results verify that our proposed approach can consistently outperform almost all the current methods.
---
paper_title: Multi-Task Learning in Heterogeneous Feature Spaces
paper_content:
Multi-task learning aims at improving the generalization performance of a learning task with the help of some other related tasks. Although many multi-task learning methods have been proposed, they are all based on the assumption that all tasks share the same data representation. This assumption is too restrictive for general applications. In this paper, we propose a multi-task extension of linear discriminant analysis (LDA), called multi-task discriminant analysis (MTDA), which can deal with learning tasks with different data representations. For each task, MTDA learns a separate transformation which consists of two parts, one specific to the task and one common to all tasks. A by-product of MTDA is that it can alleviate the labeled data deficiency problem of LDA. Moreover, unlike many existing multi-task learning methods, MTDA can handle binary and multi-class problems for each task in a generic way. Experimental results on face recognition show that MTDA consistently outperforms related methods.
---
paper_title: Multitask learning meets tensor factorization: task imputation via convex optimization
paper_content:
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be e.g., (consumer, time). The weight vectors can be collected into a tensor and the (multilinear-)rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that both of them are not optimal in the context of multitask learning in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm and analyze the excess risk of all the three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not a priori know which mode is low rank.
---
paper_title: Multi-Task Learning of Gaussian Graphical Models
paper_content:
We present multi-task structure learning for Gaussian graphical models. We discuss uniqueness and boundedness of the optimal solution of the maximization problem. A block coordinate descent method leads to a provably convergent algorithm that generates a sequence of positive definite solutions. Thus, we reduce the original problem into a sequence of strictly convex l∞ regularized quadratic minimization subproblems. We further show that this subproblem leads to the continuous quadratic knapsack problem, for which very efficient methods exist. Finally we show promising results in a dataset that captures brain function of cocaine addicted and control subjects under conditions of monetary reward.
---
paper_title: Learning a Kernel for Multi-Task Clustering
paper_content:
Multi-task learning has received increasing attention in the past decade. Many supervised multi-task learning methods have been proposed, while unsupervised multitask learning is still a rarely studied problem. In this paper, we propose to learn a kernel for multi-task clustering. Our goal is to learn a Reproducing Kernel Hilbert Space, in which the geometric structure of the data in each task is preserved, while the data distributions of any two tasks are as close as possible. This is formulated as a unified kernel learning framework, under which we study two types of kernel learning: nonparametric kernel learning and spectral kernel design. Both types of kernel learning can be solved by linear programming. Experiments on several cross-domain text data sets demonstrate that kernel k-means on the learned kernel can achieve better clustering results than traditional single-task clustering methods. It also outperforms the newly proposed multi-task clustering method.
---
paper_title: Sparse Multi-Task Reinforcement Learning
paper_content:
In multi-task reinforcement learning (MTRL), the objective is to simultaneously learn multiple tasks and exploit their similarity to improve the performance w.r.t. single-task learning. In this paper we investigate the case when all the tasks can be accurately represented in a linear approximation space using the same small subset of the original (large) set of features. This is equivalent to assuming that the weight vectors of the task value functions are jointly sparse, i.e., the set of their non-zero components is small and it is shared across tasks. Building on existing results in multi-task regression, we develop two multi-task extensions of the fitted Q-iteration algorithm. While the first algorithm assumes that the tasks are jointly sparse in the given representation, the second one learns a transformation of the features in the attempt of finding a more sparse representation. For both algorithms we provide a sample complexity analysis and numerical simulations.
---
paper_title: Smart Multitask Bregman Clustering and Multitask Kernel Clustering
paper_content:
Traditional clustering algorithms deal with a single clustering task on a single dataset. However, there are many related tasks in the real world, which motivates multitask clustering. Recently some multitask clustering algorithms have been proposed, and among them multitask Bregman clustering (MBC) is a very applicable method. MBC alternately updates clusters and learns relationships between clusters of different tasks, and the two phases boost each other. However, the boosting does not always have positive effects on improving the clustering performance; it may also cause negative effects. Another issue of MBC is that it cannot deal with nonlinearly separable data. In this article, we show that in MBC, the process of using cluster relationship to boost the cluster updating phase may cause negative effects, that is, cluster centroids may be skewed under some conditions. We propose a smart multitask Bregman clustering (S-MBC) algorithm which can identify the negative effects of the boosting and avoid the negative effects if they occur. We then propose a multitask kernel clustering (MKC) framework for nonlinearly separable data by using a similar framework like MBC in the kernel space. We also propose a specific optimization method, which is quite different from that of MBC, to implement the MKC framework. Since MKC can also cause negative effects like MBC, we further extend the framework of MKC to a smart multitask kernel clustering (S-MKC) framework in a similar way that S-MBC is extended from MBC. We conduct experiments on 10 real world multitask clustering datasets to evaluate the performance of S-MBC and S-MKC. The results on clustering accuracy show that: (1) compared with the original MBC algorithm, S-MBC and S-MKC perform much better; (2) compared with the convex discriminative multitask relationship clustering (DMTRC) algorithms DMTRC-L and DMTRC-R which also avoid negative transfer, S-MBC and S-MKC perform worse in the (ideal) case in which different tasks have the same cluster number and the empirical label marginal distribution in each task distributes evenly, but better or comparable in other (more general) cases. Moreover, S-MBC and S-MKC can work on the datasets in which different tasks have different numbers of clusters, violating the assumptions of DMTRC-L and DMTRC-R. The results on efficiency show that S-MBC and S-MKC consume more computational time than MBC and less computational time than DMTRC-L and DMTRC-R. Overall S-MBC and S-MKC are competitive compared with the state-of-the-art multitask clustering algorithms in overall terms of accuracy, efficiency and applicability.
---
paper_title: Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
paper_content:
The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods.
---
paper_title: Convex Discriminative Multitask Clustering
paper_content:
Multitask clustering tries to improve the clustering performance of multiple tasks simultaneously by taking their relationship into account. Most existing multitask clustering algorithms fall into the type of generative clustering, and none are formulated as convex optimization problems. In this paper, we propose two convex Discriminative Multitask Clustering (DMTC) objectives to address the problems. The first one aims to learn a shared feature representation, which can be seen as a technical combination of the convex multitask feature learning and the convex Multiclass Maximum Margin Clustering (M3C). The second one aims to learn the task relationship, which can be seen as a combination of the convex multitask relationship learning and M3C. The objectives of the two algorithms are solved in a uniform procedure by the efficient cutting-plane algorithm and further unified in the Bayesian framework. Experimental results on a toy problem and two benchmark data sets demonstrate the effectiveness of the proposed algorithms.
---
paper_title: Smart multi-task Bregman clustering and multi-task kernel clustering
paper_content:
Multitask Bregman Clustering (MBC) alternately updates clusters and learns the relationship between clusters of different tasks, and the two phases boost each other. However, the boosting does not always have a positive effect; it may also cause a negative effect. Another issue of MBC is that it cannot deal with nonlinearly separable data. In this paper, we show that MBC's process of using the cluster relationship to boost the cluster updating phase may cause a negative effect, i.e., cluster centroids may be skewed under some conditions. We propose a smart multi-task Bregman clustering (S-MBC) algorithm which identifies the negative effect of the boosting and avoids the negative effect if it occurs. We then extend the framework of S-MBC to a smart multi-task kernel clustering (S-MKC) framework to deal with nonlinearly separable data. We also propose a specific implementation of the framework which could be applied to any Mercer kernel. Experimental results confirm our analysis, and demonstrate the superiority of our proposed methods.
---
paper_title: Multitask bregman clustering
paper_content:
Traditional clustering methods deal with a single clustering task on a single data set. However, in some newly emerging applications, multiple similar clustering tasks are involved simultaneously. In this case, we not only desire a partition for each task, but also want to discover the relationship among clusters of different tasks. It's also expected that the learnt relationship among tasks can improve performance of each single task. In this paper, we propose a general framework for this problem and further suggest a specific approach. In our approach, we alternatively update clusters and learn relationship between clusters of different tasks, and the two phases boost each other. Our approach is based on the general Bregman divergence, hence it's suitable for a large family of assumptions on data distributions and divergences. Empirical results on several benchmark data sets validate the approach.
---
paper_title: Leveraging domain knowledge in multitask Bayesian network structure learning
paper_content:
Network structure learning algorithms have aided network discovery in fields such as bioinformatics, neuroscience, ecology and social science. However, challenges remain in learning informative networks for related sets of tasks because the search space of Bayesian network structures is characterized by large basins of approximately equivalent solutions. Multitask algorithms select a set of networks that are near each other in the search space, rather than a score-equivalent set of networks chosen from independent regions of the space. This selection preference allows a domain expert to see only differences supported by the data. However, the usefulness of these algorithms for scientific datasets is limited because existing algorithms naively assume that all pairs of tasks are equally related. We introduce a framework that relaxes this assumption by incorporating domain knowledge about task-relatedness into the learning objective. Using our framework, we introduce the first multitask Bayesian network algorithm that leverages domain knowledge about the relatedness of tasks. We use our algorithm to explore the effect of task-relatedness on network discovery and show that our algorithm learns networks that are closer to ground truth than naive algorithms and that our algorithm discovers patterns that are interesting.
---
paper_title: Inductive transfer for Bayesian network structure learning
paper_content:
We study the multi-task Bayesian Network structure learning problem: given data for multiple related problems, learn a Bayesian Network structure for each of them, sharing information among the problems to boost performance. We learn the structures for all the problems simultaneously using a score and search approach that encourages the learned Bayes Net structures to be similar. Encouraging similarity promotes information sharing and prioritizes learning structural features that explain the data from all problems over features that only seem relevant to a single one. This leads to a significant increase in the accuracy of the learned structures, especially when training data is scarce.
---
paper_title: Multi-Task Multi-View Clustering
paper_content:
Multi-task clustering and multi-view clustering have severally found wide applications and received much attention in recent years. Nevertheless, there are many clustering problems that involve both multi-task clustering and multi-view clustering, i.e., the tasks are closely related and each task can be analyzed from multiple views. In this paper, we introduce a multi-task multi-view clustering framework which integrates within-view-task clustering, multi-view relationship learning, and multi-task relationship learning. Under this framework, we propose two multi-task multi-view clustering algorithms, the bipartite graph based multi-task multi-view clustering algorithm, and the semi-nonnegative matrix tri-factorization based multi-task multi-view clustering algorithm. The former one can deal with the multi-task multi-view clustering of nonnegative data, the latter one is a general multi-task multi-view clustering method, i.e., it can deal with the data with negative feature values. Experimental results on publicly available data sets in web page mining and image mining show the superiority of the proposed multi-task multi-view clustering algorithms over either multi-task clustering algorithms or multi-view clustering algorithms for multi-task clustering of multi-view data.
---
paper_title: Multi-Task Active Learning for Linguistic Annotations
paper_content:
We extend the classical single-task active learning (AL) approach. In the multi-task active learning (MTAL) paradigm, we select examples for several annotation tasks rather than for a single one as usually done in the context of AL. We introduce two MTAL metaprotocols, alternating selection and rank combination, and propose a method to implement them in practice. We experiment with a two-task annotation scenario that includes named entity and syntactic parse tree annotations on three different corpora. MTAL outperforms random selection and a stronger baseline, one-sided example selection, in which one task is pursued using AL and the selected examples are provided also to the other task.
---
paper_title: Active Multi-task Learning via Bandits.
paper_content:
In multi-task learning, the multiple related tasks allow each one to benefit from the learning of the others, and labeling instances for one task can also affect the other tasks especially when the task has a small number of labeled data. Thus labeling effective instances across different learning tasks is important for improving the generalization error of all tasks. In this paper, we propose a new active multi-task learning paradigm, which selectively samples effective instances for multi-task learning. Inspired by the multi-armed bandits, which can balance the trade-off between the exploitation and exploration, we introduce a new active learning strategy and cast the selection procedure as a bandit framework. We consider both the risk of multi-task learner and the corresponding confidence bounds and our selection tries to balance this trade-off. Our proposed method is a sequential algorithm, which at each round maintains a sampling distribution on the pool of data, queries the label for an instance according to this distribution and updates the distribution based on the newly trained multi-task learner. We provide an implementation of our algorithm based on a popular multi-task learning algorithm that is trace-norm regularization method. Theoretical guarantees are developed by exploiting the Rademacher complexities. Comprehensive experiments show the effectiveness and efficiency of the proposed approach.
---
paper_title: Multi-Task Learning with Labeled and Unlabeled Tasks
paper_content:
In multi-task learning, a learner is given a collection of prediction tasks and needs to solve all of them. In contrast to previous work, which required that annotated training data is available for all tasks, we consider a new setting, in which for some tasks, potentially most of them, only unlabeled training data is provided. Consequently, to solve all tasks, information must be transferred between tasks with labels and tasks without labels. Focusing on an instance-based transfer method we analyze two variants of this setting: when the set of labeled tasks is fixed, and when it can be actively selected by the learner. We state and prove a generalization bound that covers both scenarios and derive from it an algorithm for making the choice of labeled tasks (in the active case) and for transferring information between the tasks in a principled way. We also illustrate the effectiveness of the algorithm by experiments on synthetic and real data.
---
paper_title: Self-adapted multi-task clustering
paper_content:
Multi-task clustering improves the clustering performance of each task by transferring knowledge across related tasks. Most existing multi-task clustering methods are based on the ideal assumption that the tasks are completely related. However, in many real applications, the tasks are usually partially related, and brute-force transfer may cause negative effect which degrades the clustering performance. In this paper, we propose a self-adapted multi-task clustering (SAMTC) method which can automatically identify and transfer reusable instances among the tasks, thus avoiding negative transfer. SAMTC begins with an initialization by performing single-task clustering on each task, then executes the following three steps: first, it finds the reusable instances by measuring related clusters with Jensen-Shannon divergence between each pair of tasks, and obtains a pair of possibly related subtasks; second, it estimates the relatedness between each pair of subtasks with kernel mean matching; third, it constructs the similarity matrix for each task by exploiting useful information from the other tasks through instance transfer, and adopts spectral clustering to get the final clustering result. Experimental results on several real data sets show the superiority of the proposed algorithm over traditional single-task clustering methods and existing multitask clustering methods.
---
paper_title: Semisupervised Multitask Learning
paper_content:
Context plays an important role when performing classification, and in this paper we examine context from two perspectives. First, the classification of items within a single task is placed within the context of distinct concurrent or previous classification tasks (multiple distinct data collections). This is referred to as multi-task learning (MTL), and is implemented here in a statistical manner, using a simplified form of the Dirichlet process. In addition, when performing many classification tasks one has simultaneous access to all unlabeled data that must be classified, and therefore there is an opportunity to place the classification of any one feature vector within the context of all unlabeled feature vectors; this is referred to as semi-supervised learning. In this paper we integrate MTL and semi-supervised learning into a single framework, thereby exploiting two forms of contextual information. Example results are presented on a "toy" example, to demonstrate the concept, and the algorithm is also applied to three real data sets.
---
paper_title: Multi-task reinforcement learning: a hierarchical Bayesian approach
paper_content:
We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for modelbased Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components. We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks.
---
paper_title: Semi-Supervised Multitask Learning
paper_content:
A semi-supervised multitask learning (MTL) framework is presented, in which M parameterized semi-supervised classifiers, each associated with one of M partially labeled data manifolds, are learned jointly under the constraint of a soft-sharing prior imposed over the parameters of the classifiers. The unlabeled data are utilized by basing classifier learning on neighborhoods, induced by a Markov random walk over a graph representation of each manifold. Experimental results on real data sets demonstrate that semi-supervised MTL yields significant improvements in generalization performance over either semi-supervised single-task learning (STL) or supervised MTL.
---
paper_title: Semi-Supervised Multi-Task Regression
paper_content:
Labeled data are needed for many machine learning applications but the amount available in some applications is scarce. Semi-supervised learning and multi-task learning are two of the approaches that have been proposed to alleviate this problem. In this paper, we seek to integrate these two approaches for regression applications. We first propose a new supervised multi-task regression method called SMTR, which is based on Gaussian processes (GP) with the assumption that the kernel parameters for all tasks share a common prior. We then incorporate unlabeled data into SMTR by changing the kernel function of the GP prior to a data-dependent kernel function, resulting in a semi-supervised extension of SMTR, called SSMTR. Moreover, we incorporate pairwise information into SSMTR to further boost the learning performance for applications in which such information is available. Experiments conducted on two commonly used data sets for multi-task regression demonstrate the effectiveness of our methods.
---
paper_title: Inductive multi-task learning with multiple view data
paper_content:
In many real-world applications, it is becoming common to have data extracted from multiple diverse sources, known as "multi-view" data. Multi-view learning (MVL) has been widely studied in many applications, but existing MVL methods learn a single task individually. In this paper, we study a new direction of multi-view learning where there are multiple related tasks with multi-view data (i.e. multi-view multi-task learning, or MVMT Learning). In our MVMT learning methods, we learn a linear mapping for each view in each task. In a single task, we use co-regularization to obtain functions that are in agreement with each other on the unlabeled samples and achieve low classification errors on the labeled samples simultaneously. Across different tasks, additional regularization functions are utilized to ensure the functions that we learn in each view are similar. We also developed two extensions of the MVMT learning algorithm. One extension handles missing views and the other handles non-uniformly related tasks. Experimental studies on three real-world data sets demonstrate that our MVMT methods significantly outperform the existing state-of-the-art methods.
---
paper_title: Bayesian Multi-Task Reinforcement Learning
paper_content:
We consider the problem of multi-task reinforcement learning where the learner is provided with a set of tasks, for which only a small number of samples can be generated for any given policy. As the number of samples may not be enough to learn an accurate evaluation of the policy, it would be necessary to identify classes of tasks with similar structure and to learn them jointly. We consider the case where the tasks share structure in their value functions, and model this by assuming that the value functions are all sampled from a common prior. We adopt the Gaussian process temporal-difference value function model and use a hierarchical Bayesian approach to model the distribution over the value functions. We study two cases, where all the value functions belong to the same class and where they belong to an undefined number of classes. For each case, we present a hierarchical Bayesian model, and derive inference algorithms for (i) joint learning of the value functions, and (ii) efficient transfer of the information gained in (i) to assist learning the value function of a newly observed task.
---
paper_title: Bayesian Online Multitask Learning of Gaussian Processes
paper_content:
Standard single-task kernel methods have recently been extended to the case of multitask learning in the context of regularization theory. There are experimental results, especially in biomedicine, showing the benefit of the multitask approach compared to the single-task one. However, a possible drawback is computational complexity. For instance, when regularization networks are used, complexity scales as the cube of the overall number of training data, which may be large when several tasks are involved. The aim of this paper is to derive an efficient computational scheme for an important class of multitask kernels. More precisely, a quadratic loss is assumed and each task consists of the sum of a common term and a task-specific one. Within a Bayesian setting, a recursive online algorithm is obtained, which updates both estimates and confidence intervals as new data become available. The algorithm is tested on two simulated problems and a real data set relative to xenobiotics administration in human patients.
---
paper_title: Online Multi-task Learning with Hard Constraints
paper_content:
We discuss multi-task online learning when a decision maker has to deal simultaneously with M tasks. The tasks are related, which is modeled by imposing that the M-tuple of actions taken by the decision maker needs to satisfy certain constraints. We give natural examples of such restrictions and then discuss a general class of tractable constraints, for which we introduce computationally efficient ways of selecting actions, essentially by reducing to an on-line shortest path problem. We briefly discuss "tracking" and "bandit" versions of the problem and extend the model in various ways, including non-additive global losses and uncountably infinite sets of tasks.
---
paper_title: Online learning of multiple tasks with a shared loss
paper_content:
We study the problem of learning multiple tasks in parallel within the online learning framework. On each online round, the algorithm receives an instance for each of the parallel tasks and responds by predicting the label of each instance. We consider the case where the predictions made on each round all contribute toward a common goal. The relationship between the various tasks is defined by a global loss function, which evaluates the overall quality of the multiple predictions made on each round. Specifically, each individual prediction is associated with its own loss value, and then these multiple loss values are combined into a single number using the global loss function. We focus on the case where the global loss function belongs to the family of absolute norms, and present several online learning algorithms for the induced problem. We prove worst-case relative loss bounds for all of our algorithms, and demonstrate the effectiveness of our approach on a large-scale multiclass-multilabel text categorization problem.
---
paper_title: Feature hashing for large scale multitask learning
paper_content:
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case --- multitask learning with hundreds of thousands of tasks.
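A minimal sketch of the hashing trick in a multitask setting: global features and (task, feature) pairs are hashed into one shared low-dimensional weight vector, so each task can deviate from the global model without a per-task dictionary. The hash function, dimensions, and the perceptron-style update are illustrative, and the random-sign component of the original scheme is omitted for brevity.
```python
import numpy as np
from hashlib import md5

DIM = 2 ** 18  # size of the shared hashed feature space (illustrative)

def hashed_index(key, dim=DIM):
    return int(md5(key.encode()).hexdigest(), 16) % dim

def featurize(features, task, dim=DIM):
    """Map raw string features into the shared hashed space, adding
    task-personalized copies of every feature."""
    x = np.zeros(dim)
    for f in features:
        x[hashed_index(f, dim)] += 1.0                 # shared feature
        x[hashed_index(task + "::" + f, dim)] += 1.0   # task-specific feature
    return x

w = np.zeros(DIM)

def perceptron_update(features, task, label):
    """label in {-1, +1}; update only on mistakes."""
    global w
    x = featurize(features, task)
    if label * (w @ x) <= 0:
        w += label * x
```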
---
paper_title: Linear Algorithms for Online Multitask Classification
paper_content:
---
paper_title: Multi-Task Learning in Heterogeneous Feature Spaces
paper_content:
Multi-task learning aims at improving the generalization performance of a learning task with the help of some other related tasks. Although many multi-task learning methods have been proposed, they are all based on the assumption that all tasks share the same data representation. This assumption is too restrictive for general applications. In this paper, we propose a multi-task extension of linear discriminant analysis (LDA), called multi-task discriminant analysis (MTDA), which can deal with learning tasks with different data representations. For each task, MTDA learns a separate transformation which consists of two parts, one specific to the task and one common to all tasks. A by-product of MTDA is that it can alleviate the labeled data deficiency problem of LDA. Moreover, unlike many existing multi-task learning methods, MTDA can handle binary and multi-class problems for each task in a generic way. Experimental results on face recognition show that MTDA consistently outperforms related methods.
---
paper_title: Parallel Multi-task Learning
paper_content:
In this paper, we develop parallel algorithms for a family of regularized multi-task methods that can model task relations under the regularization framework. Since those multi-task methods cannot be parallelized directly, we solve them with the FISTA algorithm, which in each iteration constructs a surrogate function of the original problem by exploiting the Lipschitz structure of the objective function around the solution from the previous iteration. Specifically, we investigate the dual form of the objective function in those methods by adopting the hinge, ε-insensitive, and square losses to deal with multi-task classification and regression problems, and then utilize the Lipschitz structure to construct the surrogate function for the dual forms. The surrogate functions constructed in the FISTA algorithm are found to be decomposable, leading to parallel designs for those multi-task methods. Experiments on several benchmark datasets show that the convergence of the proposed algorithms is as fast as that of SMO-style algorithms and that the parallel design can speed up the computation.
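For concreteness, the following is a minimal sketch of a FISTA-style iteration for one common regularized multi-task objective, a squared loss with an l2,1 penalty over the task columns; the proximal step is the row-wise group soft-threshold. This illustrates the general accelerated proximal scheme only, not the dual-form construction or the parallel decomposition used in the paper.
```python
import numpy as np

def prox_l21(W, thresh):
    """Row-wise group soft-thresholding: prox of thresh * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

def fista_mtl(X, Y, lam, n_iter=200):
    """min_W 0.5 * ||X W - Y||_F^2 + lam * ||W||_{2,1}
    (shared design matrix X, one task per column of Y)."""
    d, T = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    W = np.zeros((d, T))
    Z = W.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)
        W_new = prox_l21(Z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_new + ((t - 1.0) / t_new) * (W_new - W)
        W, t = W_new, t_new
    return W
```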
---
paper_title: Online Multitask Learning
paper_content:
We study the problem of online learning of multiple tasks in parallel. On each online round, the algorithm receives an instance and makes a prediction for each one of the parallel tasks. We consider the case where these tasks all contribute toward a common goal. We capture the relationship between the tasks by using a single global loss function to evaluate the quality of the multiple predictions made on each round. Specifically, each individual prediction is associated with its own individual loss, and then these loss values are combined using a global loss function. We present several families of online algorithms which can use any absolute norm as a global loss function. We prove worst-case relative loss bounds for all of our algorithms.
---
paper_title: Distributed Multi-Task Learning
paper_content:
We consider the problem of distributed multitask learning, where each machine learns a separate, but related, task. Specifically, each machine learns a linear predictor in high-dimensional space, where all tasks share the same small support. We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.
---
paper_title: Online Learning of Multiple Tasks and Their Relationships
paper_content:
We propose an Online MultiTask Learning (Omtl) framework which simultaneously learns the task weight vectors as well as the task relatedness adaptively from the data. Our work is in contrast with prior work on online multitask learning which assumes fixed task relatedness a priori. Furthermore, whereas prior work in such settings assumes only positively correlated tasks, our framework can capture negative correlations as well. Our proposed framework learns the task relationship matrix by framing the objective function as a Bregman divergence minimization problem for positive definite matrices. Subsequently, we exploit this adaptively learned task-relationship matrix to select the most informative samples in an online multitask active learning setting. Experimental results on a number of real-world datasets and comparisons with numerous baselines establish the efficacy of our proposed approach.
---
paper_title: A framework for learning predictive structures from multiple tasks and unlabeled data
paper_content:
One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods have been proposed, at the current stage we still do not have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically, we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning are proposed, and computational issues are investigated. Experiments are presented to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.
---
paper_title: Robust Visual Tracking via Structured Multi-Task Sparse Learning
paper_content:
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing l_{p,q} mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.
---
paper_title: Scalable Multitask Representation Learning for Scene Classification
paper_content:
The underlying idea of multitask learning is that learning tasks jointly is better than learning each task individually. In particular, if only a few training examples are available for each task, sharing a jointly trained representation improves classification performance. In this paper, we propose a novel multitask learning method that learns a low-dimensional representation jointly with the corresponding classifiers, which are then able to profit from the latent inter-class correlations. Our method scales with respect to the original feature dimension and can be used with high-dimensional image descriptors such as the Fisher Vector. Furthermore, it consistently outperforms the current state of the art on the SUN397 scene classification benchmark with varying amounts of training data.
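In generic form, jointly learning a shared low-dimensional representation and per-task classifiers can be written as the objective below; this is a hedged sketch of the common shared-subspace formulation and not the paper's exact model or regularizers.
```latex
\min_{U \in \mathbb{R}^{k \times d},\; v_1,\dots,v_T}\;
\sum_{t=1}^{T} \sum_{i=1}^{n_t} \ell\!\left(y_i^{(t)},\, v_t^{\top} U x_i^{(t)}\right)
\;+\; \lambda \sum_{t=1}^{T} \lVert v_t \rVert_2^2
\;+\; \gamma\, \lVert U \rVert_F^2
```
Here U is the shared projection to a k-dimensional space with k much smaller than the descriptor dimension d, and v_t is the classifier of task t operating in that shared subspace, so the latent inter-class correlations are captured through U.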
---
paper_title: Robust visual tracking via multi-task sparse learning
paper_content:
In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing l_{p,q} mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
---
paper_title: Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection
paper_content:
In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and no data or little data regarding the target AU. In order to design such a model, we propose a novel Multi-Task Learning and the associated Transfer Learning framework, in which we consider both relations across subjects and AUs. That is to say, we consider a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using one single reference AU, and then transferring these latent relations to other AUs. We show that we are able to effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.
---
paper_title: No Matter Where You Are: Flexible Graph-Guided Multi-task Learning for Multi-view Head Pose Classification under Target Motion
paper_content:
We propose a novel Multi-Task Learning framework (FEGA-MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. As the target (person) moves, distortions in facial appearance owing to camera perspective and scale severely impede performance of traditional head pose classification methods. FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions as well as partition-specific appearance variations for a given head pose to build region-specific classifiers. Guided by two graphs which a-priori model appearance similarity among (i) grid partitions based on camera geometry and (ii) head pose classes, the learner efficiently clusters appearance wise related grid partitions to derive the optimal partitioning. For pose classification, upon determining the target's position using a person tracker, the appropriate region specific classifier is invoked. Experiments confirm that FEGA-MTL achieves state-of-the-art classification with few training data.
---
paper_title: Multi-Task Learning with Low Rank Attribute Embedding for Person Re-Identification
paper_content:
We propose a novel Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) framework for person re-identification. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information to improve re-identification accuracy. Both low level features and semantic/data-driven attributes are utilized. Since attributes are generally correlated, we introduce a low rank attribute embedding into the MTL formulation to embed original binary attributes to a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered to better describe people. The learning objective function consists of a quadratic loss regarding class labels and an attribute embedding error, which is solved by an alternating optimization procedure. Experiments on three person re-identification datasets have demonstrated that MTL-LORAE outperforms existing approaches by a large margin and produces state-of-the-art results.
---
paper_title: Sparse multi-task regression and feature selection to identify brain imaging predictors for memory performance
paper_content:
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions, which makes regression analysis a suitable model to study whether neuroimaging measures can help predict memory performance and track the progression of AD. Existing memory performance prediction methods via regression, however, do not take into account either the interconnected structures within imaging data or those among memory scores, which inevitably restricts their predictive capabilities. To bridge this gap, we propose a novel Sparse Multi-tAsk Regression and feaTure selection (SMART) method to jointly analyze all the imaging and clinical data under a single regression framework and with shared underlying sparse representations. Two convex regularizations are combined and used in the model to enable sparsity as well as facilitate multi-task learning. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performances in all empirical test cases and a compact set of selected RAVLT-relevant MRI predictors that accord with prior studies.
---
paper_title: Multi-task deep visual-semantic embedding for video thumbnail selection
paper_content:
Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.
---
paper_title: Multi-task CNN Model for Attribute Prediction
paper_content:
This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.
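A minimal PyTorch-style sketch of the decomposition described above: the per-attribute classifier weights are factored as a latent task matrix times a combination matrix on top of a shared convolutional trunk. The architecture, layer sizes, and initialization are illustrative, not the paper's network.
```python
import torch
import torch.nn as nn

class LatentTaskAttributeNet(nn.Module):
    def __init__(self, n_attrs=40, n_latent=10, feat_dim=256):
        super().__init__()
        # shared convolutional trunk (illustrative)
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # latent task matrix L (feat_dim x n_latent) and combination matrix S (n_latent x n_attrs)
        self.L = nn.Parameter(torch.randn(feat_dim, n_latent) * 0.01)
        self.S = nn.Parameter(torch.randn(n_latent, n_attrs) * 0.01)

    def forward(self, x):
        f = self.trunk(x)           # shared representation
        return f @ self.L @ self.S  # one binary-attribute logit per column

# training would apply nn.BCEWithLogitsLoss() jointly over all attribute logits,
# so under-sampled attributes borrow statistics through the shared factor L.
```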
---
paper_title: Multi-task warped Gaussian process for personalized age estimation
paper_content:
Automatic age estimation from facial images has aroused research interests in recent years due to its promising potential for some computer vision applications. Among the methods proposed to date, personalized age estimation methods generally outperform global age estimation methods by learning a separate age estimator for each person in the training data set. However, since typical age databases only contain very limited training data for each person, training a separate age estimator using only training data for that person runs a high risk of overfitting the data and hence the prediction performance is limited. In this paper, we propose a novel approach to age estimation by formulating the problem as a multi-task learning problem. Based on a variant of the Gaussian process (GP) called warped Gaussian process (WGP), we propose a multi-task extension called multi-task warped Gaussian process (MTWGP). Age estimation is formulated as a multi-task regression problem in which each learning task refers to estimation of the age function for each person. While MTWGP models common features shared by different tasks (persons), it also allows task-specific (person-specific) features to be learned automatically. Moreover, unlike previous age estimation methods which need to specify the form of the regression functions or determine many parameters in the functions using inefficient methods such as cross validation, the form of the regression functions in MTWGP is implicitly defined by the kernel function and all its model parameters can be learned from data automatically. We have conducted experiments on two publicly available age databases, FG-NET and MORPH. The experimental results are very promising in showing that MTWGP compares favorably with state-of-the-art age estimation methods.
---
paper_title: Multilinear Multitask Learning
paper_content:
Many real world datasets occur or can be arranged into multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to account for the preservation of this information. We propose the use of multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods; one is an adapted convex relaxation method used in the context of tensor completion. The second method is based on the Tucker decomposition and on alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall our second approach yields the best performance in all datasets.
---
paper_title: A Dirty Model for Multi-task Learning
paper_content:
We consider multi-task learning in the setting of multiple linear regression, where some relevant features could be shared across the tasks. Recent research has studied the use of l1/lq norm block-regularizations with q > 1 for such block-sparse structured problems, establishing strong guarantees on recovery even under high-dimensional scaling where the number of features scales with the number of observations. However, these papers also caution that the performance of such block-regularized methods is very dependent on the extent to which the features are shared across tasks. Indeed they show [8] that if the extent of overlap is less than a threshold, or even if parameter values in the shared features are highly uneven, then block l1/lq regularization could actually perform worse than simple separate elementwise l1 regularization. Since these caveats depend on the unknown true parameters, we might not know when and which method to apply. Even otherwise, we are far away from a realistic multi-task setting: not only does the set of relevant features have to be exactly the same across tasks, but their values have to be as well. Here, we ask the question: can we leverage parameter overlap when it exists, but not pay a penalty when it does not? Indeed, this falls under a more general question of whether we can model such dirty data which may not fall into a single neat structural bracket (all block-sparse, or all low-rank, and so on). With the explosion of such dirty high-dimensional data in modern settings, it is vital to develop tools - dirty models - to perform biased statistical estimation tailored to such data. Here, we take a first step, focusing on developing a dirty model for the multiple regression problem. Our method uses a very simple idea: we estimate a superposition of two sets of parameters and regularize them differently. We show, both theoretically and empirically, that our method strictly and noticeably outperforms both the l1 and l1/lq methods, under high-dimensional scaling and over the entire range of possible overlaps (except at boundary cases, where we match the best method).
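A hedged restatement of the "dirty" decomposition in formula form, assuming a squared loss and the elementwise l1 plus block l1/l-infinity penalties; the exact losses and scaling constants in the paper may differ.
```latex
\min_{S,\,B}\;\sum_{t=1}^{T} \tfrac{1}{2}\,
\bigl\lVert y^{(t)} - X^{(t)}\bigl(s^{(t)} + b^{(t)}\bigr) \bigr\rVert_2^2
\;+\; \lambda_S \lVert S \rVert_{1,1}
\;+\; \lambda_B \lVert B \rVert_{1,\infty}
```
Here s^{(t)} and b^{(t)} are the columns of S and B for task t, the elementwise l1 norm on S captures task-private sparsity, and the block norm on B, the sum over features of the maximum across tasks, encourages a feature to be shared by all tasks or by none.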
---
paper_title: Rotating your face using multi-task deep neural network
paper_content:
Face recognition under viewpoint and illumination changes is a difficult problem, so many researchers have tried to solve this problem by producing pose- and illumination-invariant features. Zhu et al. [26] transformed images of arbitrary pose and illumination into frontal-view images to use as the invariant feature. In this scheme, preserving identity while rotating the pose of an image is a crucial issue. This paper proposes a new deep architecture based on a novel type of multitask learning, which achieves superior performance in rotating a face image of arbitrary pose and illumination to a target pose while preserving identity. The target pose can be controlled by the user's intention. This novel type of multi-task model significantly improves identity preservation over the single-task model. By using all the synthesized controlled pose images, called Controlled Pose Image (CPI), for the pose-illumination-invariant feature and voting among the multiple face recognition results, we clearly outperform the state-of-the-art algorithms by more than 4∼6% on the MultiPIE dataset.
---
paper_title: Hierarchical kernel stick-breaking process for multi-task image analysis
paper_content:
The kernel stick-breaking process (KSBP) is employed to segment general imagery, imposing the condition that patches (small blocks of pixels) that are spatially proximate are more likely to be associated with the same cluster (segment). The number of clusters is not set a priori and is inferred from the hierarchical Bayesian model. Further, KSBP is integrated with a shared Dirichlet process prior to simultaneously model multiple images, inferring their inter-relationships. This latter application may be useful for sorting and learning relationships between multiple images. The Bayesian inference algorithm is based on a hybrid of variational Bayesian analysis and local sampling. In addition to providing details on the model and associated inference framework, example results are presented for several image-analysis problems.
---
paper_title: Boosted multi-task learning for face verification with applications to web image and video search
paper_content:
Face verification has many potential applications including filtering and ranking image/video search results on celebrities. Since these images/videos are taken under uncontrolled environments, the problem is very challenging due to dramatic lighting and pose variations, low resolutions, compression artifacts, etc. In addition, the available number of training images for each celebrity may be limited, hence learning individual classifiers for each person may cause overfitting. In this paper, we propose two ideas to meet the above challenges. First, we propose to use individual bins, instead of whole histograms, of Local Binary Patterns (LBP) as features for learning, which yields significant performance improvements and computation reduction in our experiments. Second, we present a novel Multi-Task Learning (MTL) framework, called Boosted MTL, for face verification with limited training data. It jointly learns classifiers for multiple people by sharing a few boosting classifiers in order to avoid overfitting. The effectiveness of Boosted MTL and LBP bin features is verified with a large number of celebrity images/videos from the web.
---
paper_title: Saliency Detection by Multitask Sparsity Pursuit
paper_content:
This paper addresses the problem of detecting salient areas within natural images. We shall mainly study the problem under an unsupervised setting, i.e., saliency detection without learning from labeled images. A solution of multitask sparsity pursuit is proposed to integrate multiple types of features for detecting saliency collaboratively. Given an image described by multiple features, its saliency map is inferred by seeking the consistently sparse elements from the joint decompositions of multiple-feature matrices into pairs of low-rank and sparse matrices. The inference process is formulated as a constrained nuclear-norm and l2,1-norm minimization problem, which is convex and can be solved efficiently with an augmented Lagrange multiplier method. Compared with previous methods, which usually make use of multiple features by combining the saliency maps obtained from individual features, the proposed method seamlessly integrates multiple features to jointly produce the saliency map in a single inference step and thus produces more accurate and reliable results. In addition to the unsupervised setting, the proposed method can also be generalized to incorporate the top-down priors obtained from a supervised environment. Extensive experiments well validate its superiority over other state-of-the-art methods.
---
paper_title: Facial Landmark Detection by Deep Multi-task Learning
paper_content:
Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on a cascaded deep model [21].
---
paper_title: Multi-task Sparse Learning with Beta Process Prior for Action Recognition
paper_content:
In this paper, we formulate human action recognition as a novel Multi-Task Sparse Learning (MTSL) framework which aims to construct a test sample with multiple features from as few bases as possible. Learning the sparse representation under each feature modality is considered as a single task in MTSL. Since the tasks are generated from multiple features associated with the same visual input, they are not independent but inter-related. We introduce a Beta process (BP) prior to the hierarchical MTSL model, which efficiently learns a compact dictionary and infers the sparse structure shared across all the tasks. The MTSL model enforces robustness in coefficient estimation compared with performing each task independently. Besides, the sparseness is achieved via the Beta process formulation rather than the computationally expensive L1-norm penalty. With non-informative gamma hyper-priors, the sparsity level is determined entirely by the data. Finally, the learning problem is solved by Gibbs sampling inference which estimates the full posterior on the model parameters. Experimental results on the KTH and UCF sports datasets demonstrate the effectiveness of the proposed MTSL approach for action recognition.
---
paper_title: Visual classification with multi-task joint sparse representation
paper_content:
We address the problem of computing a joint sparse representation of a visual signal across multiple kernel-based representations. Such a problem arises naturally in supervised visual recognition applications where one aims to reconstruct a test sample with multiple features from as few training subjects as possible. We cast the linear version of this problem into a multi-task joint covariate selection model [15], which can be very efficiently optimized via a kernelizable accelerated proximal gradient method. Furthermore, two kernel-view extensions of this method are provided to handle the situations where descriptors and similarity functions are in the form of kernel matrices. We then investigate two applications of our algorithm to feature combination: 1) fusing gray-level and LBP features for face recognition, and 2) combining multiple kernels for object categorization. Experimental results on challenging real-world datasets show that the feature combination capability of our proposed algorithm is competitive with the state-of-the-art multiple kernel learning methods.
---
paper_title: Multi-task Recurrent Neural Network for Immediacy Prediction
paper_content:
In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationships among immediacy predictions through multiple steps of refinement. The effectiveness of the proposed approach is demonstrated through extensive experiments on the large-scale dataset.
---
paper_title: Multitask learning meets tensor factorization: task imputation via convex optimization
paper_content:
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be, e.g., (consumer, time). The weight vectors can be collected into a tensor and the (multilinear) rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that neither of them is optimal in the context of multitask learning, in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all the three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.
---
paper_title: Tracking via Robust Multi-task Multi-view Joint Sparse Representation
paper_content:
Combining multiple observation views has proven beneficial for tracking. In this paper, we cast tracking as a novel multi-task multi-view sparse learning problem and exploit the cues from multiple views including various types of visual features, such as intensity, color, and edge, where each feature observation can be sparsely represented by a linear combination of atoms from an adaptive feature dictionary. The proposed method is integrated in a particle filter framework where every view in each particle is regarded as an individual task. We jointly consider the underlying relationship between tasks across different views and different particles, and tackle it in a unified robust multi-task formulation. In addition, to capture the frequently emerging outlier tasks, we decompose the representation matrix to two collaborative components which enable a more robust and accurate approximation. We show that the proposed formulation can be efficiently solved using the Accelerated Proximal Gradient method with a small number of closed-form updates. The presented tracker is implemented using four types of features and is tested on numerous benchmark video sequences. Both the qualitative and quantitative results demonstrate the superior performance of the proposed approach compared to several state-of-the-art trackers.
---
paper_title: Heterogeneous Multi-task Learning for Human Pose Estimation with Deep Convolutional Neural Network
paper_content:
We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images with a deep convolutional neural network. In particular, we simultaneously learn a pose-joint regressor and a sliding-window body-part detector in a deep network architecture. We show that including the body-part detection task helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-the-art results on several data sets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts.
---
paper_title: Regularized multi-task learning
paper_content:
Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi-task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single-task learning. Our approach allows us to model the relation between tasks in terms of a novel kernel function that uses a task-coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using SVMs.
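A minimal sketch of the "shared plus task-specific" decomposition that the task-coupling kernel encodes, implemented via explicit feature augmentation so that a single regularized linear model learns w_t = w_0 + v_t. Ridge regression from scikit-learn stands in for the SVM of the paper, and the constant mu plays the role of the task-coupling parameter; everything here is illustrative.
```python
import numpy as np
from sklearn.linear_model import Ridge

def augment(X, task_ids, n_tasks, mu=1.0):
    """Phi(x, t) = [ sqrt(mu) * x | 0 ... x in block t ... 0 ].
    With a single penalty ||theta||^2 on the augmented weights, this corresponds to
    ||w_0||^2 / mu + sum_t ||v_t||^2 with w_t = w_0 + v_t, so mu controls task coupling."""
    n, d = X.shape
    Phi = np.zeros((n, d * (n_tasks + 1)))
    Phi[:, :d] = np.sqrt(mu) * X                  # shared block
    for i, t in enumerate(task_ids):
        Phi[i, d * (1 + t): d * (2 + t)] = X[i]   # task-specific block
    return Phi

# usage sketch: task_ids is an integer array (0..n_tasks-1) aligned with the rows of X
# model = Ridge(alpha=1.0).fit(augment(X, task_ids, n_tasks), y)
```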
---
paper_title: Multi-population GWA mapping via multi-task regularized regression
paper_content:
Motivation: Population heterogeneity through admixing of different founder populations can produce spurious associations in genome-wide association studies that are linked to the population structure rather than the phenotype. Since samples from the same population generally co-evolve, different populations may or may not share the same genetic underpinnings for the seemingly common phenotype. Our goal is to develop a unified framework for detecting causal genetic markers through a joint association analysis of multiple populations. Results: Based on a multi-task regression principle, we present a multi-population group lasso algorithm using L1/L2-regularized regression for joint association analysis of multiple populations that are stratified either via population survey or computational estimation. Our algorithm combines information from genetic markers across populations to identify causal markers. It also implicitly accounts for correlations between the genetic markers, thus enabling better control over false positive rates. Joint analysis across populations enables the detection of weak associations common to all populations with greater power than in a separate analysis of each population. At the same time, the regression-based framework allows causal alleles that are unique to a subset of the populations to be correctly identified. We demonstrate the effectiveness of our method on HapMap-simulated and lactase persistence datasets, where we significantly outperform state-of-the-art methods, with greater power for detecting weak associations and reduced spurious associations. Availability: Software will be available at http://www.sailing.cs.cmu.edu/.
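A hedged formula-level sketch of the multi-population group lasso described above, assuming a squared loss; indexing and scaling constants are illustrative.
```latex
\min_{\beta^{(1)},\dots,\beta^{(K)}}\;
\sum_{k=1}^{K} \tfrac{1}{2}\,\bigl\lVert y^{(k)} - X^{(k)} \beta^{(k)} \bigr\rVert_2^2
\;+\; \lambda \sum_{j=1}^{p} \bigl\lVert \bigl(\beta_j^{(1)},\dots,\beta_j^{(K)}\bigr) \bigr\rVert_2
```
Here X^{(k)} and y^{(k)} are the genotypes and phenotype of population k, and the L1/L2 penalty groups each marker j across populations, so a marker tends to be selected jointly for all populations while its estimated effect size can still differ between them.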
---
paper_title: A multi-task learning formulation for predicting disease progression
paper_content:
Alzheimer's Disease (AD), the most common type of dementia, is a severe neurodegenerative disorder. Identifying markers that can track the progress of the disease has recently received increasing attentions in AD research. A definitive diagnosis of AD requires autopsy confirmation, thus many clinical/cognitive measures including Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale cognitive subscale (ADAS-Cog) have been designed to evaluate the cognitive status of the patients and used as important criteria for clinical diagnosis of probable AD. In this paper, we propose a multi-task learning formulation for predicting the disease progression measured by the cognitive scores and selecting markers predictive of the progression. Specifically, we formulate the prediction problem as a multi-task regression problem by considering the prediction at each time point as a task. We capture the intrinsic relatedness among different tasks by a temporal group Lasso regularizer. The regularizer consists of two components including an L2,1-norm penalty on the regression weight vectors, which ensures that a small subset of features will be selected for the regression models at all time points, and a temporal smoothness term which ensures a small deviation between two regression models at successive time points. We have performed extensive evaluations using various types of data at the baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database for predicting the future MMSE and ADAS-Cog scores. Our experimental studies demonstrate the effectiveness of the proposed algorithm for capturing the progression trend and the cross-sectional group differences of AD severity. Results also show that most markers selected by the proposed algorithm are consistent with findings from existing cross-sectional studies.
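A hedged formula-level sketch of the temporal group Lasso regularizer described above, with the prediction at each time point treated as one task (one column of W); the exact weighting of the two terms in the paper may differ.
```latex
\min_{W=[w_1,\dots,w_T]}\;
\tfrac{1}{2}\,\lVert X W - Y \rVert_F^2
\;+\; \lambda_1 \lVert W \rVert_{2,1}
\;+\; \lambda_2 \sum_{t=1}^{T-1} \lVert w_t - w_{t+1} \rVert_2^2
```
The l2,1 term selects a common small set of baseline markers across all time points, and the smoothness term keeps the regression models at successive time points close to each other.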
---
paper_title: High-Order Multi-Task Feature Learning to Identify Longitudinal Phenotypic Markers for Alzheimer's Disease Progression Prediction
paper_content:
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions. Regression analysis has been studied to relate neuroimaging measures to cognitive status. However, whether these measures have further predictive power to infer a trajectory of cognitive performance over time is still an under-explored but important topic in AD research. We propose a novel high-order multi-task learning model to address this issue. The proposed model explores the temporal correlations existing in imaging and cognitive data by structured sparsity-inducing norms. The sparsity of the model enables the selection of a small number of imaging measures while maintaining high prediction accuracy. The empirical studies, using the longitudinal imaging and cognitive data of the ADNI cohort, have yielded promising results.
---
paper_title: A Multi-Task Learning Formulation for Survival Analysis
paper_content:
Predicting the occurrence of a particular event of interest at future time points is the primary goal of survival analysis. The presence of incomplete observations due to time limitations or loss of data traces is known as censoring, which brings unique challenges in this domain and differentiates survival analysis from other standard regression methods. Popular survival analysis methods such as the Cox proportional hazards model and parametric survival regression suffer from strict assumptions and hypotheses that are not realistic in most real-world applications. To overcome the weaknesses of these two types of methods, in this paper, we reformulate the survival analysis problem as a multi-task learning problem and propose a new multi-task learning based formulation to predict the survival time by estimating the survival status at each time interval during the study duration. We propose an indicator matrix to enable the multi-task learning algorithm to handle censored instances and incorporate some of the important characteristics of survival problems, such as the non-negative non-increasing list structure, into our model through max-heap projection. We employ the L2,1-norm penalty, which enables the model to learn a shared representation across related tasks and hence select important features and alleviate over-fitting in high-dimensional feature spaces, thus reducing the prediction error of each task. To efficiently handle the two non-smooth constraints, in this paper, we propose an optimization method which employs the Alternating Direction Method of Multipliers (ADMM) algorithm to solve the proposed multi-task learning problem. We demonstrate the performance of the proposed method using real-world microarray gene expression high-dimensional benchmark datasets and show that our method outperforms state-of-the-art methods.
---
paper_title: Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study
paper_content:
Background: Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key mechanism in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with the mRNA targets and repress their translation into proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, although such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms for different datasets and experimental conditions can often provide new clues on the design of potent siRNAs. Results: An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences which encompasses several RNAi experiments recently conducted by different research groups. By using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy prediction is obtained. The 19 most popular biological features for siRNAs were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that the siRNA binding efficacies on different messenger RNAs (mRNAs) have different conditional distributions, so multi-task learning can be conducted by viewing tasks at an "mRNA" level rather than at the "experiment" level. Such distribution diversity derived from siRNAs bound to different mRNAs helps indicate that the properties of the target mRNA have important implications for the siRNA binding efficacy. Conclusions: The knowledge gained from our study provides useful insights into how to analyze various cross-platform RNAi data for uncovering their complex mechanisms.
---
paper_title: Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction
paper_content:
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually represented as linear regression models. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least square error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. Availability and implementation: The programs we used are either public or directly from the referred authors, such as the MALSAR package (http://www.public.asu.edu/~jye02/Software/MALSAR/). The Avocado data set has not been published yet and is available upon request. Contact: moc.mbi.su@ehd
---
paper_title: ProDiGe: Prioritization Of Disease Genes with multitask machine learning from positive and unlabeled examples
paper_content:
Elucidating the genetic basis of human diseases is a central goal of genetics and molecular biology. While traditional linkage analysis and modern high-throughput techniques often provide long lists of tens or hundreds of disease gene candidates, the identification of disease genes among the candidates remains time-consuming and expensive. Efficient computational methods are therefore needed to prioritize genes within the list of candidates, by exploiting the wealth of information available about the genes in various databases. Here we propose ProDiGe, a novel algorithm for Prioritization of Disease Genes. ProDiGe implements a novel machine learning strategy based on learning from positive and unlabeled examples, which allows to integrate various sources of information about the genes, to share information about known disease genes across diseases, and to perform genome-wide searches for new disease genes. Experiments on real data show that ProDiGe outperforms state-of-the-art methods for the prioritization of genes in human diseases.
---
paper_title: Leveraging sequence classification by taxonomy-based multitask learning
paper_content:
In this work we consider an inference task that biologists are very good at: deciphering biological processes by bringing together knowledge that has been obtained by experiments using various organisms, while respecting the differences and commonalities of these organisms. We look at this problem from a sequence analysis point of view, where we aim at solving the same classification task in different organisms. We investigate the challenge of combining information from several organisms, where we consider the relation between the organisms to be defined by a tree structure derived from their phylogeny. Multitask learning, a machine learning technique that recently received considerable attention, considers the problem of learning across tasks that are related to each other. We treat each organism as one task and present three novel multitask learning methods to handle situations in which the relationships among tasks can be described by a hierarchy. These algorithms are designed for large-scale applications and are therefore applicable to problems with a large number of training examples, which are frequently encountered in sequence analysis. We perform experimental analyses on synthetic data sets in order to illustrate the properties of our algorithms. Moreover, we consider a problem from genomic sequence analysis, namely splice site recognition, to illustrate the usefulness of our approach. We show that intelligently combining data from 15 eukaryotic organisms can indeed significantly improve the prediction performance compared to traditional learning approaches. On a broader perspective, we expect that algorithms like the ones presented in this work have the potential to complement and enrich the strategy of homology-based sequence analysis that is currently the quasi-standard in biological sequence analysis.
---
paper_title: Multitask Learning for Brain-Computer Interfaces
paper_content:
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.
---
paper_title: Inferring latent task structure for Multitask Learning by Multiple Kernel Learning
paper_content:
Background: The lack of sufficient training data is the limiting factor for many Machine Learning applications in Computational Biology. If data is available for several different but related problem domains, Multitask Learning algorithms can be used to learn a model based on all available information. In Bioinformatics, many problems can be cast into the Multitask Learning scenario by incorporating data from several organisms. However, combining information from several tasks requires careful consideration of the degree of similarity between tasks. Our proposed method simultaneously learns or refines the similarity between tasks along with the Multitask Learning classifier. This is done by formulating the Multitask Learning problem as Multiple Kernel Learning, using the recently published q-Norm MKL algorithm. Results: We demonstrate the performance of our method on two problems from Computational Biology. First, we show that our method is able to improve performance on a splice site dataset with given hierarchical task structure by refining the task relationships. Second, we consider an MHC-I dataset, for which we assume no knowledge about the degree of task relatedness. Here, we are able to learn the task similarities ab initio along with the Multitask classifiers. In both cases, we outperform baseline methods that we compare against. Conclusions: We present a novel approach to Multitask Learning that is capable of learning task similarity along with the classifiers. The framework is very general, as it allows prior knowledge about task relationships to be incorporated if available, but it is also able to identify task similarities in the absence of such prior information. Both variants show promising results in applications from Computational Biology.
---
paper_title: Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
paper_content:
A central theme in learning from image data is to develop appropriate image representations for the specific task at hand. Traditional methods used handcrafted local features combined with high-level image representations to generate image-level representations. Thus, a practical challenge is to determine what features are appropriate for specific tasks. For example, in the study of gene expression patterns in Drosophila melanogaster, texture features based on wavelets were particularly effective for determining the developmental stages from in situ hybridization (ISH) images. Such image representation is however not suitable for controlled vocabulary (CV) term annotation because each CV term is often associated with only a part of an image. Here, we developed problem-independent feature extraction methods to generate hierarchical representations for ISH images. Our approach is based on the deep convolutional neural networks (CNNs) that can act on image pixels directly. To make the extracted features generic, the models were trained using a natural image set with millions of labeled examples. These models were transferred to the ISH image domain and used directly as feature extractors to compute image representations. Furthermore, we employed multi-task learning method to fine-tune the pre-trained models with labeled ISH images, and also extracted features from the fine-tuned models. Experimental results showed that feature representations computed by deep models based on transfer and multi-task learning significantly outperformed other methods for annotating gene expression patterns at different stage ranges. We also demonstrated that the intermediate layers of deep models produced the best gene expression pattern representations.
---
paper_title: Sparse multitask regression for identifying common mechanism of response to therapeutic targets
paper_content:
Motivation: Molecular association of phenotypic responses is an important step in hypothesis generation and for initiating design of new experiments. Current practices for associating gene expression data with multidimensional phenotypic data are typically (i) performed one-to-one, i.e. each gene is examined independently with a phenotypic index and (ii) tested with one stress condition at a time, i.e. different perturbations are analyzed separately. As a result, the complex coordination among the genes responsible for a phenotypic profile is potentially lost. More importantly, univariate analysis can potentially hide new insights into common mechanism of response. Results: In this article, we propose a sparse, multitask regression model together with co-clustering analysis to explore the intrinsic grouping in associating the gene expression with phenotypic signatures. The global structure of association is captured by learning an intrinsic template that is shared among experimental conditions, with local perturbations introduced to integrate effects of therapeutic agents. We demonstrate the performance of our approach on both synthetic and experimental data. Synthetic data reveal that the multi-task regression has a superior reduction in the regression error when compared with traditional L1- and L2-regularized regression. On the other hand, experiments with cell cycle inhibitors over a panel of 14 breast cancer cell lines demonstrate the relevance of the computed molecular predictors with the cell cycle machinery, as well as the identification of hidden variables that are not captured by the baseline regression analysis. Accordingly, the system has identified CLCA2 as a hidden transcript and as a common mechanism of response for two therapeutic agents of CI-1040 and Iressa, which are currently in clinical use.
---
paper_title: Multitask Learning for Protein Subcellular Location Prediction
paper_content:
Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Thus, accurate prediction of subcellular localizations of proteins can help the prediction of protein functions and genome annotations, as well as the identification of drug targets. Machine learning methods such as Support Vector Machines (SVMs) have been used in the past for the problem of protein subcellular localization, but have been shown to suffer from a lack of annotated training data in each species under study. To overcome this data sparsity problem, we observe that because some of the organisms may be related to each other, there may be some commonalities across different organisms that can be discovered and used to help boost the data in each localization task. In this paper, we formulate protein subcellular localization problem as one of multitask learning across different organisms. We adapt and compare two specializations of the multitask learning algorithms on 20 different organisms. Our experimental results show that multitask learning performs much better than the traditional single-task methods. Among the different multitask learning methods, we found that the multitask kernels and supertype kernels under multitask learning that share parameters perform slightly better than multitask learning by sharing latent features. The most significant improvement in terms of localization accuracy is about 25 percent. We find that if the organisms are very different or are remotely related from a biological point of view, then jointly training the multiple models cannot lead to significant improvement. However, if they are closely related biologically, the multitask learning can do much better than individual learning.
---
paper_title: Joint covariate selection and joint subspace selection for multiple classification problems
paper_content:
We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of $\ell_2$ norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.
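Illustrative sketch (not the authors' path-following scheme): the block l1/l2 penalty described above can be minimized by a plain proximal-gradient loop whose proximal step is row-wise group soft-thresholding, so each covariate is kept or dropped jointly for all tasks. The shapes, step size and regularization weight below are assumptions made for the example.

import numpy as np

def group_soft_threshold(W, tau):
    # row-wise (covariate-wise) shrinkage: a whole row of W survives or is zeroed jointly
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def multitask_group_lasso(X, Y, lam=1.0, n_iter=500):
    # proximal gradient on 0.5 * ||Y - X W||_F^2 + lam * sum_j ||W[j, :]||_2
    d, T = X.shape[1], Y.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the smooth part
    W = np.zeros((d, T))
    for _ in range(n_iter):
        W = group_soft_threshold(W - step * (X.T @ (X @ W - Y)), step * lam)
    return W

# toy usage: 3 tasks that share the same 5 relevant covariates out of 50
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
W_true = np.zeros((50, 3))
W_true[:5] = rng.standard_normal((5, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((100, 3))
W_hat = multitask_group_lasso(X, Y, lam=5.0)
print("covariates kept for all tasks:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-8))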
---
paper_title: Sparse Bayesian multi-task learning for predicting cognitive outcomes from neuroimaging measures in Alzheimer's disease
paper_content:
Alzheimer's disease (AD) is the most common form of dementia that causes progressive impairment of memory and other cognitive functions. Multivariate regression models have been studied in AD for revealing relationships between neuroimaging measures and cognitive scores to understand how structural changes in brain can influence cognitive status. Existing regression methods, however, do not explicitly model dependence relation among multiple scores derived from a single cognitive test. It has been found that such dependence can deteriorate the performance of these methods. To overcome this limitation, we propose an efficient sparse Bayesian multi-task learning algorithm, which adaptively learns and exploits the dependence to achieve improved prediction performance. The proposed algorithm is applied to a real world neuroimaging study in AD to predict cognitive performance using MRI scans. The effectiveness of the proposed algorithm is demonstrated by its superior prediction performance over multiple state-of-the-art competing methods and accurate identification of compact sets of cognition-relevant imaging biomarkers that are consistent with prior knowledge.
---
paper_title: Multi-task Sequence to Sequence Learning
paper_content:
Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications focused on only one task and not much work explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the one-to-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image caption data can improve the translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. Furthermore, we have established a new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we reveal interesting properties of the two unsupervised learning objectives, autoencoder and skip-thought, in the MTL context: autoencoder helps less in terms of perplexities but more on BLEU scores compared to skip-thought.
---
paper_title: Collaborative Multi-domain Sentiment Classification
paper_content:
Sentiment classification is a hot research topic in both industrial and academic fields. The mainstream sentiment classification methods are based on machine learning and treat sentiment classification as a text classification problem. However, sentiment classification is widely recognized as a highly domain-dependent task. The sentiment classifier trained in one domain may not perform well in another domain. A simple solution to this problem is training a domain-specific sentiment classifier for each domain. However, it is difficult to label enough data for every domain since they are in a large quantity. In addition, this method omits the sentiment information in other domains. In this paper, we propose to train sentiment classifiers for multiple domains in a collaborative way based on multi-task learning. Specifically, we decompose the sentiment classifier in each domain into two components, a general one and a domain-specific one. The general sentiment classifier can capture the global sentiment information and is trained across various domains to obtain better generalization ability. The domain-specific sentiment classifier is trained using the labeled data in one domain to capture the domain-specific sentiment information. In addition, we explore two kinds of relations between domains, one based on textual content and the other one based on sentiment word distribution. We build a domain similarity graph using domain relations and encode it into our approach as regularization over the domain-specific sentiment classifiers. Besides, we incorporate the sentiment knowledge extracted from sentiment lexicons to help train the general sentiment classifier more accurately. Moreover, we introduce an accelerated optimization algorithm to train the sentiment classifiers efficiently. Experimental results on two benchmark sentiment datasets show that our method can outperform baseline methods significantly and consistently.
---
paper_title: A unified architecture for natural language processing: deep neural networks with multitask learning
paper_content:
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.
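The weight-sharing scheme sketched in this abstract is the classic hard-parameter-sharing pattern: one shared trunk feeding several task-specific output heads. Below is a minimal PyTorch-style sketch of that pattern; the layer sizes, task names ("pos", "chunk", "ner") and label counts are invented for illustration, and this is not the authors' actual convolutional architecture.

import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hard parameter sharing: one shared feature extractor, one linear head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({name: nn.Linear(hidden_dim, out_dim)
                                    for name, out_dim in task_out_dims.items()})

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

# illustrative training round: one mini-batch per task, gradients all flow into the shared trunk
model = SharedTrunkMTL(in_dim=300, hidden_dim=128,
                       task_out_dims={"pos": 45, "chunk": 23, "ner": 9})
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for task in ["pos", "chunk", "ner"]:
    x = torch.randn(32, 300)                                   # stand-in for word-window features
    y = torch.randint(0, model.heads[task].out_features, (32,))
    opt.zero_grad()
    loss_fn(model(x, task), y).backward()
    opt.step()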
---
paper_title: Multi-domain Dialog State Tracking using Recurrent Neural Networks
paper_content:
Dialog state tracking is a key component of many modern dialog systems, most of which are designed with a single, well-defined domain in mind. This paper shows that dialog data drawn from different dialog domains can be used to train a general belief tracking model which can operate across all of these domains, exhibiting superior performance to each of the domain-specific models. We propose a training procedure which uses out-of-domain data to initialise belief tracking models for entirely new domains. This procedure leads to improvements in belief tracking performance regardless of the amount of in-domain data available for training the model.
---
paper_title: Deep neural networks employing Multi-Task Learning and stacked bottleneck features for speech synthesis
paper_content:
Deep neural networks (DNNs) use a cascade of hidden representations to enable the learning of complex mappings from input to output features. They are able to learn the complex mapping from text-based linguistic features to speech acoustic features, and so perform text-to-speech synthesis. Recent results suggest that DNNs can produce more natural synthetic speech than conventional HMM-based statistical parametric systems. In this paper, we show that the hidden representation used within a DNN can be improved through the use of Multi-Task Learning, and that stacking multiple frames of hidden layer activations (stacked bottleneck features) also leads to improvements. Experimental results confirmed the effectiveness of the proposed methods, and in listening tests we find that stacked bottleneck features in particular offer a significant improvement over both a baseline DNN and a benchmark HMM system.
---
paper_title: Feature Constrained Multi-Task Learning Models for Spatiotemporal Event Forecasting
paper_content:
Spatial event forecasting from social media is potentially extremely useful but suffers from critical challenges, such as the dynamic patterns of features (keywords) and geographic heterogeneity (e.g., spatial correlations, imbalanced samples, and different populations in different locations). Most existing approaches (e.g., LASSO regression, dynamic query expansion, and burst detection) address some, but not all, of these challenges. Here, we propose a novel multi-task learning framework that aims to concurrently address all the challenges involved. Specifically, given a collection of locations (e.g., cities), forecasting models are built for all the locations simultaneously by extracting and utilizing appropriate shared information that effectively increases the sample size for each location, thus improving the forecasting performance. The new model combines both static features derived from a predefined vocabulary by domain experts and dynamic features generated from dynamic query expansion in a multi-task feature learning framework. Different strategies to balance homogeneity and diversity between static and dynamic terms are also investigated. And, efficient algorithms based on Iterative Group Hard Thresholding are developed to achieve efficient and effective model training and prediction. Extensive experimental evaluations on Twitter data from civil unrest and influenza outbreak datasets demonstrate the effectiveness and efficiency of our proposed approach.
---
paper_title: Fusion of multiple parameterisations for DNN-based sinusoidal speech synthesis with multi-task learning.
paper_content:
It has recently been shown that deep neural networks (DNN) can improve the quality of statistical parametric speech synthesis (SPSS) when using a source-filter vocoder. Our own previous work has furthermore shown that a dynamic sinusoidal model (DSM) is also highly suited to DNN-based SPSS, whereby sinusoids may either be used themselves as a “direct parameterisation” (DIR), or they may be encoded using an “intermediate spectral parameterisation” (INT). The approach in that work was effectively to replace a decision tree with a neural network. However, waveform parameterisation and synthesis steps that have been developed to suit HMMs may not fully exploit DNN capabilities. Here, in contrast, we investigate ways to combine INT and DIR at the levels of both DNN modelling and waveform generation. For DNN training, we propose to use multi-task learning to model cepstra (from INT) and log amplitudes (from DIR) as primary and secondary tasks. Our results show combining these improves modelling accuracy for both tasks. Next, during synthesis, instead of discarding parameters from the second task, a fusion method using harmonic amplitudes derived from both tasks is applied. Preference tests show the proposed method gives improved performance, and that this applies to synthesising both with and without global variance parameters.
---
paper_title: Regularized multi-task learning
paper_content:
Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs.
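One common instantiation of this formulation writes each task's weight vector as a shared part plus a task-specific offset, which is equivalent to training a single SVM with a task-coupled kernel of the form K((x,s),(z,t)) = (1/mu + 1[s=t]) * k(x,z). The scikit-learn sketch below is an illustration under that assumption; the RBF base kernel, the coupling parameter mu and the toy data are all made up (small mu behaves like pooling all tasks, large mu like training the tasks independently).

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def multitask_kernel(X, tasks, X2=None, tasks2=None, mu=1.0, gamma=0.5):
    # K((x,s),(z,t)) = (1/mu + 1[s == t]) * k(x, z): every pair shares 1/mu,
    # same-task pairs receive an extra unit of similarity (the task coupling)
    if X2 is None:
        X2, tasks2 = X, tasks
    base = rbf_kernel(X, X2, gamma=gamma)
    same = (tasks[:, None] == tasks2[None, :]).astype(float)
    return (1.0 / mu + same) * base

# toy data: two binary classification tasks stacked into one sample set
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
tasks = np.repeat([0, 1], 100)
y = np.sign(X[:, 0] + 0.3 * tasks * X[:, 1] + 0.1 * rng.standard_normal(200))

K = multitask_kernel(X, tasks, mu=1.0)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

# prediction needs the cross-kernel between new (point, task) pairs and the training set
X_new = rng.standard_normal((5, 10))
tasks_new = np.array([0, 0, 1, 1, 1])
pred = clf.predict(multitask_kernel(X_new, tasks_new, X, tasks, mu=1.0))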
---
paper_title: Multi-task learning for learning to rank in web search
paper_content:
Both the quality and quantity of training data have significant impact on the performance of ranking functions in the context of learning to rank for web search. Due to resource constraints, training data for smaller search engine markets are scarce and we need to leverage existing training data from large markets to enhance the learning of ranking function for smaller markets. In this paper, we present a boosting framework for learning to rank in the multi-task learning context for this purpose. In particular, we propose to learn non-parametric common structures adaptively from multiple tasks in a stage-wise way. An algorithm is developed to iteratively discover super-features that are effective for all the tasks. The estimation of the functions for each task is then learned as a linear combination of those super-features. We evaluate the performance of this multi-task learning method for web search ranking using data from a search engine. Our results demonstrate that multi-task learning methods bring significant relevance improvements over existing baseline methods.
---
paper_title: Scalable hierarchical multitask learning algorithms for conversion optimization in display advertising
paper_content:
Many estimation tasks come in groups and hierarchies of related problems. In this paper we propose a hierarchical model and a scalable algorithm to perform inference for multitask learning. It infers task correlation and subtask structure in a joint sparse setting. Implementation is achieved by a distributed subgradient oracle and the successive application of prox-operators pertaining to groups and subgroups of variables. We apply this algorithm to conversion optimization in display advertising. Experimental results on over 1TB data for up to 1 billion observations and 1 million attributes show that the algorithm provides significantly better prediction accuracy while simultaneously being efficiently scalable by distributed parameter synchronization.
---
paper_title: Multi-task learning for boosting with application to web search ranking
paper_content:
In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing the specifics of each learning task with task-specific parameters and the commonalities between them through shared parameters. This enables implicit data sharing and regularization. We evaluate our learning method on web-search ranking data sets from several countries. Here, multitask learning is particularly helpful as data sets from different countries vary largely in size because of the cost of editorial judgments. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
---
paper_title: A Convex Formulation for Learning Task Relationships in Multi-Task Learning
paper_content:
Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.
---
paper_title: Multi-Domain Collaborative Filtering
paper_content:
Collaborative filtering is an effective recommendation approach in which the preference of a user on an item is predicted based on the preferences of other users with similar interests. A big challenge in using collaborative filtering methods is the data sparsity problem which often arises because each user typically only rates very few items and hence the rating matrix is extremely sparse. In this paper, we address this problem by considering multiple collaborative filtering tasks in different domains simultaneously and exploiting the relationships between domains. We refer to it as a multi-domain collaborative filtering (MCF) problem. To solve the MCF problem, we propose a probabilistic framework which uses probabilistic matrix factorization to model the rating problem in each domain and allows the knowledge to be adaptively transferred across different domains by automatically learning the correlation between domains. We also introduce the link function for different domains to correct their biases. Experiments conducted on several real-world applications demonstrate the effectiveness of our methods when compared with some representative methods.
---
paper_title: Web-scale multi-task feature selection for behavioral targeting
paper_content:
A typical behavioral targeting system optimizing purchase activities, called conversions, faces two main challenges: the web-scale amounts of user histories to process on a daily basis, and the relative sparsity of conversions. In this paper, we try to address these challenges through feature selection. We formulate a multi-task (or group) feature-selection problem among a set of related tasks (sharing a common set of features), namely advertising campaigns. We apply a group-sparse penalty consisting of a combination of an l1 and l2 penalty and an associated fast optimization algorithm for distributed parameter estimation. Our algorithm relies on a variant of the well known Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), a closed-form solution for mixed norm programming and a distributed subgradient oracle. To efficiently handle web-scale user histories, we present a distributed inference algorithm for the problem that scales to billions of instances and millions of attributes. We show the superiority of our algorithm in terms of both sparsity and ROC performance over baseline feature selection methods (both single-task l1-regularization and multi-task mutual-information gain).
---
paper_title: Efficient multi-task feature learning with calibration
paper_content:
Multi-task feature learning has been proposed to improve the generalization performance by learning the shared features among multiple related tasks and it has been successfully applied to many real-world problems in machine learning, data mining, computer vision and bioinformatics. Most existing multi-task feature learning models simply assume a common noise level for all tasks, which may not be the case in real applications. Recently, a Calibrated Multivariate Regression (CMR) model has been proposed, which calibrates different tasks with respect to their noise levels and achieves superior prediction performance over the non-calibrated one. A major challenge is how to solve the CMR model efficiently as it is formulated as a composite optimization problem consisting of two non-smooth terms. In this paper, we propose a variant of the calibrated multi-task feature learning formulation by including a squared norm regularizer. We show that the dual problem of the proposed formulation is a smooth optimization problem with a piecewise sphere constraint. The simplicity of the dual problem enables us to develop fast dual optimization algorithms with low per-iteration cost. We also provide a detailed convergence analysis for the proposed dual optimization algorithm. Empirical studies demonstrate that, the dual optimization algorithm quickly converges and it is much more efficient than the primal optimization algorithm. Moreover, the calibrated multi-task feature learning algorithms with and without the squared norm regularizer achieve similar prediction performance and both outperform the non-calibrated ones. Thus, the proposed variant not only enables us to develop fast optimization algorithms, but also keeps the superior prediction performance of the calibrated multi-task feature learning over the non-calibrated one.
---
paper_title: A Regularization Approach to Learning Task Relationships in Multitask Learning
paper_content:
Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.
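A small numpy sketch of the alternating scheme described above (MTRL), specialized to a squared loss and to the simplifying assumption that every task shares the same design matrix X: the W-step is plain gradient descent on the coupled objective and the Omega-step uses the closed-form update Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2}); all hyperparameter values are illustrative.

import numpy as np

def mtrl_fit(X, Y, lam1=0.1, lam2=1.0, n_outer=20, n_inner=200, eps=1e-3):
    # objective: 0.5*||X W - Y||_F^2 + (lam1/2)*||W||_F^2 + (lam2/2)*tr(W Omega^{-1} W^T),
    # with Omega positive semidefinite and trace one (a task covariance matrix)
    d, T = X.shape[1], Y.shape[1]
    W = np.zeros((d, T))
    Omega = np.eye(T) / T                              # start from uncorrelated tasks
    lip_x = np.linalg.norm(X, 2) ** 2                  # squared spectral norm of X
    for _ in range(n_outer):
        Omega_inv = np.linalg.inv(Omega + eps * np.eye(T))
        lr = 1.0 / (lip_x + lam1 + lam2 * np.linalg.norm(Omega_inv, 2))
        for _ in range(n_inner):                       # W-step: tasks coupled through Omega
            grad = X.T @ (X @ W - Y) + lam1 * W + lam2 * W @ Omega_inv
            W -= lr * grad
        eigval, eigvec = np.linalg.eigh(W.T @ W)       # Omega-step: closed-form update
        sqrt_m = (eigvec * np.sqrt(np.maximum(eigval, 0.0))) @ eigvec.T
        Omega = sqrt_m / max(np.trace(sqrt_m), eps)
    return W, Omega                                    # Omega is read as learned task relatedness

# toy usage: 4 tasks, the first two sharing one weight vector and the last two another
rng = np.random.default_rng(5)
X = rng.standard_normal((120, 15))
w1, w2 = rng.standard_normal(15), rng.standard_normal(15)
Y = X @ np.column_stack([w1, w1, w2, w2]) + 0.1 * rng.standard_normal((120, 4))
W_hat, Omega_hat = mtrl_fit(X, Y)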
---
paper_title: Multi-Task Learning for Stock Selection
paper_content:
Artificial Neural Networks can be used to predict future returns of stocks in order to take financial decisions. Should one build a separate network for each stock or share the same network for all the stocks? In this paper we also explore other alternatives, in which some layers are shared and others are not shared. When the prediction of future returns for different stocks are viewed as different tasks, sharing some parameters across stocks is a form of multi-task learning. In a series of experiments with Canadian stocks, we obtain yearly returns that are more than 14% above various benchmarks.
---
paper_title: Multi-task Gaussian process learning of robot inverse dynamics
paper_content:
The inverse dynamics problem for a robotic manipulator is to compute the torques needed at the joints to drive it along a given trajectory; it is beneficial to be able to learn this function for adaptive control. A robotic manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-task learning problem. By placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-task similarity depends on the underlying inertial parameters. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single tasks or pooling the data over all tasks.
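A minimal numpy sketch of the separable multi-task GP prior described above, where the covariance factorizes as K((x,t),(x',t')) = Kf[t,t'] * k(x,x'). In the paper the inter-task part is tied to the unknown inertial parameters and inferred; here Kf is simply fixed by hand and only the posterior mean is computed, purely for illustration.

import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def mtgp_posterior_mean(X, tasks, y, Kf, Xs, tasks_s, noise=1e-2):
    # separable multi-task GP kernel: K((x,t),(x',t')) = Kf[t,t'] * k_x(x,x')
    Kxx = Kf[np.ix_(tasks, tasks)] * rbf(X, X)
    Kxs = Kf[np.ix_(tasks, tasks_s)] * rbf(X, Xs)
    alpha = np.linalg.solve(Kxx + noise * np.eye(len(y)), y)
    return Kxs.T @ alpha

# toy usage: two strongly (but not perfectly) correlated tasks, data pooled in one GP
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(60, 1))
tasks = np.repeat([0, 1], 30)
y = np.where(tasks == 0, np.sin(X[:, 0]), 0.8 * np.sin(X[:, 0])) + 0.05 * rng.standard_normal(60)
Kf = np.array([[1.0, 0.8], [0.8, 1.0]])            # assumed inter-task covariance (learned in the paper)
Xs = np.linspace(-3, 3, 5).reshape(-1, 1)
mean_for_task1 = mtgp_posterior_mean(X, tasks, y, Kf, Xs, np.ones(5, dtype=int))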
---
paper_title: Robust Dynamic Trajectory Regression on Road Networks: A Multi-task Learning Framework
paper_content:
Trajectory regression, which aims to predict the travel time of arbitrary trajectories on road networks, attracts significant attention in various applications of traffic systems these years. In this paper, we tackle this problem with a multitask learning (MTL) framework. To take the temporal nature of the problem into consideration, we divide the regression problem into a set of sub-tasks of distinct time periods, then the problem can be treated in a multi-task learning framework. Further, we propose a novel regularization term in which we exploit the block sparse structure to augment the robustness of the model. In addition, we incorporate the spatial smoothness over road links and thus achieve a spatial-temporal framework. An accelerated proximal algorithm is adopted to solve the convex but non-smooth problem, which will converge to the global optimum. Experiments on both synthetic and real data sets demonstrate the effectiveness of the proposed method.
---
paper_title: Transferring MultiDevice Localization Models Using Latent Multi-Task Learning
paper_content:
In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the collected signal data distributions are fixed, and thus the localization model learned on one device can be used on other devices without adaptation. However, by empirically studying the signal variation over different devices, we found this assumption to be invalid in practice. To solve this problem, we treat multiple devices as multiple learning tasks, and propose a multi-task learning algorithm. Different from algorithms assuming that the hypotheses learned from the original data space for related tasks can be similar, we only require the hypotheses learned in a latent feature space are similar. To establish our algorithm, we employ an alternating optimization approach to iteratively learn feature mappings and multi-task regression models for the devices. We apply our latent multi-task learning algorithm to real-world indoor localization data and demonstrate its effectiveness.
---
paper_title: Time-dependent trajectory regression on road networks via multi-task learning
paper_content:
Road travel costs are important knowledge hidden in large-scale GPS trajectory data sets, the discovery of which can benefit many applications such as intelligent route planning and automatic driving navigation. While there are previous studies which tackled this task by modeling it as a regression problem with spatial smoothness taken into account, they unreasonably assumed that the latent cost of each road remains unchanged over time. Other works on route planning and recommendation that have considered temporal factors simply assumed that the temporal dynamics be known in advance as a parametric function over time, which is not faithful to reality. To overcome these limitations, in this paper, we propose an extension to a previous static trajectory regression framework by learning the temporal dynamics of road travel costs in an innovative non-parametric manner which can effectively overcome the temporal sparsity problem. In particular, we unify multiple different trajectory regression problems in a multi-task framework by introducing a novel cross-task regularization which encourages temporal smoothness on the change of road travel costs. We then propose an efficient block coordinate descent method to solve the resulting problem by exploiting its separable structures and prove its convergence to global optimum. Experiments conducted on both synthetic and real data sets demonstrate the effectiveness of our method and its improved accuracy on travel time prediction.
---
paper_title: Traffic Sign Recognition via Multi-Modal Tree-Structure Embedded Multi-Task Learning
paper_content:
Traffic sign recognition is a rather challenging task for intelligent transportation systems since signs in different subsets, e.g., speed limit signs, prohibition signs, and mandatory signs, are very different from each other in color or shape, whereas they share some similarities to the ones in the same subset. Therefore, it is important to integrate different modalities of visual features, such as color and shape, and select discriminative features for better sign description; in addition, it is beneficial to explore the correlations between the classes of traffic signs to learn the classifiers jointly to improve the generalization performance. In this paper, we propose Multi-Modal tree-structure embedded Multi-Task Learning, called $\text{M}^{2}$-tMTL, to select discriminative visual features both between and within modalities, as well as the correlated features shared by similar classification tasks. Our method simultaneously introduces two structured sparsity-induced norms into a least squares regression. One of the norms can be used not only to select modality of features but also to conduct within-modality feature selection. Moreover, the hierarchical correlations among the classification tasks are well represented by a tree structure, and therefore, the tree-structure sparsity-induced norm is used for learning the regression coefficients jointly to boost the performance of multi-class traffic sign recognition. Alternating direction method of multipliers (ADMM) is used to efficiently solve the proposed model with guaranteed convergence. Extensive experiments on public benchmark data sets demonstrate that the proposed algorithm leads to a quite interpretable model, and it has better or competitive performance with several state-of-the-art methods but with less computational and memory cost.
---
paper_title: Trace Norm Regularization: Reformulations, Algorithms, and Multi-task Learning
paper_content:
We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of this problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches and its practical effectiveness is illustrated on simulations and an application to gene expression pattern analysis.
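Independently of the SDP and dual reformulations discussed above, the effect of trace norm regularization is easy to see from a proximal-gradient sketch in which the proximal step soft-thresholds the singular values of the task parameter matrix, pushing it towards low rank. The shapes, regularization weight and toy low-rank data below are assumptions for the example.

import numpy as np

def svt(W, tau):
    # proximal operator of tau * ||W||_*: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_multitask(X, Y, lam=1.0, n_iter=500):
    # proximal gradient on 0.5 * ||Y - X W||_F^2 + lam * ||W||_*
    d, T = X.shape[1], Y.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    W = np.zeros((d, T))
    for _ in range(n_iter):
        W = svt(W - step * (X.T @ (X @ W - Y)), step * lam)
    return W

# toy usage: 10 tasks whose true parameter vectors span a rank-2 subspace
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 30))
W_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 10))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 10))
W_hat = trace_norm_multitask(X, Y, lam=5.0)
print("estimated rank of the task parameter matrix:", np.linalg.matrix_rank(W_hat, tol=1e-3))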
---
paper_title: Multi-task and Lifelong Learning of Kernels
paper_content:
We consider a problem of learning kernels for use in SVM classification in the multi-task and lifelong scenarios and provide generalization bounds on the error of a large margin classifier. Our results show that, under mild conditions on the family of kernels used for learning, solving several related tasks simultaneously is beneficial over single task learning. In particular, as the number of observed tasks grows, assuming that in the considered family of kernels there exists one that yields low approximation error on all tasks, the overhead associated with learning such a kernel vanishes and the complexity converges to that of learning when this good kernel is given to the learner.
---
paper_title: Exploiting Task Relatedness for Multiple Task Learning
paper_content:
The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks.
---
paper_title: Estimating relatedness via data compression
paper_content:
We show that it is possible to use data compression on independently obtained hypotheses from various tasks to algorithmically provide guarantees that the tasks are sufficiently related to benefit from multitask learning. We give uniform bounds in terms of the empirical average error for the true average error of the n hypotheses provided by deterministic learning algorithms drawing independent samples from a set of n unknown computable task distributions over finite sets.
---
paper_title: Multi-task learning and algorithmic stability
paper_content:
In this paper, we study multi-task algorithms from the perspective of the algorithmic stability. We give a definition of the multi-task uniform stability, a generalization of the conventional uniform stability, which measures the maximum difference between the loss of a multi-task algorithm trained on a data set and that of the multitask algorithm trained on the same data set but with a data point removed in each task. In order to analyze multi-task algorithms based on multi-task uniform stability, we prove a generalized McDiarmid's inequality which assumes the difference bound condition holds by changing multiple input arguments instead of only one in the conventional McDiarmid's inequality. By using the generalized McDiarmid's inequality as a tool, we can analyze the generalization performance of general multitask algorithms in terms of the multi-task uniform stability. Moreover, as applications, we prove generalization bounds of several representative regularized multi-task algorithms.
---
paper_title: Support union recovery in high-dimensional multivariate regression
paper_content:
In multivariate regression, a $K$-dimensional response vector is regressed upon a common set of $p$ covariates, with a matrix $B^*$ of regression coefficients. We study the behavior of the multivariate group Lasso, in which block regularization based on the $\ell_1/\ell_2$ norm is used for support union recovery, or recovery of the set of $s$ rows for which $B^*$ is non-zero. Under high-dimensional scaling, we show that the multivariate group Lasso exhibits a threshold for the recovery of the exact row pattern with high probability over the random design and noise that is specified by the sample complexity parameter $\theta(n,p,s) := n/[2\psi(B^*)\log(p-s)]$. Here $n$ is the sample size, and $\psi(B^*)$ is a sparsity-overlap function measuring a combination of the sparsities and overlaps of the $K$ regression coefficient vectors that constitute the model. We prove that the multivariate group Lasso succeeds for problem sequences $(n,p,s)$ such that $\theta(n,p,s)$ exceeds a critical level $\theta_u$, and fails for sequences such that $\theta(n,p,s)$ lies below a critical level $\theta_\ell$. For the special case of the standard Gaussian ensemble, we show that $\theta_\ell = \theta_u$ so that the characterization is sharp. The sparsity-overlap function $\psi(B^*)$ reveals that, if the design is uncorrelated on the active rows, $\ell_1/\ell_2$ regularization for multivariate regression never harms performance relative to an ordinary Lasso approach, and can yield substantial improvements in sample complexity (up to a factor of $K$) when the regression vectors are suitably orthogonal. For more general designs, it is possible for the ordinary Lasso to outperform the multivariate group Lasso. We complement our analysis with simulations that demonstrate the sharpness of our theoretical results, even for relatively small problems.
---
paper_title: A notion of task relatedness yielding provable multiple-task learning guarantees
paper_content:
The approach of learning multiple "related" tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow "algorithmically related", in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks. ::: ::: We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.
---
paper_title: Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
paper_content:
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multitask learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is nonconvex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose employing the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is nondifferentiable and the feasible domain is nontrivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multitask learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multitask learning formulation and the efficiency of the proposed projected gradient algorithms.
---
paper_title: Taking Advantage of Sparsity in Multi-Task Learning
paper_content:
We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning Argyriou et al. [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in Bickel et al. [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite.
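The shared-sparsity-pattern assumption above is what scikit-learn's MultiTaskLasso implements: an l1/l2 (Group-Lasso-type) penalty over the rows of the coefficient matrix, so every task selects the same predictors. A hedged usage sketch follows; note that this estimator requires all tasks to be observed on the same samples, and the toy data and alpha value are invented.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(3)
X = rng.standard_normal((150, 40))
coef = np.zeros((40, 4))
coef[:6] = rng.standard_normal((6, 4))             # 6 predictors shared by all 4 tasks
Y = X @ coef + 0.1 * rng.standard_normal((150, 4))

model = MultiTaskLasso(alpha=0.5).fit(X, Y)        # rows of coef_ are zeroed jointly across tasks
selected = np.flatnonzero(np.linalg.norm(model.coef_, axis=0) > 1e-8)   # coef_ is (n_tasks, n_features)
print("predictors selected for every task:", selected)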
---
paper_title: Learning Multiple Tasks with Kernel Methods
paper_content:
We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task.
---
paper_title: Excess risk bounds for multitask learning with trace norm regularization
paper_content:
Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite as in the case of reproducing kernel Hilbert spaces. A byproduct of the proof are bounds on the expected norm of sums of random positive semidefinite matrices with subexponential moments.
---
paper_title: Bounds for linear multi-task learning
paper_content:
We give dimension-free and data-dependent bounds for linear multi-task learning where a common linear operator is chosen to preprocess data for a vector of task specific linear-thresholding classifiers. The complexity penalty of multi-task learning is bounded by a simple expression involving the margins of the task-specific classifiers, the Hilbert-Schmidt norm of the selected preprocessor and the Hilbert-Schmidt norm of the covariance operator for the total mixture of all task distributions, or, alternatively, the Frobenius norm of the total Gramian matrix for the data-dependent version. The results can be compared to state-of-the-art results on linear single-task learning.
---
paper_title: Multi-task Regression using Minimal Penalties
paper_content:
In this paper we study the kernel multiple ridge regression framework, which we refer to as multitask regression, using penalization techniques. The theoretical analysis of this problem shows that the key element appearing for an optimal calibration is the covariance matrix of the noise between the different tasks. We present a new algorithm to estimate this covariance matrix, based on the concept of minimal penalty, which was previously used in the single-task regression framework to estimate the variance of the noise. We show, in a non-asymptotic setting and under mild assumptions on the target function, that this estimator converges towards the covariance matrix. Then plugging this estimator into the corresponding ideal penalty leads to an oracle inequality. We illustrate the behavior of our algorithm on synthetic examples.
---
paper_title: Learning Multiple Tasks using Shared Hypotheses
paper_content:
In this work we consider a setting where we have a very large number of related tasks with few examples from each individual task. Rather than either learning each task individually (and having a large generalization error) or learning all the tasks together using a single hypothesis (and suffering a potentially large inherent error), we consider learning a small pool of shared hypotheses. Each task is then mapped to a single hypothesis in the pool (hard association). We derive VC dimension generalization bounds for our model, based on the number of tasks, shared hypothesis and the VC dimension of the hypotheses class. We conducted experiments with both synthetic problems and sentiment of reviews, which strongly support our approach.
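A heuristic sketch of the "small pool of shared hypotheses with hard task-to-hypothesis association" setting, using logistic regression as the hypothesis class: alternate between refitting each pooled hypothesis on the union of its assigned tasks' data and reassigning each task to the hypothesis with the lowest error on it. The alternating loop, pool size k and toy tasks are assumptions for illustration; the paper itself is about generalization bounds for such models, not this particular fitting procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def shared_hypothesis_pool(task_data, k=2, n_rounds=10, seed=0):
    # task_data: list of (X_t, y_t) pairs; learns k pooled classifiers and a task-to-classifier map
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=len(task_data))           # random initial grouping of tasks
    models = [None] * k
    for _ in range(n_rounds):
        for h in range(k):                                     # refit each pooled hypothesis
            idx = [t for t, a in enumerate(assign) if a == h]
            if not idx:
                continue
            Xh = np.vstack([task_data[t][0] for t in idx])
            yh = np.concatenate([task_data[t][1] for t in idx])
            models[h] = LogisticRegression(max_iter=200).fit(Xh, yh)
        for t, (Xt, yt) in enumerate(task_data):               # hard reassignment of tasks
            errs = [np.inf if m is None else 1.0 - m.score(Xt, yt) for m in models]
            assign[t] = int(np.argmin(errs))
    return models, assign

# toy usage: 6 tasks generated from 2 underlying hypotheses
rng = np.random.default_rng(1)
task_data = []
for t in range(6):
    w = np.array([1.0, -1.0]) if t % 2 == 0 else np.array([-1.0, 1.0])
    Xt = rng.standard_normal((40, 2))
    task_data.append((Xt, (Xt @ w > 0).astype(int)))
models, assign = shared_hypothesis_pool(task_data, k=2)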
---
paper_title: On spectral learning
paper_content:
In this paper, we study the problem of learning a matrix W from a set of linear measurements. Our formulation consists in solving an optimization problem which involves regularization with a spectral penalty term. That is, the penalty term is a function of the spectrum of the covariance of W. Instances of this problem in machine learning include multi-task learning, collaborative filtering and multi-view learning, among others. Our goal is to elucidate the form of the optimal solution of spectral learning. The theory of spectral learning relies on the von Neumann characterization of orthogonally invariant norms and their association with symmetric gauge functions. Using this tool we formulate a representer theorem for spectral regularization and specify it to several useful examples, such as Schatten p-norms, trace norm and spectral norm, which should prove useful in applications.
---
paper_title: Union Support Recovery in Multi-task Learning
paper_content:
We sharply characterize the performance of different penalization schemes for the problem of selecting the relevant variables in the multi-task setting. Previous work focuses on the regression problem where conditions on the design matrix complicate the analysis. A clearer and simpler picture emerges by studying the Normal means model. This model, often used in the field of statistics, is a simplified model that provides a laboratory for studying complex procedures.
---
paper_title: The Benefit of Multitask Representation Learning
paper_content:
We discuss a general method to learn data representations from multiple tasks. We provide a justification for this method in both settings of multitask learning and learning-to-learn. The method is illustrated in detail in the special case of linear feature learning. Conditions on the theoretical advantage offered by multitask representation learning over independent task learning are established. In particular, focusing on the important example of half-space learning, we derive the regime in which multitask representation learning is beneficial over independent task learning, as a function of the sample size, the number of tasks and the intrinsic data dimensionality. Other potential applications of our results include multitask feature learning in reproducing kernel Hilbert spaces and multilayer, deep networks.
---
paper_title: A theoretical framework for learning from a pool of disparate data sources
paper_content:
Many enterprises incorporate information gathered from a variety of data sources into an integrated input for some learning task. For example, aiming towards the design of an automated diagnostic tool for some disease, one may wish to integrate data gathered in many different hospitals. A major obstacle to such endeavors is that different data sources may vary considerably in the way they choose to represent related data. In practice, the problem is usually solved by a manual construction of semantic mappings and translations between the different sources. Recently there have been attempts to introduce automated algorithms based on machine learning tools for the construction of such translations.In this work we propose a theoretical framework for making classification predictions from a collection of different data sources, without creating explicit translations between them. Our framework allows a precise mathematical analysis of the complexity of such tasks, and it provides a tool for the development and comparison of different learning algorithms. Our main objective, at this stage, is to demonstrate the usefulness of computational learning theory to this practically important area and to stimulate further theoretical and experimental research of questions related to this framework.
---
paper_title: When is there a representer theorem? Vector versus matrix regularizers
paper_content:
We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product then the learned vector is a linear combination of the input data. This result, known as the representer theorem, is at the basis of kernel-based methods in machine learning. In this paper, we prove the necessity of the above condition, thereby completing the characterization of kernel methods based on regularization. We further extend our analysis to regularization methods which learn a matrix, a problem which is motivated by the application to multi-task learning. In this context, we study a more general representer theorem, which holds for a larger class of regularizers. We provide a necessary and sufficient condition for this class of matrix regularizers and highlight them with some concrete examples of practical importance. Our analysis uses basic principles from matrix theory, especially the useful notion of matrix nondecreasing function.
---
paper_title: Multi-level Lasso for Sparse Multi-task Regression
paper_content:
We present a flexible formulation for variable selection in multi-task regression to allow for discrepancies in the estimated sparsity patterns across the multiple tasks, while leveraging the common structure among them. Our approach is based on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task-specificity. This decomposition yields the Multi-level Lasso objective that can be solved efficiently via alternating optimization. The analysis of the "orthonormal design" case reveals some interesting insights on the nature of the shrinkage performed by our method, compared to that of related work. Theoretical guarantees are provided on the consistency of Multi-level Lasso. Simulations and empirical study of micro-array data further demonstrate the value of our framework.
---
paper_title: The Rademacher complexity of linear transformation classes
paper_content:
Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a finite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning.
---
paper_title: Spike and slab variational inference for multi-task and multiple kernel learning
paper_content:
We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the golden standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relation between tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.
---
paper_title: A framework for learning predictive structures from multiple tasks and unlabeled data
paper_content:
One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.
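In the spirit of the structural learning described above (much simplified, and not the paper's full alternating optimization), one can fit a linear predictor per auxiliary task, take the top singular directions of the stacked weight matrix as a shared structure Theta, and reuse the projection X Theta^T as extra features for a target task. The helper names and the choice of RidgeClassifier are illustrative assumptions.

import numpy as np
from sklearn.linear_model import RidgeClassifier

def shared_structure(aux_task_data, h=5, alpha=1.0):
    # fit one linear predictor per auxiliary task, then take the top-h right singular
    # directions of the stacked weight matrix as a shared predictive structure Theta
    W = np.vstack([RidgeClassifier(alpha=alpha).fit(X, y).coef_ for X, y in aux_task_data])
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:h]                                    # Theta has shape (h, n_features)

def augment(X, Theta):
    # target-task features augmented with the shared low-dimensional projection
    return np.hstack([X, X @ Theta.T])

# usage idea (names hypothetical): Theta learned from many auxiliary tasks, then
# a low-resource target task is trained on the augmented representation
# Theta = shared_structure(aux_task_data, h=5)
# clf = RidgeClassifier().fit(augment(X_target, Theta), y_target)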
---
paper_title: Sparse coding for multitask and transfer learning
paper_content:
We investigate the use of sparse coding and dictionary learning in the context of multitask and transfer learning. The central assumption of our learning method is that the task parameters are well approximated by sparse linear combinations of the atoms of a dictionary in a high- or infinite-dimensional space. This assumption, together with the large quantity of available data in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach, for both settings. Numerical experiments on one synthetic and two real datasets show the advantage of our method over single-task learning, a previous method based on an orthogonal and dense representation of the tasks, and a related method that learns task groupings.
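A minimal sketch of the modeling assumption, in our notation: each task's parameter vector is a sparse combination of shared dictionary atoms,
\[
w_t \ \approx\ D\, \gamma_t, \qquad \|\gamma_t\|_1 \le \alpha, \qquad D = [d_1, \dots, d_K],\ \ \|d_k\| \le 1,
\]
and D is chosen to minimize the average regularized empirical risk over all tasks; in the transfer setting the dictionary learned on the source tasks is reused when fitting a new task, so only the sparse code of the new task has to be estimated.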
---
| Title: A Survey on Multi-Task Learning
Section 1: INTRODUCTION
Description 1: Introduce the importance and challenges of Multi-Task Learning (MTL), its definition, and a comparison with transfer learning.
Section 2: MTL ALGORITHMS
Description 2: Define MTL and categorize different MTL algorithms into feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach, and deep learning approach.
Section 3: Feature Learning Approach
Description 3: Explain the two subcategories of the feature learning approach: feature transformation and feature selection methods.
Section 4: Low-Rank Approach
Description 4: Discuss the low-rank approach in MTL, its assumptions, and its methodology.
Section 5: Task Clustering Approach
Description 5: Describe the task clustering approach which groups similar tasks into clusters and its benefits.
Section 6: Task Relation Learning Approach
Description 6: Explore how task relations are learned automatically from data, and quantify relatedness by task similarity, task correlation, and task covariance.
Section 7: Dirty Approach
Description 7: Explain the dirty approach which decomposes the parameter matrix into two component matrices and its different regularizations.
Section 8: Multi-Level Approach
Description 8: Introduce the multi-level approach which extends the dirty approach and provides more expressive power for modeling complex task structures.
Section 9: Deep Learning Approach
Description 9: Discuss how deep learning models are used in MTL, focusing on shared hidden layers and learning task relations with deep networks.
Section 10: Comparison among Different Approaches
Description 10: Compare different MTL approaches based on their characteristics, robustness, interpretability, and performance.
Section 11: Another Taxonomy for Regularized MTL Methods
Description 11: Present another taxonomy categorizing regularized MTL methods into learning with feature covariance and learning with task relations.
Section 12: Novel Settings in MTL
Description 12: Address novel settings in MTL where conventional assumptions like identical feature representations and binary classification do not hold.
Section 13: MTL WITH OTHER LEARNING PARADIGMS
Description 13: Review the combinations of MTL with unsupervised learning, semi-supervised learning, active learning, reinforcement learning, multi-view learning, and graphical models.
Section 14: HANDLING BIG DATA
Description 14: Discuss how to handle big data in MTL through online, parallel, and distributed MTL methods, feature selection, dimensionality reduction, and feature hashing.
Section 15: APPLICATIONS
Description 15: Highlight applications of MTL across various domains such as computer vision, bioinformatics, health informatics, speech and natural language processing, web applications, and ubiquitous computing.
Section 16: THEORETICAL ANALYSES
Description 16: Review the theoretical aspects of MTL, such as generalization bounds, stability, and conditions for recovering true features.
Section 17: CONCLUSIONS
Description 17: Summarize the overall insights of MTL, outlining future directions and addressing the challenges and potential improvements in the field. |
MAIN BARRIERS FOR QUALITY DATA COLLECTION IN EHR - A Review | 9 | ---
paper_title: The use of routinely collected computer data for research in primary care: opportunities and challenges
paper_content:
INTRODUCTION: Routinely collected primary care data has underpinned research that has helped define primary care as a specialty. In the early years of the discipline, data were collected manually, but digital data collection now makes large volumes of data readily available. Primary care informatics is emerging as an academic discipline for the scientific study of how to harness these data. This paper reviews how data are stored in primary care computer systems; current use of large primary care research databases; and the opportunities and challenges for using routinely collected primary care data in research. OPPORTUNITIES: (1) Growing volumes of routinely recorded data. (2) Improving data quality. (3) Technological progress enabling large datasets to be processed. (4) The potential to link clinical data in family practice with other data, including genetic databases. (5) An established body of know-how within the international health informatics community. CHALLENGES: (1) Research methods for working with large primary care datasets are limited. (2) How to infer meaning from data. (3) Pace of change in medicine and technology. (4) Integrating systems where there is often no reliable unique identifier, and between health (person-based records) and social care (care-based records, e.g. child protection). (5) Achieving appropriate levels of information security, confidentiality, and privacy. CONCLUSION: Routinely collected primary care computer data, aggregated into large databases, is used for audit, quality improvement, health service planning, epidemiological study and research. However, gaps exist in the literature about how to find relevant data, select appropriate research methods and ensure that the correct inferences are drawn.
---
paper_title: Informatics Challenges for the Impending Patient Information Explosion
paper_content:
As we move toward an era when health information is more readily accessible and transferable, there are several issues that will arise. This article addresses the challenges of information filtering, context-sensitive decision support, legal and ethical guidelines regarding obligations to obtain and use the information, aligning patient and health professionals' expectations in regard to the use and usefulness of the information, and enhancing data reliability. The authors discuss the issues and offer suggestions for addressing them.
---
paper_title: Medical Informatics: Improving Health Care Through Information
paper_content:
Health care is an information-based science. Much of clinical practice involves gathering, synthesizing, and acting on information. Medical informatics is the field concerned with the management and use of information in health and biomedicine. This article focuses on the problems that motivate work in this field, the emerging solutions, and the barriers that remain. It also addresses the core themes that underlie all applications of medical informatics and unify the scientific approaches across the field. There is a growing concern that information is not being used as effectively as possible in health care. Recent reports from the Institute of Medicine have reviewed research findings related to information use and expressed concerns about medical errors and patient safety, the quality of medical records, and the protection of patient privacy and confidentiality. The latest Institute of Medicine report on this topic ties all these problems and potential solutions together in a vision for a health care system that is safe, patient-centered, and evidence-based. A variety of solutions is required to address the information-related problems in health care; many solutions involve the use of computers and computer-related technologies.
---
paper_title: 10 potholes in the road to information quality
paper_content:
Poor information quality can create chaos. Unless its root cause is diagnosed, efforts to address it are akin to patching potholes. The article describes ten key causes, warning signs, and typical patches. With this knowledge, organisations can identify and address these problems before they have financial and legal consequences.
---
paper_title: Definition, structure, content, use and impacts of electronic health records: A review of the research literature
paper_content:
Purpose: This paper reviews the research literature on electronic health record (EHR) systems. The aim is to find out (1) how electronic health records are defined, (2) how the structure of these records is described, (3) in what contexts EHRs are used, (4) who has access to EHRs, (5) which data components of the EHRs are used and studied, (6) what is the purpose of research in this field, (7) what methods of data collection have been used in the studies reviewed and (8) what are the results of these studies. Methods: A systematic review was carried out of the research dealing with the content of EHRs. A literature search was conducted on four electronic databases: Pubmed/Medline, Cinahl, Eval and Cochrane. Results: The concept of EHR comprised a wide range of information systems, from files compiled in single departments to longitudinal collections of patient data. Only very few papers offered descriptions of the structure of EHRs or the terminologies used. EHRs were used in primary, secondary and tertiary care. Data were recorded in EHRs by different groups of health care professionals. Secretarial staff also recorded data from dictation or nurses' or physicians' manual notes. Some information was also recorded by patients themselves; this information is validated by physicians. It is important that the needs and requirements of different users are taken into account in the future development of information systems. Several data components were documented in EHRs: daily charting, medication administration, physical assessment, admission nursing note, nursing care plan, referral, present complaint (e.g. symptoms), past medical history, life style, physical examination, diagnoses, tests, procedures, treatment, medication, discharge, history, diaries, problems, findings and immunization. In the future it will be necessary to incorporate different kinds of standardized instruments, electronic interviews and nursing documentation systems in EHR systems. The aspects of information quality most often explored in the studies reviewed were the completeness and accuracy of different data components. It has been shown in several studies that the use of an information system was conducive to more complete and accurate documentation by health care professionals. The quality of information is particularly important in patient care, but EHRs also provide important information for secondary purposes, such as health policy planning. Conclusion: Studies focusing on the content of EHRs are needed, especially studies of nursing documentation or patient self-documentation. One future research area is to compare the documentation of different health care professionals with the core information about EHRs which has been determined in national health projects. The challenge for ongoing national health record projects around the world is to take into account all the different types of EHRs and the needs and requirements of different health care professionals and consumers in the development of EHRs. A further challenge is the use of international terminologies in order to achieve semantic interoperability.
---
paper_title: Electronic health records: high-quality electronic data for higher-quality clinical research
paper_content:
In the decades prior to the introduction of electronic health records (EHRs), the best source of electronic information to support clinical research was claims data. The use of claims data in research has been criticised for capturing only demographics, diagnoses and procedures recorded for billing purposes that may not fully reflect the patient's condition. Many important details of the patient's clinical status are not recorded. EHRs can overcome many limitations of claims data in research, by capturing a more complete picture of the observations and actions of a clinician recorded when patients are seen. EHRs can provide important details about vital signs, diagnostic test results, social and family history, prescriptions and physical examination findings. As a result, EHRs present a new opportunity to use data collected through the routine operation of a clinical practice to generate and test hypotheses about the relationships among patients, diseases, practice styles, therapeutic modalities and clinical outcomes. This article describes the clinical research information infrastructure at four institutions: the University of Pennsylvania, Regenstrief Institute/Indiana University, Partners Healthcare System and the University of Virginia. We present models for applying EHR data successfully within the clinical research enterprise.
---
paper_title: Data Quality Issues in Electronic Health Records: An Adaptation Framework for the Greek Health System
paper_content:
This article addresses data quality issues relating to electronic health records (EHRs). It discusses the nature of the problem of supporting EHRs at national and international levels, and examines t...
---
| Title: MAIN BARRIERS FOR QUALITY DATA COLLECTION IN EHR - A Review
Section 1: INTRODUCTION
Description 1: Discuss technological progress in health services, the role of Electronic Health Records (EHRs), and the purpose of the review.
Section 2: SEARCH STRATEGY AND STUDY SELECTION
Description 2: Outline the methodology used to conduct the qualitative review study, including the search databases, keywords, and selection criteria for the literature reviewed.
Section 3: MAIN BARRIERS TO HIGH QUALITY DATA COLLECTION
Description 3: Identify and explain the primary barriers to high-quality data collection from EHRs, emphasizing the issues encountered in the information quality process.
Section 4: Electronic Health Records
Description 4: Define what EHRs are according to international standards and detail the roles of information producers, custodians, and consumers within the information manufacturing process.
Section 5: Data Sources/Availability
Description 5: Describe the challenges related to data storage, format inconsistencies, and the need for standardized infrastructure and terminology to improve data aggregation and transfer.
Section 6: Data Format
Description 6: Discuss the different methods of data capture in EHRs, their respective advantages and disadvantages, and the need for standardized coding systems to ensure accuracy.
Section 7: Data Accuracy
Description 7: Explain the significance of data accuracy, common causes of inaccuracies, and potential solutions to improve data accuracy in EHRs.
Section 8: Data Accessibility
Description 8: Analyze the ethical and logistical barriers to data accessibility, highlighting the impact on research and decision-making and suggesting ways to improve data access while maintaining privacy and security.
Section 9: MAIN FINDINGS AND RECOMMENDATIONS
Description 9: Summarize the main findings of the review and provide recommendations to overcome the identified barriers, including solutions such as structured encounter forms, early problem recognition, and improved research methods. |